3 points by shader 4311 days ago

"Organizations which separate those two functions tend to gradually suck"

Hmm... Well, that's certainly a valid opinion, and it may even be true in a lot of cases. I think the problem is largely due to two other related issues, though: 1) the requirements aren't specified clearly enough, and they're dissociated from the tests as well, and 2) the organizations just don't have good enough tools.

Tests can serve many purposes. The most basic, given what a test fundamentally is, is to tell you when something works and when it doesn't. TDD focuses on the former; unit testing and regression testing focus more on the latter. Tests can also be used at different points in the development cycle, and if you use the wrong kind at the wrong point, it's not all that helpful.
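
A toy illustration of the difference, in Python (parse_age is just an invented stand-in for a system under test, not anything real): the TDD-style test states what "works" means before the code exists, while the regression test pins down a specific bug that was fixed once so it can't quietly come back.

    import unittest

    def parse_age(s):
        # toy "system under test": turn a string like "42" into an int
        n = int(s.strip())
        if n < 0:
            raise ValueError("age cannot be negative")
        return n

    class TestParseAge(unittest.TestCase):
        # TDD-style: written before the code, saying what "works" means
        def test_parses_plain_number(self):
            self.assertEqual(parse_age("42"), 42)

        # regression-style: pins down a past bug (say, surrounding whitespace
        # used to crash the parser) so it can't silently come back
        def test_tolerates_surrounding_whitespace(self):
            self.assertEqual(parse_age(" 42 "), 42)

    if __name__ == "__main__":
        unittest.main()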

My concern is that it's too difficult to write the correct kind of test, so most developers use the wrong kind or don't write any at all. There's nothing really wrong with that; I think it's just an unfortunate inefficiency, like trying to debug without stack traces. >.> Hmm. Something to look forward to when going back to arc, I suppose. Anyway, my goal is to make testing easy enough to do, whether for developers who just want a quick way to check that they haven't broken something after making a 'minor' change, or for larger companies that want to know that all their requirements have actually been met.

So, to solve the first problem, I'm hoping to use a lot of reflection and code inspection so that at least the outline of the test cases can be generated automatically, if not more. Then it should be really easy for the programmer to fill in the missing details, either as specific test vectors or as a more general definition of the requirements, using something like QuickCheck's generators.
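
As a rough sketch of what I mean by generating the outline from the code (Python's inspect module standing in for whatever reflection the real tool would use; the output format is just a strawman):

    import importlib
    import inspect

    def generate_test_stubs(module_name):
        """Emit an empty test-case outline for every public function in a
        module; the programmer only has to fill in the test vectors."""
        mod = importlib.import_module(module_name)
        lines = ["import unittest",
                 f"import {module_name}",
                 "",
                 f"class Test{module_name.capitalize()}(unittest.TestCase):"]
        for name, fn in inspect.getmembers(mod, inspect.isfunction):
            if name.startswith("_"):
                continue  # skip private helpers
            sig = inspect.signature(fn)
            lines.append(f"    def test_{name}(self):")
            lines.append(f"        # {name}{sig}")
            lines.append("        # TODO: test vectors (input -> expected output)")
            lines.append("        self.skipTest('not yet specified')")
            lines.append("")
        return "\n".join(lines)

    if __name__ == "__main__":
        # any importable module works as a demo
        print(generate_test_stubs("json"))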

In the long run the plan is for the tool to support working in the other direction, from requirements to tests. Hopefully, with better tool support and more intelligent interaction with the system under test, it should become possible for the architects to specify the requirements and for the tool to verify that the code meets them.
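
To make the "generators instead of hand-picked vectors" idea concrete, here's roughly the shape I have in mind, written with Python's Hypothesis library (a QuickCheck descendant); dedupe and its property are an invented example, not part of any real tool:

    from hypothesis import given, strategies as st

    def dedupe(items):
        # toy implementation standing in for the system under test
        return list(dict.fromkeys(items))

    # The requirement is stated once, as a property over generated inputs,
    # rather than as a pile of hand-picked test vectors.
    @given(st.lists(st.integers()))
    def test_dedupe_removes_duplicates_and_keeps_order(xs):
        out = dedupe(xs)
        assert len(out) == len(set(xs))        # no duplicates survive
        assert all(x in xs for x in out)       # nothing invented
        # first occurrences keep their relative order
        assert out == [x for i, x in enumerate(xs) if x not in xs[:i]]

Run it under pytest and Hypothesis generates the input lists itself, then shrinks any failing case down to a minimal counterexample.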

Yes, divorcing tests from code could mean that different people write them. It doesn't have to be the case, but it becomes a possibility. And that means they could have a different perspective on the operation of the system, but not necessarily a worse one. If it's the architects or BAs writing the tests, they might actually have more information about how the system should be working than the programmers, especially if the programmers are outsourced. At that point, allowing someone else to write the tests is an improvement. When developers write the tests, it doesn't help if they use the same incorrect understanding of the requirements for both the tests and the code.

Hopefully, a tool that supports rapidly filling in tests based on code analysis (which would help anyone who doesn't know much about the actual code base match it up with the requirements they have) will reduce boilerplate and the barriers to testing, and make testing a much easier part of development. Maybe if it gets easy enough, developers will find that testing saves them enough time to be worth the few seconds spent specifying test vectors for each method. And if it can do a good enough job of turning requirements into tests, in a way that's clear enough to double as documentation, it should save the architects and BAs enough time, and make implementation easy enough for developers, that I might actually be able to sell licenses :P



2 points by akkartik 4310 days ago

"If it's the architects or BAs writing the tests, then they might actually have _more_ information about how the system should be working than the programmers,"

Oh, absolutely. I didn't mean to sound anti-non-programmer.

I tend to distrust labels like 'architect' and 'programmer'. Really, there are only two kinds of people who can work on something: those who are paid to work on it, and those who have some other (usually richer and more nuanced) motivation to do so. When I said, "Organizations which separate those two functions tend to gradually suck", I was implicitly assuming both sides were paid employees.

If non-employees (I'll call them, oh I don't know, owners) contribute towards building a program, the result is always superior, regardless of how they contribute. They're just more engaged, more aware of details, faster to react to changes in requirements (which always change). Your idea sounds awesome because it helps them be more engaged.

But when it's all paid employees and you separate them into testers and devs, then a peculiar phenomenon occurs. The engineers throw half-baked crap over to testers because, well, it's not their job to test. And test engineers throw releases back at the first sign of problems because, well, it's not their job to actually do anything constructive. A lot of shuffling back and forth happens, and both velocity and safety suffer, because nobody cares about the big picture of the product anymore.

(Agh, this is not very clear. I spend a lot of time thinking about large organizations. Another analogous example involves the patent office: http://akkartik.name/blog/2010-12-19-18-19-59-soc. Perhaps that'll help triangulate on where I'm coming from.)

(BTW, I've always wondered: what's that cryptic string in your profile?)
