Last week I conducted a little survey. I would now like to present and discuss the results here.
In total, 17 people shared their opinions. Thank you for taking the time. Of course this cannot lead to statistically sound statements, but it can show tendencies, opinions, and food for thought.
Do you use TDD in your current project?
More than three quarters said “yes, they do”. The comments on the earlier blog post suggest, though, that TDD does not mean the same thing to everybody. An additional blog post on that topic could be helpful.
Those who said they use TDD do so with varying levels of experience: three participants started using TDD less than a year ago, four have worked with TDD for over three years, and the rest fall in between. The reasons given for not using TDD were interesting:
I’m extending others’ code, which would require a complete rewrite to use tests.
Perceived time pressure
Could not convince the managers to adopt TDD and people could not get the concept [...]. They felt it’s an uncalculated risk which can have major repercussions as there are no proven metrics supporting the TDD methodology.
What programming languages do you use in your current project?
Few surprises here, except maybe for the large Ruby fraction:
Have you found (if you use TDD) or would you expect TDD to have an impact on
The question was what impact the use of TDD has. While the impact of TDD on the number of bugs, design quality, and even enjoyment of work life was considered positive or very positive, opinions differed with respect to the effort per feature.
How would you rate TDD compared to other techniques?
Participants were asked to rate various software development techniques. The diagram below shows the average position and the variance of the results, which I find very interesting. Ahead of TDD, participants ranked “Clean Code” as even more important. Then comes TDD, closely followed by Domain-Driven Design. Who would have thought?
In the midfield, the contenders are close together and the variance is high: Pair Programming, ATDD, and good comments and documentation can be considered equally important, in the opinion of the participants. Static code analysis and metrics fall off a little, and the clear loser is UML. Sure, nobody needs it, right? Besides, the tools … can be improved.
What’s your position on the statement that code that results from TDD is primarily designed for testability, but not necessarily for maintainability?
Finally, I threw in a statement for discussion and harvested some very good positions. Let’s continue the discussion in the comments section below.
The “Pro” side can be summarized as follows: TDD alone is not enough. You can write good and bad code with TDD. The main reasons are the refactoring phase and design principles like SOLID and DRY. Whoever stops after the tests are green is doing only half the work. Another point in the discussion: the effort for the second half, the refactoring, seems disproportionately high.
Good test-driven code is both testable and maintainable. But I can also write bad (unmaintainable) code in a test-driven way; that’s the problem.
TDD code, when written well, aids maintenance. Test code, when written badly, hinders maintenance (that’s why test-after tests are generally a bad idea).
TDD paired with a good understanding of refactoring, the SOLID principles, YAGNI, DRY, etc. results in some of the most beautiful, reusable, and highly maintainable code a developer can write.
I agree with the statement 100%. I often find that developers are writing code specifically to make it testable, which leads to poor design and poor maintainability. Also, writing tests consumes so much additional development time that very little effort is spent on the refactoring steps, which again leads to poor design. Personally I would be interested to know what sort of impact code coverage targets have on the TDD process. Are developers simply writing tests for coverage, or are they actually creating meaningful tests? We have a 90% target at my current shop and I have found that there is a ton of copy/paste coding going on here simply because the developers know there is already test code written that will cover the code.
True, but maintainability is a side effect.
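To make the “half the work” point concrete, here is a minimal sketch of the full red/green/refactor cycle in Python. The `word_count` function and its test are hypothetical illustrations, not something from the survey:

```python
# Step 1 (red): the test is written first and initially fails,
# because word_count does not exist yet.
def test_word_count():
    assert word_count("Tea tea coffee") == {"tea": 2, "coffee": 1}
    assert word_count("") == {}

# Step 2 (green): the simplest implementation that makes the test pass.
def word_count(text):
    """Count occurrences of each word, case-insensitively."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Step 3 (refactor): with the test green, the code can now be cleaned up
# (e.g. replaced by collections.Counter) without fear of breaking it --
# the "second half" that, according to several participants, is skipped
# too often.

test_word_count()
```

Stopping after step 2 is exactly the “half the work” the pro side complains about: the tests pass, but the design has not yet been improved.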
When maintaining a system written using TDD, the tests act as active living documentation on what important features the code actually has, as opposed to side-effects that aren’t required. Without TDD you have to rely on comments and documentation, which often become stale and outdated. In fact, that is why I ranked clean code lower than TDD, DDD, and Pair Programming. In my experience, these three practices heavily drive Clean Code.
If, for example, you refactor a system that was not written with TDD and is not fully tested, how do you know whether you broke an existing assumption?
If, instead, you refactor a system that was written with TDD, you can determine what assumptions you may have broken, and choose to restore them or amend them. Often tests will fail as a part of refactoring, but knowing why the tests failed allows you to anticipate otherwise-unexpected system behaviour.
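As a small sketch of that safety net (the `parse_price` function is a hypothetical example, not from the survey): a test written during TDD pins down an assumption, and a later refactoring that breaks it fails loudly instead of silently:

```python
# A test written during TDD acts as living documentation of an assumption
# the system relies on: prices are parsed into integer cents and are
# never negative.
def parse_price(text):
    """Parse a price like '12.50' into integer cents."""
    euros, _, cents = text.partition(".")
    value = int(euros) * 100 + int(cents or 0)
    if value < 0:
        raise ValueError("negative prices are not allowed")
    return value

def test_parse_price_assumptions():
    assert parse_price("12.50") == 1250
    assert parse_price("7") == 700
    # If a refactoring later accepts negative prices, this test fails and
    # names the exact assumption that was broken.
    try:
        parse_price("-1.00")
    except ValueError:
        pass
    else:
        raise AssertionError("negative prices must be rejected")

test_parse_price_assumptions()
```

Without such a test, the refactored system could silently start accepting negative prices, and nobody would know which assumption had been lost.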
Don’t agree. TDD often leads to very clean and maintainable code.
Testable code usually has fewer and looser dependencies; if done in a top-down manner, it is also simpler. All of these directly contribute to maintainability. Some people treat test code as “second-class” code that does not need to be DRY or refactored. I think it is from such behavior that the misconceptions about TDD stem.
What’s your take on that statement? Does TDD primarily create testable code, and not necessarily maintainable code? Discuss!