> By definition, going through TDD is going to be a safe route, ...
Why is TDD safe? Most TDD advocates seem to be blind to the fact that testing is a terrible way to prove many important properties about software systems, security properties for example. If you're betting your ever-so-scarce programming resources on TDD, you're probably paying too much, getting a lower return than you could be getting, and leaving some serious holes in your software. As I wrote in [1]:
If all you know about getting your code right is TDD, you’ll never bet on types or proofs or constructive correctness because you don’t know how to place those bets. But those bets are often dirt cheap and pay in spades. If you’re not betting on them at least some of the time, whatever you are betting on probably costs more and pays less. You could be doing better.
So I don't think that TDD is a "safe" bet. I think it's an expensive bet that has relatively poor payoffs.
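To make the contrast concrete: an example-based TDD suite checks a handful of hand-picked inputs, while a property check asserts something about _all_ inputs it can generate. Here's a minimal, hand-rolled sketch in Ruby (the thread's language, via RSpec) checking a security-relevant property of the standard library's `CGI.escapeHTML`; it's an illustration of property checking, not a proof, and the 1000-sample loop is an arbitrary choice:

```ruby
require "cgi"

# Property: for ANY input string, the escaped output contains no raw
# '<' or '>' characters, so it can't open an HTML tag. An example-based
# test suite would only check this for the few inputs someone thought of.
1000.times do
  s = Array.new(rand(0..20)) { rand(32..126).chr }.join  # random printable string
  out = CGI.escapeHTML(s)
  raise "unescaped angle bracket in #{out.inspect}" if out =~ /[<>]/
end
```

Randomized property checks like this still only sample the input space; types and proofs go further by ruling out entire classes of inputs statically, which is the kind of bet the quoted passage is talking about.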
The safety of TDD comes from having a test suite that you trust. Given that suite, you can safely refactor the code. If you can refactor safely, you can improve the design safely. If you can improve the design, you can stop the inevitable slowdown that comes from making a mess.
What is the risk of _not_ doing TDD? The risk is that slowdown. We've all experienced it. What is the cost of TDD? You'd like to say that it takes time; but since the risk is a slowdown, the net is positive no matter how you look at it.
That's the irony of all these complaints. They assume, and sometimes they simply state, that TDD slows you down. And yet, the primary effect of TDD is to speed you up, and speed you up a lot.
Some folks suggest that it's a short-term slowdown for a long-term speedup. But in my experience the short-term is measured in minutes. Yes, it might take you a few extra minutes to write that test first; but by the end of the day you've refactored and cleaned the code so much that you've gone much faster _that day_ than you would have without TDD.
The benefits that you attribute to TDD are not exclusive to TDD. They are the benefits of having well-tested code, and TDD is only one of the ways to get there.
The problem with the TDD way of getting there, however, is that it's expensive: It makes programmers see their code through the pinhole of one failing test at a time, blinding them to larger concerns, which are important. As a result, a lot of avoidably crappy code gets written at first and then must be reworked later, when its flaws are finally allowed to come into view.
If you're a new programmer who hasn't learned how to reason about larger units of logic and the relationships between them, maybe that pinhole restriction proves helpful. But for more seasoned programmers, it's constraining and wasteful.
The idea that TDD involves some kind of blind faith that the tests will generate grand designs and beautiful code is both silly and wrong. You are right about that. Good design and good code require skill, thought, and knowledge irrespective of whether you are using TDD. So as I practice the discipline, I am thinking all the time about larger scale issues, and I am _not_ being blinded to those concerns.
However, the act of writing tests first has a powerful benefit: the code you write _must_ be testable. It is hard to overstate this benefit. It forces a level of decoupling that most programmers, even very experienced programmers, would not otherwise engage in.
It also has a psychological impact on the programmer. If every line of production code you write is in response to a failing test, you will _trust_ your test suite. And when you trust your test suite, you can make fearless changes to the code on a whim. You can _clean_ it and improve the design without trepidation.
Gaining these benefits without writing tests first is possible, but much less reliable. And yet the cost of writing the tests first is no greater than writing the tests second.
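The decoupling claim above is easy to see in code. Here's a minimal Ruby sketch (the class and method names are hypothetical, invented for illustration): a hard-wired collaborator makes a class untestable without a real network, while an injected one lets a trivial fake stand in.

```ruby
# Hard to test: the SMTP collaborator is hard-wired inside the method,
# so exercising send_report means making a real network call.
class ReportSenderCoupled
  def send_report(report)
    SmtpMailer.new("smtp.example.com").deliver(report)
  end
end

# Testable: the collaborator is injected, so a test can pass a fake.
# This is exactly the decoupling that writing the test first forces.
class ReportSender
  def initialize(mailer)
    @mailer = mailer
  end

  def send_report(report)
    @mailer.deliver(report)
  end
end

# A trivial fake that records deliveries instead of sending them.
class FakeMailer
  attr_reader :delivered

  def initialize
    @delivered = []
  end

  def deliver(report)
    @delivered << report
  end
end

fake = FakeMailer.new
ReportSender.new(fake).send_report("quarterly numbers")
raise "not delivered" unless fake.delivered == ["quarterly numbers"]
```

Whether you arrive at this shape via test-first or test-after, the point stands: the injected version is the one you end up with once testing is non-negotiable.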
> However, the act of writing tests first has a powerful benefit: the code you write _must_ be testable.
No, the act of writing well-tested code at all has that benefit. Whether you write the tests before or after the code, one at a time or in module-sized groups, writing code that's hard to test has immediate and obvious penalties when you test it (e.g., tedious rework), and you'll quickly learn to avoid those penalties. So just having the discipline to write well-tested code at all forces you to write code that's not only testable but easily testable. This benefit is not unique to TDD.
> It also has a psychological impact on the programmer. If every line of production code you write is in response to a failing test, you will _trust_ your test suite.
It's not enough to trust that your tests actually test your code. You also need to trust that your tests express your desired semantics. And that's harder to do when the semantics is not designed in whatever form and grouping is most natural to its representation but rather is extruded, one test at a time, through the pinhole view that TDD imposes upon programmers.
> And yet the cost of writing the tests first is no greater than writing the tests second.
What you seem to be overlooking is that TDD not only forces you to write tests first but also in tiny baby-steps that cause programmers to focus only on satisfying one test at a time. As a result, the initial code that is written satisfies only a small portion of the system's overall semantics (the portion that's been expressed as tests so far), and a lot of that code ends up having to be reworked when later tests finally uncover other requirements that affect it. This leads to rework that would have been avoidable had the programmers not been blinded to those requirements earlier on.
The problem with TDD isn't so much that it's test first but that it promotes a pinhole view of subjects that are not narrow.
"However, the act of writing tests first has a powerful benefit: the code you write _must_ be testable. It is hard to overstate this benefit. It forces a level of decoupling that most programmers, even very experienced programmers, would not otherwise engage in."
I'm sorry, but I really find that sentiment to be totally inaccurate. I've never seen a good, experienced programmer write hard-to-test or coupled code, regardless of whether they were using TDD. A hallmark of what makes them good is that they all have some testing methodology that enforces this and works for them. TDD is one, but there are many others (and yes, that includes good manual-only testing).
I also don't see why you believe TDD is the only way to successfully refactor code, or that only developers who use TDD continually refactor their code to eliminate technical debt and increase productivity. Again, every good programmer does this. TDD is one way to get there. It is not the only way.
It's Test Driven Development - not Proof Of Correctness.
The worst thing about TDD having 'test' in the name is that people think it is about preventing bugs, or catching them.
It is not.
It is about enabling easy refactoring. The other stuff - preventing regressions, catching bugs, etc. is gravy.
Just curious: If you believe that TDD isn't supposed to help you write code that does what it's supposed to do, what do you do in addition to TDD to make sure that your code actually works correctly?
I write a specification (using RSpec) of some behavior. It fails.
Then I write code to make that specification pass.
Now the code is working correctly, as I have defined it.
Then I refactor, using my specs (tests) as a safety net to ensure that everything after the refactor still works as intended.
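The loop described above can be sketched in plain Ruby. This uses Minitest (in Ruby's standard library) rather than RSpec, and the `Stack` class is a hypothetical example, not something from this thread; the spec below is what gets written first and fails, and the production class is written only to make it pass.

```ruby
require "minitest/autorun"

# Step 2 (green): production code written only to make the spec pass.
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
  end

  def pop
    raise "stack is empty" if @items.empty?
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

# Step 1 (red): the specification, written first. Before Stack exists,
# every one of these fails; afterward they double as the refactoring
# safety net described above.
class StackSpec < Minitest::Test
  def test_new_stack_is_empty
    assert Stack.new.empty?
  end

  def test_pop_returns_last_pushed_item
    s = Stack.new
    s.push(1)
    s.push(2)
    assert_equal 2, s.pop
  end

  def test_pop_on_empty_stack_raises
    assert_raises(RuntimeError) { Stack.new.pop }
  end
end
```

Step 3 (refactor) would then rework the internals of `Stack` with the spec left untouched, which is what makes the refactoring safe.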
This is a _VERY_ different approach than coming up with some solution in my head, implementing it (most likely with bugs), and then using tests to find and eliminate as many bugs as possible (but usually not all of them).
Any errors that make it through to a commit, when I am doing TDD, are errors in how I have specified (or failed to specify) the behavior. Any errors in the design or implementation of the solution are caught by building to a spec in very small steps.
That's the key difference between properly done BDD/TDD and other testing. Writing the tests prevents bugs instead of catching them, and it ensures that behavior does not change after refactoring.
It may be a subtle distinction, but in practice it makes a huge impact.
1. What do you do about security? How, for example, do you make sure you don't introduce XSS vulnerabilities into your code? To use your words, how does "writing the tests prevent bugs" when we're talking about bugs that create XSS vectors?
2. Don't you think you're paying a penalty by defining and implementing your system's semantics through the pinhole-sized view of one failing test at a time? That is, why wouldn't you be better off defining the semantics in whatever-sized units make the most sense, not necessarily one spec's worth at a time, and then deriving your tests and implementation from the semantics accordingly?
[1] http://blog.moertel.com/posts/2012-04-15-test-like-youre-bet...