I think what Kent is saying there is that he knows which tests he can skip writing because they provide little value. If you're as experienced and knowledgeable as Kent then that's probably true.
For the rest of us it's better to write too many tests than too few. You might waste some time, but that's a smaller problem than accidentally skipping a test that was actually useful and would have pushed your code in a better direction than you'd have gone without it.
> For the rest of us it's better to write too many tests than too few. You might waste some time
That's a very bad attitude that is all but enforced by TDD proponents.
It's very easy to see which tests are useless and which are not. So you end up with people writing thousands of unit tests and little-to-no integration or functional tests because:
- "TDD told me so", and
- Testing frameworks are written by people who adhere to the same philosophy
Meanwhile those "too many tests" provide very little utility while giving you a false sense of security.
> It's very easy to see which tests are useless and which are not.
For trivial things, maybe, but then people start applying the 'rule' to non-trivial things "because it's easy" and they get it wrong. All I'm saying is that until you are a Kent Beck level expert it's safer to err on the side of caution. When you have a ton of experience and knowledge you can do what you like.
Also note that I said it's better to write too many tests than too few - that still doesn't mean you should test everything, or that you have to test trivial things. It just means that when you're not 100% sure, it's safer to write a test.
> For trivial things, maybe, but then people start applying the 'rule' to non-trivial things "because it's easy" and they get it wrong. All I'm saying is that until you are a Kent Beck level expert it's safer
Most of the stuff we do is trivial [1]. Unfortunately no one really teaches testing or shows what needs to be tested. Hence the prevalent "write hundreds of tests, perhaps some of them will actually be useful". And most available examples and most available advice amount to exactly that: write many, many useless tests.
I have a perfect example from the Java + Spring world. It's common to have a Controller + Service + Facade + external service client definition.
So I've seen, countless times, tests written as follows:
- external service client is mocked, multiple unit tests for the Facade to make sure it returns data
- Facade is mocked, multiple unit tests for Service, to make sure data is returned
- Service is mocked, multiple unit tests for Controller, to make sure data is returned
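To make the duplication concrete, here's a minimal sketch of one of those pass-through unit tests (all class names are invented for illustration, and hand-rolled stubs stand in for Mockito). Each layer's test ends up asserting the same thing: that data flows through unchanged.

```java
// Hypothetical layered structure: Service delegates straight to Facade.
class Facade {
    String getUser(String id) { return "real data"; }
}

class Service {
    private final Facade facade;
    Service(Facade facade) { this.facade = facade; }
    String getUser(String id) { return facade.getUser(id); }
}

public class ServiceTest {
    public static void main(String[] args) {
        // "Mock" the Facade with a stub returning canned data.
        Facade stub = new Facade() {
            @Override String getUser(String id) { return "stubbed user"; }
        };
        // The test only proves the Service forwards the stub's value --
        // the Facade test and the Controller test assert the same pass-through.
        System.out.println(new Service(stub).getUser("42"));
    }
}
```

Repeat this once per layer and you have four suites that all verify the same forwarding, none of which would catch a real failure at the external boundary.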
They are all the same tests, and can be easily deleted and replaced by a single suite of:
- external service is mocked, including invalid responses and timeouts. The actual REST service provided by the app is tested for all the scenarios that were quadruplicated in the unit tests above
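A minimal sketch of that single suite, again with invented names and plain stubs rather than Spring's test machinery: only the external boundary is faked, and the whole Controller -> Service -> Facade chain is exercised in one go, including a failure scenario.

```java
// The only seam that gets stubbed: the external service client.
interface ExternalClient {
    String fetchUser(String id) throws Exception;
}

class UserFacade {
    private final ExternalClient client;
    UserFacade(ExternalClient client) { this.client = client; }
    String getUser(String id) throws Exception { return client.fetchUser(id); }
}

class UserService {
    private final UserFacade facade;
    UserService(UserFacade facade) { this.facade = facade; }
    String getUser(String id) throws Exception { return facade.getUser(id); }
}

class UserController {
    private final UserService service;
    UserController(UserService service) { this.service = service; }
    String handleGet(String id) {
        try {
            return "200 " + service.getUser(id);
        } catch (Exception e) {
            return "502 upstream error";
        }
    }
}

public class FullStackTest {
    public static void main(String[] args) throws Exception {
        // Happy path: a stub external client returns valid data,
        // and the response travels through every real layer.
        UserController ok = new UserController(
                new UserService(new UserFacade(id -> "alice")));
        System.out.println(ok.handleGet("1")); // 200 alice

        // Failure path: the stub simulates an upstream timeout,
        // verifying the error handling of the whole stack at once.
        UserController broken = new UserController(
                new UserService(new UserFacade(id -> {
                    throw new Exception("timeout");
                })));
        System.out.println(broken.handleGet("1")); // 502 upstream error
    }
}
```

In a real Spring app the same idea is usually expressed with something like MockMvc plus a stubbed HTTP boundary, but the structure is identical: one suite, real layers, fake edge.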
You don't need to be a "Kent Beck level expert" to do that. However, almost literally nothing teaches you to do that or helps you write those tests. Almost literally everything is hardwired to write small useless unit tests.
[1] Except UIs. I have no idea how to test UIs, and I don't think anyone does :D