I would say something like 95% of the code I have been paid to write as a software engineer has 0% test coverage. Like, literally, not a single test on the entire project. Across many different companies and several countries, frontend and backend.
I wonder if I'm an anomaly, or if it's actually more common that one might assume?
Once you realize that automated testing gives you a level of confidence throughout iteration that you can't replicate through manual interaction (nor would you want to), you never go back.
It's just a matter of economics. Where the cost of bugs in production is low, which is probably the vast majority of the software out there, extensive test coverage simply doesn't make economic sense. Something breaks in some niche app, maybe someone is bothered enough to complain about it, it gets fixed at some point, and everybody moves on.
Where the costs are high, say in safety-critical software, or at large companies with highly paid engineers on call where 9s of uptime matter, the amount of testing and development rigor naturally scales up.
This is why rigid stances like that from "Uncle Bob" are shortsighted: they have no awareness of the actual economics of things.
Way more common. Tests are at best overrated, and doing them properly is a big PITA. For one thing, the person writing the tests and the person writing the code should be different people. And our languages are not really suited to the requirements of testing. Tests can and do save your ass in certain situations, but the false security they provide is probably more dangerous.
This sounds very "the perfect is the enemy of the good". Tests don't need to be perfect, they don't need to be written by different people (!!!), and they don't need to cover 100% of the code. As long as they're not flaky (tests which fail randomly really can be worse than nothing), it really helps in development and maintenance to have some tests. It's really nice when the (frequent) mistakes I make show up on my machine or on the CI server rather than in production, and my (very imperfect, not 100% "done properly") tests account for a lot of those catches.
Obviously pragmatism is always important and no advice applies to 100% of features/projects/people/companies. Sometimes the test is more trouble to write than it's worth, and TDD never worked for me except for specific types of work (it's good when writing parsers, I find!).
From my experience though, I often make logical errors in my code but not in my tests, and I frequently catch errors because of this. I think that's a fairly normal experience with writing automated tests.
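To make that concrete, here's a minimal sketch (pytest-style, with hypothetical names) of the kind of small parser test I mean, where a couple of edge-case assertions are what usually flush out the logical errors:

    # Minimal pytest-style sketch (hypothetical names): a tiny parser plus the
    # edge-case tests that tend to flush out logical errors before production.

    def parse_pairs(s: str) -> dict:
        """Parse "a=1,b=2" into {"a": "1", "b": "2"}."""
        if not s:
            return {}  # easy to forget this case; the test below would catch it
        return dict(part.split("=", 1) for part in s.split(","))

    def test_parse_pairs_edge_cases():
        assert parse_pairs("") == {}                                # empty input
        assert parse_pairs("a=1") == {"a": "1"}                     # single pair
        assert parse_pairs("a=1,b=2=3") == {"a": "1", "b": "2=3"}   # '=' inside a value

Nothing fancy, but a test like this runs on every change, which a manual check never does.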
Would having someone else write the tests catch more logical errors? Very possibly; I haven't tried it, but it sounds reasonable. It also seems like that (and the other things it implies) would drastically change the speed of development. I can see it being worth it in some situations, but honestly I don't see it as practical for many types of projects.
What I don't understand is saying "well, we can't do the really extremely hard version, so let's not do the fairly easy version", which is how I took your original comment.