
I think Uncle Bob is way off the mark here.

In Clean Code he hammered home the idea that unit tests form an automated specification of the software system under development. That was a decent idea at the time. So why on earth would he tell people not to use type-safe languages or tools for defining better, formal specifications?
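To make that concrete, here's a minimal TypeScript sketch (the names and the naive regex are mine, purely illustrative) of how a type can act as a specification the compiler checks at every call site, whereas a unit test only checks the examples you thought to write:

    // A "branded" type makes "validated email" a distinct type, so
    // unvalidated strings can't reach sendMail. A unit test checks
    // the samples you thought of; this is enforced everywhere.
    type Email = string & { readonly __brand: "Email" };

    function parseEmail(raw: string): Email | null {
      // Deliberately naive check, just for illustration.
      return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(raw) ? (raw as Email) : null;
    }

    function sendMail(to: Email, body: string): void {
      // `to` is guaranteed to have passed validation.
      console.log(`sending to ${to}: ${body}`);
    }

    const email = parseEmail("alice@example.com");
    if (email !== null) sendMail(email, "hello");
    // sendMail("raw string", "hi"); // compile error: string is not Email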

Probably because he has something to sell?

We need better tools, processes, and "disciplined" methods for writing software if we're going to tackle complexity and produce systems that can be considered robust, reliable, and safe. Unit tests alone are never going to cut it. There's a reason Microsoft Research has been investing heavily in formal methods: imagine if the only specification for CosmosDB were its unit test suite. It would have harbored critical, data-losing errors for years to come.

Bob's post on discipline reads to me like the snake oil used to sell Christian religions: you were born sick, but if you believe in me, only I can make you whole. What a crock. There are plenty of smart, capable people out there writing perfectly good software. The problem in the industry isn't that developers aren't writing unit tests the way Bob Martin prescribes: it's that a great deal of hand-waving happens instead of following the state of the art and establishing processes for developing robust, reliable, and safe code.

The aerospace, space, and safety-critical systems disciplines have been making huge investments in tools and languages that make the complexity of their requirements manageable and the resulting systems correct and easier to reason about. TLA+ is only one such tool, and I think Hillel is right to point that out: you need more than one tool. Your entire process has to be built around avoiding errors.
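For anyone who hasn't watched a model checker work, here's a toy sketch, in TypeScript rather than TLA+, of the core idea behind a checker like TLC: enumerate every reachable state of a small system and test an invariant in each one. The system modeled here is the classic lost-update race, two processes incrementing a shared counter non-atomically:

    // Toy illustration (not TLA+ itself) of what a model checker
    // does: breadth-first exploration of every reachable state,
    // checking an invariant in each. Two processes each run:
    //   local = shared; shared = local + 1;
    type State = {
      shared: number;
      pc: [number, number];    // per-process program counter
      local: [number, number]; // per-process local register
    };

    const key = (s: State) => JSON.stringify(s);

    function step(s: State, p: 0 | 1): State | null {
      const pc = [...s.pc] as [number, number];
      const local = [...s.local] as [number, number];
      let shared = s.shared;
      switch (s.pc[p]) {
        case 0: local[p] = shared; pc[p] = 1; break;     // read
        case 1: shared = local[p] + 1; pc[p] = 2; break; // write
        default: return null;                            // done
      }
      return { shared, pc, local };
    }

    const init: State = { shared: 0, pc: [0, 0], local: [0, 0] };
    const seen = new Set<string>([key(init)]);
    const queue: State[] = [init];
    let violations = 0;

    while (queue.length > 0) {
      const s = queue.shift()!;
      // Invariant: when both processes finish, shared === 2.
      if (s.pc[0] === 2 && s.pc[1] === 2 && s.shared !== 2) violations++;
      for (const p of [0, 1] as const) {
        const next = step(s, p);
        if (next !== null && !seen.has(key(next))) {
          seen.add(key(next));
          queue.push(next);
        }
      }
    }

    console.log(`explored ${seen.size} states, ${violations} violation(s)`);

Run it and it reports states where both processes have finished but the counter is 1, because the read/read/write/write interleaving loses an update. A specification plus exhaustive exploration surfaces that before any production code exists.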

There's a reason engineers who design roads stopped calling vehicle collisions "accidents." Once you start looking for the real root of the problem, the system itself, you start optimizing the design for different goals in order to reduce negative outcomes. Vehicle collisions still happen because of human error, but a great many more happen because the system enabled them: the roads, the vehicles, the by-laws.

In a similar fashion, errors in software systems are enabled by the choices we make: stakeholders, deadlines, requirements, time to market, and so on. The ISO/IEC/IEEE 29148 guidelines on requirements engineering encourage you to weigh these factors as part of your specifications. If a failure of your system would only cause a nuisance or interfere with the business's goals, the risk is quite low and you might prioritize time to market. But if a failure could harm people, say by losing their personal information and putting them (and their insurers) at greater risk, then you need to build risk avoidance into your processes: use more formal specifications, use statically typed languages with sound type systems, write property-based as well as declarative tests, and so on.
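On the property-based testing point, here's what that looks like with fast-check (any property-based library will do; the double-reverse property is just the standard demo):

    import fc from "fast-check";

    // Property: reversing an array twice yields the original array.
    // The framework generates hundreds of random inputs and, on a
    // failure, shrinks it to a minimal counterexample.
    fc.assert(
      fc.property(fc.array(fc.integer()), (xs) => {
        const twice = [...xs].reverse().reverse();
        return JSON.stringify(twice) === JSON.stringify(xs);
      })
    );

One hand-written example can't tell you much; a property holds over the whole input space the generator can reach.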

If you start treating software errors the way we now treat auto collisions, you'll start to see where you can reduce the negative outcomes. Stop calling software errors "bugs," just as we stopped calling collisions "accidents." See errors as risk, look for ways to reduce that risk, and know where some risk is acceptable.

Uncle Bob is only calling programmers "undisciplined" because he has a few more books to sell to help them get better.



Well, that's not the _only_ reason. ;-)


Hah!

I have read your books and used some of that material to train several teams over the years to practice test-driven development. So thank you for the inspiration and drive.



