
It is intractable in practice since there's no outside force putting pressure to do so. Actual engineering disciplines seem to manage to have such standards, and it's in part because there are real legal and monetary liabilities at stake, unlike in software.


What do you mean by such standards? What would example ones be for software?


Easy. Runtime / performance. Peak memory consumption. Correct behaviour for injected faults, latency, or invalid input. Fault free operation under simulation. The list goes on.
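Standards like these can be written down as automated checks. A minimal sketch in Python; the latency budget and the toy `parse_record` function are made up for illustration, not taken from any real standard:

```python
import time

def parse_record(raw: str) -> dict:
    """Toy parser standing in for the code under test."""
    key, _, value = raw.partition("=")
    if not key or not value:
        raise ValueError(f"invalid input: {raw!r}")
    return {key: value}

# Correct behaviour for invalid input: reject it loudly, don't crash or guess.
try:
    parse_record("no-equals-sign")
except ValueError:
    pass  # expected

# Latency budget: mean call time must stay under an agreed threshold.
start = time.perf_counter()
for _ in range(10_000):
    parse_record("host=example.org")
elapsed = time.perf_counter() - start
assert elapsed / 10_000 < 0.001, "latency budget exceeded"
```

The point is only that each item on the list can become a pass/fail gate rather than an opinion.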


There's not much about the user here. We build tools for human beings. Ideally, our goal is to improve users' lives (btw, if you believe your work makes people's lives worse, maybe you should quit and do something else). Like any engineer, actually.

The technical stuff you mention can be important (I'm the first to think we should build efficient software), but it is often a small part of what matters. It may be that the vast majority of what matters is not something you measure like this (unless you do user studies, which you should, depending on the task at hand). Rather, your work is to understand what the user really needs / wants (which can differ from what they express, and it's your job to detect that), and to provide a reasonable solution with compromises with respect to means, time, budget, and what your team will like. There are few measurements involved, but the most important decisions may come from this.

If we make the users happy with this largely measure-free process, the job is well done, isn't it?

Now, maybe that proves a developer is mostly not an engineer, and I would actually be okay with that :-)


Engineers build cars and buildings for humans too. But before we worry about ergonomics and how good they make people feel, we focus on cars not exploding and bridges and buildings not falling down.

We haven't even gotten to that level with software, forget improving people's lives.


Note that I was responding to a list of measurements software developers could make. You seem to be arguing about reliability. I have fewer objections there.

The software equivalent of your car exploding or your building falling down would be bugs. You don't usually avoid bugs by measuring things; you avoid them through robust processes (including testing). I assume that's also how engineered structures are kept from collapsing.

Now, software does often improve people's lives despite the bugs, and most bugs are not life-threatening. Accepting less reliability in software than in most engineered products, in exchange for cheaper production, is apparently a common (and most likely reasonable) tradeoff we are willing to make. That's usually not the case for safety-critical software, by the way.

We haven't gotten to that level of reliability in software in general, and we probably never will, because in most cases, it's not worth the cost. It's likely not a question of rigor or maturity of the field: the human species knows how to produce reliable software when it decides it's worth the cost.


> Note that I was answering to a list of measurements software developers could make. You seem to argue on the topic of reliability.

Those are not distinct categories. I was making the point that many of the things you mentioned referred to how well they improve the lives of users. These things are analogous to ergonomics, comfort, aesthetic requirements for many objects. I'm saying that those concerns come after you have built something that does not fail often.

> and most bugs are not life threatening.

Neither are most physical defects in many products, but they are still taken care of before being pushed out the door. Tesla may wish to use its users as paid beta testers, but most automakers actually test their stuff before selling it.

> Less reliability in software compared to many engineered stuff for cheaper production is apparently a common (and most certainly reasonable) tradeoff we are willing to make.

This is a tradeoff made by the companies, not by the people using the software. I am forced to use Teams at work regardless of how shitty a piece of software it is. Most software companies simply don't face the kind of competition required to make them produce good software. Why spend engineering dollars on improving Teams when you can bundle it and have employers force it on their employees?


> Runtime / performance

This is notoriously difficult to quantify even in the simplest of examples. How do you even express goals here? What if the software is targeting a variety of platforms?

> peak memory consumption

Is this software running on a server or a user's computer? What if it's cheaper to buy more RAM? How do you even measure this accurately when garbage collectors are so prevalent and there are performance wins from using more virtual memory?
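For what it's worth, the peak memory of a specific code path can be measured even under a garbage collector. A sketch using Python's stdlib `tracemalloc`; the list-building workload is an arbitrary stand-in:

```python
import tracemalloc

tracemalloc.start()
data = [i * 2 for i in range(100_000)]  # workload to profile
current, peak = tracemalloc.get_traced_memory()  # both in bytes
tracemalloc.stop()

# peak is the high-water mark of traced allocations; a memory
# budget could be enforced with a simple assertion against it.
assert peak >= current
print(f"peak traced memory: {peak / 1024:.0f} KiB")
```

It measures allocations seen by the Python allocator, not RSS, so it sidesteps some (not all) of the GC and virtual-memory ambiguity.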

> Correct behaviour for injected faults

A standard set of faults? What about combinations of faults?
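There's no standard set, but a common shape for such a check is: make a dependency raise on demand, then assert the caller degrades gracefully. A sketch where `fetch_config` and its fallback values are invented for the example:

```python
class InjectedFault(Exception):
    """Raised on demand to simulate a dependency failure."""

def fetch_config(fail: bool = False) -> dict:
    # Stand-in for an unreliable dependency (network, disk, ...).
    if fail:
        raise InjectedFault("simulated network failure")
    return {"timeout": 30}

def load_config_with_fallback(fail: bool = False) -> dict:
    # Correct behaviour under an injected fault: fall back to safe defaults.
    try:
        return fetch_config(fail=fail)
    except InjectedFault:
        return {"timeout": 10}

assert load_config_with_fallback(fail=False) == {"timeout": 30}
assert load_config_with_fallback(fail=True) == {"timeout": 10}
```

Combinations of faults do blow up the test matrix, which is exactly why tools in this space tend to sample fault combinations rather than enumerate them.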

I can go on….


* Time to execute

* Time to build/compile

* Resource cost at execution time

* Clock time to deliver from requirements gathering to feature completion

* Developer time to complete an effort in money

* Defect quantity

* Defect severity

* Training/learning time to ramp up a new technology or feature

* Cost to produce documentation. Industry experts are amazed that the $84 million cost to produce the Air Force One documentation is actually so astonishingly low

The bottom line is that measures are a form of evidence, which is a defense against arguments from bullshit. Developers typically measure absolutely nothing, so when it comes to software performance they tend to invent bullshit assumptions on the spot. They are wrong about 80% of the time, and when they are wrong, they are usually wrong by several orders of magnitude.



