As history has shown repeatedly, there is no secure software - just software that no one has yet discovered how to exploit widely and effectively.
Then why bother? I'm sorry, but where did this meek, defeatist attitude come from? It pervades software now. Sure, you're right, I guess I could get hit by a bus today, but that won't stop me from crossing the street, because there are a lot of things I can do to minimize my risk, like looking both ways, listening, and crossing at a signal. Software is similar. "Nothing means anything, all is chaos" might poll well on Reddit, but it's not good engineering.
Who says it’s defeatist? It’s realism. You might as well call it ‘defeatist’ to note that mild steel only has a 60-80kpsi yield strength.
That attitude enables practical risk management and effective engineering. Pretending that software can be perfectly secure, or that mild steel has infinite yield strength, does not.
There is no lock that can’t be picked either, which is why no one leaves millions in cash protected just by a lock without guards and a surveillance system. And why they insure large amounts of cash.
At this point it should be pretty obvious - don’t put important secrets on computers without a way to expire/revoke them. If it’s a secret that can’t be expired/revoked, think long and hard about whether you need it on a computer - and if you do, use a SCIF.
Monitor any connected computer systems for compromise. Use encryption extensively, preferably with hardware protection, because software is insecure, etc.
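To make the expire/revoke point concrete, here is a minimal sketch. Everything in it (the `Credential` struct, the `is_valid` check, the field names) is hypothetical and illustrative, not any real secrets-management API; the point is just that a secret should fail closed on either expiry or revocation.

```rust
use std::collections::HashSet;

// Hypothetical sketch - these names are illustrative, not a real API.
struct Credential {
    id: String,
    expires_at: u64, // e.g. seconds since the Unix epoch
}

// A credential is honored only while unexpired AND not on the revocation list.
fn is_valid(cred: &Credential, now: u64, revoked: &HashSet<String>) -> bool {
    now < cred.expires_at && !revoked.contains(&cred.id)
}

fn main() {
    let cred = Credential { id: "token-42".into(), expires_at: 1_000 };
    let mut revoked = HashSet::new();

    assert!(is_valid(&cred, 500, &revoked));    // fresh, not revoked
    assert!(!is_valid(&cred, 2_000, &revoked)); // expires on its own
    revoked.insert("token-42".into());
    assert!(!is_valid(&cred, 500, &revoked));   // revoked before expiry
    println!("ok");
}
```

The design choice being argued for is that the secret's usefulness to an attacker is time-boxed even if you never notice the compromise.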
Same with controlling dangerous equipment - don’t rely on pure software or someone will get killed. Use hardware interlocks. Use multiple systems with cross checking. Don’t connect it to the internet. Etc.
This is all industry best practice for decades now.
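The "multiple systems with cross checking" practice can be sketched in a few lines: only act when a majority of independent readings agree. This is a toy 2-out-of-3 voter, not a real safety system (real ones also use hardware interlocks, as the comment says), and the function name is made up for illustration.

```rust
// Toy sketch of cross-checking redundant systems: proceed only when
// at least 2 of 3 independent "safe" readings agree. A single faulty
// or compromised channel cannot by itself authorize a dangerous action.
fn two_of_three(readings: [bool; 3]) -> bool {
    readings.iter().filter(|&&safe| safe).count() >= 2
}

fn main() {
    assert!(two_of_three([true, true, false]));   // one faulty channel tolerated
    assert!(!two_of_three([true, false, false])); // majority says unsafe: stop
    println!("ok");
}
```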
Exactly. I don't believe the argument that some software, somewhere, at some point, could have some vague security flaw is usually good enough to justify not running the kinds of software most of us here work on. It's solipsistic, and honestly seems a little in bad faith.
But it's also moot: if you're that afraid of vague security threats, then just don't expose your software to the internet. It's not difficult.
The whole point, in context, was that exposing software to the internet is high risk no matter how secure you think it is, because no software stays secure given enough exposure.
Talk about exhausting bullshit. But then what can you expect from a green throwaway?
Haha, pot calling kettle black. I don’t need to do a damn thing different. Cars are still dangerous 100 years after they were invented, and the world still turns.
You’re the one trying to turn this into some kind of existential emergency. What are you going to do differently?
Nothing! That's my entire point! Because I'm not afraid of the internet, and I trust in my ability to secure the software I host. You're the one struggling with the fact that no software is a platonic ideal, while the rest of us still have jobs to do.
Because software is fun, and I get to work with cool things. There is a joy in programming in and of itself.
I guess your question doesn't make sense to me. Just because it will eventually be broken, does that automatically mean there's no value in software? I don't think that's true, it just probably means you should have an analog backup process if possible, especially for critical things like government services.
The main source of my confidence is extrapolation from the results of successful initiatives to improve security. Rust is one such initiative: at relatively low cost, it drastically improves the security of "systems software" (defined for our purposes as software in which the programmer needs more control over resources such as compute time and memory than is possible with automatic memory management). Another data point is how much Google managed to improve the security of desktop Linux with ChromeOS.
There's also the fact that even though Russia has enough money to employ many crackers, Starlink's web site continued operating as usual after Musk angered Russia by giving Starlink terminals to Ukraine -- and how little damage Russia has managed to do to Ukraine's computing infrastructure. (It is not credible to think that Russia has the ability to inflict devastating damage via cracking, but is reserving the capability for a more serious crisis: Russia considers the Ukrainian war to be extremely serious.)
Sufficiently well-funded organizations, with sufficiently competent security experts, can create and maintain a software-based system central to delivering on the organization's mission, such that not even well-funded expert adversaries can use vulnerabilities in that system to prevent the organization from delivering on that mission.
You seem to be saying ‘secure’ == ‘compromises can be fixed’.
Which doesn’t fit any definition of secure I’m aware of.
Every one of those things you mention has been compromised, and then fixed, at various times. Depending on specific definitions of course.
And that is what we see publicly. Typically figure on an order of magnitude more ‘stealth’ compromises.
For a compromise to be fixed, someone has to notice it. Exposing machines to the Internet increases attack surface dramatically. Allowing machines to talk to the Internet unmonitored and unrestricted increases their value to attackers dramatically.
Without careful monitoring, many of the resulting compromises will go undetected. And hence unfixed.
You made a universal statement, namely, "there is no secure software".
If you had written, "99% of software used in anger is insecure," or, "most leaders of most organizations don't realize how insecure the software is that their organizations depend on," or, "most exploits go undetected", I would not have objected.
That is quite explicitly not what I wrote. You might want to re-read my comment.
My point not only stands, but is reinforced by your comments.
If software is eventually compromised, it was not secure. I have yet to see any software that does not eventually get compromised when it gets enough exposure.
That those compromises can get fixed after the fact doesn’t change that.
And ignoring the explicit cases where your examples were disproven doesn’t help your case either.
Most people don't give a shit; they pull down or introduce dependencies and think "wow, that was easy and fast".
Of course there is secure software, otherwise we wouldn't be able to live as we do.