
Exactly. And to overcome this, you as a user of that software have to be aware of that specific software.

Most people don't give a shit; they pull down or introduce dependencies and think "wow, that was easy and fast".

Of course there is secure software, otherwise we wouldn't be able to live as we do.



As history has shown repeatedly, there is no secure software - just software that folks have not yet discovered how to exploit widely and effectively.


Then why bother? I'm sorry, but where did this meek, defeatist attitude come from? It pervades software now. Sure, you're right, I guess I could get hit by a bus today, but that won't stop me from crossing the street, because there are a lot of things I can do to minimize my risk, like looking both ways, listening, and crossing at a signal. Software is similar. "Nothing means anything, all is chaos" might poll well on Reddit, but it's not good engineering.


Who says it’s defeatist? It’s realism. You might as well call noting that mild steel only has a 60-80 kpsi yield strength ‘defeatist’.

That attitude allows practical risk management and effective engineering. Pretending software can be perfectly secure, or that mild steel has infinite yield strength, does not.

There is no lock that can’t be picked either, which is why no one leaves millions in cash protected just by a lock without guards and a surveillance system. And why they insure large amounts of cash.

At this point it should be pretty obvious - don’t put important secrets on computers without a way to expire/revoke them. If it’s a secret that can’t be expired/revoked, think long and hard about if you need it on a computer - and if you do, use a SCIF.

Monitor any connected computer systems for compromise. Use encryption extensively, preferably with hardware protection, because software is insecure, etc.
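As a toy illustration of the expire/revoke advice above, here is a minimal sketch. The names and structure are hypothetical, not any real secrets-manager API: tokens carry a short TTL and can be revoked, so a stolen credential loses value quickly.

```python
import time
import secrets

TOKEN_TTL_SECONDS = 15 * 60  # illustrative 15-minute lifetime

_revoked: set[str] = set()  # in-memory revocation list (sketch only)

def issue_token() -> tuple[str, float]:
    """Return a fresh random token and its expiry timestamp."""
    return secrets.token_urlsafe(32), time.time() + TOKEN_TTL_SECONDS

def revoke(token: str) -> None:
    """Mark a token as revoked immediately."""
    _revoked.add(token)

def is_valid(token: str, expires_at: float) -> bool:
    """A token is valid only if it is unrevoked and unexpired."""
    return token not in _revoked and time.time() < expires_at
```

A real deployment would back the revocation list with shared state and hardware-protected keys, but the principle is the same: every secret has a clock on it.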

Same with controlling dangerous equipment - don’t rely on pure software or someone will get killed. Use hardware interlocks. Use multiple systems with cross checking. Don’t connect it to the internet. Etc.
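The "multiple systems with cross checking" idea can be sketched in a few lines. This is an illustrative fail-closed AND chain plus a 2-out-of-3 majority vote, not a real safety implementation; in practice the final interlock belongs in hardware, not software.

```python
def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """2-out-of-3 majority vote over redundant 'safe' signals.

    A single faulty channel can neither spuriously permit action
    nor mask a real trip, which is why 2oo3 voting is common in
    safety systems.
    """
    return (a and b) or (a and c) or (b and c)

def command_is_safe(sensors_agree: bool,
                    hardware_interlock_closed: bool,
                    watchdog_alive: bool) -> bool:
    """Fail closed: every independent check must agree before acting."""
    return sensors_agree and hardware_interlock_closed and watchdog_alive
```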

This is all industry best practice for decades now.


But the initial dialog was more like

  Q: this is good steel still, why not use it?
  A: steel is never ideal, that's the problem.
Oh really.

Risk-manage nginx for us, please. At least write out the steps; you must have a checklist or something, right?

Let's be honest, we just apt install it and read vulnerability reports when they hit /news.


Exactly. I don't believe the argument that some software, somewhere, at some point, could have some vague security flaw is usually good enough to justify not running the kinds of software most of us here work on. It's solipsistic, and honestly seems a little in bad faith.

But it's also moot: if you're that afraid of vague security threats, then just don't expose your software to the internet. It's not difficult.


Literally never said that. Speaking of bad faith.

The whole point in context was that exposing software to the internet is high risk, no matter how secure you think it is, because no software is truly ever secure given enough exposure.

Talk about exhausting bullshit. But then, what else to expect from a green throwaway?


> Who says it's defeatist?

Uh, me, I did. I thought I was pretty clear. Please refer to my previous comment.

> It's realism.

Okay. How are you going to change your behavior?

I'm not sure what point you're trying to make. If you want to put your recipe website behind a SCIF, be my guest. Some of us aren't quite so afraid.


Haha, pot calling kettle black. I don’t need to do a damn thing different. Cars are still dangerous 100 years after they were invented, and the world still turns.

You’re the one trying to turn this into some kind of existential emergency. What are you going to do differently?


Nothing! That's my entire point! Because I'm not afraid of the internet, and I trust in my ability to secure the software I host. You're the one struggling with the fact that no software is a platonic ideal, while the rest of us still have jobs to do.


Then you may want to look into defense in depth - or at least not store any valuable secrets on the same machine, or accessible to that machine.

Which is my point.

Or yolo it because you don’t care about a compromise. It’s your life, not mine.

Hopefully you aren’t storing any medical records, financial records, etc. for me or anyone I care about if that is the case though.


> Then why bother?

Because software is fun, and I get to work with cool things. There is a joy in programming in and of itself.

I guess your question doesn't make sense to me. Just because it will eventually be broken, does that automatically mean there's no value in software? I don't think that's true, it just probably means you should have an analog backup process if possible, especially for critical things like government services.


It's not defeatist, it's called defense in depth


That gives the misleading impression that it is impossible to create and maintain a truly secure software system.


I have yet to find any such system - given enough time and exposure.

What makes you think such a thing is possible? In reality, not theoretically.

I also have yet to find an unpickable lock, given the same constraint. Locks still have utility.

But only fools protect something very valuable with just a lock.


>What makes you think such a thing is possible?

The main source of my confidence is extrapolation from the results of successful initiatives to improve security. Rust is one such initiative: at relatively low cost, it drastically improves the security of "systems software" (defined for our purposes as software in which the programmer needs more control over resources such as compute time and latency than is possible using automatic memory management). Another data point is how much Google managed to improve the security of desktop Linux with ChromeOS.

There's also the fact that even though Russia has enough money to employ many crackers, Starlink's web site continued operating as usual after Musk angered Russia by giving Starlink terminals to Ukraine -- and how little damage Russia has managed to do to Ukraine's computing infrastructure. (It is not credible to think that Russia has the ability to inflict devastating damage via cracking, but is reserving the capability for a more serious crisis: Russia considers the Ukrainian war to be extremely serious.)

A sufficiently well-funded organization with sufficiently competent security experts can create and maintain a software-based system central to its mission, such that not even well-funded expert adversaries can use vulnerabilities in that system to prevent the organization from delivering on that mission.


‘Secure’ == unable to be compromised.

You seem to be saying ‘secure’ == ‘compromises are able to be fixed’.

Which doesn’t fit any definition of secure I’m aware of.

Every one of those things you mention has been compromised, and then fixed, at various times. Depending on specific definitions of course.

And that is what we see publicly. Typically figure on an order of magnitude more ‘stealth’ compromises.

For a compromise to be fixed, someone has to notice it. Exposing machines to the Internet increases attack surface dramatically. Allowing machines to talk to the Internet unmonitored and unrestricted increases their value to attackers dramatically.

Without careful monitoring, many of the resulting compromises will go undetected. And hence unfixed.

[https://www.cvedetails.com/vulnerability-list/vendor_id-1902...]

[https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...]

[https://purplesec.us/security-insights/space-x-starlink-dish...]

[https://www.pcmag.com/news/account-hacking-over-starlink-spa...]


You made a universal statement, namely, "there is no secure software".

If you had written, "99% of software used in anger is insecure," or, "most leaders of most organizations don't realize how insecure the software is that their organizations depend on," or, "most exploits go undetected", I would not have objected.


That is quite explicitly not what I wrote. You might want to re-read my comment.

My point not only stands, but is reinforced by your comments.

If software is eventually compromised, it was not secure. I have yet to see any software that does not eventually get compromised when it gets enough exposure.

That those compromises can get fixed after the fact doesn’t change that.

And ignoring the explicit cases where your examples were disproven doesn’t help your case either.


I find it obnoxious to correspond with you.


The feeling is mutual, apparently.


Is that impression not accurate? Everything is possible to exploit, imo. It's why the US government spends a mountain on cyber defense and offense.



