Hacker News

I am not sure why this should keep anyone from hosting their own servers and services.

I find it positive to know that whenever anyone exposes anything on the Internet, someone will try to exploit it.

For 443 and 80, why the concern? Outsiders can try all they want, but if you are certain the software you use is secure, there will be no cigar.

I'd much rather have these things out in the open than hide them away with some vague notion that obscurity will help.

If something is difficult, do more of it. The same goes for understanding security.



  > if you are certain the software you use is secure
This is the problem right here. You can be certain that the software you use has security issues.


And who will fire a $10k+ exploit at your server, so you could record it and resell it? In the early days, surfing shady sites with Internet Explorer, you could net a lot of interesting JS that exploited the browser.


My server is an attack vector for my 10k+ users, and all their contacts. A 1% ransomware infection rate could net them $1 million USD worst case, and potentially an order of magnitude more if one of my users is browsing from a work machine in their network.

Don't underestimate the security value of people hitting your servers, even if all you think you're serving is emojis.


I'm not underestimating. All I'm saying is that if someone pays $10k or more for an exploit against ssh/nginx/whatever, nobody is gonna pepper your server with it. They will sell it to a broker and pocket the money, end of story.

You will be targeted if your server seems to be the lowest-hanging fruit, the most easily exploitable, or if the real target is most easily reachable through your site. Otherwise no one will bother with your setup.


This is very much sticking your head in the sand. Some attacks are sold to the highest bidder; others are deployed wide and fast. Some of us are responsible for securing high-sensitivity systems, where such a shoot-from-the-hip, trust-that-everything-will-be-okay attitude isn't acceptable.


Yeah, this is also a huge concern of mine. There's also nearly no standardized information on how to harden just a bit more than is commonly suggested by web devs and bad tutorial sites.


Seriously. When you find something, please let me know too!


Reading the manual.


The question isn't whether the software I run has some yet-undetected security issue, but whether I'm a valuable enough target for someone to waste their yet-undetected exploits specifically on me.

If the answer's no, then your only job is to keep up with software updates.


If you’re exposing your software to the external internet, you’re potentially valuable enough to get a drive by.


Assuming your software is fairly up to date and/or you haven't badly misconfigured it, they're not gonna do anything. There are a ton of routers and IoT devices that are a much easier catch than a machine run by someone that actually gave a thought or two about securing their server.


Sure. And so what? Should I stop using it?


> if you are certain the software you use is secure

The entirety of the problem is that you can't be certain the software you use is secure.


Exactly. And to overcome this, you as a user of that software have to be aware of that specific software's weaknesses.

Most people don't give a shit; they pull down or introduce dependencies and think, "wow, that was easy and fast".

Of course there is secure software, otherwise we wouldn't be able to live as we do.


As history has shown repeatedly, there is no secure software - just software that folks have not yet discovered how to exploit widely and effectively yet.


Then why bother? I'm sorry, but where did this meek, defeatist attitude come from? It pervades software now. Sure, you're right, I guess I could get hit by a bus today, but that won't stop me from crossing the street, because there are a lot of things I can do to minimize my risk, like looking both ways, listening, and crossing at a signal. Software is similar. "Nothing means anything, all is chaos" might poll well on Reddit, but it's not good engineering.


Who says it’s defeatist? It’s realism. You might as well call noting that mild steel only has a 60–80 kpsi yield strength ‘defeatist’.

That attitude allows practical risk management and effective engineering. Pretending software can be perfectly secure, or that mild steel has infinite yield strength, does not.

There is no lock that can’t be picked either, which is why no one leaves millions in cash protected just by a lock without guards and a surveillance system. And why they insure large amounts of cash.

At this point it should be pretty obvious - don’t put important secrets on computers without a way to expire/revoke them. If it’s a secret that can’t be expired/revoked, think long and hard about if you need it on a computer - and if you do, use a SCIF.

Monitor any connected computer systems for compromise. Use encryption extensively, preferably with hardware protection, because software is insecure, etc.

Same with controlling dangerous equipment - don’t rely on pure software or someone will get killed. Use hardware interlocks. Use multiple systems with cross checking. Don’t connect it to the internet. Etc.

This is all industry best practice for decades now.


But the initial dialog was more like

  Q: this is good steel still, why not use it?
  A: steel is never ideal, that's the problem.
Oh really.

Risk-manage nginx for us, please. At least write out the steps; you must have a checklist or something, right?

Let's be honest, we just apt install it and read vulnerability reports when they hit /news.


Exactly. I don't believe that the argument that some software somewhere at some point could have some vague security flaw in it is usually good enough to justify not running the kinds of software most of us here work on. It's solipsistic, and honestly seems a little in bad faith.

But it's also moot: if you're that afraid of vague security threats, then just don't expose your software to the internet. It's not difficult.


Literally never said that. Speaking of bad faith.

The whole point, in context, was that exposing software to the internet is high risk, no matter how secure you think it is, because no software is ever truly secure given enough exposure.

Talk about exhausting bullshit. But then, what to expect from a green throwaway?


> Who says it’s defeatist?

Uh, me, I did. I thought I was pretty clear. Please refer to my previous comment.

> It's realism.

Okay. How are you going to change your behavior?

I'm not sure what point you're trying to make. If you want to put your recipe website behind a SCIF, be my guest. Some of us aren't quite so afraid.


Haha, pot calling kettle black. I don’t need to do a damn thing different. Cars are still dangerous 100 years after they were invented, and the world still turns.

You’re the one trying to turn this into some kind of existential emergency. What are you going to do differently?


Nothing! That's my entire point! Because I'm not afraid of the internet, and I trust in my ability to secure the software I host. You're the one struggling with the fact that no software is a platonic ideal, while the rest of us still have jobs to do.


Then you may want to look into defense in depth - or at least not store any valuable secrets on the same machine, or accessible to that machine.

Which is my point.

Or yolo it because you don’t care about a compromise. It’s your life, not mine.

Hopefully you aren’t storing any medical records, financial records, etc. for me or anyone I care about if that is the case though.


> Then why bother?

Because software is fun, and I get to work with cool things. There is a joy in programming in and of itself.

I guess your question doesn't make sense to me. Just because it will eventually be broken, does that automatically mean there's no value in software? I don't think that's true, it just probably means you should have an analog backup process if possible, especially for critical things like government services.


It's not defeatist, it's called defense in depth


That gives the misleading impression that it is impossible to create and maintain a truly secure software system.


I have yet to find any such system - given enough time and exposure.

What makes you think such a thing is possible? In reality, not theoretically.

I also have yet to find an unpickable lock, given the same constraint. Locks still have utility.

But only fools protect something very valuable with just a lock.


>What makes you think such a thing is possible?

The main source of my confidence is extrapolation from the results of successful initiatives to improve security. Rust is one such initiative: at relatively low cost, it drastically improves the security of "systems software" (defined for our purposes as software in which the programmer needs more control over resources such as compute time and latency than is possible using automatic memory management). Another data point is how much Google managed to improve the security of desktop Linux with ChromeOS.

There's also the fact that even though Russia has enough money to employ many crackers, Starlink's web site continued operating as usual after Musk angered Russia by giving Starlink terminals to Ukraine -- and how little damage Russia has managed to do to Ukraine's computing infrastructure. (It is not credible to think that Russia has the ability to inflict devastating damage via cracking, but is reserving the capability for a more serious crisis: Russia considers the Ukrainian war to be extremely serious.)

Sufficiently well-funded organizations with sufficiently competent security experts can create and maintain a software-based system that is central to the organization's mission, such that not even well-funded expert adversaries can use vulnerabilities in that system to prevent the organization from delivering on that mission.


‘Secure’ == unable to be compromised.

You seem to be saying ‘secure’ == ‘compromises are able to be fixed’.

Which doesn’t fit any definition of secure I’m aware of.

Every one of those things you mention has been compromised, and then fixed, at various times. Depending on specific definitions of course.

And that is what we see publicly. Typically figure on an order of magnitude more ‘stealth’ compromises.

For a compromise to be fixed, someone has to notice it. Exposing machines to the Internet increases attack surface dramatically. Allowing machines to talk to the Internet unmonitored and unrestricted increases their value to attackers dramatically.

Without careful monitoring, many of the resulting compromises will go undetected. And hence unfixed.

[https://www.cvedetails.com/vulnerability-list/vendor_id-1902...]

[https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...]

[https://purplesec.us/security-insights/space-x-starlink-dish...]

[https://www.pcmag.com/news/account-hacking-over-starlink-spa...]


You made a universal statement, namely, "there is no secure software".

If you had written, "99% of software used in anger is insecure," or, "most leaders of most organizations don't realize how insecure the software is that their organizations depend on," or, "most exploits go undetected", I would not have objected.


That is quite explicitly not what I wrote. You might want to re-read my comment.

My point not only stands, but is reinforced by your comments.

If software is eventually compromised, it was not secure. I have yet to see any software that does not eventually get compromised when it gets enough exposure.

That those compromises can get fixed after the fact doesn’t change that.

And ignoring the explicit cases where your examples were disproven doesn’t help your case either.


I find it obnoxious to correspond with you.


The feeling is mutual, apparently.


Is that impression not accurate? Everything is possible to exploit, IMO. It's why the US government spends a mountain on cyber defense and offense.


Haveibeenpwned paints a pretty good picture. Breaches, breaches everywhere. The average piece of software cannot be trusted with keeping any data secure for any notable amount of time.

It's funny that password managers and randomly generated single-use passwords are so popular now, because the greatest risk to one's credentials isn't direct attacks but having them leaked by someone's half-assed backend. It gets even funnier when the service that gets breached has some arcane password rules requiring two symbols or whatever: the ultimate hypocrisy.


Almost all stories you read about data leaks are some variation of "I installed XXX database and forgot to limit access", or even "and I wrongly assumed it wasn't listening on an internet-exposed port". Breaches are just queries.
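The "listening on an internet-exposed port" variant has a one-line guard: bind the database to loopback only. A hedged example using PostgreSQL (MongoDB's `net.bindIp` and Redis's `bind` directive serve the same purpose):

```
# postgresql.conf: accept connections only on the loopback interface,
# so the database is never directly reachable from the internet;
# application servers connect via localhost or a private network instead.
listen_addresses = 'localhost'
```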


To be blunt, those breaches are the result of software written by people I wouldn't trust to bag my groceries. I've never had a database get leaked, because I'm not a hack, and I know how to do the bare minimum above professional negligence to secure internet-facing services. I wish I could say the same about most of the industry.


A “breach” usually means they got access to the database, which is much different from access to the underlying server. We aren’t talking about databases; we are talking about servers.


It really depends on the architecture. At least I think it's fairly common for people to have some sort of database proxy running beside the static server, so there isn't any direct public access and to do some caching; but once you're there it should be pretty wide open.


In my experience, it is much more likely someone forgets to escape some input and opens the database up (via SQL injection) than it is for someone to break in via ssh or gain access to the shell.
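The injection path described here has a standard fix: parameterized queries, where the driver binds user input as a value rather than as SQL. A minimal sketch using Python's stdlib sqlite3 (the table and data are invented for illustration):

```python
# Demonstrates why unescaped input opens the database, and how
# placeholder binding closes the hole. In-memory SQLite only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

malicious = "' OR '1'='1"

# Vulnerable: string formatting lets the input rewrite the query,
# turning the WHERE clause into a tautology that matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: the ? placeholder binds the input as a literal value,
# so the attack string is just an unmatched username.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(leaked), len(safe))  # injection returns rows; the bound query returns none
```

The same placeholder discipline applies to any driver; only the placeholder syntax (`?`, `%s`, `$1`) varies.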


Come on, web servers like Nginx and Caddy are not secure? If they found a zero day in these applications, the whole Internet would go up in flames.


The whole internet keeps patching those flaws as they are found. The problem with self-hosting is patching.


This has been a non-problem since the invention of unattended updates. This whole subthread spreads uncertainty and doubt over simple things like nginx or ssh. Service providers don't patch their software by hand either.

20 years ago, when I was still young and naive, I took these concerns way too seriously: remapped ports, believed in pwn, set up fail2ban and port knocking, rotated logs. Later I realized it was all just FUD, even back then. You run on 22, 80 and 443 like a chad, use pw-based auth if you're lazy, ignore login attempts and logs in general, and never visit a server until it needs reconfiguration. Just say f* it. And nothing happens. They just work for years; the only difference is you not having tremors about it.
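For reference, the unattended-updates setup mentioned here amounts to two config lines on Debian/Ubuntu, assuming the stock unattended-upgrades package is installed:

```
// /etc/apt/apt.conf.d/20auto-upgrades  (APT configuration syntax;
// also written for you by `dpkg-reconfigure unattended-upgrades`)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

By default only security updates are applied; which origins are allowed is tuned in 50unattended-upgrades.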

The only time a couple of my vpses were pwned in decades was a week after I gave a sudoer ssh key to some “specialist” that my company decided to offload some maintenance to.

What changed from back then is that software became easier to set up and config and less likely to do something stupid. Even your dog can run a vps with a bunch of services now.


> And nothing happens.

Good luck. Some people have different experiences.


Some people install every PHP plugin they can find. Recently I gave a coworker access to a GUI server, and the next day he complained he couldn't install some Chinese malbloatadware on it. People have different experiences due to different paradigms. My message is about not being anxious, not about being clueless.

With open source and how code works in general, we are all in the same boat as the bigcorps and megacorps. And they receive the same updates at the same rate (maybe minutes faster, since they host repos).

This quote, "you can't be certain the software you use is secure", is technically true, but it's similar to "you can't be certain you won't die buying groceries". A perfectly useless fearoid for your daily life.


I get what you are saying, and if anything all the "attacks" in the logs should build you some confidence. Oh, so 98% of all attacks assume I haven't changed the root password? I must be ahead in the game then.

But the way you phrase it isn't really convincing, especially in singling out ports 443 and 80, as the subthread on breaches hints. You might not need to worry about nginx, but whatever you host on nginx might be a problem, and being "certain the software you use is secure" is also pretty darn useless as guidance.


How do you run software, then? Or if you are using managed hosting or a platform for running software, how exactly do they solve this "security strictly < 1, have to run somehow" dilemma?


For systems exposed on the internet?

  * Try to avoid it in the first place.
  * Do research, minimize risk and make whatever compromises you are willing/able to make
  * Isolate it
  * Maintain, update and monitor it
At no point am I certain the software is secure.
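The "isolate it" step above can be sketched with systemd's sandboxing directives, as one common approach on Linux (the service name is hypothetical):

```
# /etc/systemd/system/myapp.service.d/hardening.conf
[Service]
DynamicUser=yes            # run as a transient, unprivileged user
ProtectSystem=strict       # mount /usr, /etc, /boot read-only for the service
ProtectHome=yes            # hide /home from the service
PrivateTmp=yes             # give the service its own private /tmp
NoNewPrivileges=yes        # block setuid and other privilege escalation
CapabilityBoundingSet=     # drop all Linux capabilities
```

`systemd-analyze security myapp.service` scores how much of this surface is still open.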


You seem to include some notion of absolute security, which is obviously nonexistent in this world (p != 0 for any event, according to some models), in your internet-exposure formula, when "minimize risk, make whatever compromises, update" is sufficient (to me) and everything above that is just worrying too much without having control. I think that's where we fundamentally disagree.


I really don't.

Be aware of your threat model and the risks associated.


> pw-based auth

Better off using key-only logins and forgetting about it, IMO.
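In sshd terms, key-only login is a few standard directives (OpenSSH 8.7+ option names; verify a key-based session still works before closing your current one):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```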


Even OpenSSH almost got a fatal backdoor recently.


What planet are you on? Nginx had a 0 day as recently as April 2022 https://www.accuknox.com/blog/nginxday-2022-nginx-ldap-zero-...

This happens _all_ _the_ _time_


A very specific one that doesn't affect 99.99% of nginx servers.


"If they found a zero day in these application whole Internet will go up in flames."

Don't move the goalposts. I'm certainly not saying that nginx is insecure. I'm saying that if you think any piece of software written after the '80s has reached the point where it won't have 0-days anymore, you just haven't been paying attention.


Doesn't this seem hopelessly naive right after the Windows PHP bug bit?

https://arstechnica.com/security/2024/06/thousands-of-server...



