
Replace "cities" with "any organization that is not tech first" and you'll still find hundreds of win 7/vista/xp machines that have never been patched, and ad-hoc network closet/cloud hybrid rigged solutions for everything.

There is literally no way to fix all this dumb fragile infrastructure without a massive government program that accepts responsibility for doing so. You need thousands of smart people going through every machine, all the software, all the systems. These people are never going to work for Baltimore or for Maersk, not in a million years.

Instead let's create a new government agency, or pivot the NSA from its dumb paranoid reactionary posture to more of a proactive NIST-style advisory role on best practices: have them hack everything domestically and start fixing things as their core mission. Make sure nobody at State or DHS or Justice can subvert this new agency; they need to stand on equal footing with any company or agency.

Then hopefully pillage all the miserable smart people who are currently working at mega corps and agencies who actually want to do positive, meaningful work for a change.

Problem solved; someone hire me to advise on their political campaign.



Advising companies that they can and should fix things is actually the easy part. Getting things fixed in a way that makes companies happy is incredibly difficult. You're proposing a government agency get its hands dirty fixing thousands upon thousands of bizarro line-of-business applications and mission-critical Excel macros. Convincing companies to update what they see as systems that "work just fine" tends to be a Herculean task even when you can make a business case for taking on the expense and risk.

Telling a company "The government says you have to patch and is offering to do it for you" seems like it might not go over quite as well as you might hope. I can already see the first thought - "Do they actually care if all my systems work the way I need them to afterwards?". Having worked in Information Security and offered to fix things for people, my experience is that entities going for this is extremely rare, even when it's just the next department over.

As for the NSA, well, getting them into a proactive posture is a wonderful idea! It's such a good idea that the US government decided you were right decades ago. And acted accordingly. This tends not to make the news, so many people are understandably ignorant. For example, the NSA publishes information assurance best practices: https://apps.nsa.gov/iaarchive/library/ia-guidance/ia-standa...


>Convincing companies to update what they see as systems that "work just fine" tends to be a Herculean task even when you can make a business case for taking on the expense and risk.

>Telling a company "The government says you have to patch and is offering to do it for you" seems like it might not go over quite as well as you might hope.

I think a better idea is to have the new agency play an advisory / supplemental role but otherwise place the burden of fix on the company itself. It just needs teeth for entities unwilling to adequately resolve their IT failures.

The EPA will bring suit to companies polluting illegally. Why shouldn't a government agency bring suit to companies or cities risking a leak of hundreds of millions of social security numbers, for example?


> The EPA will bring suit to companies polluting illegally. Why shouldn't a government agency bring suit to companies or cities risking a leak of hundreds of millions of social security numbers, for example?

Maybe at first we could try an in-between solution. I hate to water things down, but maybe a scheme like the USDA Prime Beef label[0] would be more feasible to actually pull off?

If there was a NIST Certified logo on one bank/app/merchant/site that asks for personal info, and not another, I would be much more likely to go with the NIST one. Obviously credit agencies and gov systems need to go first.

>In the United States, the United States Department of Agriculture's (USDA's) Agricultural Marketing Service (AMS) operates a voluntary beef grading program that began in 1917. A meat processor pays for a trained AMS meat grader to grade whole carcasses at the abattoir. Such processors are required to comply with Food Safety and Inspection Service (FSIS) grade labeling procedures. The official USDA grade designation can appear as markings on retail containers, individual bags, or on USDA shield stamps, as well as on legible roller brands appearing on the meat itself.

[0] https://en.wikipedia.org/wiki/Beef_carcass_classification


"When a measure becomes a target, it ceases to be a good measure."

This sounds nice, but I can't help but feel like this could end up being abused...somehow.


I completely sympathize with that line of thought, but if I take that position to its logical ends then I find myself nihilistic.


A hypothetical regulatory regime to mandate and enforce patching and other good practices?

It's worth thinking about. It might also be worth considering if we think there's a good way to get there without doing more harm than good. Congress is not always known for their high-quality technical regulatory work.


Agreed. HIPAA is not exactly a promising precedent.

Regulation would almost certainly lag at least a couple years behind and end up making software and IT maintenance much more expensive without making it that much more secure. I don’t think I like this idea, but imagine if marketers for more robust, secure IT solutions were legally allowed to show you how they can spy on you as a part of an ad. That opens up a huge can of worms that is probably best left closed, but I think it’d act like steroids for getting people to upgrade their insecure stuff.


What is your concern with HIPAA? There have been occasional breaches in covered entities, but overall the rules have significantly improved security and privacy in healthcare.


I spent years as an infosec consultant specializing in major healthcare companies, and my experience is completely the opposite. It is absurdly easy to be 'compliant' with the HIPAA security rule yet still have abysmal security.

The biggest issue IMO with the HIPAA SR is that it is first and foremost a legal matter that involves legal teams, and is not very good at being a technology matter that effectively prescribes security to security teams. Most of the HIPAA-motivated companies I worked with spent more effort getting their legal counsel to build a HIPAA litigation shield (via intercepting and carefully massaging the wording of security assessments) than they did getting their security teams to actually improve anything.

I did have some clients that saw HIPAA as only a foundation and guidelines for truly improving their security, but that was more a matter of the company actually caring about security, and not because the HIPAA security rule is actually effective.


There will always be some organizations that do the minimum necessary to check some sort of "compliance" checkbox. However you can't deny that overall the healthcare industry as a whole has better security and security controls than they would if HIPAA had never been enacted.


I absolutely do deny that. Of the many healthcare companies I worked with, from small 50-200 person shops to massive F500 companies and everything in between, I don't think HIPAA* made any kind of material difference in their security maturity.

The companies that were actually good at security merely used HIPAA as a starting point, and sometimes had to divert resources away from actual security efforts just to meet redundant HIPAA audits. They would just as easily get by with any of the other myriad of security frameworks out there.

The companies that were bad at security either: 1) mostly ignored HIPAA because in many cases it's easier to just buy insurance to cover the cost of a breach, 2) viewed HIPAA as a legal matter and got lawyers involved, who many times actively impeded security infrastructure efforts (fines are less for a HIPAA breach if you "weren't aware" you were doing anything wrong, which leads to companies intentionally avoiding security assessments or altering them to read "everything is fine!" even when they know it's not), or 3) viewed HIPAA as a checklist and once they achieve HIPAA compliance, they think their security is good enough and stop investing in it (hint: achieving HIPAA compliance does not mean you have good security. not even close).

I certainly do contend that HIPAA has not benefited the security of the healthcare industry as a whole. IME, it may have very well hurt it.

* - I'm speaking specifically of the HIPAA security rule and its effect on organizations' security maturity. In other areas, like patient privacy and disclosure rules, it does seem to have had an effect closer to what is intended.


I'm not sure doing anything different or better would make a material difference in how much a breach costs, let alone remove the need for insurance to cover it. Yes, it's a lot of bogeyman auditing and such, but in the end a breach is a breach and companies will do anything they can to downplay the cost. At least with the rules there is a workflow and process to go through when the breach happens.

When all is said and done it's really the organization. I don't know how many bigcorps I've been at that were just totally inept. The existence or not of HIPAA would not change their ineptness.


I’m not sure we can really confirm HIPAA’s effectiveness that readily; we don’t live in a non-HIPAA world, so we can’t compare the outcomes.

If hospitals faced zero consequences for losing customer data, then yeah, things would probably be worse. But HIPAA is two things: a set of mandatory requirements and a grounds for suing hospitals that lose/misuse data. I think the latter is effective, but the former is not.


High-quality regulatory regimes are ones where bare-minimum checkbox-driven compliance yields good-enough results. Low-quality ones resemble kabuki, which I've personally witnessed in companies doing PCI self-certification.


With respect to the EPA, it's worth pointing out that they'll only punish significant point sources.

For example a sewage treatment plant dumping raw untreated sewage will get punished. However a city with a major homeless problem where many thousands poop on the street will not be punished for a larger release of untreated raw sewage.

It's kinda similar with major organizations and IT. If there's a policy with the correct checkboxes, and strong-sounding speeches and firmly worded emails were produced by executives, it doesn't matter if there's some individual unpatched Win95 machine running mission-critical tasks, even if there are thousands of those supposedly isolated individual-case systems.


Because we as a society haven't yet decided that it's bad. Just like we used to not think environmental pollution was bad, or at least not bad enough to justify stunting businesses.


The SEC (before it was neutered) may be a better model: a group of investigators, in this case hackers, that probes government and industry infrastructure for problems. They could warn parties when they find an issue, and if the issue isn't fixed, bring civil, and maybe even criminal, proceedings against the parties.


> It's such a good idea that the US government decided you were right decades ago.

Perhaps, but it is hardly a universal belief that they have the balance right.

It is hard to pick a starting point to get in to this discussion, because it has been going on for a long time and is really complicated, not to mention largely classified. Perhaps one that dovetails into the encryption debate will be as good as any:

https://www.lawfareblog.com/good-defense-good-offense-nsa-my...


Hack things, apply a cryptolocker, and unlock after the user does a free cyber security training course.


"any organization that is not tech first" - thats pretty optimistic looking at a number of the tech first companies that have being breached.


I don't know why you got downvoted. I know plenty of companies with modern tech that absolutely suck at security. Security is just hard, and it's not easier just because you're a tech company.

By comparison, if you spend billions of dollars on a modern building, I can still probably break into it with just a can of compressed air. I doubt the design plans for the building included "mitigate compressed air attacks", and it's the same with every other kind of organization.


> Security is just hard, and it's not easier just because you're a tech company.

We're not talking about everyone having Red Teams here. We're talking about keeping up to date with regards to Patch Tuesday, or even just having an OS that still actually gets patches. That'll get us 80-90% of the way to decent security:

> “Almost two months passed between the release of fixes for the EternalBlue vulnerability and when ransomware attacks began,” Microsoft warned. “Despite having nearly 60 days to patch their systems, many customers had not. A significant number of these customers were infected by the ransomware.”

* https://krebsonsecurity.com/2019/06/report-no-eternal-blue-e...


Do you know how many versions of how many operating systems across how many different platforms and products my company uses? Hundreds of variations, maybe thousands. Only a few groups have a solid handle on regular patching, and that's because of how hyper-standardized their systems are.

Even if an OS has automatic patching, you can't just immediately apply patches without going through an SDLC and QC process. And not every group even has those processes defined. Even if they do, you still need to address critical business problems before security ones.


> Do you know how many versions of how many operating systems across how many different platforms and products my company uses?

What OSes besides Windows, macOS, Linux, Solaris, AIX, HP-UX, z/OS, mobile (Android, iOS)? SCADA stuff perhaps?

And how many of those operating systems are targeted by worms and ransomware?

I know when I used to admin Solaris and IRIX machines we were worried a lot less about attacks than the Windows desktop folks. An nmap of the systems showed SSH open and one or two other services, which meant very few vectors for attack.

The fact of the matter is that by securing desktops, one probably takes care of 80% of a company's attack surface. Next take care of your Windows servers, which is another 10%. Then go after Unix-y servers and things like printers, HVAC, IPMI, etc (which should be VLANed off).
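To make the "few vectors for attack" point concrete, here's a minimal sketch of the kind of exposure survey an nmap run gives you, written in Python. The `open_ports` helper and the localhost usage are illustrative assumptions, not from the original discussion; for real inventories you'd use nmap itself.

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    A toy stand-in for an nmap-style survey of listening services:
    fewer open ports means fewer vectors for attack.
    """
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                # connect_ex returns 0 on success instead of raising
                if s.connect_ex((host, port)) == 0:
                    found.append(port)
            except OSError:
                # timeouts and unreachable hosts count as closed
                pass
    return found

# e.g. open_ports("127.0.0.1", [22, 80, 443]) -> only the listening ones
```

A box that comes back with just SSH open is in a very different risk class from a desktop exposing a dozen legacy services.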


Let's imagine just one example of patching a remote hole in a Windows server. First, you have to stage a duplicate of an old server with a new patch, which can take days. A production environment may need significant development effort just to integrate the patch, which takes days. Then run all tests and QC processes against it, which can take days. Then you can deploy it during a maintenance window. This is 1-2 business weeks.

Now multiply that times 1,000 different combinations of versions of Windows, applications, networks, platforms, and so on.

You're not just patching "servers", anyway. You're patching bare metal machines, hypervisors, AMIs, container images, software packages, plugins, network applications, security policies. Often vendor platforms don't even have a patch available so you have to implement a custom workaround, if one exists.
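Even a toy model of that staged workflow (stage, integrate, test, deploy) shows how much bookkeeping it creates across an inventory. The stage names and `PatchRollout` class below are illustrative assumptions, not any real patch-management tool:

```python
# Stages roughly matching the workflow described above; names are illustrative.
STAGES = ("staged", "integrated", "tested", "deployed")

class PatchRollout:
    """Toy tracker for how far each system has progressed through one patch."""

    def __init__(self, systems):
        # -1 means the patch process hasn't started for that system yet
        self.progress = {s: -1 for s in systems}

    def advance(self, system):
        """Move one system to its next stage and return the new stage name."""
        if self.progress[system] < len(STAGES) - 1:
            self.progress[system] += 1
        return STAGES[self.progress[system]]

    def pending(self):
        """Systems not yet fully deployed."""
        return sorted(s for s, i in self.progress.items()
                      if i < len(STAGES) - 1)
```

Now imagine one of these per patch, per OS version, per environment, with each `advance` call costing days of staging, integration work, or QC, and the 1,000-combination multiplier stops sounding hyperbolic.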

One could write an entire book about this subject. Please believe me, it's not simple.


Perhaps the city of Baltimore should have considered this before deploying thousands of different server configurations.


But having so many configs is security in and of itself! /s


That approach might have made sense 10 years ago but it's no longer tenable now that the threat environment has escalated. Organizations will now have to roll out patches immediately even at the risk of disrupting mission critical operations.


The attacker isn’t going to follow your SDLC and QC processes.


There is literally no way to fix all this dumb fragile infrastructure without a massive government program that accepts responsibility for doing so. You need thousands of smart people going through every machine, all the software, all the systems. These people are never going to work for Baltimore or for Maersk, not in a million years.

Why not? Just 80 years ago, people would have laughed at you if you told them that computer techs would have stores everywhere, every 1st world household would have more than one, and that most office jobs would require some form of basic computer literacy. Just 150 years ago, cars everywhere, owned by most everybody, with everybody capable of taking a 100 mile trip on a whim, would have sounded like Utopian pie in the sky fiction. I'm sure someone said there's no way the everyday Joe and Suzy would be able to maintain a car. In the Ford Model A days, some people would hang a bulb of garlic under the hood to "cure" their car.

A few things could happen, analogous to the progress made by cars and also analogous to what's happened so far with computers: 1) The "packaging" will change, so that higher levels of security maintenance will be greatly simplified and more accessible. (Which might mean that everything is administered centrally to an even greater extent. i.e. Stadia and O365. Maybe O365 over something like Stadia?) 2) Security tools will advance. (SSH vs. Telnet, HTTPS vs. HTTP, and TFA have raised the bar for an exploit.) 3) The culture will become more computer savvy.

It's understandable that you're frustrated, because this sort of progress is going to have a generational component, which is orders of magnitude slower than technological progress.


That just perpetuates the problem.

Smaller cities don't have the financial wherewithal to competently run internet-facing services. Usually the best administered parts of a city are in police departments where sworn officers are filling IT roles, aided by injections of grant-driven projects done by consultants. That's not a good situation for anyone. The winning move is not to play.

I regularly hire people from cities and school districts due to some unique aspects of my workplace and benefits that makes it a smart move for them. We routinely take folks in senior tech or director roles and drop them into entry level titles -- and they are very happy to get significant raises.

End of the day, the "fix" is to dump money into rolling out modern solutions. Every user-facing city IT function should be delivered on an iPad or Chromebook.


"...dump money into rolling out modern solutions."

Yep. Ongoing maintenance and pro-active replacement is a cost. A cost that needs to be solidified as an ongoing expense. A lot of the people in leadership positions see technology as a one-time cost. ("I still have the computer I bought 10 years ago at home! It works just fine. Why do we need to buy new computers?")


Absolutely.

I volunteer at my son's school and the overall security/integrity of the place is 10x better than it was a few years ago. That's because of Chromebook, and Google's management model of paying a fixed cost to manage the device for the life of the device.


Usually the best administered parts of a city are in police departments where sworn officers are filling IT roles, aided by injections of grant-driven projects done by consultants. That's not a good situation for anyone. The winning move is not to play.

How about turnkey police department SaaS, delivered over a separate network over low orbit satellite connections? That will be separate from the public-facing police SaaS apps.


> Then hopefully pillage all the miserable smart people who are currently working at mega corps and agencies who actually want to do positive, meaningful work for a change.

Oof, if you think being a smart technical person working at a megacorp is worse than being a smart technical person working for a government agency... I have no idea what your model of the world and labor market is.


The one where he is from the government and is here to help.


> Instead let's create a new government agency or pivot the NSA from it's dumb paranoid reactionary posture to more of a proactive NIST-style advisory role on best practices

It's similar to working in infosec though. You do the pen tests, you find and identify the vulnerabilities and write up your report.

Then it's up to the municipal entity to put whatever your recommendations are in place to fix what they found. I have a large number of friends in the community who say they can do the work and identify issues, but oftentimes, when they come back six months or a year later, stuff they highlighted as critical fixes still hasn't been taken care of.

It's the old, "You can lead a horse to water. . " saying, right?

The real issue is how you implement these fixes on a continuous basis to keep the network safe.


"There is literally no way to fix all this dumb fragile infrastructure without a massive government program that accepts responsibility for doing so."

Regulation and/or software liability. So far, they can ignore security with it rarely costing them anything. In a few industries, ignoring safety will cost a lot, so they spend a fraction of that cost on prevention. It might also be made a requirement for even selling the product. Basic stuff like memory safety, login practices, updates, and so on being a requirement could raise the bar considerably. It was done before under TCSEC, with DO-178C doing it now for safety. A whole market of safe products formed.

Alternatively, people do a strong push in courts to hold companies liable for any time their computers are used to attack a 3rd party. The folks suing and experts testifying focus on the core practices that prevent most problems. The argument is professional negligence. We stay on them until the risk-reward analysis for information security has executives making sure it gets done with specific stuff in the lawsuits addressed. Since that stuff is 80/20, then it solves about 80% of the problems. The new incentives might also make it easier to convince them to partly or wholly use systems like OpenBSD, QubesOS, and Genode.

Although I favor regulation, I think the lawsuit strategy should get a lot of experimentation first. It doesn't require a change in government. Just good lawyers. :)


> You need thousands of smart people going through every machine, all the software, all the systems. These people are never going to work for Baltimore or for Maersk, not in a million years.

I love this assumption that anyone smart or anyone good would automatically be working for another company or at another job. Not only is that just screwy off the top (it assumes that smart people can automatically move and relocate to the most desirable job, even geographically), but it also assumes that anyone with any skills would never ever work in that type of situation to begin with. [1] Maybe there are good people working there, but a government entity like the city of Baltimore is not chock full of the kind of money required to actually fix a problem like that, or even Maersk management does not view it as a priority in any way. Not every job is at a VC-funded startup that can afford to lose money, and ditto for a traditional company such as Maersk. Note, of course, that the 'best and the brightest' who work for some of the 'top companies' screw up frequently. Not to mention that MSFT's 'top' people designed much of this hackable code in the first place.

[1] Attorneys are often viewed like this as well: the halo of a top firm means that if you are operating out of a storefront, you must be stupid in some way; otherwise you'd be working at one of the top shiny law firms.


> Make sure nobody at state or DHS or justice can subvert this new agency, they need to stand on equal footing with any company or agency.

Interestingly, a service similar to what you described is already offered through DHS:

https://www.us-cert.gov/resources/ncats https://www.dhs.gov/cisa/cybersecurity-assessments


Have the NSA attack domestic systems and make the ransom be fixes for the vulnerabilities they just exploited! Haha, might just be crazy enough to work.


This will only be a solution if it addresses the "business critical application, vendor has gone out of business, no source code available" case.

Which ultimately comes down to "Who's going to pay for a more secure replacement?" & "Who's going to assess heavy-enough fines to force the replacement risk scales in favor of doing something?"


You just described where I work (small manufacturing company) when I started.

It's taken me 18 months to significantly improve our security posture and I still have a bunch of stuff I need to do (I was hired as a programmer but I couldn't in good conscience leave it as it was).


> government program that accepts responsibility for doing so.

We already have that.[0] But it doesn't do any good, because it's purely advisory; it needs regulatory and enforcement power. We need an SEC for cybersecurity. Obama put Rod Beckstrom in charge of the National Cybersecurity Center, and that was great, but he resigned after a year because there was no funding behind it. It has been limping along since, but Trump deleted the position about a year ago.

The point is, if we want to fix this problem, we need the political will to hold people accountable instead of just telling people to not do stupid things. IT and Legal are cost centers in 99% of organizations; the difference is that if Legal and IT tell the C-suite "We need to do X or else bad things will happen," Legal gets listened to but IT doesn't. This is because if Legal's "do X" fails, the outcome is an expensive lawsuit, but the outcome of IT's "do X" failing is a blog post about their continuing commitment to the safety and security of their customers' privacy.

[0] https://en.wikipedia.org/wiki/National_Cybersecurity_Center_...


Can we partially blame IBM?

Every municipality I've worked for runs a majority of their systems on the IBM System i (iSeries, AS/400).

IBM is very slow to update any of the tools for Windows that are included with these systems. Ditch the green screens, use the IBM EasyAccess or whatever they call it on Windows, you just saved some $.

Now, there are database tools and admin utilities that are also included in this. Most of them don't work with anything after Windows XP, so you're in a position where you can't upgrade to securable versions of Windows, because you'll lose IBM access.


Oh let me rush to defend my favorite platform, the iSeries.

The platform, regardless of which, is not to blame. It is the laziness of most IT shops which either don't have any process in place or only pay it lip service.

iSeries machines (AS/400) serve many different client interaction methods, from green screen, web services, ODBC, NodeJS via Qshell, and more. If employed properly, the iSeries has some of the best security in the industry, which is why many are used by banks all over the world, hospitals, the gambling industry, and more. Failure occurs for the same reason it does anywhere else: not having a process in place and following it.

As for currency with what is available today, iSeries access is facilitated through a Java-based client which works on Windows, OS X, and Linux. It is the same Java application throughout and even provides ODBC access through Java drivers, and for Windows you can opt into a subset of Windows exe/dlls. There is a full-blown web service hooked to it as well that runs on the server as needed. It is fully SSL end to end, too.


Simple stuff like copy-paste or saving exported files is broken on 64-bit Windows.


>> SSL

Who manages those certificates?


We can partially blame every software vendor that’s ever existed. In 10 years we will be blaming Google for applications that only run on outdated versions of Chrome because the API the developer used only existed in Chrome and wasn’t accepted into the standard and then was removed a few years later.

Everyone does it and everyone will do it.


I don't think much of anyone makes stuff for old Chrome versions given how aggressive Chrome is about auto-updating. Chrome doesn't have any official options to disable auto-updating as far as I know.


“Make sure nobody at state or DHS or justice can subvert this new agency, they need to stand on equal footing with any company or agency.”

That’s going to be a problem. It’s a zero-sum game with power in DC, and if you can solve that, you will be fixing more problems than domestic infosec weakness.


No need for a new government agency or program.

We just need to start holding all organizations, and specifically their leaders, personally liable for security incidents.

Once people's freedoms are at stake, everyone will fall in line so quickly that we will all be amazed.


Why am I not surprised that the comment saying no need for government intervention is the one comment that's downvoted?

While I typically believe smaller government is the answer, I would personally welcome a regulatory framework that gives me confidence in both my own organization and every other one as well.

It wouldn't ruin my business, it'd just be another line item in my budget.


I'll happily work for Baltimore if I can implement something that will finally put all their corrupt cops and judges on the gallows.

Some distributed logs that can't be deleted would surely help with accountability.

But we all know that will never happen.


Most enterprise hacking comes in through email. Should the NSA deliver spam to make this happen? idk


"any organization that is not tech first"

Then why do all the tech first companies keep getting hacked too?


They're not that much harder to hack, and they have more tech to hack. It's just that your dusty old city desk is even more hopeless.


(Forgot to respond to part about a government organization to get secure products out. Here's response to that.)

It's been done before. It was the Walker Security Initiative. It resulted in some of the most secure products the market ever produced. A combination of lobbying for insecure products to be bought and NSA's actions destroyed what little there was to the market. Bell describes it:

http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-B...

Just found a link with examples of what they were doing. I haven't read this one fully, though. Linking it mainly because it talks about CSI and how market was responding.

https://csrc.nist.gov/csrc/media/publications/conference-pap...

Here's some of the designs that came out of commercial sector of high-assurance security:

http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf

http://lukemuehlhauser.com/wp-content/uploads/Karger-et-al-A...

https://cryptosmith.com/mls/lock/

https://www.researchgate.net/publication/3504794_The_Army_Se...

http://cap-lore.com/CapTheory/upenn/

Note: I don't think KeyKOS itself came from that community. It was from capability-security field. KeySAFE extension was driven by TCSEC requirements, though.

http://webapp1.dlib.indiana.edu/virtual_disk_library/index.c...

Note: Although not first attempt, Trusted Xenix was first attempt at securing UNIX that made it to market. Available from 1990-1994 I think. Coincidentally, OpenBSD starts in 1994 to go even further.



