Replace "cities" with "any organization that is not tech first" and you'll still find hundreds of win 7/vista/xp machines that have never been patched, and ad-hoc network closet/cloud hybrid rigged solutions for everything.
There is literally no way to fix all this dumb fragile infrastructure without a massive government program that accepts responsibility for doing so. You need thousands of smart people going through every machine, all the software, all the systems. These people are never going to work for Baltimore or for Maersk, not in a million years.
Instead let's create a new government agency or pivot the NSA from its dumb paranoid reactionary posture to more of a proactive NIST-style advisory role on best practices, have them hack everything domestically and start fixing things as their core mission. Make sure nobody at State or DHS or Justice can subvert this new agency; they need to stand on equal footing with any company or agency.
Then hopefully pillage all the miserable smart people who are currently working at mega corps and agencies who actually want to do positive, meaningful work for a change.
Problem solved. Someone hire me to advise on their political campaign.
Advising companies that they can and should fix things is actually the easy part. Getting things fixed in a way that makes companies happy is incredibly difficult. You're proposing a government agency get its hands dirty fixing thousands upon thousands of bizarro line-of-business applications and mission-critical Excel macros. Convincing companies to update what they see as systems that "work just fine" tends to be a Herculean task even when you can make a business case for taking on the expense and risk.
Telling a company "The government says you have to patch and is offering to do it for you" seems like it might not go over quite as well as you might hope. I can already see the first thought - "Do they actually care if all my systems work the way I need them to afterwards?". Having worked in Information Security and offered to fix things for people, my experience is that entities going for this is extremely rare, even when it's just the next department over.
As for the NSA, well, getting them into a proactive posture is a wonderful idea! It's such a good idea that the US government decided you were right decades ago. And acted accordingly. This tends not to make the news, so many people are understandably ignorant. For example, the NSA publishes information assurance best practices: https://apps.nsa.gov/iaarchive/library/ia-guidance/ia-standa...
>Convincing companies to update what they see as systems that "work just fine" tends to be a Herculean task even when you can make a business case for taking on the expense and risk.
>Telling a company "The government says you have to patch and is offering to do it for you" seems like it might not go over quite as well as you might hope.
I think a better idea is to have the new agency play an advisory / supplemental role but otherwise place the burden of fix on the company itself. It just needs teeth for entities unwilling to adequately resolve their IT failures.
The EPA will bring suit to companies polluting illegally. Why shouldn't a government agency bring suit to companies or cities risking a leak of hundreds of millions of social security numbers, for example?
> The EPA will bring suit to companies polluting illegally. Why shouldn't a government agency bring suit to companies or cities risking a leak of hundreds of millions of social security numbers, for example?
Maybe at first we could try an in-between solution. I hate to water things down, but maybe a scheme like a USDA Prime Beef label[0] would be easier to actually pull off?
If there was a NIST Certified logo on one bank/app/merchant/site that asks for personal info, and not another, I would be much more likely to go with the NIST one. Obviously credit agencies and gov systems need to go first.
>In the United States, the United States Department of Agriculture's (USDA's) Agricultural Marketing Service (AMS) operates a voluntary beef grading program that began in 1917. A meat processor pays for a trained AMS meat grader to grade whole carcasses at the abattoir. Such processors are required to comply with Food Safety and Inspection Service (FSIS) grade labeling procedures. The official USDA grade designation can appear as markings on retail containers, individual bags, or on USDA shield stamps, as well as on legible roller brands appearing on the meat itself.
A hypothetical regulatory regime to mandate and enforce patching and other good practices?
It's worth thinking about. It might also be worth considering if we think there's a good way to get there without doing more harm than good. Congress is not always known for their high-quality technical regulatory work.
Agreed. HIPAA is not exactly a promising precedent.
Regulation would almost certainly lag at least a couple years behind and end up making software and IT maintenance much more expensive without making it that much more secure. I don’t think I like this idea, but imagine if marketers for more robust, secure IT solutions were legally allowed to show you how they can spy on you as a part of an ad. That opens up a huge can of worms that is probably best left closed, but I think it’d act like steroids for getting people to upgrade their insecure stuff.
What is your concern with HIPAA? There have been occasional breaches in covered entities, but overall the rules have significantly improved security and privacy in healthcare.
I spent years as an infosec consultant specializing in major healthcare companies, and my experience is completely the opposite. It is absurdly easy to be 'compliant' with the HIPAA security rule yet still have abysmal security.
The biggest issue IMO with the HIPAA SR is that it is first and foremost a legal matter that involves legal teams, and is not very good at being a technology matter that effectively prescribes security to security teams. Most of the HIPAA-motivated companies I worked with spent more effort getting their legal counsel to build a HIPAA litigation shield (via intercepting and carefully massaging the wording of security assessments) than they did getting their security teams to actually improve anything.
I did have some clients that saw HIPAA as only a foundation and guidelines for truly improving their security, but that was more a matter of the company actually caring about security, and not because the HIPAA security rule is actually effective.
There will always be some organizations that do the minimum necessary to check some sort of "compliance" checkbox. However you can't deny that overall the healthcare industry as a whole has better security and security controls than they would if HIPAA had never been enacted.
I absolutely do deny that. Of the many healthcare companies I worked at, small 50-200 people shops and massive F500 companies and everything in between, I don't think HIPAA* made any kind of material difference in their security maturity.
The companies that were actually good at security merely used HIPAA as a starting point, and sometimes had to divert resources away from actual security efforts just to meet redundant HIPAA audits. They would just as easily get by with any of the other myriad of security frameworks out there.
The companies that were bad at security either: 1) mostly ignored HIPAA because in many cases it's easier to just buy insurance to cover the cost of a breach, 2) viewed HIPAA as a legal matter and got lawyers involved, who many times actively impeded security infrastructure efforts (fines are less for a HIPAA breach if you "weren't aware" you were doing anything wrong, which leads to companies intentionally avoiding security assessments or altering them to read "everything is fine!" even when they know it's not), or 3) viewed HIPAA as a checklist and once they achieve HIPAA compliance, they think their security is good enough and stop investing in it (hint: achieving HIPAA compliance does not mean you have good security. not even close).
I certainly do contend that HIPAA has not benefited the security of the healthcare industry as a whole. IME, it may have very well hurt it.
* - I'm speaking specifically of the HIPAA security rule and its effect on organizations' security maturity. In other areas, like patient privacy and disclosure rules, it does seem to have had an effect closer to what is intended.
I'm not sure doing anything different or better would make a material difference in how much a breach costs, let alone remove the need for insurance companies to cover them. Yes, it's a lot of bogeyman auditing and such, but in the end a breach is a breach and companies will do anything they can to downplay the cost. At least with the rules there is a workflow and process to go through when the breach happens.
When all is said and done it's really the organization. I don't know how many bigcorps I've been at that were just totally inept. The existence or not of HIPAA would not change their ineptness.
I’m not sure we can really confirm HIPAA’s effectiveness that readily; we don’t live in a non-HIPAA world, so we can’t compare the outcomes.
If hospitals faced zero consequences for losing customer data, then yeah, things would probably be worse. But HIPAA is two things: a set of mandatory requirements and a grounds for suing hospitals that lose/misuse data. I think the latter is effective, but the former is not.
With respect to the EPA, it's worth pointing out they'll only punish significant point sources.
For example a sewage treatment plant dumping raw untreated sewage will get punished. However a city with a major homeless problem where many thousands poop on the street will not be punished for a larger release of untreated raw sewage.
It's kinda similar with major organizations and IT. If there's a policy with the correct checkboxes, and strong-sounding speeches and firmly worded emails were produced by executives, it doesn't matter if there's some individual unpatched Win95 machine running mission-critical tasks, even if there are thousands of those supposedly isolated individual-case systems.
Because we as a society have yet to decide that it's bad. Just like we used to not think environmental pollution was bad, or at least not bad enough to justify stunting businesses over.
The SEC (before it was neutered) may be a better model. It's a group of hackers that investigates government and industry infrastructure for problems. They can warn parties if they find an issue, and if the issue isn't fixed, this group could bring civil, and maybe even criminal, proceedings against the parties.
> It's such a good idea that the US government decided you were right decades ago.
Perhaps, but it is hardly a universal belief that they have the balance right.
It is hard to pick a starting point to get in to this discussion, because it has been going on for a long time and is really complicated, not to mention largely classified. Perhaps one that dovetails into the encryption debate will be as good as any:
I don't know why you got downvoted. I know plenty of companies with modern tech that absolutely suck at security. Security is just hard, and it's not easier just because you're a tech company.
By comparison, if you spend billions of dollars on a modern building, I can still probably break into it with just a can of compressed air. I doubt the design plans for the building included "mitigate compressed air attacks", and it's the same with every other kind of organization.
> Security is just hard, and it's not easier just because you're a tech company.
We're not talking about everyone having Red Teams here. We're talking about keeping up to date with regards to Patch Tuesday, or even just having an OS that still actually gets patches. That'll get us 80-90% of the way to decent security:
> “Almost two months passed between the release of fixes for the EternalBlue vulnerability and when ransomware attacks began,” Microsoft warned. “Despite having nearly 60 days to patch their systems, many customers had not. A significant number of these customers were infected by the ransomware.”
Do you know how many versions of how many operating systems across how many different platforms and products my company uses? Hundreds of variations, maybe thousands. Only a few groups have a solid handle on regular patching, and that's because of how hyper-standardized their systems are.
Even if an OS has automatic patching, you can't just immediately apply patches without going through an SDLC and QC process. And not every group even has those processes defined. Even if they do, you still need to address critical business problems before security ones.
> Do you know how many versions of how many operating systems across how many different platforms and products my company uses?
What OSes besides Windows, macOS, Linux, Solaris, AIX, HP-UX, z/OS, mobile (Android, iOS)? SCADA stuff perhaps?
And how many of those operating systems are targeted by worms and ransomware?
I know when I used to admin Solaris and IRIX machines we were worried a lot less about attacks than the Windows desktop folks. An nmap of the systems showed SSH open and one or two other services, which meant very few vectors for attack.
The fact of the matter is that by securing desktops, one probably takes care of 80% of a company's attack surface. Next take care of your Windows servers, which is another 10%. Then go after Unix-y servers and things like printers, HVAC, IPMI, etc (which should be VLANed off).
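The kind of quick exposure check described above can be sketched in a few lines of Python. This is only an illustrative stand-in for what nmap does far more thoroughly; the host and port list are placeholders:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success, an errno otherwise
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# A server exposing only SSH (22) presents far fewer vectors than a
# desktop exposing SMB (445), RDP (3389), and a dozen legacy services.
print(open_ports("127.0.0.1", [22, 445, 3389]))
```

Each open port in that output is a service that has to be patched and hardened, which is why a box answering only on SSH is so much less worrying.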
Let's imagine just one example of patching a remote hole in a Windows server. First, you have to stage a duplicate of an old server with a new patch, which can take days. A production environment may need significant development effort just to integrate the patch, which takes days. Then run all tests and QC processes against it, which can take days. Then you can deploy it during a maintenance window. This is 1-2 business weeks.
Now multiply that times 1,000 different combinations of versions of Windows, applications, networks, platforms, and so on.
You're not just patching "servers", anyway. You're patching bare metal machines, hypervisors, AMIs, container images, software packages, plugins, network applications, security policies. Often vendor platforms don't even have a patch available so you have to implement a custom workaround, if one exists.
One could write an entire book about this subject. Please believe me, it's not simple.
That approach might have made sense 10 years ago but it's no longer tenable now that the threat environment has escalated. Organizations will now have to roll out patches immediately even at the risk of disrupting mission critical operations.
> There is literally no way to fix all this dumb fragile infrastructure without a massive government program that accepts responsibility for doing so. You need thousands of smart people going through every machine, all the software, all the systems. These people are never going to work for Baltimore or for Maersk, not in a million years.
Why not? Just 80 years ago, people would have laughed at you if you told them that computer techs would have stores everywhere, every 1st world household would have more than one, and that most office jobs would require some form of basic computer literacy. Just 150 years ago, cars everywhere, owned by most everybody, with everybody capable of taking a 100 mile trip on a whim, would have sounded like Utopian pie in the sky fiction. I'm sure someone said there's no way the everyday Joe and Suzy would be able to maintain a car. In the Ford Model A days, some people would hang a bulb of garlic under the hood to "cure" their car.
A few things could happen, analogous to the progress made by cars and also analogous to what's happened so far with computers: 1) The "packaging" will change, so that higher levels of security maintenance will be greatly simplified and more accessible. (Which might mean that everything is administered centrally to an even greater extent. i.e. Stadia and O365. Maybe O365 over something like Stadia?) 2) Security tools will advance. (SSH vs. Telnet, HTTPS vs. HTTP, and TFA have raised the bar for an exploit.) 3) The culture will become more computer savvy.
It's understandable that you're frustrated, because this sort of progress is going to have a generational component, which is orders of magnitude slower than technological progress.
Smaller cities don't have the financial wherewithal to competently run internet-facing services. Usually the best administered parts of a city are in police departments where sworn officers are filling IT roles, aided by injections of grant-driven projects done by consultants. That's not a good situation for anyone. The winning move is not to play.
I regularly hire people from cities and school districts due to some unique aspects of my workplace and benefits that makes it a smart move for them. We routinely take folks in senior tech or director roles and drop them into entry level titles -- and they are very happy to get significant raises.
End of the day, the "fix" is to dump money into rolling out modern solutions. Every user-facing city IT function should be delivered on an iPad or Chromebook.
"...dump money into rolling out modern solutions."
Yep. Ongoing maintenance and pro-active replacement is a cost. A cost that needs to be solidified as an ongoing expense. A lot of the people in leadership positions see technology as a one-time cost. ("I still have the computer I bought 10 years ago at home! It works just fine. Why do we need to buy new computers?")
I volunteer at my son's school and the overall security/integrity of the place is 10x better than it was a few years ago. That's because of Chromebook, and Google's management model of paying a fixed cost to manage the device for the life of the device.
> Usually the best administered parts of a city are in police departments where sworn officers are filling IT roles, aided by injections of grant-driven projects done by consultants. That's not a good situation for anyone. The winning move is not to play.
How about turnkey police department SaaS, delivered over a separate network over low orbit satellite connections? That will be separate from the public-facing police SaaS apps.
> Then hopefully pillage all the miserable smart people who are currently working at mega corps and agencies who actually want to do positive, meaningful work for a change.
Oof, if you think being a smart technical person working at a megacorp is worse than being a smart technical person working for a government agency... I have no idea what your model of the world and labor market is.
> Instead let's create a new government agency or pivot the NSA from it's dumb paranoid reactionary posture to more of a proactive NIST-style advisory role on best practices
It's similar to working in infosec though. You do the pen tests, you find and identify the vulnerabilities and write up your report.
Then its up to the municipal entity to put whatever your recommendations are in place to fix what they found. I have a large number of friends in the community who say they can do the work and identify issues, but often times, they come back six months or a year later and stuff they highlighted as critical fixes were still not taken care of.
It's the old, "You can lead a horse to water. . " saying, right?
The real issue is how you implement these fixes on a continuous basis to keep the network safe?
"There is literally no way to fix all this dumb fragile infrastructure without a massive government program that accepts responsibility for doing so."
Regulation and/or software liability. So far, they can ignore security with it rarely costing them anything. In a few industries, ignoring safety will cost a lot. So, they spend a fraction of that cost on preventing the overall cost. It might also be a requirement of even selling the product. Basic stuff like memory safety, login practices, updates, and so on being a requirement could get the bar way up. It was done before under TCSEC with DO-178C doing it now for safety. A whole market of safe products formed.
Alternatively, people do a strong push in courts to hold companies liable for any time their computers are used to attack a 3rd party. The folks suing and experts testifying focus on the core practices that prevent most problems. The argument is professional negligence. We stay on them until the risk-reward analysis for information security has executives making sure it gets done with specific stuff in the lawsuits addressed. Since that stuff is 80/20, then it solves about 80% of the problems. The new incentives might also make it easier to convince them to partly or wholly use systems like OpenBSD, QubesOS, and Genode.
Although I favor regulation, I think the lawsuit strategy should get a lot of experimentation first. It doesn't require a change in government. Just good lawyers. :)
> You need thousands of smart people going through every machine, all the software, all the systems. These people are never going to work for Baltimore or for Maersk, not in a million years.
I love this assumption that anyone smart or anyone good would automatically be working for another company or at another job. Not only is that screwy off the top (it assumes smart people can automatically move and relocate to the most desirable job, even geographically), but it also assumes that anyone with any skills would never work in that type of situation to begin with. [1] Maybe there are good people working there, but a government employer like the city of Baltimore is not chock full of the kind of money required to actually fix a problem like that, or even Maersk management does not view it as a priority in any way. You know, not every job is at a VC-funded startup that can afford to lose money, ditto for a traditional company such as Maersk. Noting, of course, that the 'best and the brightest' who work for some of the 'top companies' screw up frequently. Not to mention that MSFT's 'top' people designed much of this hackable code at one point.
[1] Attorneys are often like this as well: the halo of a top firm means that if you are operating out of a storefront you must be stupid in some way; otherwise you'd be working at one of the top shiny law firms.
Have the NSA attack domestic systems and make the ransom be fixes for the vulnerabilities they just exploited! haha, might just be crazy enough to work
This will only be a solution if it addresses the "business critical application, vendor has gone out of business, no source code available" case.
Which ultimately comes down to "Who's going to pay for a more secure replacement?" & "Who's going to assess heavy-enough fines to force the replacement risk scales in favor of doing something?"
You just described where I work (small manufacturing company) when I started.
It's taken me 18 months to significantly improve our security posture and I still have a bunch of stuff I need to do (I was hired as a programmer, but I couldn't in good conscience leave it as it was).
> government program that accepts responsibility for doing so.
We already have that.[0] But it doesn't do any good because it's purely advisory; it needs regulatory and enforcement power. We need an SEC for cybersecurity. Obama put Rod Beckstrom in charge of the National Cybersecurity Center, and that was great, but he resigned after a year because there was no funding behind it. It had been limping along since then, but Trump deleted the position about a year ago.
The point is, if we want to fix this problem, we need the political will to hold people accountable instead of just telling people not to do stupid things. IT and Legal are cost centers in 99% of organizations; the difference is that if Legal and IT tell the C-suite "We need to do X or else bad things will happen," Legal gets listened to but IT doesn't. This is because if Legal's "do X" fails, the outcome is an expensive lawsuit, but the outcome of IT's "do X" failing is a blog post about their continuing commitment to the safety and security of their customers' privacy.
Every municipality I've worked for runs a majority of their systems on the IBM System i (iSeries, AS/400).
IBM is very slow to update any of the tools for Windows that are included with these systems. Ditch the green screens, use the IBM EasyAccess or whatever they call it on Windows, you just saved some $.
Now, there are database tools and admin utilities that are also included in this. Most of them don't work with anything after Windows XP, so you're in a position where you can't upgrade to securable versions of Windows, because you'll lose IBM access.
Oh let me rush to defend my favorite platform, the iSeries.
The platform, regardless of which, is not to blame. It is the laziness of most IT shops which either don't have any process in place or only pay it lip service.
iSeries machines (AS/400) serve many different client interaction methods, from green screen, web services, ODBC, NodeJS via Qshell, and more. If employed properly, the iSeries has some of the best security in the industry, which is why many are used by banks all over the world, hospitals, the gambling industry, and more. Failure occurs for the same reason it does anywhere else: not having a process in place and following it.
As for currency with what is available today, iSeries access is facilitated through a Java-based client which works on Windows, OS X, and Linux. It is the same Java application throughout, and it even provides ODBC access through Java drivers; for Windows you can opt into a subset of Windows exe/dlls. There is a full-blown web service hooked to it as well that runs on the server as needed. It is fully SSL-encrypted end to end, too.
We can partially blame every software vendor that’s ever existed. In 10 years we will be blaming Google for applications that only run on outdated versions of Chrome because the API the developer used only existed in Chrome and wasn’t accepted into the standard and then was removed a few years later.
I don't think much of anyone makes stuff for old Chrome versions given how aggressive Chrome is about auto-updating. Chrome doesn't have any official options to disable auto-updating as far as I know.
“Make sure nobody at state or DHS or justice can subvert this new agency, they need to stand on equal footing with any company or agency.”
That’s going to be a problem. Power in DC is a zero-sum game, and if you can solve that you will be fixing more problems than domestic infosec weakness.
Why am I not surprised that the comment saying no need for government intervention is the one comment that's downvoted?
While I typically believe smaller government is the answer, I would personally welcome a regulatory framework that gives me confidence in both my own organization and every other one as well.
It wouldn't ruin my business, it'd just be another line item in my budget.
(Forgot to respond to part about a government organization to get secure products out. Here's response to that.)
It's been done before. It was the Walker Security Initiative. It resulted in some of the most secure products the market ever produced. A combination of lobbying for insecure products to be bought and NSA's actions destroyed what little there was to the market. Bell describes it:
Just found a link with examples of what they were doing. I haven't read this one fully, though. Linking it mainly because it talks about CSI and how market was responding.
Note: I don't think KeyKOS itself came from that community. It was from capability-security field. KeySAFE extension was driven by TCSEC requirements, though.
Note: Although not first attempt, Trusted Xenix was first attempt at securing UNIX that made it to market. Available from 1990-1994 I think. Coincidentally, OpenBSD starts in 1994 to go even further.
In the process of using a town's court website to try to pay a parking ticket, I practically-accidentally found a security vulnerability in it. The vulnerability immediately showed me many people's personal information. I closed the page when I realized what had happened. I didn't report the issue because I was worried that the people running a small town's buggy court website might be more interested in figuring out what laws I broke than understanding the issue. I'd rather have nothing to do with it. It's the only time I haven't reported a security vulnerability I've found. I'm probably over-thinking it, but when there's other groups that invite vulnerability reports and even give bug bounties, it just feels like an unnecessary risk reaching out to ones that don't.
35 years of jail time has a powerful chilling effect.
How this article could talk about the 'surprising' lack of ethical hackers without covering this law and its abuse is beyond me.
It's like talking about the 'surprising lack of research into clinical MDMA studies' and not talking about the war on drugs. It's like they are intentionally ignoring the HUGE elephant in the room.
There is a big industry starting to spring up around this, data insurance. Go to any big insurance conference and all they are talking about right now is cyber insurance. Construction companies are asking for it for example; they've always had to insure their employees, but now they are seeing things like their offices being hit by cryptolockers and being extorted for bitcoin by Russians. They can't afford to lose productivity over something like that so they are getting insured. Those insurance agencies are working with security consultants to help harden the networks too. So yes, this is definitely a big problem, but the wheels are already moving to start addressing this issue, because there is money to be made.
It's not clear how valuable cybersecurity insurance will be if there is no coverage for "acts of war" or if the insurer claims the insured didn't do enough to protect/defend against it. [1]
How many $40K ransoms would an org have to pay before it was cheaper to have a security team? The demands might be small to make it cheaper in the short term to pay instead of try to fix the problem. Obviously there are large costs external to the ransom payment, but you don't have to get those funded via political process.
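To make that trade-off concrete, here's a back-of-the-envelope break-even sketch. All dollar figures are assumptions for illustration, not numbers from the thread:

```python
# Hypothetical figures: a $40K ransom per incident vs. a rough
# $1M/year fully-loaded cost for a small in-house security team.
RANSOM = 40_000
TEAM_COST_PER_YEAR = 1_000_000

breakeven_incidents = TEAM_COST_PER_YEAR / RANSOM
print(breakeven_incidents)  # 25.0 incidents/year before the team pays for itself
```

Under these made-up numbers an org would have to expect dozens of incidents a year before the team looks cheaper on paper, which is exactly why the short-term math so often favors just paying.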
Also, in my experience working for state government, engineers were considered a waste of money. Hiring was difficult because they wouldn't pay anywhere close to the market rate, and techies weren't allowed to earn more than managers. There was a parade of salespeople pitching the director to lay off the devs and outsource everything, and then pat each other on the back for "slashing government waste." It seems like most of their apps should in principle not have needed to be completely reinvented from scratch, but having people who don't work here responsible for security causes a principal-agent problem; from the point of view of the contractors who don't care about your security and the bureaucrats who don't understand it, everything except management is just a cost center.
If you do manage to fix anything, the new director will throw it away and start over with a new vendor contract next election. Also, if you are paid by the government and are not a cop or a politician, you will be despised as a "useless feeder" and face the risk of furlough, de-funding, re-orgs, hiring freezes, etc., that make it hard to reliably get anything done.
If the public doesn't want a public sector, why fight them by trying to work there?
I think Schneier was right to point out that security is an economic externality, and that a high level political solution is likely necessary.
EDIT: A couple of people here have pointed out that a "cyber" insurance industry is emerging. I find this encouraging because it at least seems possible for that to be a politically acceptable mechanism for pricing security; your premiums could be contingent on compliance as determined by the insurer, who has skin in the game to understand security and hire real professionals as auditors. I'm not sure how that translates to actually fixing security, but it seems like a start.
100% agreed. I've always found it very ironic that governments want the best and brightest when they never pay market rates. Not only do they want the best and brightest, they want them to selflessly serve their country at the cost of financial advancement. Is it surprising that they end up getting the lower end of the talent pool?
https://18f.gsa.gov/ was a fantastic move and exactly what we need. Unfortunately, it took a group of extremely successful private sector individuals to give up their careers temporarily in pursuit of fixing something.
Thanks, that's an interesting story, and a cool idea. My experience was before I got to SF, so there was really not a lot of local talent around.
I was an intern back then, and I liked my boss and my team, and what we were working on. I got an offer for a mid-level position, but I went elsewhere partly because of the apparent instability.
These types of organizations probably need to be running all Chromebooks with a G Suite enterprise account (configured to require all employees to use 2FA). Something with way less attack surface than what they have now.
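One concrete step along those lines is auditing who has actually enrolled in 2FA. A minimal sketch, assuming user records have already been fetched from the Admin SDK Directory API (whose User resource exposes an `isEnrolledIn2Sv` flag); the sample data here is made up:

```python
def users_missing_2fa(users):
    """Return emails of users not yet enrolled in 2-Step Verification.

    `users` is a list of dicts shaped like Admin SDK Directory API
    User resources (only the two fields used here).
    """
    return [
        u["primaryEmail"]
        for u in users
        if not u.get("isEnrolledIn2Sv", False)
    ]

# Hypothetical sample data standing in for a real API response.
sample = [
    {"primaryEmail": "clerk@example.gov", "isEnrolledIn2Sv": True},
    {"primaryEmail": "treasurer@example.gov", "isEnrolledIn2Sv": False},
]
print(users_missing_2fa(sample))  # ['treasurer@example.gov']
```

Enforcement itself (requiring 2-Step Verification org-wide) is done through the admin console policy, but an audit like this is how you find the stragglers before flipping the switch.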
CompTIA certs are a mixed bag, some are decent, but many are very surface level. CISSP is broad, and more managerial level, but certainly worthwhile. But if you'd like some in the trenches stuff, I'd suggest the OSCP.
AP style should really push journalists to use the term "cybercriminals" over "hackers".
I'm not the first to say it but the issue is growing, and it's only going to make the public more leery of any tech-minded but innocent kid or professional pentesting adult who uses the term "hacker".
Everyone in the industry thinks that anything with "cyber" in the word is either a joke, or is made up by government types that don't really understand computers.
I wonder how William Gibson feels about coining "cyberspace" only to live in a world where "cyber" has been reduced to chat room banter and low-brow humor.
I'm not sure I agree. I think 'hacked' is widely enough known with its negative connotation. If Google posted a blog titled 'We got hacked', everyone would immediately click on it and their hearts would probably skip a beat. And in English it only makes sense that one who hacks is a hacker.
Regardless of original meaning, as happens with language, words definitions change based on usage and common understanding. I think the 'hacker' battle has been lost, and those examples you listed should use a different term that would be more easily adopted.
Fwiw, I started reading Hacker News ~8 years ago, fresh out of a CS degree and working in tech, and it was my first exposure to a word sense of hacking other than illegal access. I've used the term pretty differently in the years since, but I think the battle for the mainstream meaning of the word was lost long ago.
Probably worth noting I've been in journalism going on a decade now.
It's a bit too easy to simply write off ALL journalists (no pun intended) as "not that bright".
You may as well say most people aren't that bright. Which would be equally unfair.
Likewise, how can you stick a small-town reporter who covers highschool basketball in the same list as a New York Times reporter with an Ivy League background in economics? Make no mistake, I'm not implying the latter is somehow superior to the former, but there is a huge range between the people who make up the journalism industry.
Some are brilliant (e.g., Ronan Farrow), but the real point is that they’re speaking to an audience which they’re essentially trying to coddle because the audience isn’t comprised of experts.
You're on to something but it's a tiny bit more complicated than that.
Take general assignment reporters for example. They have to learn how to learn.
What I mean is, they're experts on digesting new information. Because they have to write about ANYTHING at a moment's notice, and can't be expected to be experts on everything. THEN they have to write about that topic using only 500 words (or so) to an audience who also probably knows nothing about the topic.
That's a tall order and you shouldn't be surprised reporters get it wrong sometimes.
If there's a workaround and it's not hard for people to find it and paste it into the comments, then it's not hard for people submitting articles to post the non-paywall version in the first place.
It's important that the original URL be used so that readers can see what the domain is. It's for that same reason that HN doesn't allow link shorteners.
I'm painting with a broad brush here, but a lot of government employees do as little as possible. They are union protected, so they can stay in their jobs for a very long time. So you get a lot of the classic IT situation where someone has 20 years of one year's experience. I'm sure the budgets aren't great and the rest of the government isn't pushing tech, but you end up with a lot of 'it works fine, just leave it as is.'
> I'm painting with a broad brush here, but a lot of government employees do as little as possible.
Welcome to humanity. How many people actually devote their lives to the improvement of the human condition, or have a devotion to even the jobs that they are working for? I fear that treating your job with the devotion necessary to do it truly effectively means that you limit yourself and future opportunities, because you must devote time to your own growth.
And how many people are interested in devotion to any cause or personal growth? Personal growth is hard, and devotion to a cause, a serious one especially, is equally painful. Understanding climate change, for instance, is so painful that people reject it in its entirety; it could be because it is a direct refutation to their world view, or because the idea is just so uncomfortable on its face.
I feel as if Ada Palmer had it right when she suggests that the first rule of Utopians must be "I hereby renounce the right to complacency, and vow lifelong to take only what minimum of leisure is necessary to my productivity, viewing health, happiness, rest, and play as means, not ends".
I think that's too much weight for the vast majority of humanity, and understandably so. In America it feels as if the way society is structured is designed to sap willpower (for example: the nuclear family). I can think of a number of reasons why this might actually be the planned outcome of its current design.
On the other hand, we want digital public services, to be able to pay taxes electronically, to be able to vote electronically etc.
I don't think the "don't put sensitive information on the Internet" idea really holds any water unless we expect our public services to be done with pen and paper for evermore, while everything else goes digital. (Yes, machines could be disconnected from the network and so on... but that's arguably just saying "an airgap is all you need")
Well, I would prefer it, but at least at the scale of towns and small cities, I simply don't trust them.
Furthermore, if they do so many things that they need a public-facing website to manage all of the sensitive information they keep on me, I probably don't want to live there.
It's all well and good wanting to access a municipal government's private records through the internet, but try to think of how likely they are to get that right, and adjust your desires accordingly.
Instead of such general and frankly unhelpful statements, would you mind explaining to the previous poster why electronic voting is such a bad idea? It may even generate further discussion instead of just downvotes.
Wouldn't increasing voter turnout be an upside? Reducing the loss of productivity of those participating? Voting electronically is dangerous today, but I still see it as something to work towards. However, I imagine that would require incredible innovation towards multiple layers of robust identity verification protocols.
> "Wouldn't increasing voter turnout be an upside?"
There are better, less drastic, ways of making voting more accessible. Washington state's system of mailing everybody a ballot works pretty well. I usually don't even remember when elections are coming up until I get a ballot in the mail, without ever asking for it. In Washington, people who care to vote don't have any trouble finding the time. (Of course many people just don't give a shit, and electronic voting won't change that.)
But those things can be done without putting sensitive information on the internet. Cryptocurrency transactions can keep everything sensitive off the network. A one-way airgap is indeed all you need (your device should be able to push data to the network but never receive any data from the network). The same scheme works for voting.
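A minimal sketch of that one-way idea in Python: a plain UDP socket that is only ever used to send (the address and payload are placeholders; a real deployment would enforce the one-way property in hardware, e.g. a data diode, not just in software):

```python
import socket

def push_only(payload: bytes, host: str = "127.0.0.1", port: int = 9999) -> int:
    """Push a datagram toward a collection endpoint without ever reading
    from the network. The socket is used strictly for sending; there is
    no recv() call, so no inbound data can reach this code path.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(payload, (host, port))

# Hypothetical payload standing in for, e.g., an already-signed ballot.
sent = push_only(b"signed-ballot-blob")
print(sent)  # bytes handed to the kernel
```

The security argument rests on the absence of any receive path, which is why software-only versions of this are weaker than a physical diode: one code change quietly restores bidirectionality.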
This is more of a "don't let various government agencies neglect their infrastructure while hosting extremely sensitive information about you" sort of situation. Really the bigger concern is that a ton of infrastructure is at risk of getting held hostage.
Stop building the infrastructure; it is worse than nothing. They clearly don't have any interest in taking responsibility for securing these systems, which is why the friendly headline is "U.S. Cities Strain to Fight Hackers" and not "U.S. Cities Divulge Sensitive Information to Criminals Through a Pattern of Routine Incompetence."
Unfortunately this information was expected to be divulged and collected before the internet was widely used. Policies and infrastructures were not put in place (or ignored, as I've personally witnessed) to protect this information once it was digitized.
At the same time, there should be a real discussion over why so much information is being demanded, why SSN's are being overused as they are, etc.