One of the things I'm taking from this episode is just how worthless a lot of prognostication about security by a load of "experts" on the internet is, especially of the "trust us to get it right" variety.
Frankly, their whining about how hard crypto is is partly responsible for the monoculture we have. Yes, it's difficult (more so in protocol design than in implementation), but they are so off-putting to newcomers to the field that it's insane.
Clearly OpenSSL dev is broken, at least partly because everyone assumes everyone else is auditing all 300k lines of it, but I also can't help wondering if this calls for stronger component isolation within cryptosystems. For example, protocol implementation, encoding, and decoding seem like they should all be totally isolated, so that a disaster like this doesn't mean you could be leaking information from the rest of the system. I imagine many an HSM vendor has been quite pleased by this news.
Surely, Nigel, if you'd just implemented your own encrypted transport, everything would have worked out great. Look how well that worked for Cryptocat.
The most consistent complaint about OpenSSL is that it is developed amateurishly (I think it's more complicated than that, but "amateurish" is a fair summary of the critique). Your response seems to be, "double down on amateurish!"
PS: The only reason you know about heartbleed is that one of the self-proclaimed "experts" found it for you. Neel Mehta is an insider's insider.
Heartbleed, and "goto fail" are bugs that any competent C developer could have found. You, me, the muffin man, the NSA, a bright undergraduate student. Anyone. These aren't vulnerabilities that you read about in a crypto journal. They are basic implementation bugs.
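To make "basic implementation bug" concrete: Heartbleed boils down to trusting an attacker-supplied length field. Here's a minimal sketch of the bug class in plain C; this is emphatically not OpenSSL's actual code, and all the names (`heartbeat`, `build_response`) are made up for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical heartbeat record; all names are invented for illustration. */
struct heartbeat {
    size_t claimed_len;            /* length field, attacker-controlled     */
    size_t record_len;             /* how many payload bytes really arrived */
    const unsigned char *payload;
};

/* The bug class: echoing claimed_len bytes without checking it against
 * record_len makes memcpy read past the payload into adjacent heap
 * memory, which is then sent back to the peer:
 *
 *     memcpy(out, hb->payload, hb->claimed_len);   // over-read
 *
 * The fix is a bounds check any competent C developer could write: */
int build_response(unsigned char *out, size_t out_len,
                   const struct heartbeat *hb)
{
    if (hb->claimed_len > hb->record_len || hb->claimed_len > out_len)
        return -1;                 /* silently discard malformed requests */
    memcpy(out, hb->payload, hb->claimed_len);
    return 0;
}
```

That's the whole class of bug: no number theory, no side channels, just an unchecked length.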
So why weren't they found by amateurs? (Well, perhaps the premise is false; perhaps they were found by amateur attackers.) If it is true, I submit it's because people who are actually experts in cryptography do not generally spend their time looking for dumb implementation bugs. I'm sure some do, but many don't. Which is understandable--we want those people looking for weaknesses in AES or curve25519 or something.
Meanwhile we have a culture of abstinence-only crypto education, where everything crypto touches is a forbidden land that we shame any ordinary software developer for venturing into. Nobody found this bug because ordinary developers aren't there to find it. They're not there because they're told to leave.
You suggest "double down on amateurish" facetiously but I claim that amateur hour would have actually caught these two bugs, because any amateur can see them. Maybe it would have created more bugs, maybe not, but I think it would have stopped these two early.
What I think we need to do, and what we should have done a long time ago, is recognize that abstinence-only cryptography doesn't work. We need to define levels of risk. Writing your own crypto algorithm is Threat Level Red. Implementing a strong one from a spec is Threat Level Yellow. Implementing a high-level authentication scheme like TLS from a spec is Threat Level Green. Porting OpenSSL to a new architecture is Threat Level Blue.
Abstinence-only crypto education doesn't recognize that some people are going to do crypto anyway. It also doesn't recognize that "not enough eyes" is a threat model that objectively produces THE most serious vulnerabilities we have faced in recent years. Putting all our eggs in OpenSSL and telling ordinary developers to use it and not ask questions literally burned down the internet. I don't know what kind of bad scenario we were trying to avoid, but the one that actually happened was worse.
Some type of risk-aware model encourages people to get involved in a skill-appropriate way. That needs to be the goal of the crypto community right now: to capture the energy of software developers interested in cryptography (particularly due to recent world events) and do something useful with it, not turn them away and say "nothing to see here".
"Not enough eyes"? OpenSSL is among the most aggressively reviewed codebases in the industry. Everyone reads OpenSSL. The problem with OpenSSL isn't that it didn't get "enough eyes". The problem is that the code is bad. It's disorganized, serves too many interests, and (particularly in the TLS portion of the tree) is a grab bag of functionality.
OpenSSL implements twenty-two TLS extensions. NSS, the library used by Firefox and Chrome, implements eleven. Why was it possible for some dude in Germany to specify a TLS extension that nobody really needed and then implement it in OpenSSL as the client default? Because OpenSSL is a bazaar, not a cathedral, and nobody really claims ownership of what the code in openssl-1.0.1e/ssl is supposed to do.
Also: check your availability bias. "Burned down the Internet"? This was a bad bug, but it wasn't unique; nginx had the same bug just a couple years ago (I know this because we found it). But shove all that aside, because these are all codebases that have hosted remote code execution flaws.
This is a cross-vendor, cross-platform vulnerability that requires one to assume that almost any password used over https, any SSL session, or almost any SSL certificate used in the last year, hell, anything in server memory in the last year is compromised. I'm quite confident there are many systems that relied solely on SSL to gate immediate root access, and some of those are still, right now as we speak, exploitable.
If that's not "burned down the internet" nothing possibly qualifies.
Anyone would say you stand to benefit from security remaining some sort of priesthood activity or something.
Diversity of implementations is a good thing, for the simple reason that any individual vulnerability would then expose that many fewer people.
People have weird ideas of how we stand to benefit from things. Check out http://matasano.com/careers for a comprehensive rebuttal.
Diversity of SSL implementations is a complicated proposition. When the Bleichenbacher e=3 attack happened, the monoculture helped: the most popular TLS clientsides weren't vulnerable, but the oddballs were. When research labs fuzz X.509, they find X.509 parsing bugs in the oddball libraries, not in OpenSSL.
Be careful, because you're operating from a pretty serious availability bias. You vividly remember this recent vulnerability, because the researchers did a great job of publicizing it. But this isn't the worst vulnerability that's been found, even in TLS stacks, by far.
Computer security is like a gene pool: you can breed the strongest, most brilliant thing you want in it, but when you cut down on diversity and then encounter some new virus that wipes everyone out, what happens?
Diversity alone does not solve the situation; you need a diversity of competent products. But having one loudly supported implementation that isn't nearly as good as people are led to believe is epically stupid.
The fact that so many people know OpenSSL is a disaster area is even more damning for the security industry. Yes, Google is paying people to fix it at the edges, but it's clear from the likes of TdR that the whole process is broken. Yet in all the time it's been visibly broken to many of your insiders, very little has been done to fix the structural problems, and a lot of effort has gone into shouting loudly about how difficult fixing them would be, putting off anyone who might be trying to help. It's nonsense.
This is time for a bit of humility from the security industry, not for strutting around proudly.
There are no testable assertions in your comment, nor any specific facts. I took the time to respond to your comment with citations to previous vulnerabilities and assertions about the impact of "monocultures" on them.
It's a little uncharitable of me to say this, but: I anticipate you won't actually respond to those assertions. So I guess I'd then ask: who has the more trustworthy argument? The person appealing to emotion and sinister (and, in this specific case, silly) motives, or the person who can refute an argument with specific examples?
If you think OpenSSL is "damning" of the whole "security industry" (whatever that is), you are of course welcome to start your own "security industry". If you can find the kinds of things "priests" like Neel Mehta can, I assure you, you'll be richly rewarded.
If we had multiple equally used SSL implementations of equal quality and used in equal proportions then a vulnerability in one would clearly not have the impact of a vulnerability that affects everything. (Do I seriously have to spell that out? Is it not obvious?) You are arguing from the position that it would be impossible to approach OpenSSL quality if resources were split across multiple implementations, but this event has demonstrated that the 1000 eyes theory is nonsense. That bug wasn't there a short time before being found; it was there for years.
It seems obvious to me that this is going to spawn a Rust/OCaml/similar reimplementation of SSL. The reason it's damning is that this is only going to happen because the wider community now deems it necessary, when really it should have been a measure that the eternally loud security contingent took on proactively.
"If we had multiple equally used SSL implementations of equal quality and used in equal proportions"
Then either 1) each would be of substantially lower quality than if the same development effort had been focused on one, or 2) you've pulled a bunch of smart developers off of other projects.
Biological metaphors are interesting, but software faces different constraints.
The big truism of software development is that motivated small teams can do proportionately much more than the same people would if they were all thrown into one big group on one project. The whole Mythical Man-Month at work.
It's not even true that open source SSL is a monoculture. There are two critically important open source TLS implementations, not one (NSS and OpenSSL), and that's not counting SecureTransport (Apple's open-source implementation).
I don't think there's a single part of your argument that really survives scrutiny. Your central point is false, as are its premises. To the extent that a large portion of the Internet uses one TLS implementation, that's often been as helpful as harmful. And the "monoculture" you're decrying is counterfeit: you seem to think OpenSSL is the only credible option, but Chromium and Firefox disagree with you.
Aren't two of the three implementations you're mentioning mainly used in clients though? Most Linux boxes are not used as clients, and so there is a monoculture of Linux server security.
You, rightly, mentioned NSS elsewhere, but do people actually use this on servers in any great number? I guess you could argue that Apache and Nginx shipping OpenSSL as the default option for https is the problem, in which case shouldn't we change that, or is there something else about OpenSSL that prevents people from using NSS?
NSS implements both the client and server side of TLS. And OpenSSL and NSS aren't the only options; if X.509 bugs are more your style, you could try PolarSSL or MatrixSSL.
Fill in the blank: 50% of the things being compromised is ______% as bad as all the things being compromised.
Sometimes a partial compromise is easy to deal with, and the number in the blank is way less than 50. Lots of diverse things is good.
Sometimes half the units being compromised is almost as bad as all the units being compromised. Lots of diverse things is a bad thing in this environment.
It's absolutely true. To the degree that your objection holds (which isn't negligible), it places some limits on the return possible from 1. 2 still fully applies. I'm also not sure what the limits in 1 are, when what we need is more careful code and better analysed code, rather than more code. "Find problems" shouldn't involve tremendous amounts of communication overhead.
Which is the very reason HSM vendors will be shouting about this for years as well: anyone using one will at least have ensured their private key hasn't gone AWOL.
And that is the same recommendation you'd have made a week ago? (In fairness, I can believe it might be.)
I am honestly struggling to see how you don't think having all Linux servers running incredibly similar crypto stacks is a bad thing. For sure HSMs and Windows boxes add to the diversity of the world, but Linux boxes form such a massive proportion of servers connected to the net that any common vulnerability there is a major problem.
Just assuming you've found all vulnerabilities is not the way to go, so mitigating the effect of a vulnerability happening seems like a reasonable thing to do. After all, this is a good reason for things like process separation.
The good news (sort of) is that it's an ordinary buffer overflow bug (an over-read, strictly speaking). We've known these are a problem for decades, and we've known how to check for them automatically for decades. Even in C's niche there are research prototypes for safe C-like languages, and yet there seems to be little effort to make them production-ready and deploy them; what kind of market failure is that?
I think this is the final nail in the coffin for the ideas that "many eyes make all bugs shallow" or that code review alone by "experts" is enough to eliminate even the simplest of bugs. Use static analysis or go home.
There are plenty of other ways to screw up with crypto but they don't pass out the server's private keys just like that.
The main technical point of Theo de Raadt's "exploit mitigation countermeasures" post is that even when the infrastructure it's running on is trying to add safety checks, OpenSSL will often neuter them. The specific example was the exploit mitigations in OpenBSD's malloc, which are neutered for OpenSSL because (for dubious stated reasons) it insists on wrapping the system malloc with its own caching variant. The same would apply, of course, to more straightforward measures like a malloc() which just cleared out the returned memory before turning it over to the app.
And this isn't the only thing about the OpenSSL codebase which seems likely to frustrate attempts at analysis. (Heck, the whole "forest of #ifdefs" thing has got to be at least a bit of a stumbling block.)
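To make the malloc-wrapping point concrete, here's a toy sketch, emphatically not OpenSSL's actual code, of why a caching allocator defeats a hardened malloc: recycled buffers come back with their old contents intact, so a stale read returns secrets instead of hitting poisoned or unmapped pages. All names here are invented.

```c
#include <stdlib.h>

#define BUF_SZ 64

/* Toy freelist-style wrapper of the kind critics describe: freed
 * buffers are pushed onto a list and handed back later, uncleared.
 * The first sizeof(void *) bytes of a freed buffer are reused as
 * the list link; everything else keeps its old contents. */
static void *freelist = NULL;

static void *cached_malloc(void)
{
    if (freelist != NULL) {
        void *p = freelist;
        freelist = *(void **)p;
        return p;                /* old contents still present */
    }
    return malloc(BUF_SZ);
}

static void cached_free(void *p)
{
    *(void **)p = freelist;      /* no clearing, no poisoning */
    freelist = p;
}
```

A hardened malloc can junk or unmap freed pages so stale reads crash loudly; a wrapper like this sits in front of it and quietly hands the same warm buffer back.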
Yes, it's certainly not production ready. I just mean that there is one contender that is coming that addresses this.
> is it possible to write a library in Rust, and have it callable from C?
Yup. The third production deployment of Rust is actually a Ruby extension, written in C, that is a thin wrapper over Rust. Ruby -> C -> Rust. The reason it was done this way is that Ruby's C interface is incredibly macro heavy, so it was easier to use them in a tiny C layer than to try to port them to Rust itself. For more: https://air.mozilla.org/sprocketnes-practical-systems-progra... (you'll have to scroll through to wycats' part.)
I still think that now would be the ideal time to start working on a TLS/SSL implementation in Rust. First, it would mean that there would be less time between the release of Rust 1.0 and a usable library. It would also be a good project to give feedback to the Rust compiler writers.
What really bothers me about HN and the startup echo chamber at large is the idea that newness makes something inherently better.
It's like an epidemic of "not invented here" fever.
A language is not inherently better simply because it is newer. Sometimes the old ways are best. Rust will not magically solve problems like this. Only more disciplined coding and auditing practices can.
You are greatly misrepresenting the comment you're replying to. It is not suggesting that Rust is better because it's newer. It's suggesting that Rust would avoid this vulnerability because Rust is designed to avoid this vulnerability. It is absolutely possible to "magically solve problems like this" at a language level. Buffer overruns are possible because of specific decisions in the design of C. They simply do not exist in, for example, Java programs. There will still be errors that can occur, but this particular bug is 100% an artifact of C.
> Rust will not magically solve problems like this.
I agree that it won't magically solve problems like this, but it _will_ solve problems like this. Rust guarantees freedom from data races and from access to uninitialized or deallocated memory at compile time, and it stops buffer overruns with automatic bounds checks.
That's because C++ is an approximate superset of C, and so it inherits all of that baggage as well. This is why we need new systems programming languages like Rust, designed for the 21st century with the benefit of hindsight.
This is (I hope) a once-in-a-lifetime incident, so we have to be careful not to extrapolate from it too hard. On the other hand, it's a seriously big deal. Overall, it seems to strongly challenge a couple of important assumptions.
As you say, it's a strong challenge to the mantra of "never implement your own crypto". I think (think) it still holds for the crypto primitives. If you're reimplementing AES you're probably doing it wrong. But the protocols? I'm not so sure now. Common wisdom seemed to be that if you implement the protocols yourself you'd screw them up, and you should stick with tried-and-true existing implementations. Now it's apparent that "tried" doesn't have to imply "true". Something being used by millions of people for years doesn't prevent it from having a huge vulnerability for years. Are your odds better or worse rolling your own? I'm not so sure now.
Consider Apple's "goto fail" bug, for example. Among a lot of other stuff, they caught some criticism for reimplementing TLS instead of just using OpenSSL. Well, if they had used OpenSSL instead, it turns out that they would have been shipping an even more serious bug for even more time.
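For reference, "goto fail" was one duplicated line. Here's a condensed, self-contained sketch, not Apple's actual source; the stub functions stand in for the real hash and signature-verification steps, and the names are made up:

```c
/* Stubs standing in for the real hash/verify steps. The final check
 * is the one that should reject a forged signature. */
static int hash_update(void)      { return 0; }  /* succeeds       */
static int verify_signature(void) { return 1; }  /* should reject! */

/* Condensed sketch of the bug class: the second goto is not guarded
 * by any if, so it always runs, skipping the signature check while
 * err still holds 0 ("success"). */
static int verify_buggy(void)
{
    int err;
    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;               /* duplicated line: always taken   */
    if ((err = verify_signature()) != 0)   /* never reached         */
        goto fail;
fail:
    return err;                  /* returns 0: "verified" anyway    */
}
```

Delete the stray goto and verify_signature() runs, correctly returning nonzero. Dead-code warnings or mandatory braces would have flagged this mechanically; no crypto expertise required.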
It's also interesting to me how it challenges the idea of encrypting stuff by default. For years, people have been saying that as much traffic as possible should be encrypted, even unimportant stuff that nobody cares about. By doing that, the idea goes, using encryption isn't suspicious and you force attackers to spread out their resources. If only a small amount of traffic is encrypted, attackers can focus just on that traffic. Accordingly, a lot of sites that didn't really need it enabled SSL or even required it, including my own. By doing this, a lot of them inadvertently made things much much worse. A site that's only accessible over HTTP is much better off than a site that's accessible over HTTPS but vulnerable to heartbleed. I don't think the general idea is wrong, but it certainly gives me pause, and I think more consideration has to be given to the increase in attack surface you take on when you enable encryption.
In any case, I really hope we see some new crypto projects come out of this, or more resources put into existing OpenSSL alternatives.
Let's surmise that the reason two distinct researchers (Mehta and Codenomicon) found this same bug in a short timeframe is that the recent Apple & GnuTLS bugs have caused many teams to begin a fresh review of long-ignored shared codebases.
If so, is this the first major bug discovered, with many more to come as they are flushed out by the new level of vigilance? Or, is it the only/last one, being revealed now because the deep dive has now wrapped up?
A few minutes' thought doesn't turn up a counterexample, so I concede the point.
I still wouldn't bet on it, though, as we haven't had a lifetime's experience with such widespread use of relatively homogeneous software. The failure of Enigma wasn't worldwide, but for those who used it, it was a security calamity.
If anything, the limited message I'll take away from this is that defense in depth is good. It's only the web's reliance upon OpenSSL that makes this particular bug so bad.
Well, the internet is only ~20 years old. I'm not strong enough in statistics to actually model this, but if something "this bad" happened once within 20 years, it might be reasonable to expect it would happen again within another 20. It's possible it is a every-hundred-years event, but unlikely. Perhaps even more often in the future, as software complexity continues to grow?
There are a surprising number of crypto-related implementation difficulties in all the language ecosystems (even if all you want to do is wrap OpenSSL) that have nothing to do with the "hard" parts of the algorithms.
Even though this hole is huge, the ratio of attack surface exposed to security gained from SSL is still generally worth it; you'd be throwing the baby out with the bathwater to disable encryption entirely, IMO.
For comparison, would you avoid having a web server (or roll your own) because of CodeRed?
I would say that something like Code Red is an argument against putting up a web server when you don't need it and it provides no direct advantages to you, just out of some idea that increasing the overall amount of web traffic is good.
There's a few different "just trust us" mentalities, but I've become more comfortable with the "don't roll your own if you missed the first 10 minutes of the lecture" worldview.
Separating algorithm from implementation seems like it shouldn't be so difficult, but retrospect is a great teacher. So now we have approaches like NaCl, where the algorithms are designed to be hard to implement badly. And we have projects like LibTom, which aims to implement existing algorithms clearly. Both seem to be enjoying varying degrees of success.
NaCl is designed to be user-proof. Part of its appeal is that it uses state-of-the-art primitives and constructions, but the reason it's so widely recommended, and the thesis behind its design, is that it's hard to misuse.
Hmm. I was definitely oversimplifying, but if I need AES-256-GCM, how do I choose between OpenSSL and LibTom? (If you have particular insight into the strength of LibTom's GCM table acceleration implementation, I'm quite curious.)
What multiplication factor on "very" careful do I need to be with OpenSSL? :)
edit: Maybe I should've posted this link[0], because tables vs CLMUL is a little more complicated than that.
"Catastrophic" is the right word. On the scale of 1 to 10, this is an 11.
No, it's not. If you asked me — I was a CISO not many years ago — I'd call this an 8. Schneier means well, but he has a tendency to exaggerate. (Here is an example of him suggesting that SOAP and other web services never be used because they "sneak" through HTTP and are therefore inherently insecure: https://www.schneier.com/crypto-gram-0006.html#SOAP)
A 10 would be a case where a bug was not easily patched, and gave complete control of servers to any interested script kiddie. Thousands or millions of web users would have had enough information for credit card and identity theft to be acted on before the hole could be plugged.
This is not that case. And it's certainly not an "11". Most vital websites have either already been patched or are about to be.
I'm going to have to go ahead and sort of disagree with you. A security library, specifically one doing crypto, that is installed/embedded/used everywhere and trivially leaks plaintext data remotely to anyone who comes knocking (including passwords, keys, CC numbers, etc.) is a complete failure.
Sure, it could hand out shells into a remote system, or hell it could launch a bunch of nuclear rockets as well...that would be very bad. But you seem to miss the point that perhaps someone's password to their shell (or maybe a nuclear launch code) is going over the wire and is intercepted...by a hostile government agency, or a 13 year old playing with a python script. There are endless, devastating scenarios one can think up caused by such a critical bug in the very fabric of the secure communication of the internet.
Heartbleed did, in theory, allow for millions of web users to have their credit cards, passwords, addresses, social security numbers, tax filings and more compromised.
Worse than "only in danger until patched": previously captured traffic is retroactively vulnerable if, as on many sites, Perfect Forward Secrecy wasn't used.
If the NSA took advantage of this at all, their logged traffic has become very useful...
"Even 11 is an understatement. Remember, the servers involved have potentially been leaking the private key for their certificate! This means anyone can 'fake' being them.
It is not enough to issue new certificates. All of the old certificates could now be used for man-in-the-middle attacks! Two-thirds of the Internet's certificates potentially need to be blacklisted! This is a MAJOR disaster.
It is infeasible to blacklist such a large number of certificates, as every device requires a list of all blacklisted certificates. This means all of the major CAs are going to have to blacklist their intermediate certificate authorities and start issuing all new certificates under new CAs. This means even people who weren't affected will probably have to have their certificates blacklisted.
In short, EVERY existing CA used on the internet may have to be blacklisted, and every single SSL certificate re-issued.
IMO SSL/TLS is now completely broken. The number of certificates that have potentially been exploited and could now be used for man-in-the-middle attacks could be in the millions..... the list of blacklisted certificates will be in the millions, and/or the number of blacklisted sub-certificate-authorities is probably going to be 10,000+. Vendors already hate including just one or two items on the blacklist, let alone this number of items.... "
It's not actually two-thirds of the Internet, but the effects may well be bigger than most people imagine at the moment, whether that's the massive blacklisting or ignoring the period of potential exposure.
Hopefully all this can result in a push to change some of the principles of certificate verification, and maybe a different approach to OpenSSL development.
Eh, I remember how about a month ago when that recent GnuTLS bug was found the almost dominant sentiment on HN was along the lines of "how come anyone is using GnuTLS instead of OpenSSL", "GnuTLS codebase is horrible, use OpenSSL", "the guy maintaining GnuTLS is an idiot, use OpenSSL", "OpenSSL has more expert eyes on it", etc. Although I prefer OpenSSL (for no particular reason), this all seemed so obviously stupid and shortsighted, not to mention some of it factually wrong. And what do you know, a month later we get an order of magnitude worse bug in OpenSSL which was also probably an order of magnitude easier to detect. I made a comment[0] then along that line, thinking to myself that I'd really hate if I got to say "told you so" but that unfortunately I probably will get the chance. I didn't think it would be this bad though.
Even if we generate a new key pair and replace our certificate, aren’t we still vulnerable to MITM attacks if someone downloaded the old private key and uses the old certificate?
That's why it's so vital for everyone to implement Perfect Forward Secrecy. Yes, it's a little late for that now in regards to this bug, but who knows what other bugs like this will be discovered in the future. Let's at least not make the same mistake twice by failing to take advantage of PFS, which could've prevented most of the damage from Heartbleed.
Troy Hunt: ”The Heartbleed bug itself was introduced in December 2011, in fact it appears to have been committed about an hour before New Year’s Eve (read into that what you will). The bug affects OpenSSL version 1.0.1 which was released in March 2012 through to 1.0.1f which hit on Jan 6 of this year. The unfortunate thing about this timing is that you’re only vulnerable if you’ve been doing “the right thing” and keeping your versions up to date! Then again, for those that believe you need to give new releases a little while to get the bugs out before adopting them, would they really have expected it to take more than two years? Probably not.”
There is virtually no useful software vulnerability for which you can't conjure up a compelling-sounding narrative of deliberate introduction (or "bugdoor"). It's like numerology. So you should be wary of people insinuating about bugs.
What makes Dual_EC so compelling to experts is the "Nobody but us" nature of the flaw: the bug is cryptographically limited to a small number of actors. FedGov buys hundreds of millions of dollars of COTS gear with OpenSSL embedded, and this bug is so simple that middle-schoolers are exploiting it. You shouldn't even need to ask if it was deliberate.
Schneier.com Has Moved

As of March 3rd, Schneier.com has moved to a new server. If you've used a hosts file to map www.schneier.com to a fixed IP address, you'll need to either update the IP to 204.11.247.93 or remove the line. Otherwise, either your software or your name server is hanging on to old DNS information much longer than it should.
Ok, how should I "authenticate" that the site at the new address is the "real" one?
Forgive my naivety here, but is there any way to tell which sites over the last 2 years I/we have used that may require new passwords, and whether they've been fixed?
I'm kinda looking for a site that lists the major sites (banks, social networks, shops, etc.) and shows a status for each: whether you're fine, should change your password, or should await a fix before changing it.
Seriously, it's easier to just change all of your passwords than to hunt down a list (that will be incomplete and give you a false sense of security), cross-match against servers you might have an account on, then change their passwords.
Just change them all and be done with it.
The "whether they've been fixed" part is a little tougher, because that lets you know when you should change your password. General sentiment I've been seeing is give it a week for everybody to fix their stuff (even this might be a little long) and then change your passwords. If a given site says either "we weren't affected, here's why" or "we've patched our stuff, we're all good" then you should change your password on that site ASAP.
When would be the optimal time to perform these password changes? I am assuming that not every affected site has been patched yet, and it would be pointless to change the password, and log in, before they have fixed the problem.
Actually, it turns out that LastPass (which I use) has incorporated most of what's discussed in this sub-thread into its security checker tool, so it automatically tells me which sites need to have passwords changed for them and when.
It's rather unreasonable to expect sites that know they were not impacted to update their certificates. So unless you want to write off your bank's website for the next year or three until its certificate expires and gets renewed (banks seem to have avoided this one; suddenly, dawdling behind the bleeding edge doesn't look so bad!), scorched-earth policies are a bit much.
Actually I might even say the opposite; if the site is secure and the certificate is older than 4/7/2014, that suggests the site was not impacted. If the certificate is newer than 4/7/2014, that pretty much guarantees the site was impacted. It is possible the site patched openssl and did not renew the cert, but in general people are not going to do one without the other.
I'm 100% positive that many targets have had their keys extracted, but it's hard-to-impossible for the attacker to choose what fragment of memory the server returns, and it depends heavily on the server in question. What works against nginx won't work against lighttpd or apache.
I hit a site I control repeatedly yesterday and couldn't even get any common byte-arrays in common across hundreds of connections.
Of course, as good practice, all organizations should treat their keys as compromised and issue new ones.
Also, his "it leaves no trace" is a problem: it's trivial to recognize the traffic pattern.
And that's why every single login system should have two-factor auth -
I started using google's authenticator app for my google and github account and it works just great.
I wish I could use it for every account I have.
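For the curious, the authenticator app mentioned above implements TOTP (RFC 6238): a shared secret plus the current 30-second time window, fed through HMAC-SHA1. A minimal sketch (the function name is mine):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both sides derive the same short-lived code from the shared secret, so a leaked password alone isn't enough to log in (though a Heartbleed-style leak of the server's stored TOTP secrets would of course defeat this too).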
Sorry, I didn't mean to say that TFA was the solution, but I think login info leak is one of the consequences of heartbleed, right? So my point was just, at least with TFA, the risk of having your accounts stolen is reduced.
Just a question (and I don't know too much about this). Is there any chance that certificate authorities who give out warranties could actually have to pay out on them now? Do any of them use OpenSSL?
Without looking at the specifics, the CA can't be held responsible for you leaking the key yourself.
Or do you mean if the CA companies themselves were compromised? That's a big separate issue. Even if the web process is the one that generates the keys (I'm skeptical, but it's possible), any keys made that way would quickly be moved out of memory, unless they were made that day.
I think the warranty only covers losses that occurred during the use of the certificate. If it wasn't limited liability, Heartbleed could have caused a "Lehman Brothers" style default for all CAs.
Is there really a point in changing keys and passwords? It seems to me that if an attacker got the passwords, I should assume they have already installed a rootkit on my server?
I'm honestly not sure how to react. I'm not really a sysadmin, but I have a server online.
I suppose I could start a new server, but how can I be sure that the provider has already patched all their holes? If they've been hacked, maybe the images they use for preparing new servers have been compromised, too? Might be better to wait a little before restarting everything from scratch?
Request for clarification from those who understand the bug's workings:
The memory it can expose is limited to that visible to the process using openssl, right? Or does the bug reside low enough in the kernel stack to disregard memory protections?
One question - this keeps talking about attackers being able to "read all of memory." Does anyone know whether that's limited to the process that is running OpenSSL code?
If you're not using SSL right now, there's no rush to upgrade, but do it anyway while this is in the forefront because when you do use SSL one day on your server, you might forget that you had this old version of OpenSSL.
And there may be other things besides web servers using OpenSSL that you didn't think of or aren't aware of.
For example, I believe that using curl to fetch an https URL leaves you open to this vulnerability if you connect to a malicious server. The odds of the server being bad and the odds of curl containing anything of value are low, but it still counts for something.
At this point it's safer to say that an intelligence agency is responsible than that they aren't responsible. This is precisely what Schneier, Greenwald, et al. mean when they say that the NSA tactics degrade the security of the overall internet architecture. It's incredibly dangerous.
How can you make such a claim? Do you have any proof that they were involved with this specific bug?
I get that the NSA is after us, but when you consider that the bug is of the exact same class as bugs every C programmer has made at some point in their career, it seems probable that it happened by accident. Where do you see the malicious intent?
We know from the NSA's own files that they pour lots of money into programs that specifically introduce bugs like this [1]. Do the current batch of leaked documents outline this exact vulnerability? No. Does it perfectly fit the model of what the NSA's own files say they're doing and therefore make it a high-likelihood explanation? I think so. At the very least the assumption that an intelligence agency, such as NSA or GCHQ, is responsible is a useful way of thinking about the kind of adversaries one faces with this type of software.
I don't think the NSA had to create this bug. I do think that static analysis could find this bug, and if they did not have the static analysis tools to do so, they probably will soon enough. It is far better for them to find existing bugs than introduce new ones, because the former is untraceable. The latter, inevitably, leaves a paper trail. I'd need a lot of convincing to believe that OpenSSL is so darned secure that it has no bugs in it until the NSA adds them, just based on the software engineering practices of the product (using C, little test code, etc).
The NSA has two mandates. First, it is to ensure that Americans are using secure communication channels. Second, it is to collect data. When these two things come into conflict they have the authority to make a decision. For centralized communication channels, for example, they will often help beef up security in exchange for the ability to wiretap.
If this bug was not caught by the NSA, then they are incompetent, something that I've rarely seen levelled at them as of late, but it is possible. If this bug was perpetrated by the NSA, then they are evil, because they are exposing NATO and other allied countries to foreign attacks and corporate espionage.
Given the stakes, what they've said in the PRISM slides, and their history, I'd say evil is more likely than incompetent.
Nobody caught this bug over two years (supposedly). Stranger things have happened.
Also, while the NSA might have wanted to create this bug to exploit it, you still haven't shown that they created this bug. They might have known about it and exploited it, but saying they put the bug in the first place is a very strong claim.
I'm not totally convinced that they even knew about it. If they had known about it, why bother going to the courts to coerce Lavabit to give up their SSL keys?
Any intelligence agency will prioritize source protection. Recall the efforts of the Ultra program in WWII: they sent out dummy reconnaissance planes to prevent the Germans from suspecting that Enigma had been broken. It's possible that the government already had the communications that Lavabit was trying to protect, but were putting on a public show so as not to reveal the brokenness of OpenSSL. See also "parallel construction".
Wasn't that the FBI? If there were other means (blank-cheque warrants) to achieve their goals, it seems unlikely that the NSA would loop those agents in.
Plus, would evidence obtained that way, without a warrant, even be admissible in any court? I mean, I guess this all went down after Snowden had left the country? So I'm not sure a court date was the end goal anyway. I'm fuzzy on the details/chronology.
It doesn't matter whether they created it. If your operations would suffer from the NSA having had access to this anywhere between 1 day and two years ago, then you have to assume they took advantage of it whether they created it or not, and you have to execute any damage control and mitigation procedures that you've created. I.e. assume your data has been read and used, and act accordingly.
We have no proof, but as such we have to assume that it's an NSA-directed compromise and take appropriate precautions, unless and until we have proof otherwise. That's why the NSA's actions really hurt.
This isn't a criminal proceeding . . . "preponderance of evidence" might be a more appropriate criterion. People are weighing whether they believe NSA malice or coder error are more likely in this situation.
That's an interesting thought. Some terrorist is in the US and planning something, but they don't want to give away their intel. So do a bit of parallel construction and tip off the local cops to some relatively small crime he's committed in the course of everything....
Or substitute "whistleblower" or "inconvenient politician" for "terrorist" if you prefer.
I didn't downvote you, but I expect those who did aren't reading it as "a question" as in "a request for information" - which I would think should rarely be downvoted - but as "a rhetorical question" whose purpose was to serve as a point of argument. Interpreted that way it seems to be attacking a strawman, poorly - such a comment would be deservedly downvoted.
I personally incline toward: memory safety in C is hard. We have had enough of these bugs pop up on their own without any need for encouragement. Whether interested parties knew of it and used it as a key to an all-you-can-eat intelligence buffet is another story.
Memory safety in C is hard, but this bug, a memcpy with a user-supplied, unchecked length? This is stuff I learned about in my first serious class that involved C, and it wasn't even security-related. C is a language where you code defensively at almost all times, yet this was ignored in the SSL implementation, a project built around communicating with untrusted users? This is exactly the situation where you really can't trust things like lengths. Either it's incompetence or shilling, both of which are harrowing.
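To make the class of bug concrete: the heartbeat handler built its response using a length field taken straight from the attacker's request, never comparing it to the number of payload bytes actually received. A toy model (sketched in Python rather than C, with "process memory" simulated as a single bytes object; all names are mine):

```python
# Simulated process memory: the 5-byte heartbeat payload we sent,
# followed by unrelated secrets that happen to sit next to it.
MEMORY = b"hello" + b"|user=admin;session=deadbeef"

def heartbeat_buggy(memory, offset, actual_len, claimed_len):
    """Echoes claimed_len bytes without ever checking it against
    actual_len -- the Heartbleed pattern. An inflated length field
    leaks whatever memory sits beyond the real payload."""
    return memory[offset:offset + claimed_len]

def heartbeat_fixed(memory, offset, actual_len, claimed_len):
    """The shape of the real fix: silently discard any request whose
    claimed payload length exceeds what actually arrived."""
    if claimed_len > actual_len:
        return b""
    return memory[offset:offset + claimed_len]
```

In the actual C code the buggy path boiled down to a single unchecked memcpy with the attacker-supplied length, which is exactly why the parent calls it a first-course bug.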
Incompetence on the part of the website companies that didn't pay the money to hire people to make sure that a piece of their critical infrastructure was up to the task? Yes, I agree.
Nah, I am extremely pro-Snowden & extremely anti-NSA... but I'm also a person that enjoys programming in C.
C is hard. I really think this was just a bug. What _is_ possible, though, is that the NSA has known about this bug for a while and kept it secret. But then if they knew about the bug, why was nasa.gov vulnerable? I would not expect any .gov domains to be vulnerable unless to create plausible deniability - but this kind of conspiracy logic has no end.
What's critical about nasa.gov? It's just a marketing facade. Not patching it means nothing.
I'm not saying that NSA did/didn't do X. Just that the above about .gov domains is not a valid argument. Intelligence gathering trumps trivial service to citizens.
"Safer" is an interesting adjective: its meaning depends on your threat model. Are you more worried about underestimating the already-tarnished honor of a secretive federal agency, or about being screwed over by that same agency? I personally care more about the latter.
Windows doesn't use OpenSSL, and Schneier's host is using Apache. I assume that he must not have direct control of his host or for some other reason cannot immediately upgrade/restart.
Interesting point for me is that those who serve over HTTP only are not vulnerable, which I think is a good case study in the risk of complexity. A lot of security experts have been calling for HTTPS everywhere, on the basis that it is low cost. Clearly there is a cost to the extra complexity, and in this case a bug in the security layer that results in a worse situation than if there had been no security layer at all.
> Interesting point for me is that those who serve over HTTP only are not vulnerable
...yes they are. You don't even need remote private key disclosure to MITM an http-only server. The way HTTP digest is written means you either store all passwords in a retrievable form or drop down to basic auth where anyone capable of base64 decoding can read all passwords.
This situation is by no means worse than if everyone had just used plain HTTP.
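The base64 point is worth making concrete: Basic auth is an encoding, not encryption, so anyone who can observe the header has the password. A quick sketch (the function name is mine):

```python
import base64

def read_basic_auth(header):
    """Recover the credentials from an HTTP 'Authorization: Basic ...'
    header. No key, no secret -- base64 merely re-encodes the bytes."""
    scheme, _, token = header.partition(" ")
    if scheme != "Basic":
        raise ValueError("not a Basic auth header")
    user, _, password = base64.b64decode(token).decode("utf-8").partition(":")
    return user, password
```

Over plain HTTP any on-path observer can do this, which is the point above: HTTP-only sites were never safe against interception to begin with; Heartbleed changed who can attack whom, not whether attack was possible.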
It is worse than plain HTTP, actually. Heartbleed allows an attacker anywhere on the internet to read out memory from your server. This is worse than plain HTTP in two ways:
- With plain HTTP, the attacker would have to be in a MITM position to intercept traffic. With Heartbleed, he can read traffic he wouldn't normally have access to from the server's memory.
- There may be secrets in memory that would never even be sent over the network that are now accessible. For example, if you're running a web app in the same process doing SSL termination, secrets such as Django's SECRET_KEY may be available. Under certain situations, knowledge of the SECRET_KEY can effect remote code execution.
In short, Heartbleed gives the entire world the ability to read memory from your server. This is much worse than an HTTP MITM.
Here's an example where Heartbleed is much, much worse.
I run an HTTP-only server that's read-only for the public. However, I have an admin interface that lets me log in and make changes to the site. For example, a WordPress blog.
Now, let's say that I run the server at home and I'm careful only to log in as an admin when on the home LAN. This is perfectly secure even though I'm only using plain HTTP.
If I decided to instead serve HTTPS, the heartbleed vulnerability means that anybody could potentially hijack my sessions, steal my passwords, and edit the site. Depending on how much password reuse is going on, they could own the entire box, or just put malicious code on the site for all my visitors to run into.
MITM is only one type of attack, which is largely irrelevant to public sites serving non sensitive information. But server memory is always private and could assist in escalation of a further attack.
That comparison is off. HTTPS secures communications; this bug exposes memory. You still want the former even without the latter, and HTTP doesn't offer it. Minimizing complexity is a worthwhile goal, but definitely not the only one; otherwise we could simply opt not to have communications that need securing at all.
Sometimes you don't actually care about the first but still care about the second. In cases like that, HTTPS with heartbleed makes you much worse off than plain HTTP.
Yes if you add that qualification your point is true but still not very compelling. The number of gratuitous HTTPS installations is low, I expect, given the hassle required to properly install it, so the ones that are there are probably there because secured communications were deemed necessary.
On the other hand, there is nothing inherent in HTTPS which makes it vulnerable to remote exploits such as these, anything in the stack could have such a vulnerability. Therefore I think it is misleading to argue that we're worse off with HTTPS--the same argument could be made for Apache, PHP, Linux, Windows, etc., as they've all contained vulnerabilities.
Lots of places use "gratuitous" HTTPS. This is particularly striking to me because I enabled HTTPS on my own site despite not needing it at all, and subsequently got caught up in this bug. Google.com could be another example. It's not strictly gratuitous because you can log in there, but in theory the search functionality could be partitioned off, and it did use plain HTTP for a very long time. For a more fitting example, DuckDuckGo uses HTTPS but doesn't, as far as I know, offer accounts or anything else that strictly needs security. They do it for the privacy of your searches, but if they were vulnerable to Heartbleed then they could have ended up exposing much more than they protected.
I don't think the argument is that you're worse off with HTTPS in general, only that in certain circumstances you were worse off in this particular case, and that should cause at least some consideration for the increased attack surface incurred in enabling HTTPS when you don't otherwise need it.