STARTTLS was never intended to thwart MITM, however, and we need to keep that in mind. It provides a backwards-compatible way to start a secure channel, under the assumption that an attacker can eavesdrop but not manipulate the contents of the channel. In that regard it is some measure of an improvement.
For the record I do not think it is a final solution (what is?). I do often have mixed feelings about 'the perfect being the enemy of the good'. With STARTTLS my feelings aren't as mixed: a measurable improvement against passive surveillance for minimal changes and no new infrastructure. Swell.
Again, I'm not going to endorse it as a panacea - but it's never advertised itself as one.
Let's keep using it until there's something better. And let's get furious at ISPs that strip it (or modify our traffic in any significant way).
TLS is just a new name for SSL from version 3.1 onwards. It's much more secure than those older SSL versions.
STARTTLS, a protocol used to negotiate SSL/TLS in some plain text protocols, is problematic if it isn't enforced. Some software stupidly abbreviates STARTTLS to TLS in the GUI, which is a source of constant confusion.
The window from disclosure of patches to their duplication as working exploits is narrowing, and it appears from the bulletin that client connections are affected as well. Furthermore, any computer you take anywhere outside your home router (and can you really trust your home router as a security boundary nowadays?!) will be easy to manipulate into an SChannel connection. Inside your home network, clients are still vulnerable to attack - any javascript/flash ad/referrer can point a computer behind a router at an attacker server and serve up malicious SChannel packets. That is to say, your home computer can be attacked on outgoing connections, which your router will be happy to allow.
> Timing attacks are often the result of optimisations within the crypto library which inadvertently give away information, for example a loop which breaks on X != Y, instead of setting a failed = false bool and continuing to iterate through the rest of the array.
I would say this is false. Simple differences in time caused by cache line ejection in table-lookup implementations of AES provide a very strong timing attack. (http://cr.yp.to/antiforgery/cachetiming-20050414.pdf)
In RSA (and in fact in DL-based cryptosystems), modular exponentiation without extreme care leaks tons of timing information about private exponents. 'Blinding' is one way to handle this, but performant solutions typically fiddle at the bit level and exploit CPU guards and features to minimize branch prediction/cache line/etc. leaks.
In higher-level languages, absolute control and care of crypto implementations cannot be taken, and the JIT layer adds another layer of obfuscation (though I know of no attack employing that...).
The out for memory safe languages is to provide built in crypto operations that have been implemented at a lower level.
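For what it's worth, the 'blinding' countermeasure mentioned above can be sketched in a few lines of Python. The RSA parameters below are toy values invented for illustration (far too small for real use), but the structure is the standard blind/exponentiate/unblind dance:

```python
import secrets
from math import gcd

# Toy RSA parameters, invented for illustration -- far too small for real use.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

def decrypt_blinded(c: int) -> int:
    """Compute c^d mod n with base blinding: the secret-exponent
    exponentiation runs on a value the attacker cannot predict, so its
    timing no longer correlates with the attacker-chosen input c."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (c * pow(r, e, n)) % n          # blind the ciphertext
    m_blinded = pow(blinded, d, n)            # exponentiation on blinded value
    return (m_blinded * pow(r, -1, n)) % n    # unblind the result

c = pow(65, e, n)                 # encrypt a toy message
assert decrypt_blinded(c) == 65   # blinding does not change the result
```

Blinding hides the input from the timing channel, but note (per the countermeasures discussion below) it does nothing about cache-line leaks from table lookups.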
I'm curious now: how do AES implementations nowadays avoid the timing attack explained in that paper? From what I understood, it's very hard to write an efficient AES implementation without using input-dependent table lookups.
For the most part by bitslicing. Some implementations calculate the S-box explicitly using the algebraic relationships in the finite field but doing so is awfully slow.
I should add here that I met an incredibly intelligent young man named Julian, from Dartmouth and doing some work with MIT, who is proving with Coq and a model of a CPU that his implementation of cache lookups for (various) crypto algorithms results in exactly the same line patterns, and that the number of CPU ticks is similarly invariant. Some people go the extra mile.
You say it's "false" but then fail to explain why. None of your examples offer that, and your whole explanation boils down to "managed languages are more complex, therefore worse."
Please point me to the specific native features which mitigate timing attacks. Because the majority of fixes I have seen are purely in altering the libraries themselves using high level constructs to remove hot paths and make it so both failure and success state take a constant time to execute (which has nothing to do with managed/unmanaged code).
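The constant-time fix described here can be sketched in a few lines. This is an illustrative Python sketch only - per the surrounding discussion, an interpreted/JIT runtime adds its own timing noise, and real code should use a vetted primitive like the stdlib's `hmac.compare_digest` rather than rolling its own:

```python
import hmac

def leaky_compare(x: bytes, y: bytes) -> bool:
    """Early-exit compare: runtime depends on where the first mismatch is."""
    if len(x) != len(y):
        return False
    for a, b in zip(x, y):
        if a != b:
            return False  # breaking early leaks the mismatch position
    return True

def constant_time_compare(x: bytes, y: bytes) -> bool:
    """Accumulate differences so every byte is always inspected."""
    if len(x) != len(y):
        return False
    diff = 0
    for a, b in zip(x, y):
        diff |= a ^ b  # no data-dependent branch inside the loop
    return diff == 0

# The stdlib already provides a vetted version of the same idea:
assert hmac.compare_digest(b"secret", b"secret")
assert not constant_time_compare(b"secret", b"secreT")
```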
The issue isn't that native code has special features that mitigate timing attacks. It's that you can look at native code and predict its side effects more easily than you can with high-level code.
Another important difference between native code and high-level code is that timing leaks in high-level code tend to be larger. For instance, it's very difficult to exploit a memcmp timing leak in practice. But Java's string comparison, depending on your JVM, is exploitable over the Internet.
For what it's worth: I wouldn't select C over Java simply to avoid timing attacks. Side channels in JVM code are a legit concern, but not a dispositive one.
If you're just talking about me then fair enough - the problems I've publicly identified in TLS are pretty much edge cases. Thomas on the other hand has a pretty good track record if you'd care to check.
> whole explanation boils down to "managed languages are more complex, therefore worse."
I hope that's not what I said...
> Please point me to the specific native features which mitigate timing attacks.
How am I supposed to implement bitslicing to vectorize operations in Java? I can't. Fine grained control of code is important for implementations of ciphers that are both fast and side-channel free. Fine grained control isn't something Java can give you, by definition.
I count exactly two countermeasures that apply to high level languages. Of the first they say "We conclude that overall, this approach (by itself) is of very limited value" and of the second "beside the practical difficulties in implementing this, it means that all encryptions have to be as slow as the worst case... neither of these provide protection against prime+probe/etc".
The rest of the countermeasures suggest bitslicing, use of direct calls to hardware instructions, memory alignment tricks, invocation of hardware modes (i.e. to disable caching), forcing cache ejections, normalizing cache states on interrupt processing, etc.
It is purely the case that high level languages do not offer you the flexibility and control to implement side-channel free crypto.
Crypto is brittle. High level languages are awesome for so many things. But bitslicing isn't one of them. The entire premise of high level languages is that you are freed from working directly on the innards pertinent to the specific target architecture. The entire premise of side-channel free crypto is that you need visibility and control of exactly these things.
The overall question is whether bindings or language features that expose direct control of the underlying architecture (such as D) can still be used to implement crypto. The answer is likely yes, though it is uncharted territory that only someone who knows what they are doing should attempt.
> Net neutrality isn't a blanket term for "anything the government does relating to the Internet", it's focused on a specific issue.
Of course.
> I guess you could argue that net neutrality would make it easier to pass pro-surveillance laws, but I'm having a difficult time connecting the two.
The argument wouldn't be that net neutrality itself would make it easier. The argument would be that the regulation - especially if it folds ISPs under telecommunication laws or equivalents - could result in the import of large portions of legislation pertaining to communications access programs.
Much of this has actually already been done at the ISP level and the (surveillance) struggle seems mostly focused on the application layer.
So I guess I might agree that it may not bolster surveillance capabilities - if only for the fact that ISPs have already been mostly captured.
Some here may know me as a critic of overreaching and aggressive cyber enforcement (and related surveillance).
First, I'm quite happy that this activity does not appear to be the result of wide scale infrastructure sabotage.
And I am quite happy that the FBI is doing its job to combat crime that is facilitated using (abusing) the technologies that are bastions for free speech, privacy and whistleblowing.
Of course the flipside is that this means there are capabilities in place to disrupt anonymizing technologies - the technologies make investigation more expensive but ultimately are merely an inconvenience to the powers that be. So when it comes down to it, anonymizing services and Tor can't be trusted to secure you when you have something to say and your life is on the line.
The FBI (/others) wants the court system to replace technology as the gatekeeper to investigation. The court system, however, is brittle. It takes time, it fails, and it responds to external pressure - there are repeated studies showing that the length of time persons in US court systems are sentenced to serve is highly correlated with how long it has been since the presiding judge has eaten his last meal. There are also extralegal rights that law enforcement is given by legislatures and by evolving interpretations of what both these legal and extralegal rights entail.
But law enforcement is also justified from their perspective. They don't want criminals to get away with crimes simply because they load up some software that obfuscates their identities, locations and accounts. If you look at this published list, there are criminal organizations that you and I as taxpayers do want taken down. (I recognize that the sale and consumption of drugs is a greyer area of morality, as drug use is sometimes victimless.)
I think that for the most part law enforcement is capable of taking down these services and organizations in other ways - ordering assault rifles and monitoring the drops - and that this provides opportunities for the government to enforce the law without sabotaging communications infrastructure. Taking down some .onion addresses doesn't do much besides annoy the services for a time anyway, unless the services are not operationally capable of standing up a new address and communicating with customers anonymously.
All in all it's a blurry line but I feel safer with places that are anonymous and secure than I do by trusting a court system and legal process that can only see, process, and be accountable for so much.
On the bright side it leaves us in the same position man has always been in - rather charted territory.
I've been skeptical of Tor et al. from day one. I didn't have provable reasons why, but the court has always served as the gatekeeper to investigation, and the Tors of the world seemed like the sort of hubris we techies are so prone to - "Age-old social justice problems man has struggled with for thousands of years can be trivially fixed with my technology!"
It is my opinion that we (techies) overestimate ourselves. Tor is useful, but it would have to be perfect (which no technology can be) to protect you from the flawed judicial system. Which is why I think we are destined for heartbreak, and the longer we forestall that realization the worse off we will be, for we will ignore the judicial system and allow it to become ever more broken.
As a sidenote I find it bitter satire; people who cannot accept the will of others seeking tools to forcefully impose their own morality on the world instead
I'm rather partial to your comment. Though as a cypherpunk of my own generation, fully knowledgeable of rubber-hose cryptanalysis and the rest, I do hold out hope that some of these technologies balance power and push it into the hands of the benign individual more than they magnify the power of a select or chosen few.
If all technology provides more power to everyone, but unevenly to where more is added at the top than the bottom, then the only thing that is left to defend against power inequality are court systems and forms of mass unrest. I distrust the completeness of the former (we've seen them go bad) and rather dislike the latter.
The pendulum could of course swing too far the other direction into anarchy. This, too, leaves my mouth bitter.
Ultimately I think technologies like Tor aren't so bad. Certainly it is nothing compared to nuclear weapons or personal firearms. Information and communication, while they can aid criminal behavior, are not criminal in themselves. Like has always been the case - long before it was possible to monitor and store information and communication for later introspection - criminal acts are acts in the physical world and they can be investigated there.
> As a sidenote I find it bitter satire; people who cannot accept the will of others seeking tools to forcefully impose their own morality on the world instead
By this do you mean those who can't accept the will of others comprise the judicial system, or those not prepared to submit to it and pursuing alternate avenues? Your comment works either way, but if you're talking about those attempting to place themselves outside the judicial system, that's less them imposing their own morality on the world and more them simply not allowing the world to impose its morality on them.
Mostly the former. Some think they are the latter, but a lot of outright criminals will explain to you how they are actually justified using their own carefully-crafted moral code that always conveniently allows for their behavior. That's what I mean by forcing their own morality on the world.
Interesting point. Nobody thinks they're the bad guy, but some people are only considered the bad guy by the state, rather than almost everybody. That's the latter group to which I referred.
>It is my opinion that we (techies) overestimate ourselves.
What everyone is forgetting is this:
The FBI, NSA, CIA etc. all have techies. And since they would be extremely well paid, it is logical to assume that they are very good at what they do. So anything we can do, they can do - (a) arguably better, and (b) only with the constraint of having to comply with the law.
So? They could be arguably much worse. It doesn't matter. You only need to be good enough to notice asymmetries (such as your (b), or certain mathematical asymmetries as another example) and set them up in your favor. There are plenty in existence for both sides. When the asymmetries are powerful enough, it doesn't matter how well-equipped or intellectually superior your opponent is.
From January 30 to July 4, 2014, someone ran 115 Tor nodes on fdcservers.net (total cost maybe ~$200k?), which amounted to 6.4% of entry guard capacity. Clients talk to 3 guard nodes for an average of 45 days each, which means they probably picked a guard ~12 times during this period. Each guard-picking attempt had a ~6.4% chance of landing one of these bad guards, or a ~55% chance across all attempts.
"We know the attack looked for users who fetched hidden service descriptors... The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service."
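A quick back-of-the-envelope check of that ~55% figure (the 6.4% capacity share and ~12 picks are taken from the numbers above):

```python
# Chance that at least one of ~12 independent guard picks lands on a
# bad guard, given bad guards hold 6.4% of entry guard capacity.
bad_guard_fraction = 0.064
picks = 12

p_all_clean = (1 - bad_guard_fraction) ** picks
p_at_least_one_bad = 1 - p_all_clean
print(f"{p_at_least_one_bad:.0%}")  # prints 55%
```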
I didn't know about those attacks. Very interesting. $200k is chump change for a Tor attack from large organizations. It's interesting to compare that number to the $100k prize offered by Russia. A neat speculation is that better attacks require a few digits more to be extremely effective and that six-digit attacks are at the cost-effectiveness threshold for most national purposes.
By "wide scale infrastructure sabotage" I was trying to refer to QUANTUMINSERT, TEMPORA and other internet-scale mass read and write capabilities. It doesn't look like the FBI had to use those sorts of technologies to interrupt the .onion addresses - I'm really happy about that. First because it shows that law enforcement can fight cybercrime without those tools and second because if they were used proponents/supporters would have championed them as 'necessary' or 'inevitable'.
Bug volume in crypto is extremely high. How many developers reuse IVs in stream ciphers? How many blindly use AES or some such symmetric library and then build in no authentication whatsoever? How many antiquated implementations of RSA are used in practice today (see the recent Bleichenbacher flaw in NSS)? How many times are poor chaining modes for block ciphers chosen? How many implementations of [anything] fail on edge cases (elliptic curves) or massively leak through side channels? How many DH-family protocols miss checks for identity inputs?
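To make the IV-reuse failure concrete, here's a toy stream cipher (a hash-counter keystream standing in for RC4/AES-CTR; the key, IV and messages are made up). Reusing the IV means both messages are XORed with the same keystream, so an eavesdropper can cancel it out and recover the XOR of the plaintexts without ever touching the key:

```python
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Toy hash-counter keystream -- a stand-in for any stream cipher."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(pt, keystream(key, iv, len(pt))))

key = b"0123456789abcdef"
iv = b"reused-iv"                 # the bug: same IV for both messages
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1, c2 = encrypt(key, iv, p1), encrypt(key, iv, p2)

# The keystreams cancel: c1 XOR c2 == p1 XOR p2, leaked without the key.
xor_ct = bytes(a ^ b for a, b in zip(c1, c2))
xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_ct == xor_pt
```

From there, known plaintext in one message (headers, boilerplate) reads straight out of the other.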
You and I mean different things by "crypto vulnerabilities". I took the parent comment to mean things like the RC4 biases; like I said, things for which the "fix" would involve entirely new algorithms or constructions. An example of this kind of NSA disclosure would be the DES s-boxes.
Crypto software implementation vulnerabilities are very common, but the kinds of things you're talking about are most often found in obscure and/or serverside software. Look at the tempo at which bugs like the NSS e=3 bug are released; it's like once or twice a year.
I think implementation bugs are within the spirit of OP, especially provided the NSA claims to have provided an implementation fix for Heartbleed.
The sorts of bugs I'm talking about exist in client and popular software. As far as tempo is concerned this year alone has given us BERserk, gotofail, Android Master Key, OpenSSL fork(), Bitcoin's use of P256, GNUTLS X.509 parsing bug, the OpenSSL compiler optimization+processor family randomness bug, and others.
If we were to entertain OP's point maybe there would be a faster tempo if the NSA were helping out. :)
Right, NSEC3's 'solution' of obscuring zones by signing hashes effectively just renames zones whose labels probably come from some small collection ('www', 'ftp', 'ns', 'smtp', 'ilo'), and it is not secure for the same reason hashing phone numbers is ineffective.
Arguments that administrators can choose names with large entropy would miss the point as it puts an undue burden on administrators and users to use 'bizarre' names.
Further arguments that you can brute force names using normal ol' A records also miss the point. The difference is online versus offline enumeration.
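The offline half of that difference can be sketched as follows. The hash below is a simplified stand-in for NSEC3's real construction (RFC 5155 specifies iterated, salted SHA-1 over the wire-format name; the salt and labels here are invented), but the weakness is the same as with hashed phone numbers - the input space is tiny:

```python
import hashlib

def nsec3ish_hash(label: str, salt: bytes, iterations: int = 10) -> bytes:
    """Simplified stand-in for the NSEC3 hash (iterated, salted SHA-1)."""
    h = hashlib.sha1(label.encode() + salt).digest()
    for _ in range(iterations):
        h = hashlib.sha1(h + salt).digest()
    return h

salt = b"\x12\x34"  # made-up salt; NSEC3 publishes the real one in the zone

# Hashed names harvested offline by walking the zone's NSEC3 chain.
leaked = {nsec3ish_hash(name, salt) for name in ["www", "smtp", "ns"]}

# Offline dictionary attack: hash common labels locally and match -- no
# further queries to the server, no rate limiting, no audit trail.
dictionary = ["www", "ftp", "ns", "smtp", "ilo", "mail", "vpn"]
recovered = sorted(n for n in dictionary if nsec3ish_hash(n, salt) in leaked)
print(recovered)  # prints ['ns', 'smtp', 'www']
```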
> it's based on RSA PKCS1v15
Eww. I had no idea. Next you'll tell me the root key's public exponent is 3. Gross.
> If your most important adversary is GCHQ and NSA, then the Internet is far more threatened by the deployment of DNSSEC than it is by DNSSEC's absence.
Not sure it makes too much of a difference in this case... you can't trust non-signed DNS records if your adversary is NSA/GCHQ, and you can't trust typical PKI to bail you out either.
Six digits sounds about right for a Tor bug for one target, depending on the specifics. The RCE bug used by the FBI recently against the Tor Browser Bundle would have cost something similar, though the payload suspended the process where it could have resumed silently. It's not clear where that exploit was developed (my gut says in house, but who knows?).
IIRC someone analyzed the payload and compared it to a Meterpreter and saw a lot of similarities. Could have been provided (hacked together?) by the person who sold the vuln.
Oh we're not talking trivial bugs or single-site XSS.
Disappointed that 'mediocre' vulns got interpreted in this thread as 'trivial'.
Mediocre doesn't mean trivial, extremely scoped, or useless. Mediocre means that it is for sensitive but not widely deployed software; or for widely deployed software on default config but post-auth or not reliable; or it is reliable and yields high auth but requires pairing with another vuln (i.e. memory disclosure) or extended recon (revision number, etc.).
A MySQL bug affecting recent revisions that causes arbitrary file overwrites with semi-controlled content, but that requires unprivileged (guest) auth, would meet these criteria.
Apologies for the confusion with the word 'mediocre' - I figured people here would know.
In general, organizations in the offensive world will pay more than those in the defensive world. This is not a hard and fast rule, but mostly it is the case that offensive network operations stand to gain more from the use of 0days than vendors stand to lose by not paying for the disclosure to patch them. It's not really a good calculus to use data from vendors' sales to calculate the other.
It's worth five figures to the buyer if they can make five figures or more of value from it.
Not speculating about nation states here, but 'groups': making good money from a post-auth MySQL RCE is not totally absurd - Amazon, Rackspace, HP, Heroku and Jelastic all offer MySQL-as-a-service, where you are given low-privilege (maintained, geo-redundant, etc.) account access to a shared MySQL instance. If there's more than five digits of business value stored in that database, then a five-digit exploit makes sense.
Or think about any of the (poorly written) bitcoin services out there that use default phpMyAdmin creds for a database that also hosts their vault.