
I'm trying to think of a way for Facebook and/or Tinder to mitigate this attack without degrading the user experience. Because the auth token used comes from the response to the last request ever made from Steve's computer, rotating the auth token on each request wouldn't help in this scenario. Restricting an auth token to an IP address wouldn't work either, since both users are presumably behind the same NAT (all devices on the same residential WiFi router) -- not to mention that IP addresses change all the time. Restricting an auth token to a user-agent string would stop the first attempt at this hack, but then someone would simply tell the proxy to mimic the UA of Steve's desktop. But then, could Facebook refuse to honor mobile application authorization requests if the mobile device is mimicking a desktop browser's UA?
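The user-agent-binding idea above can be sketched server-side in a few lines. This is a hypothetical illustration (the names `issue_token` and `validate` are made up, not any real Facebook/Tinder API), and as the comment notes, it only stops the naive replay, not a proxy spoofing the original UA:

```python
import hashlib

# In-memory stand-in for a session store: token -> hash of the UA seen at login.
SESSIONS = {}

def issue_token(token: str, user_agent: str) -> None:
    # At auth time, remember a hash of the client's User-Agent alongside the token.
    SESSIONS[token] = hashlib.sha256(user_agent.encode()).hexdigest()

def validate(token: str, user_agent: str) -> bool:
    # Reject the token if the UA no longer matches what was seen at issuance.
    expected = SESSIONS.get(token)
    return expected == hashlib.sha256(user_agent.encode()).hexdigest()
```

A proxy that rewrites its User-Agent header to match the victim's browser passes this check unimpeded, which is exactly the weakness described above.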


It seems like the most reasonable mitigation would be to prevent Burp Suite from working at all by using SSL cert pinning. (I'm actually pretty surprised they don't do this already -- I know Google pins certs for its own apps in Chrome.)

This, of course, wouldn't completely stop the issue. But it would make the author's job that much harder, since he'd have to emulate the Tinder protocol without the assistance of the Tinder app -- or hack the Tinder app (and run it on a jailbroken device) to disable the cert pinning.


In the story, BurpSuite was used only on the attacker's machine for ease of use. You could also hand-craft the requests using curl.
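To illustrate the point that no proxy is needed once the token is captured, here is a hand-crafted replay sketch using Python's standard library instead of curl. The URL, header name, and UA string are placeholders, not Tinder's real API:

```python
import urllib.request

def build_replay_request(token: str) -> urllib.request.Request:
    # Hypothetical endpoint standing in for the real auth URL.
    req = urllib.request.Request("https://api.example.com/auth")
    # Replay the captured auth token and mimic the victim browser's User-Agent.
    req.add_header("X-Auth-Token", token)
    req.add_header("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X)")
    return req  # urllib.request.urlopen(req) would actually send it
```

The equivalent curl invocation is just the same two headers passed with `-H`; the server has no way to tell this apart from the legitimate client.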

Cert pinning doesn't help when someone installs their own certificate authority. It stops certificates from the other CAs bundled with the browser, but if it also blocked self-installed CAs it never would have gotten off the ground: many organizations demand the ability to use their own certificates for signing things.


Cert pinning in the application for its own server is totally doable -- that's exactly what Google does with Chrome.


If you install your own CA into Chrome, it will overrule the cert pinning that Chrome does. This is very much on purpose.


That wouldn't help either: in this article, the Facebook app is uninstalled prior to authorization to force the Facebook request through the browser. You might be able to get away with HPKP-style certificate pinning (pinning delivered via an HTTP header, like HSTS), but that could likely be cleared out easily (unless it's preloaded... I don't know whether you can clear HSTS preloads in Safari, or whether it even has such a thing). Even then I suspect the authorization could be spoofed somehow, since all of these measures only matter on the attacker's machine.


I wonder if it's feasible to cross-reference the TLS/SSL session against the session cookie? While the HTTPS session itself is probably transient, you can tell something is up if the same session cookie is being used with two different HTTPS keys.


You're describing the concept behind channel bound cookies. https://tools.ietf.org/html/rfc5929

As far as I know, it's not supported by any current browser (I welcome feedback to the contrary), but it is included in SChannel. Given that we've only recently (arguably) gotten away from SSLv3, I don't have high hopes that requiring channel binding will be viable in the near term.


Chrome v24+ does support all you need for channel-bound cookies: it supports TLS Channel IDs (previously known as Origin-Bound Certificates). To actually bind cookies, it is the server's responsibility to extract the channel ID from the TLS/SSL handshake, and bind the cookies to it.
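The server-side binding step described above can be sketched as a MAC over the session ID and the channel ID extracted from the handshake. This is a toy illustration of the concept, not the actual TLS Channel ID wire format; the secret and names are made up:

```python
import hashlib
import hmac

SECRET = b"server-side signing secret"  # placeholder key

def bind_cookie(session_id: str, channel_id: bytes) -> str:
    # The cookie value carries a MAC over (session ID, channel ID), so a
    # request arriving over a TLS channel with a different ID fails to verify.
    tag = hmac.new(SECRET, session_id.encode() + channel_id, hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify_cookie(cookie: str, channel_id: bytes) -> bool:
    session_id, tag = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, session_id.encode() + channel_id, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

A stolen cookie replayed from the attacker's own TLS channel then verifies against a different channel ID and is rejected.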


Do any cloud SSL terminators like Amazon ELB support forwarding the channel IDs on to the application servers (e.g. in a custom header)? For that matter, is there a configuration setting for, say, Nginx if you want to roll your own SSL terminator to do this? I'm having trouble finding good documentation on how to handle this from the server side.


Cool – hadn't heard of that before. I was thinking more along the lines of a purely server-side approach:

  $_REQUEST["salted_SHA_hash_of_symmetric_TLS_key"]
You'd save the current key to a DB, and manually check it in future requests.
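That save-and-check idea can be sketched as follows. A dict stands in for the DB, and the key-hash parameter is assumed to arrive with each request as in the snippet above; this is only an illustration of the detection logic:

```python
# cookie -> salted hash of the TLS session key first seen with that cookie.
KEY_HASHES = {}

def record(cookie: str, tls_key_hash: str) -> None:
    # Remember the key hash from the first request carrying this cookie.
    KEY_HASHES.setdefault(cookie, tls_key_hash)

def looks_hijacked(cookie: str, tls_key_hash: str) -> bool:
    # Flag the session if the same cookie later shows up under a different key.
    stored = KEY_HASHES.get(cookie)
    return stored is not None and stored != tls_key_hash
```

As the replies below point out, the weakness is that this parameter is supplied by the client, so an interceptor can simply rewrite it.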


Well, if you can intercept the request to the server, you can also change that TLS-key-hash parameter.


Is that actually true, though (especially w.r.t. Forward Secrecy)? Don't both parties generate separate halves of a symmetric key independently, preventing any one party from forcing the use of a particular key on a new session?


If each computer had a unique hardware private key, that could stop it. But I'm not sure that they do? (Or even if some do, can HTML5 access that somehow?)


That's the idea of a Trusted Platform Module, which many machines have had for some time. TPM provides a per-device hardware environment for signing and storing keys in a tamper-resistant manner.

HTML5 can't access the TPM directly, but on ChromeOS you can create or import a client certificate as a 'hardware-backed' certificate, which is then wrapped by the device's TPM.

At that point (if properly configured), an attacker can't exfiltrate client certificates from the device even with root-level access to the machine. And in theory, extracting key material from the TPM itself should be difficult thanks to the manufacturer's various physical protections.

Obviously this is a very niche edge-case, but it is possible :)



Another reason: SSL certificates cost money. StartSSL has some free option, though.


The costs-money kind of server SSL certificates and client SSL certificates are two very different things.

Client certificates are generated on the user's machine and signed by a CA you control (not by your server's TLS key). The user's client presents one to your server to prove that the client is who it said it was when you signed its certificate. These certificates don't cost anything beyond some CPU cycles on both sides of the process.

The kind of server SSL certificates that cost money are generated by you and signed by a CA that most users' browsers will trust. Your server presents them to the client to state to the client that the server belongs to the domain it says it belongs to.

Most CAs will charge you money for the service of signing those certificates, but that process has nothing to do with the lack of adoption of client SSL certificates.

The parent article does a good job describing why client certificates aren't used more often: the UX doesn't make sense to users and there's not a user-friendly way to protect them with a second factor (the way you can encrypt your SSH keys using a passphrase or authentication device).


Couldn't the author just copy Steve's private key to his computer then?


Not if it's a hardware key. You give the chip something you want encrypted or signed, but you can't read the actual key itself (the only way to do that would be with an electron microscope).


That said, if you could gain persistent remote access to the computer, you can just repeatedly ask the processor to encrypt things.

This is incidentally part of why the Chromebook design makes it hard to persistently change the machine; a reboot starts from a clean signed image and then mounts a home directory. It's still possible to stick a persistent exploit somewhere in the home directory, but it's not as simple as just dropping a file in /etc/init.


Can't you just check the browser fingerprint and end the session if too many characteristics have changed?


The problem is that the proxy the attacker is using could rewrite the outgoing messages from his phone's web browser to mimic the browser fingerprint of Steve's desktop perfectly.
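A minimal sketch of why fingerprint checks fail: an intercepting proxy can overwrite every fingerprint-bearing header before forwarding the request. The header names and values here are illustrative, and real fingerprinting uses many more signals (TLS parameters, JS-derived attributes), but the principle is the same:

```python
# Headers captured from the victim's desktop browser (illustrative values).
VICTIM_FINGERPRINT = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X)",
    "Accept-Language": "en-US,en;q=0.9",
}

def spoof(headers: dict) -> dict:
    # Forward the phone's request, but with the victim's fingerprint headers
    # substituted in; non-fingerprint headers pass through untouched.
    out = dict(headers)
    out.update(VICTIM_FINGERPRINT)
    return out
```

Anything the server can observe, the proxy sitting in the request path can forge, so fingerprint consistency is at best a speed bump here.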



