
Hah. I had some time to think about that interaction and concluded (after a chat with a few peers at AppSec Cali) that he and I simply differ in mentality. I understand where he's coming from about the risks posed by that level of flexibility in the execution environment, but I'm more inclined to argue that making encryption more convenient for everyone justifies taking some risks there, so long as we can (to the best of our knowledge) sufficiently mitigate them. That, and the browser sandbox contains a compromise far better than someone breaking a native app running in userspace.

His objection might be at a more visceral level, which I understand but steadfastly disagree with.



You guys seem to remember more about "this interaction" than I do. Someone want to bring me up to speed?


There was a thread a couple months back where I brought up WebSign (Cyph's TOFU in-browser code signing layer) in response to a story about MEGAChat, which you generally disapproved of but didn't go into detail on why. There was also some related back-and-forth between you and eganist.

My impression was that you hadn't necessarily had the chance to fully digest the architecture due to time constraints on your end, and were coming more from a position of "the premise of this is scary and it's almost definitely broken in some way" than of having identified a specific flaw. Also, as eganist noted here, you may simply weigh the drawbacks of the flexible execution environment (despite Cyph's partial mitigations: TypeScript for static typing and asm.js for pseudo-native crypto) against the benefits of the well-vetted browser sandbox differently than we do.
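
Roughly speaking, the TOFU part of such a scheme can be sketched as follows (an illustrative Web Crypto example; the package format, key handling, and ECDSA/P-256 choice are placeholder assumptions of this sketch, not the actual WebSign implementation):

  // Illustrative TOFU sketch: pin the package-signing key on first use,
  // then require every freshly downloaded package to verify against it.
  async function loadVerifiedPackage(url: string): Promise<string> {
    const {packageText, signatureBase64, publicKeyJwk} =
      await (await fetch(url)).json();

    // Trust On First Use: remember the signing key the first time we see it.
    let pinnedJwk = localStorage.getItem('pinnedSigningKey');
    if (pinnedJwk === null) {
      pinnedJwk = JSON.stringify(publicKeyJwk);
      localStorage.setItem('pinnedSigningKey', pinnedJwk);
    }
    else if (pinnedJwk !== JSON.stringify(publicKeyJwk)) {
      throw new Error('Signing key changed; refusing to run package.');
    }

    // Verify the freshly downloaded package against the pinned key.
    const key = await crypto.subtle.importKey(
      'jwk',
      JSON.parse(pinnedJwk),
      {name: 'ECDSA', namedCurve: 'P-256'},
      false,
      ['verify']
    );

    const signature =
      Uint8Array.from(atob(signatureBase64), c => c.charCodeAt(0));

    const valid = await crypto.subtle.verify(
      {name: 'ECDSA', hash: 'SHA-256'},
      key,
      signature,
      new TextEncoder().encode(packageText)
    );

    if (!valid) {
      throw new Error('Bad package signature; refusing to run package.');
    }

    return packageText;
  }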


Oh wow, is Cyph the one with the elaborate caching scheme where they invalidate the certificate once a week to prevent the package, once downloaded, from ever reloading itself off the website?


Yep, that's it. (To clarify, the signed package is freshly downloaded each time; it's just the root of the origin of the application that's pinned via "HPKP suicide" invalidation.)
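
To make the pinning half concrete, serving such a header could look something like this minimal Node/TypeScript sketch (the pin values, key file paths, and one-week max-age are placeholders, not the real configuration):

  // Sketch of the pinning side: a TLS server that sends an HPKP header.
  import * as fs from 'fs';
  import * as https from 'https';

  const server = https.createServer(
    {
      cert: fs.readFileSync('cert.pem'),
      key: fs.readFileSync('key.pem')
    },
    (req, res) => {
      // Pin the current key plus a backup key. Under "HPKP suicide", the
      // server later discards the pinned private keys, so browsers that
      // saw this header refuse any new TLS connection to the origin until
      // max-age expires, and nothing new can be served from it.
      res.setHeader(
        'Public-Key-Pins',
        'pin-sha256="BASE64_SPKI_HASH_OF_CURRENT_KEY="; ' +
        'pin-sha256="BASE64_SPKI_HASH_OF_BACKUP_KEY="; ' +
        'max-age=604800'
      );
      res.end('ok');
    }
  );

  server.listen(443);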


Look, it's clever. I mean it. I'm not saying the people that came up with that insane scheme were dumb to do that. It would be a good research project!

But I can't imagine relying on that janky set of side effects for my own personal safety, and I can't recommend that anyone else do that either.

If you want to use continuity (aka 'TOFU') for security and you insist on getting your crypto in the form of a browser JavaScript application --- which is a bad idea for other reasons that I hope will become clearer later this year --- then you should just use an installable, non-auto-updatable Chrome application. People shouldn't play games with message crypto.

I really don't think doing message crypto in browser JavaScript is going to work out. Or, if it does, it will work out only after 5-10 years of semiregular hair-on-fire security emergencies that the world's most repressive governments will tend to know about a year or two before the rest of us do.


Fair enough! I'm comfortable with it in terms of security given that HPKP is at least a security feature that makes certain guarantees, but I fully acknowledge that:

1. A future update to browser implementations and/or Web standards could hypothetically break this in terms of availability, DoSing/bricking Cyph for all of our users; and

2. If the server were to violate users' trust by not actually deleting the old TLS private keys, it wouldn't be as outwardly apparent as if it had instead needed to serve malicious code to the client, which is at least a little gross.

Ultimately, I think the value of this is in making sure that initial user onboarding is as smooth and easy as possible (without being totally insecure), with the next step on our end being to offer an optional browser extension to fortify the scheme in a less experimental manner.

> which is a bad idea for other reasons that I hope will become clearer later this year

Well, that sounds rather ominous... Are you referring to something specific, and if so is it something you would be at liberty to share (either here publicly or out of band)?


> 1. A future update to browser implementations and/or Web standards

For some reason I hadn't really considered that all modern browsers are self-updating, so in a standard setup, all web apps are fundamentally insecure. All it takes is for an agency to steal a code-signing certificate (or get one somehow) - and all efforts are defeated.

While bootstrapping based off of GPG signatures has its own problems (how do you verify the install ISO if you can't know that you have a correct copy of gpg/pgp on hand, and can't (and shouldn't) trust the CA system, which rules out SSL/TLS?), at least you can narrow things down a lot. All of the same issues, and more, apply to all other software: Mozilla can backdoor your Firefox, Google can backdoor your Chrome, Microsoft/Apple can backdoor your IE/Safari and your kernel ... and of course Debian can backdoor your kernel too. But with an open system it's much easier to control updates, even if actually verifying each and every update is beyond most people. (And to be fair, someone could steal Debian's signing keys at least as easily as they could steal Microsoft's, and in the case of a targeted attack, I find it unlikely that a kernel backdoor in a signed Debian/RedHat/etc update sent to only a handful of machines would ever be discovered.)
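
For illustration, that kind of bootstrapping usually amounts to something like the following sketch (TypeScript shelling out to gpg and sha256sum; it assumes the signing key was obtained and checked out of band, and uses Debian-style file names as placeholders):

  // Verify the GPG signature on the checksum file, then check the ISO
  // against it. Assumes gpg and sha256sum are installed and that the
  // signing key was already imported and verified out of band.
  import {execFileSync} from 'child_process';

  // 1. Check that SHA256SUMS was signed by a key we already trust.
  execFileSync('gpg', ['--verify', 'SHA256SUMS.sign', 'SHA256SUMS'], {
    stdio: 'inherit'
  });

  // 2. Check that the downloaded ISO matches the signed checksum list.
  execFileSync('sha256sum', ['--check', '--ignore-missing', 'SHA256SUMS'], {
    stdio: 'inherit'
  });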

[ed: I guess the reasons to put less trust in browsers than in other OS updates might be that a) they add a second set of things to worry about (the browser runtime, etc.) beyond the kernel and the rest of userspace, and b) browsers are updated very frequently, so the opportunity to slip in a backdoor comes up often (assuming one wants to install it subversively; sending a custom update to a handful of clients at the same time as a general update is out).]


That isn't exactly a Web-app-specific problem. If you have native software running malicious code with user-level access (whether it's in your browser or any other random update from your package manager), you should consider the whole machine compromised.


> which is a bad idea for other reasons that I hope will become clearer later this year

You're speaking with two people involved with the project, so if you're able/comfortable talking about it in more depth, here's my PGP key ID: 0x4D4C724C4BFB3E3F



