Of course, that requires tenants trust Intel's security.
As a security researcher, and given Intel's past showings, I wouldn't put much faith in SGX, even if they try to fix past flaws. SGX as a concept for tenant-provider isolation requires strong security against local attackers, which is something off-the-shelf x86 has never had (never up to contemporary standards), and certainly not in anything Intel has put out. They've demonstrated they have neither the culture nor the security chops to actually engineer a system that could be trusted, IMO. Then there are all the microarchitectural leak vectors that come with a shared-CPU approach like that, and we know Intel have utterly failed there (not just Spectre; there was absolutely no excuse for L1TF and some of the others, and those really showed us just how security-oblivious Intel's design teams are).
Right now, the x86 world would probably do well to listen to Microsoft, since their Xbox division managed to coax AMD into actually putting out secure silicon (Microsoft is one of the two big companies doing proper silicon security at the consumer level, the other being Apple, with Google trying to catch up as a distant third). But given the muted industry response to Pluton, and the poor way in which this is all being marketed and explained, I'm not sure I have much hope right now...
> Of course, that requires tenants trust Intel's security.
I generally agree with you. But I recently realized there might be one use case, and it's pretty much what Signal is doing. They're processing address books inside SGX so that they themselves can't see them. I don't have much faith in the system because I don't trust SGX, of course.
But there is one interesting aspect to this. If anyone comes knocking and tells them to start logging all address books and hand them over, they can say that it's not possible for them to do so.
Anyone wanting to do that covertly would at least need to bring their own SGX exploits, meaning it probably offers SOME level of protection. Certainly not if the NSA wants the data or some LEA is chasing something high-profile enough that they're willing to buy exploits and get a court order allowing them to use them. But it does allow them to respond with "we don't have this kind of data".
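To make the shape of that concrete, here's a minimal, hypothetical sketch of the untrusted host side of such a service. The enclave file name and the ECALL (ecall_lookup_contacts, which would normally be generated from the enclave's EDL) are made up for illustration; only sgx_create_enclave/sgx_destroy_enclave are real Intel SGX SDK calls. The structural point is that the host process loads and drives the enclave but only ever touches ciphertext:

```c
/* Untrusted host side, heavily simplified. Only sgx_create_enclave and
 * sgx_destroy_enclave are real Intel SGX SDK calls; the enclave binary name
 * and ecall_lookup_contacts (normally generated from the enclave's EDL) are
 * hypothetical. */
#include <stdint.h>
#include <stddef.h>
#include <sgx_urts.h>
#include "enclave_u.h"   /* hypothetical: EDL-generated untrusted bridge */

int main(void)
{
    sgx_enclave_id_t eid = 0;
    sgx_launch_token_t token = {0};
    int token_updated = 0;

    /* The host loads the enclave but cannot see or modify what runs inside it. */
    if (sgx_create_enclave("contact_discovery.signed.so", SGX_DEBUG_FLAG,
                           &token, &token_updated, &eid, NULL) != SGX_SUCCESS)
        return 1;

    /* Encrypted address book arrives from the client over an attested channel;
     * the host only forwards ciphertext into the enclave... */
    uint8_t encrypted_contacts[4096];
    uint8_t encrypted_matches[4096];
    size_t in_len = 0, out_len = 0;
    /* ... receive encrypted_contacts / in_len from the network ... */

    sgx_status_t ecall_ret = SGX_SUCCESS;
    ecall_lookup_contacts(eid, &ecall_ret, encrypted_contacts, in_len,
                          encrypted_matches, &out_len);

    /* ... send encrypted_matches back. The host never saw a plaintext number. */
    sgx_destroy_enclave(eid);
    return 0;
}
```

The client would only trust this after remotely attesting that the enclave's measurement matches published code, and that attestation is also what the "we can't hand over this data" argument ultimately rests on.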
Secure enclave as legal defense is an interesting angle, thanks for sharing.
It's become a moral cause to make a lot of big-data computing deniable, to be data-oblivious. This is a responsible way to build an application, it's well-built security, and I like it a lot.
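"Data-oblivious" also has a concrete low-level meaning that matters inside an enclave: control flow and memory accesses mustn't depend on secrets, or the host can read them back out through branch and cache side channels. As a rough illustration (a sketch I'm making up here, not anyone's real code), an oblivious lookup looks like this:

```c
/* Illustrative sketch only: a lookup whose branches and memory accesses are
 * identical no matter which entry (if any) matches, so an observer watching
 * timing, branches or cache lines learns nothing about the query. */
#include <stdint.h>
#include <stddef.h>

/* Branchless select: returns a when cond == 1, b when cond == 0. */
static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)0 - cond;           /* all-ones or all-zeros */
    return (a & mask) | (b & ~mask);
}

/* Constant-time "is zero": 1 when x == 0, 0 otherwise. */
static uint32_t ct_is_zero(uint32_t x)
{
    return 1u ^ ((x | (uint32_t)(0u - x)) >> 31);
}

/* Oblivious lookup: scans every slot and never branches on secret data. */
uint32_t oblivious_lookup(const uint32_t *keys, const uint32_t *values,
                          size_t n, uint32_t wanted)
{
    uint32_t result = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t is_match = ct_is_zero(keys[i] ^ wanted);
        result = ct_select(is_match, values[i], result);
    }
    return result;
}
```

Compilers can sometimes optimize this kind of code back into branches, so real implementations add barriers or drop to assembly, but the shape of the idea is the same.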
I want to spew curse words, because, from what I have been able to comprehend, all the web crypto systems contravene what you & I seem to agree is a moral, logical goal. All are designed to give the host site access to security keys, & to ensure the user-agent/browser has the fewest rights possible. We have secure cryptography, but only as long as it's out of the user-agent/client's control.
We've literally built a new web crypto platform where we favor the cloud 100% for all computation rather than the client, which is about as backwards a dystopia as could possibly be. Everything is backwards & terrible.
That said, we 100% cannot trust most user-agent sessions, which are infected with vast spyware systems. The web is so toxic about data sharing that we have to assume the client is the most toxic agent, & make just the host/server responsible. This is epically wrong, & pushes us completely backwards from what a respectable security paradigm should be.
Hi, chiming in to double down on this, as the downvotes still keep slowly creeping in.
In most places, end-to-end security is the goal. But we've literally built the web crypto model to ensure the end user reaps no end-to-end benefit from web cryptography.
The alternative would be to trust the user-agent, to allow end-to-end security. But we don't allow this. We primarily use crypto to uniquely identify users, as an alternative to passwords.
This is a busted, janky, sorely limited way for the web to allow cryptography into the platform. This is rank.
The Nitrokey security key people saw this huge gap, & created a prototype/draft set of technologies to enable end-to-end web encryption & secure storage with their security keys. https://github.com/Nitrokey/nitrokey-webcrypt
The basic idea is that you can play with the clock speed and voltage of one ARM core using code running on the other. They used this to make the AES hardware block glitch at the right time. The cool part is that, even though the key is baked into the processor and there are no data lines to read it out (other than through the AES logic), this lets them infer the key.
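For a sense of what that looks like from the attacking core, here's a heavily simplified, hypothetical sketch of the glitch loop. The register addresses, regulator codes and delay constants are all invented for illustration; the real attack has to search for working values empirically, per chip:

```c
/* Hypothetical sketch of a cross-core clock/voltage glitch. Every address
 * and constant below is made up for illustration. */
#include <stdint.h>

#define VICTIM_CLK_CTRL   ((volatile uint32_t *)0x40021000u) /* hypothetical clock-divider reg */
#define VICTIM_VOLT_CTRL  ((volatile uint32_t *)0x40022000u) /* hypothetical core-voltage reg  */

#define VOLT_NOMINAL     0x28u  /* made-up regulator codes */
#define VOLT_MARGINAL    0x24u
#define CLK_NOMINAL      0x02u  /* made-up divider values */
#define CLK_TOO_FAST     0x00u

/* Runs on the attacker-controlled core while the other core drives the AES engine. */
static void glitch_once(uint32_t delay_cycles, uint32_t glitch_width)
{
    /* Undervolt a little so the victim core's timing margins are already thin. */
    *VICTIM_VOLT_CTRL = VOLT_MARGINAL;

    /* Busy-wait until the victim is (hopefully) mid-way through the AES rounds. */
    for (volatile uint32_t i = 0; i < delay_cycles; i++) { }

    /* Briefly run the clock too fast: one AES round misses its setup time and
     * produces a faulty intermediate value. */
    *VICTIM_CLK_CTRL = CLK_TOO_FAST;
    for (volatile uint32_t i = 0; i < glitch_width; i++) { }
    *VICTIM_CLK_CTRL = CLK_NOMINAL;

    *VICTIM_VOLT_CTRL = VOLT_NOMINAL;

    /* Comparing many faulty ciphertexts against correct ones (differential
     * fault analysis) then recovers key bytes, even though the key itself is
     * never readable over any bus. */
}
```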
Hmm. The paper is 5 years old. I still think we are a decade away.
That's one reason why most TrustZone implementations are broken: usually the OS has control over all this clocking stuff. It's also one way the Tegra X1 (Switch SoC)'s last remaining secrets were recently extracted.
It's also how I pulled the keys out of the Wii U main CPU (reset glitch performed from the ARM core). Heh, that was almost a decade ago now.
That's why Apple uses a dedicated SEP instead of trying to play games with trust boundaries in the main CPU. That way, they can engineer it with healthy operating margins and include environmental monitors so that if you try to mess with the power rails or clock, it locks itself out. I believe Microsoft is doing similar stuff with Xbox silicon.
Of course, all that breaks down once you're trying to secure the main CPU a la SGX. At that point the best you can do is move all this power stuff into the trust domain of the CPU manufacturer. Apple have largely done this with the M1s too; I've yet to find a way to put the main cores out of their operating envelope, though I don't think it's quite up to security standards there yet (but Apple aren't really selling something like SGX either).
You trust the security of your CPU vendor in all cases. SGX doesn't change that. If Intel wanted to, they could release a microcode update that detects a particular code sequence running and then patches it on the fly to create a back door. You'd never even know.
"SGX as a concept for tenant-provider isolation requires strong local attacker security, which is something off the shelf x86 has never had"
Off the shelf CPUs have never had anything like SGX, period. All other attempts like games consoles rely heavily on establishing a single vendor ecosystem in which all code is signed and the hardware cannot be modified at all. Even then it often took many rounds of break/fix to keep it secure and the vendors often failed (e.g. PS3).
So you're incorrect that Intel is worse than other vendors here. When considering the problem SGX is designed to solve:
- AMD's equivalents have repeatedly suffered class breaks that required replacing the physical CPU almost immediately, due to simple memory management bugs in firmware. SGX has never had anything even close to this.
- ARM never even tried.
SGX was designed to be re-sealable, as all security systems must be, and that has more or less worked. It's been repeatedly patched in the field, despite coming out before microarchitectural side-channel/Spectre attacks were even known about at all. That makes it the best effort yet, by far. I haven't worked with it for a year or so, but by the time I stopped, the state-of-the-art attacks from the research community were filled with severe caveats (often not really admitted to in the papers, sigh), were often unreliable, and were getting patched with microcode updates quite quickly. The other vendors weren't even in the race at all.
"there was absolutely no excuse for L1TF and some of the others, and those really showed us just how security-oblivious Intel's design teams are"
No excuse? And yet all CPU vendors were subject to speculation attacks of various kinds. I lost track of how many speculative-execution papers I read that said "demonstrating this attack on AMD is left for future work", i.e. they couldn't be bothered trying to attack second-tier vendors, and often ARM wasn't even mentioned.
I've seen some security researchers who unfortunately seemed to believe that absence of evidence = evidence of absence and argued what you're arguing above: that Intel was uniquely bad at this stuff. When studied carefully, these claims don't hold water.
Frankly I think the self-proclaimed security community is shooting us all in the foot here. What Intel is learning from this stuff is that the security world:
a. Lacks imagination. The tech is general purpose but instead of coming up with interesting use cases (of which there are many), too many people just say "but it could be used for DRM so it must die".
b. Demands perfection from day one, including against attack classes that don't exist yet. This is unreasonable and no real world security technology meets this standard, but if even trying generates a constant stream of aggressive PR hits by researchers who are often over-egging what their attacks can do, then why even bother? Especially if your competitors aren't trying, this industry attitude can create a perverse incentive to not even attempt to improve security.
"the x86 world would probably do well to listen to Microsoft, since their Xbox division managed to coax AMD into actually putting out secure silicon"
SGX is hard because it's trying to preserve the open nature of the platform. Given how badly AMD fared with SEV, it's clear that they are not actually better at this. Securing a games console i.e. a totally closed world is a different problem with different strategies.
"SGX is hard because it's trying to preserve the open nature of the platform"
Except that was an afterthought. Originally only whitelisted developers were allowed to use SGX at all, back when DRM was the only use-case they had in mind.
It clearly wasn't an afterthought; I don't think anyone familiar with the design could possibly say that. It's intended to allow any arbitrary OS to use it, and in fact support on Linux has always been better than on Windows, largely because Intel could and did implement support for it themselves. It pays a heavy price for this compared with the simpler, more obvious (and older) chain-of-trust approach that games consoles and phones use.
The whitelisting was annoying, but it's gone now. The justification was (IIRC) a mix of commercial imperatives and fear that people would use it to make ransomware/malware that couldn't be reverse-engineered. SGX was never really a great fit for copy protection because content vendors weren't willing to sell their content only to people with the latest Intel CPUs.
Indeed, it remains to be seen whether or not SGX2 will be trustworthy; the proof is in the pudding. However, other vendors have their own solutions to the same problem, and at least AMD's approach is radically different, so one hopes that at least one of them will stand up to scrutiny.