Hacker News | v4dok's comments

https://en.m.wikipedia.org/wiki/Confidential_computing

This is what they are doing. Look up implementations of this for more technical details.


It's not, AFAICT from the press release.

Confidential Compute involves technologies such as SGX and SEV (for which I think Asylo is an abstraction, not sure), where the operator (e.g. Azure) cannot _hardware intercept_ data. The description of what Apple is doing "just" uses their existing code signing and secure boot mechanisms to ensure that everything from the boot firmware (the processors that start before the main computer starts) to the application is what you intended it to be. Once data lands on the PCC node it is inspectable though.

Confidential Compute goes a step further to ensure that the operator cannot observe the data being operated on, thus also defeating co-tenant workloads that exploit speculative-execution side channels, and hardware bus-intercept devices.

Confidential Compute also allows attestation of the software being run, something Apple is not providing here. EDIT: looks like they do have attestation, however it's different from how SEV-style attestation works. The client still has to trust that the private key isn't leaked, so this is dependent on other infrastructure working correctly. It also depends on the client getting a correct public key. There's no description of how the client verifies that.
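
To make the attestation point concrete, here is a minimal sketch (not Apple's actual PCC protocol, just the generic check described above) of what a client would have to do: verify that a reported software measurement is signed by a pinned public key and matches the measurement the client expects. Both trust assumptions show up directly: the pinned key must genuinely belong to the node, and the signing key must not have leaked. All names here are hypothetical.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def attestation_ok(pinned_pubkey: bytes,
                       expected_measurement: bytes,
                       reported_measurement: bytes,
                       signature: bytes) -> bool:
        # Verify the node's reported measurement was signed by the key we pinned.
        try:
            Ed25519PublicKey.from_public_bytes(pinned_pubkey).verify(
                signature, reported_measurement)
        except InvalidSignature:
            return False  # bad signature: refuse to send data to this node
        # Only trust the node if it runs exactly the software we expect.
        return reported_measurement == expected_measurement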

Interesting that they go through all this effort just for (let's be honest) AI marketing. All your existing data (location, photos, contacts, Safari history) is just as sensitive and deserving of such protection. But apparently PCC will apply only to AI inference workloads. Siri was already, and continues to be, a kind of cloud AI.


Apple's secure enclave docs also mention memory encryption. The PCC blogpost mentions that the server hardware is built on secure enclaves. And since they are claiming that even Apple can't access it, I am currently assuming that there will be memory encryption happening on the servers. At which point you have the main ingredients of CC: memory encryption & remote attestation.

EDIT: and they mention SGX and Nitro. Other CC technologies :)


> Apple's secure enclave docs also mention memory encryption.

Yes, but that's only within the enclave. Every Mac since the T2 has had that, and we don't consider those strong enough to meet the CC bar.

As an example of the difference, CC is designed so that a compromised hypervisor cannot inspect your guest workload. Whereas in Apple's design, they attempt to prove that the hypervisor isn't compromised. Now imagine there's a bug ...

(Not that SGX hasn't had exploitable hardware flaws, but there is a difference here.)


This is Confidential Computing https://en.m.wikipedia.org/wiki/Confidential_computing

under another name. Intel, AMD and Nvidia have been working on this for years. OpenAI released a blog some time ago where they mentioned this as the "next step". Exciting that Apple went ahead and deployed first; it will motivate the rest as well.


This new DataProtector tool streamlines confidential computing software development. Imagine being able to rent out access to your data so that you still own it but code running in a TEE can access it, to securely transfer ownership of data, or to offer subscription bundles for your data. With generative AI essentially turning into an echo chamber where it trains on its own content, sourcing human-derived data and content is going to be so important. This might be how an economy of that data/content gets off the ground. https://medium.com/iex-ec/introducing-the-content-creator-de...


I'm looking to get a car these days in the EU. The government gives out subsidies for electric cars, and even without them there is simply almost nothing as good in Tesla's price range. EU makers are either too expensive or try to sell you shit for gold.


I wonder where this would be practical. How big would the teacher model need to be?


It reminds me of the general idea of generative adversarial networks, where instead of a teacher and learner you have one model trying to tell real inputs from fake inputs while the other tries to trick the first with fakes; both train each other, and the ultimate goal is for the latter to create realistic fakes.

I'm sure it's been or is being researched, but since I'm new to LLMs, my immediate thought was having code-aware models where one tries to find and fix vulnerabilities in a codebase while another tries to exploit them, with the ultimate goal of securing the codebase (rough sketch below).
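
As a rough sketch of how one round of that adversarial loop could look (the model objects and the run_security_tests harness are hypothetical stand-ins, not a real library API):

    def adversarial_round(codebase: str, attacker_model, defender_model,
                          run_security_tests):
        # Attacker tries to demonstrate an exploit against the current code.
        exploit = attacker_model.generate(
            "Find and demonstrate a vulnerability in this code:\n" + codebase)

        # Defender tries to patch the code so the exploit no longer works.
        patched = defender_model.generate(
            "Fix the vulnerability demonstrated below without breaking behaviour.\n"
            "Code:\n" + codebase + "\nExploit:\n" + exploit)

        # Whether the exploit still succeeds is the signal that trains both
        # sides, analogous to the discriminator/generator loss in a GAN.
        attacker_won = run_security_tests(patched, exploit)
        return patched, attacker_won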


Do you have any research you can point to for that? It's quite interesting.


I feel like the current meta on finetuning LLMs is random accounts on X/Twitter. Google results are littered with SEO garbage, or guides that fall apart the moment you need something slightly different.


Niche market?? You have no idea how big that market is!


Almost no serious user - private or company - wants their private data slurped up by cloud providers. Sometimes it is ethically or contractually impossible.


The success of AWS and Gmail and Google Docs and Azure and GitHub and Cloudflare makes me think this is... probably not an up-to-date opinion.

By and large, companies actually seem perfectly happy to hand pretty much all their private data over to cloud providers.


Yet they don't hand over access to their children; there may be something in that.


We can't use LLMs at work at all right now because of IP leakage, copyright, and regulatory concerns. Hosting locally would solve one of those issues for us.


Yeah I would venture to say it’s closer to “the majority of the market” than “niche”


I agree. 10B is peanuts for MSFT, but it's Satya's miscalculation. He didn't anticipate that, and the board wouldn't be too happy about it.


If the board is unhappy about that they are idiots and should not be board members.

Absolutely no one could have predicted Sam being removed as CEO without anyone's knowledge until it happened.

But regardless, a 10b investment has yielded huge results for MS. MS is using OpenAI tech without being dependent on the OpenAI API or infrastructure to provide AI in every aspect of MS products.

That 10b investment has probably paid for itself already, and MS is leveraging AI faster than anyone else and has a stronger foothold on the market.

If the board can’t look past what 10b got then. I wouldn’t have faith in the board.


He was a marketing person, I believe, back when Bill was at MSFT. Becoming CEO of MSFT is already a huge political and competence hurdle. Then pulling off the most spectacular transformation of a mega-corp is next-level. MSFT is now the leading player in AI, while before it was still fucking around with Office and Windows licenses. People who are young (not saying you are) and don't remember what MSFT was before Satya don't really get that MSFT would be like Oracle or IBM if not for him.


As far as I know, he actually came from an engineering background, making his career even more impressive. Despite my views on Microsoft and shareholder-oriented capitalism, he certainly seems like a brilliant and genuinely interesting guy.


Marketing person? LOL

The guy was born in the cloud compute division.

The board saw cloud compute was gonna be big. They made him the king. Good bet. The whole company went all in on cloud. Now they print more money than before.

Marketing person, lol. He's an engineer. The guy literally gets back into VS Code sometimes to stay in touch.


There would be no way to find out whether that's true, however. The person in the vat might say they are the same person, but you don't know if they really are. Also, what happens if you create a copy? Is that two people, or is there some kind of shared consciousness? If it is, how do they communicate?


Same with teleportation: you cannot know for sure if the person after the teleportation is the same as the one before; they could just be a copy.

