
Howdy, head of Eng at confident.security here, so excited to see this out there.

I'm not sure I understand what you mean by inference provider here? Once it's been decrypted, the inference workload is not shipped off the compute node to e.g. OpenAI; it runs directly on the compute machine, on open-source models loaded there. Those machines cryptographically attest to the software they are running, proving, ultimately, that there is no software logging sensitive info off the machine and that the machine is locked down, with no SSH access.

This is how Apple's PCC does it as well, clients of the system will not even send requests to compute nodes that aren't making these promises, and you can audit the code running on those compute machines to check that they aren't doing anything nefarious.

The privacy guarantee we are making here is that no one, not even people operating the inference hardware, can see your prompts.
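
To make that concrete, the client-side gating looks roughly like this (an illustrative sketch; the names are stand-ins, not our actual client API or PCC's):

    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AttestationBundle:
        # Hash of the booted software, vouched for by the TEE/vTPM.
        software_measurement: str
        # Public key whose private half never leaves the node's vTPM.
        node_public_key: bytes

    # Stand-in for the set of published, auditable releases
    # (in practice, entries on a transparency log).
    PUBLISHED_MEASUREMENTS = {
        hashlib.sha256(b"compute-node-image-v1").hexdigest(),
    }

    def client_will_send_to(bundle: AttestationBundle) -> bool:
        """Only send prompts to nodes attesting to published, auditable software."""
        # A real client also verifies the hardware signature chain on the
        # bundle; that step is elided here.
        return bundle.software_measurement in PUBLISHED_MEASUREMENTS

    bundle = AttestationBundle(
        software_measurement=hashlib.sha256(b"compute-node-image-v1").hexdigest(),
        node_public_key=b"\x00" * 32,
    )
    assert client_will_send_to(bundle)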


> no one, not even people operating the inference hardware

You need to be careful with these claims IMO. I am not involved directly in CoCo so my understanding lacks nuance but after https://tee.fail I came to understand that basically there's no HW that actually considers physical attacks in scope for their threat model?

The Ars Technica coverage of that publication has some pretty yikes contrasts between quotes from people making claims like yours, and the actual reality of the hardware features.

https://arstechnica.com/security/2025/10/new-physical-attack...

My current understanding of the guarantees here is:

- even if you completely pwn the inference operator, steal all root keys etc, you can't steal their customers' data as a remote attacker

- as a small cabal of arbitrarily privileged employees of the operator, you can't steal the customers' data without a very high risk of getting caught

- BUT, if the operator systematically conspires to steal the customers' data, they can. If the state wants the data and is willing to spend money on getting it, it's theirs.


I'm happy to be careful. You're right that we are relying on TEEs and vTPMs as roots of trust here, and TEEs have been compromised by attackers with physical access.

This is actually part of why we think it's so important to have the non-targetability part of the security stack as well, so that even if someone were to physically compromise some machines at a cloud provider, there would be no way for them to reliably route a target's requests to that machine.
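
One way to picture the non-targetability piece (an illustrative sketch, not our actual routing code; assume the client has already verified each node's attestation): the client, not the operator, picks which nodes receive a given request, for example uniformly at random, so there is no way to steer a particular user's traffic to one particular machine.

    import secrets

    def pick_nodes(attested_nodes: list[str], n: int = 3) -> list[str]:
        """Choose n distinct nodes uniformly at random from the verified pool."""
        pool = list(attested_nodes)
        chosen = []
        for _ in range(min(n, len(pool))):
            chosen.append(pool.pop(secrets.randbelow(len(pool))))
        return chosen

    # The prompt would then be encrypted separately to each chosen node's key.
    print(pick_nodes(["node-a", "node-b", "node-c", "node-d"]))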


> I came to understand that basically there's no HW that actually considers physical attacks in scope for their threat model?

xbox, playstation, and some smartphone activation locks.

Of course, you may note those products have certain things in common...


Yeah, that's a good point, though I don't call that confidential compute; it's a different use case.

CoCo = protecting consumer data from the industry. DRM = protecting industry bullshit from the consumer.

TBF my understanding is that in the DRM usecases they achieve actual security by squeezing the TCB into a single die. And I think if anyone tries, they generally still always get pwned by physical attackers even though it's supposedly in scope for the threat model.


All things that were compromised with physical attacks? What are mod chips if not physical attack as a service?

I'm not aware of working jailbreaks for either Xbox Series or PS5. It's possible that's just a matter of time, but they've both been out for quite a while now; it seems like the console manufacturers have finally worked out how to secure them.

Older firmware versions of PS5 are in fact jailbroken (google ps5 jailbreak and you’ll find a bunch of info). I’m not aware of any for Xbox Series but I think that’s more due to lack of interest and the fact that you can run homebrew in development mode already.

Nvidia has been investing in confidential compute for inference workloads in the cloud - that covers physical ownership/attacks in their threat model.

https://www.nvidia.com/en-us/data-center/solutions/confident...

https://developer.nvidia.com/blog/protecting-sensitive-data-...


It's likely I'm mistaken about details here but I _think_ tee.fail bypassed this technology and the AT article covers exactly that.

> The privacy guarantee we are making here is that no one, not even people operating the inference hardware, can see your prompts.

That cannot be met, period. Your assumptions around physical protections are invalid, or at least incorrect. It works for Apple (well enough) because of the high trust we place in their own physical controls, and the market incentive to protect that at all costs.

> This is how Apple's PCC does it as well [...] and you can audit the code running on those compute machines to check that they aren't doing anything nefarious.

Just based on my recollection, and I'm not going to take a new look at it to validate what I'm saying here, but with PCC, no, you can't actually do that. With PCC you do get an attestation, but there isn't actually a "confidential compute" aspect where that attestation (one that you can trust) proves that that is what is running. You have to trust Apple at the lowest layer of the "attestation trust chain".

I feel like with your bold misunderstandings you are really believing your own hype. Apple can do that, sure, but a new challenger cannot. And I mean your web page doesn't even have an "about us" section.


That's a strong claim for not looking into it at all.

From a brief glance at the white paper it looks like they are using TEE, which would mean that the root of trust is the hardware chip vendor (e.g. Intel). Then, it is possible for confidentiality guarantees to work if you can trust the vendor of the software that is running. That's the whole purpose of TEE.


I guess you're unaware that Intel TEE does not provide physical protection. Literally out of scope, at least per runZero CEO (which I didn't verify). But anyway, in scope or not, it doesn't succeed at it.

And I mean I get it. As a not-hardware-manufacturer, they have to have a root of trust they build upon. I gather that no one undertakes something like this without very, very, very high competence and that their part of the stack _is_ secure. But it's built on sand.

I mean, it's fine. Everything around us is built that way. Who among us uses a Raptor Talos II and has x-ray'd the PCB? The difference is they are making an overly strong claim.


It doesn’t matter either way. Intel is an American company as well, and thus unsuitable as a trust root.

A company of what country would you prefer?

Everyone likes to dunk on the US, but I doubt you could provide a single example of a country that is certainly a better alternative (to be clear, I believe much of the West is in the same boat).


A European one. Pulling the kind of tricks the NSA does is considerably harder if you don’t have a secret court with secret orders.

You might want to look into what GCHQ, DGSE, and BND (as examples) actually do. Europe is not some surveillance-free zone.

> Intel is an American company

Literally.


If you’re moving the goalposts from tech implementation to political vibes, it’s just more post-fact nabobism.

"SSL added and removed here :-)"

It’s not about vibes, but clear proof of a strategy to undermine global information security. Is anyone supposed to believe they don’t do that anymore?


Apple actually attests to signatures of every single binary they install on their machines, before soft booting into a mode where no further executables can be installed: https://security.apple.com/documentation/private-cloud-compu...

We don't _quite_ have the funding to build out our own custom OS to match that level of attestation, so we settled for attesting to a hash of every file on the booted VM instead.
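
Roughly, that measurement is shaped like this (a simplified sketch, not our actual code): hash every regular file, then hash the sorted manifest so a single digest can be attested to and published.

    import hashlib
    from pathlib import Path

    def measure_filesystem(root: str) -> str:
        """Hash every regular file under root, then hash the sorted manifest."""
        manifest = []
        for path in sorted(Path(root).rglob("*")):
            if path.is_file() and not path.is_symlink():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                manifest.append(f"{path.relative_to(root)}:{digest}")
        # A single digest that can be attested to and published.
        return hashlib.sha256("\n".join(manifest).encode()).hexdigest()

    print(measure_filesystem("."))  # e.g. measure the current directory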


> Apple actually attests to signatures

But (based on light reading, forgive errors) the only way to attest them is to ask _Apple_! It reminds me of what I call e2e2e encryption. iMessage is secure e2e, but you have to trust that Apple is sending you the correct keys. (There's some recent update, maybe 1-2 years old, where you can verify the other party's keys in person, I think? But it's closed software; you _still_ have to trust that what you're being shown is something that isn't a coordinated deception.)

Apple claims to operate the infrastructure securely, and while I believe they would never destroy their business by not operating as rigorously as they claim, OTOH they gave all the data to China for Chinese users, so YMMV. And their OS spams me with ads for their services. I absolutely hate that.

Again, anyway, I am comfortable putting my trust in Apple. My data aren't state secrets. But I wouldn't be putting my trust in random cloud operator based on your known-invalid claim of physical protection. Not if the whole point is to protect against an untrustworthy operator. I would much sooner trust a nitro enclave.


You should read the PCC paper: https://security.apple.com/blog/private-cloud-compute/

You are not in fact trusting Apple at all. You are trusting some limited number of independent security researchers, which is not perfect, but the system is very carefully designed to give Apple themselves no avenue to exploit without detection.


> OTOH they gave all the data to China for Chinese users, so YMMV

This is true for the same reason that American data is in the US. China is frequently a normal and competent country and has data privacy laws too.


Thanks for the reply! By "inference provider" I meant someone operating a ComputeNode. I initially skimmed the paper, but I've now read more closely and see that we're trying to get guarantees that even a malicious operator is unable to e.g. exfiltrate prompt plaintext.

Despite recent news of vulnerabilities, I do think that hardware-root-of-trust will eventually be a great tool for verifiable security.

A couple follow-up questions:

1. For the ComputeNode to be verifiable by the client, does this require that the operator makes all source code running on the machine publicly available?

2. After a client validates a ComputeNode's attestation bundle and sends an encrypted prompt, is the client guaranteed that only the ComputeNode running in its attested state can decrypt the prompt? Section 2.5.5 of the whitepaper mentions expiring old attestation bundles, so I wonder if this is to protect against a malicious operator presenting an attestation bundle that doesn't match what's actually running on the ComputeNode.


Great questions!

1. The mechanics of the protocol are that a client will check that the software attested to has been released on a transparency log. dm-verity enforces that the hashes of the booted filesystem on the compute node match what was built, so those hashes are what get put on the transparency log, with a link to the deployed image that matches them. The point of the transparency log is that anyone could then go inspect the code related to that release to confirm that it isn't maliciously logging. So if you don't publish the code for your compute nodes, then the fact of it being on the log isn't really useful.

So I think the answer is yes, to be compliant with OpenPCC you would need to publish the code for your compute nodes, though the client can't actually technically check that for you.

2. Absolutely yes. The client encrypts its prompt to a public key specific to a single compute node (well, technically it will encrypt the prompt N times for N specific compute nodes), where the private half of that key is resident only in the vTPM; the machine itself has no access to it. If the machine were swapped or rebooted into another one, it would be impossible for that computer to decrypt the prompt. The fact that the private key is in the vTPM is part of the attestation bundle, so you can't fake it.
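
For intuition, the encryption step is shaped roughly like this (a simplified sketch using X25519 + HKDF + AES-GCM from the Python cryptography package; the real protocol and key formats are specified in the whitepaper, and the node's private key would live in the vTPM rather than being generated in software as it is here):

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # In the real system this private key lives only inside the node's vTPM;
    # it's generated in software here just to make the example runnable.
    node_private = X25519PrivateKey.generate()
    node_public = node_private.public_key()  # shipped in the attestation bundle

    def encrypt_prompt(prompt: bytes, node_public_key):
        """Encrypt so that only the holder of the node's private key can decrypt."""
        ephemeral = X25519PrivateKey.generate()
        shared = ephemeral.exchange(node_public_key)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative-prompt-encryption").derive(shared)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, prompt, None)
        eph_pub = ephemeral.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return eph_pub, nonce, ciphertext

    eph_pub, nonce, ct = encrypt_prompt(b"my sensitive prompt", node_public)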


Happy Juneteenth! A reminder that we can change as a country. May we never have to liberate by war again.


OP found no correlation between railway proximity and quality


The point is that quality is not the metric here; the metric is Google ratings. I would take a place with a solid 4.6 and hundreds of ratings over a 4.9 with a low-double-digit rating count any time.


Even Google ratings are sometimes gamed nowadays. This wasn't always the case, they used to be reliable. Tripadvisor ratings on the other hand were always garbage.

I recently had some really bad experiences with some fast food places in my corner of the world, at a train station as well.

They all had 4.9 stars but lots of 1 star reviews matching my experience. But also tons and tons of eerily similar 5 star reviews with a generic photo of the counter (no faces, no food) and a random name and glowing review of who "served" them. Which is impossible at those places.


I'm never going to look at ratings. The only use there is photos of menus, food, and the restaurant itself.


I sort ratings by worst but look at the reason. If the 1 stars are "the waitress was rude", then that's fine. I'm there to eat. I don't need them to flatter me. If the 1 stars are "the food smelled foul and I saw them mixing leftover soup back into the pot", I know to avoid it. I've seen both of these types a lot.

And I also do a quick sort by newest. If all the newest reviews are tourists, I know to steer clear. Tourists will give a convenience store egg sandwich 6 stars out of five. They'll write a full-on essay about the fine experience they had at a restaurant and saying it's obvious the chef put lots of care into the meal, not realizing it's a local chain restaurant that just pops things in the microwave. Then they'll take off 2 stars at a good place because the chef couldn't make them a gluten-free, rice-free, beef-free, soybean-free chicken burger (also, they have deadly poultry allergies so they can only eat chicken substitutes). I also see loads of these types of reviews.


I've seen this too, a lot of 1-star reviews from customers who wanted some substitution and didn't get it. Seems designed to be abused by unreasonable customers, cause one of those equates to ~3 honest customers saying "meh" with a 3-star review. I'd prefer the restaurant that doesn't have to charge extra to absorb the costs of avoiding that.


OP found no correlation between railway proximity and quality


You're doing important work throughout this thread. Thanks.


OP found no correlation between railway proximity and quality


Actually OP found a very small correlation between railway proximity and Google rating. The study didn't actually measure "quality"...

Also, the lowest scoring outliers were the closest proximity, which I think is noteworthy.


And probably understandable. Empirically, I don't really expect to find the best restaurants right around railway stations.


Yeah. The overall correlation was tiny but just looking at it you could see a pattern that's getting lost in the analysis.


OP found no correlation between railway proximity and quality


OP failed to reject the null hypothesis of no correlation at the given power level.


OP found no correlation


Reviews probably have too much noise. It's not only the food that gets rated and people taking the time to rate a place might be doing so because of a particularly good or bad experience they just had. It's not really a day to day thing.


Reviews have a lot of noise, but it feels like it’s still the best source, unless anyone can recommend a better alternative.

Reviews are the worst way to test this hypothesis, except for all the others.


> Reviews have a lot of noise, but it feels like it’s still the best source, unless anyone can recommend a better alternative.

I honestly hate this take. Sure, it might be the best easily available, broad-enough source, but that's not my point. I'm not shutting this down; I'm pointing out what kind of drawbacks should be considered when analysing this. Just because it's probably the best available doesn't mean you should assume it's perfect.


Not true -- restaurant reviews have a lot of signal. Generally an average score is quite reliable once you hit 100 or so reviews. Even 50 reviews is a pretty decent signal.
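
Back-of-the-envelope (assuming, hypothetically, that individual ratings are spread with a standard deviation of about one star), the standard error of the mean shrinks like sd / sqrt(n):

    import math

    sd = 1.0  # assumed spread of individual ratings, in stars
    for n in (30, 50, 100, 400):
        print(f"n={n:4d}  standard error of the mean ~ {sd / math.sqrt(n):.2f} stars")
    # n=30 -> 0.18, n=50 -> 0.14, n=100 -> 0.10, n=400 -> 0.05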


Maybe. I've heard statisticians say that with 30 samples the mean is pretty much unlikely to change, but that's not the issue here; the issue is that what we are measuring goes beyond food quality and gets skewed towards experiences.


not always... the data is skewed by non natives. E.g. a high concentration of americans will typically result in junk food scoring too high; high scoring asian food in the west tastes nothing like what it should; for authentic tastes the scores will be quite mid


> the data is skewed by non natives

That's not skew. That accurately reflects "non-native" clients, who are people too.

> a high concentration of americans will typically result in junk food scoring too high

You do realize that America has the highest number of Michelin-starred restaurants per capita? Way to stereotype

> high scoring asian food in the west tastes nothing like what it should

Are you also going to criticize Japan for not making American BBQ like "what it should"?

You're showing yourself to be extremely prejudiced against all sorts of other nationalities, and against the creative outcomes when nationalities mix. But people have different tastes from whatever you think is "right", and that's OK.


>You do realize that America has the highest number of Michelin-starred restaurants per capita? Way to stereotype

How did you find this data? A quick google says that France has about 630 Michelin restaurants and the US about 230 (and obviously fewer people live in France). It looks like Switzerland has the highest per capita with 143 for about 9 million people. The US has a lot of good eating places, but let's stick to facts.


Then OP shouldn't have used that title. My reasoning still stands.


LOL we may need to update the title of this post, half the top level comments right now are assuming the study confirmed the hypothesis.

> With a mighty Pearson's correlation of 0.091, the data indicates that this could be true! If you ignore the fact that the correlation is so weak that calling it 'statistically insignificant' would be quite generous.
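
For what it's worth, taking the ~400 samples mentioned elsewhere in the thread at face value, here's a quick, generic check of how far r = 0.091 is from significance (not the author's code; just the standard r-to-t conversion):

    import math
    from scipy import stats

    r, n = 0.091, 400
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    p = 2 * stats.t.sf(abs(t), df=n - 2)  # two-sided p-value
    print(f"t = {t:.2f}, p = {p:.3f}")    # roughly t = 1.82, p = 0.07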


> be true! If you ignore the fact that the correlation is so weak that calling it 'statistically insignificant' would be quite generous.

I actually came to a different conclusion than the author. Here is the way I'm thinking about the presented statistics:

1. There are 17 kebab shops (out of 400 samples) with a Google review lower than 3 stars. Let's call them "bad kebabs".

2. All those "bad kebabs" are actually located within 500m of the nearest station. No kebab located further than 500m away is bad.

3. So if you've ever gotten a bad doner kebab, we can safely assume that you purchased it from a kebab shop near a train station.

Maybe there are so many kebab eaters near a train station that a mediocre kebab offering becomes profitable?


> with a google review lower than 3 stars

At least here, the majority of 1-2 star reviews are actually complaining about third-party delivery services like Foodora[1].

Of course the fries will be soggy and the burger lukewarm when you've got a guy who had to pedal a bike for half an hour to deliver it to you. Like, what did you expect?

[1]: https://en.wikipedia.org/wiki/Foodora


I expect this guy to keep it fresh for me.


I don't know if you're joking or not but in case you're not, you can't really keep fries "fresh". Regardless, the point remains that the quality of third party food delivery services shouldn't be considered when studying the quality of restaurants.


I always open fries. They will get cold. Fine. Cold fries can be reheated; they won’t be as good as fresh, but soggy fries are much harder to fix.


I agree. Nothing is worse than receiving fries which have been in a closed or semi-closed container and have been soaking in their own steam.


> 2. All those "bad kebabs" are actually located within 500m of the nearest station. No kebab located further than 500m away is bad.

Right, but this is selection bias. There will always exist some distance D within which all the bad kebabs are located.

Unless D was demonstrably chosen _before_ looking at the data, this has no meaning.

One also has to take the kebab density into account.


This "you have to choose D" ahead of time nonsense is why people distrust and dislike statisticians! Humans have priors on what is "close" that are independent of this particular article. If they had said "See, everything within 5000m" or "everything within 5m" you might have a point but "500m" being a rough definition of "close to a train station" is pretty reasonable.


> If they had said "See, everything within 5000m" or "everything within 5m" you might have a point

On the contrary, if everything was within five meters, that would make the finding much more impressive.


Bad kebab shops may survive if located near a train station.


The more generally interesting a topic is, the more likely an HN user is to read the article. A study.


I am definitely guilty of sometimes clicking "reply" and then reading the linked article to check that I'm not about to essentially tell you what you'd have read or worse, tell you something the article actually debunks.


I only read articles with headlines that describe informative content, not with headlines that sound funny or thought-provoking.


Heh. You've just captured the reason why (the better) clinical journals explicitly and specifically forbid having a statement of results in the title of a paper.


Would it help if I were to chime in with a response about the benefits of kebab case over train case?


Hi there, inventor of the kebab plugin for traindeck here. I'm afraid I was the one who introduced the concept of kebab case, way back in the early 1990s. Back then, trains didn't have enough processing power to handle full cuts of meat, so I thought I'd introduce kebabs as a hack, and it ended up taking off! Didn't expect anyone to still be using it. It's always fun to share stories on HN - you never know who you'll meat here.


Easy fix: just add a ? to the end.


"study" is already in scare quotes


Ha ha I had my coding eyes on. I removed the quotes mentally as the entire title starts with one.


I liked this YouTube video from the blue site: https://www.metafilter.com/206671/The-Greatest-Showman-Richa...

A very well researched dive into how his legend came to be, some of the darker sides of his personality, and some discussion of his very real contributions to science.


American here, nope! It was a huge deal. An attempt to disrupt the peaceful transition of power. Not sure what other examples you think were on par but it was the kind of big deal where people went home sick to their stomachs for the day because I've never seen anything like it in my life. A desecration of something sacred.

