Personally, I learned programming when I was a kid by watching YouTube tutorials + reading random Internet sources. When helping build SHD, it was important to me that we "paid it forward" & made all our lab materials open for everyone to learn from.
Our labs include building your own real Spectre attack against the kernel, bypassing ASLR and building ROP chains with various side channels, finding and exploiting backdoors in a RISC-V CPU by building a hardware fuzzer, and more.
(source: I designed the Spectre lab plus a few others)
If you give them a try, please do let us know what you think! We genuinely want these activities to be fun and approachable (we designed them like a big CTF) and welcome feedback from the community.
Additionally, if you can find a way to trick a user into installing a malicious kext, why even bother with PACMAN? You already have arbitrary kernel code execution!
First you need to trick Apple into signing that kext (which is getting more difficult by the day even for legitimate uses), or get the user to disable SIP first.
Hi! This is an interesting idea. However, a problem arises: if you rotate the key, then old pointers become invalid. And since the kernel is always alive and servicing requests (and contains structures with very long lifetimes), we don't believe this to be a practical solution.
Our goal is to demonstrate that we can learn the PAC for a kernel pointer from userspace. Just demonstrating that this is even possible is a big step toward understanding how mitigations like pointer authentication can be thought of in the Spectre era.
We do not aim to be a zero-day, but instead to be a way of thinking about attacks, an attack methodology.
The attack's timer does not require a kext (we only used the kext for reverse engineering); the attack itself never touches the kext timer. All of the attack logic lives in userspace.
Provided the attacker finds a suitable PACMAN Gadget in the kernel (and the requisite memory corruption bug), they can conduct our entire attack from userspace with our multithreaded timer. You are correct that the PACMAN Gadget we demonstrate in the paper does live in a kext we created; however, we believe PACMAN Gadgets are readily available to a determined attacker (our static analysis tool found 55,159 potential spots that could be turned into PACMAN Gadgets inside the 12.2.1 kernel).
Something definitely went wrong here, though, in that more guidance was not provided to the tech journalists.
Most of the mainstream articles make it seem like they a) did not read the paper, b) are incapable of understanding the paper, or c) were not provided any guidance about what any of this actually means in the real world.
Which is all scary, as the paper is well written and very accessible IMO.
Based on the article, I think the journalist basically understands the situation (and if they don't, they should investigate further, that's the job). The headline is just intentionally over-dramatic to get clicks. This shouldn't be treated as a good-faith error, more guidance isn't required and wouldn't help.
OK, but that doesn't excuse things. There's a problem with journalism, and it's mostly about how journalists are incentivized and compensated. I don't know what the fix is, but it's clear that trust is so low (and rightfully so) that journalism has largely failed as an industry at its job.
Journalism is paid for by ads, mostly. For online journalism, unless people click there is no money to pay the producers. Hence clickbait. This is a problem but there are worse problems.
I've seen many cases where mods revert an informative title in favor of a less informative one. Also, the idea that the title of a submission should match the title of the resulting article is quite silly, since often the article is written for a different audience and the informed HN submitter can sometimes craft a title that better summarizes to HN readers why the story is interesting / worth reading / controversial.
I mean, it's been months of random press about the M1 despite there not being anything special about it other than, well, _Apple_. That's basically what you get when your marketing is super effective.
It’s not just a problem with journalism but with humans in general. People are more imprecise with their comprehension of things than they are willing to admit.
> Even here on HN, plenty of folks will comment on all kinds of research they know little about.
I don't see anything wrong with that, because not all opinions are equal.
I think there is some level of pressure to get one's word in quickly, otherwise the nebulous cloud of commenters moves on to the next story, and your well-thought-out comment that took hours to write is seen by no-one. If you're responding to someone hoping to get into a nice conversation, you're out of luck since they have no idea you just responded to them.
Anything related to medicine/biochemistry can get cringe-y pretty quickly here. I think the problem is that the crowd here is generally pretty intelligent, but they know it, and that acts as a coefficient > 1 on the Dunning-Kruger effect.
If the information had been given solely to public security experts with a blog presence (Matthew Green, Bruce Schneier, and a plethora of others), we could've linked to them, either cutting out the 'clickbait middleman' or doing most of the work for the journalists, giving them an easier time writing up something half decent.
Matthew Green once publicly criticized something I created by simply parroting what someone else had said without bothering to do his own investigation. The original criticism turned out to be hogwash, and Matthew failed to recognize an obvious real crypto problem with the first version of my feature because he was too busy trying to just quickly stick his name into someone else's feature announcement while it was still "hot off the press."
I would take anything Matthew Green blogs about with a grain of salt. It's not clear how much of what he says is just cheap amplification of what others claim.
YouTube titles with CAPITALIZED words make me sad. I don't want to click on any video with a title like that, but creators are incentivized to use those titles because they get more views. Some fine videos end up with those titles, and I would miss out on good stuff if I refused to click on clickbait titles.
The TechCrunch article is well written and, I thought, summarises this rather well.
The headline is pretty reasonable too. Apple can't patch this, and as other commenters point out, subsequent attack techniques are only going to make this flaw worse.
95% of journalism is at this level of understanding for anything outside the liberal arts that is commonly offered as a four-year degree, and it has been for as long as journalism has existed.
I really sympathise with how your research is being misunderstood, with the reporting and responses to the press coverage missing the main point. And everyone equating modern ARM with "M1"... Anyway, awesome work! Let's hope pointer authentication gets a thorough treatment from the research community, and that you and others can build further exciting results on your work!
Given there are "55,159 potential spots that could be turned into PACMAN Gadgets", do you think it is highly probable that this attack is now part of a zero-day kill chain?
Apparently they haven’t fixed it yet, so a hardware solution may in fact not be possible, but is there any reason to believe it couldn’t be patched in “microcode”?
Who can guess at the performance impact, but one could imagine a configurable mechanism capable of disabling speculation past a PAC authentication.
Hi Joseph! Go Illini! I didn't see you my last semester, but I'm glad to see Chris's group members doing well in the world. Also, always love Mengjia's work.
2 questions.
1) It's relatively well known that PAC is brute-forceable given its relatively small PAC space (16 bits, sometimes 8 if TBI is enabled). How does your attack differ from a general brute force? (My impression is just that your leveraging of the BTB/iTLB is a bit more stealthy.) Similarly, in your opinion, would a fix be more ISA-level, or do you think it's more specific to the M1 (given that brute forcing in general is a PtrAuth problem)?
2) You mention in Section 8 that this took 3 minutes for a 16-bit PAC and tons of syscalls. Wouldn't another proper mitigation be to limit the number of signatures per key? 3 minutes is definitely a long time, and some form of temporal separation may be quite helpful.
1) Our attack does apply a brute force technique with the twist that crashes are suppressed via speculative execution. If you tried to brute force a PAC against the kernel, you'd instantly panic your device and have to reboot.
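A highly simplified model of that difference (purely illustrative: real PACs live in hardware, and the real attack suppresses faults via branch misprediction and observes each guess through a timing side channel, not a boolean):

```python
import random

SECRET_PAC = random.randrange(2 ** 16)  # the 16-bit PAC we want to learn

def architectural_guess(guess):
    """Naive brute force: one wrong guess 'panics' the kernel."""
    if guess != SECRET_PAC:
        raise SystemError("kernel panic -> reboot")
    return True

def speculative_guess(guess):
    """PACMAN-style guess: made under branch misprediction, so a wrong
    guess leaves only a microarchitectural trace and never faults.
    (Here the side channel is modelled as a plain boolean.)"""
    return guess == SECRET_PAC

# Sweep the whole 16-bit PAC space without a single crash.
recovered = next(g for g in range(2 ** 16) if speculative_guess(g))
assert recovered == SECRET_PAC
```

The point of the toy model is only the contrast: one wrong architectural guess is fatal, while speculative guesses can be retried indefinitely.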
2) Given that we never sign anything (only try to verify a signed pointer), and that every authentication attempt happens under speculation, I'm not sure how you would rate limit this without absolutely destroying performance. Keep in mind the kernel is doing a whole lot more with PAC than just our attack (for example, every function's return address is also signed with PAC) so distinguishing valid uses from a PACMAN attack might be challenging.
I suppose you could track how many speculative PAC exceptions you got, but it's a little late to add that now isn't it? And it could also raise lots of false positives due to type confusion style mechanisms on valid mispredicted paths.
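For scale, the 3-minute figure from the question works out roughly like this (back-of-the-envelope only, assuming a uniform sweep of the full 16-bit space):

```python
# Back-of-the-envelope numbers behind the ~3 minute brute force.
KEY_BITS = 16
candidates = 2 ** KEY_BITS        # 65,536 possible PAC values
total_seconds = 3 * 60            # ~3 minutes, per Section 8 of the paper

per_guess_ms = total_seconds * 1000 / candidates
print(f"{candidates} candidates at roughly {per_guess_ms:.2f} ms per guess")
```

At a few milliseconds per guess, each individual attempt is indistinguishable from ordinary syscall traffic, which is part of why rate limiting is hard.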
Third question: what's your opinion on BTI as a possible mitigation? Given it's a v8.5 feature meant for JOP, and this attack is essentially a speculative JOP, maybe we could use BTI to mitigate it and heavily reduce the number of gadgets, speculative or not.
Would it be possible instead to mitigate this by removing the side-channel: either don't leave any trace in the TLB of the speculative execution, or deny access to the TLB for user mode software?
Unwinding changes to the TLB on every mispredict would have a significant overhead and hurt overall performance. Removing valid data you just cached (speculatively or otherwise) is generally a bad idea.
User mode software requires a TLB (unless you want to do a page walk for every single instruction!)
Even if you could remove the TLB entirely from the CPU somehow, the attacker could just use the cache or some other microarchitectural structure.
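For intuition on the page-walk point, a rough cost model (all numbers are assumptions for illustration, not measured M1 latencies):

```python
# Illustrative cost of dropping the TLB: every translation becomes a
# multi-level page-table walk instead of a (near-free) TLB hit.
TLB_HIT_CYCLES = 1        # assumed: translation essentially free on a hit
WALK_LEVELS = 4           # typical 4-level page tables on arm64
CYCLES_PER_LEVEL = 100    # assumed memory-access latency per level

walk_cycles = WALK_LEVELS * CYCLES_PER_LEVEL
slowdown = walk_cycles / TLB_HIT_CYCLES
print(f"~{walk_cycles} cycles per translation, a ~{slowdown:.0f}x hit")
```

Even if the per-level latency is off by a lot, the conclusion holds: paying a full walk on every access is not a viable trade for closing one side channel.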
A colleague pointed out that FPAC[1] in ARMv8.6-A likely prevents this attack, is that right?
I haven't fully digested the paper, but the gadgets seem to rely on AUT, and "Implementations with FPAC generate an exception on an AUT* instruction where the PAC is incorrect"
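A behavioural sketch of my understanding of the difference (toy model only; `pac_ok` stands in for the hardware's PAC comparison):

```python
# Toy contrast of AUT* failure semantics, before and after FPAC.
FPAC = True   # model ARMv8.6-A FPAC; set False for older behaviour

def aut(signed_ptr, pac_ok):
    if pac_ok:
        return signed_ptr            # verified: pointer is usable
    if FPAC:
        # FPAC: the authentication instruction itself traps
        raise RuntimeError("exception at AUT*")
    # pre-FPAC: AUT* silently yields a poisoned pointer that only
    # faults later, when it is actually dereferenced
    return signed_ptr | (1 << 62)
```

If that's right, the failure becomes observable at the AUT* itself rather than at a later load, which is why it plausibly closes the window this attack relies on.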
You can think of it a lot like that! PAC is more advanced as you can describe what a pointer "should" do on access (aka is this a data or code pointer?).
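One way to picture that (a toy model assuming a 48-bit virtual address space; in real hardware the PAC is a keyed MAC, e.g. QARMA, over the pointer plus a context value, not something you pass in):

```python
# Toy model: the PAC lives in the unused upper bits of a 64-bit pointer.
# 'context' stands in for the extra information PAC mixes in, e.g.
# whether this is a code or data pointer. All values are illustrative.
VA_BITS = 48                     # assumed virtual address width
VA_MASK = (1 << VA_BITS) - 1

def sign(ptr, pac, context):
    # In hardware the PAC is derived from (ptr, key, context); here we
    # just fold the context in so a mismatched context fails to verify.
    return ((pac ^ context) << VA_BITS) | (ptr & VA_MASK)

def auth(signed_ptr, pac, context):
    if (signed_ptr >> VA_BITS) != (pac ^ context):
        raise ValueError("pointer authentication failure")
    return signed_ptr & VA_MASK  # strip the PAC, yielding a usable pointer

code_ptr = sign(0x7FFF_DEAD_BEEF, pac=0xA5A5, context=1)
assert auth(code_ptr, pac=0xA5A5, context=1) == 0x7FFF_DEAD_BEEF
```

The `context` is what lets the scheme say what a pointer "should" be used for: authenticating a data pointer as if it were a code pointer fails even with the right key.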
This is a great question! What this means is that a software patch cannot fix the speculative execution behavior that causes the PACMAN issue since it is built directly into how the hardware operates.
You could maybe do it with lots of fences or just a ridiculous chain of NOPs after each branch such that the ROB is cleared before you have time to try to load a pointer speculatively.
In practice, both of these would probably kill performance, so I don't think either of these are great solutions. Recall we are targeting the kernel where everything needs to be as fast as possible.
This gets into the Turing-completeness tarpit. Yes, it's possible to make a vulnerable implementation emulate a chip that is not vulnerable. And maybe even detect when you don't need to emulate, and run natively some of the time.
They probably won't care about this, although I do find it weird when researchers make a whole website with a custom domain just to publish something like this. Personally, it comes off as less trustworthy, since it enters the same realm of bullshit as those market-manipulation attacks on AMD a few years back.[1]
Not saying that's what this is (I'm sure these are legitimate findings), but this tactic raises some red flags for me.
Yeah, I hate this trend of naming vulnerabilities and pandering to the tech press. The CTS Labs FUD was just beyond the pale. Most tech journalism ate up claims that were clearly B.S. and not even self-consistent. They claimed it was impossible for AMD to patch with firmware or microcode, yet in the same sentence claimed an attacker could use it to create a rootkit that couldn't be removed. Nobody took two seconds to think critically about what they were publishing and realize the claim was, in essence, that an attacker could somehow "pull up the ladder behind them" but AMD could not.
Maybe this "unpatchable flaw" in the M1 has more legitimacy than the "critical AMD vulnerabilities" of 2018, but please, stop with the stupid trendy names for vulnerabilities. Let's discuss this on the technical merits and skip the marketing.
Actively marketing yourself and your ideas is one of the most important things you can do. Without marketing, most people simply won't know about your work or will dismiss it. Just because you market something doesn't mean it'll be successful; things still have to prove their worth regardless, or they fizzle out.
How many important security vulnerabilities that had just technical white papers and no marketing have gotten wide coverage? Very, very few. It's also very useful for humans to have a short, memorable name when talking about something.
If they are, then Joseph and MIT, please stand up to them. The standard for infringement is confusing similarity; researchers aren't marketing goods, and there's no risk of confusion.
https://shd.mit.edu/2025/calendar.html
https://shd.mit.edu/2025/lectureReadings.html