Hacker News | als0's comments

I use them on Teams on macOS (and Windows) every day, and that is the main reason I have them. What you say is simply not true.

It's not open. It's not really about devices. And it's certainly not a partnership.


Open as in "open for business".


So they are using RISC-V already for some embedded cores. For application cores, they are participating in the RISC-V consortium to keep the pressure on ARM and also to be ready for the long game.

I do not expect to see Qualcomm-made RISC-V application cores until Android or Windows is completely ported to it, which I think rules out the next several years.


Second bus?


CHERI fundamentally relies on capabilities living in memory that is architecturally separate from program memory. You could do so using a bus firewall, but then you're at the same place as MIE with the SPTM.


That's not true. Capabilities are in main memory as much as any other data. The tags are in separate memory (whether a wider SRAM, DRAM ECC bits, or a separate table off on the side in a fraction of memory that's managed by the memory controller; all three schemes have been implemented and have trade-offs). But this is also true of MTE: you do not want those tags in normal software-visible main memory either; they need to be protected.


A CHERI capability is stored in main memory but with the tag bit for that location set. The tags are stored in separate memory pages, also in main memory in current designs.

Maybe you've been confused by a description of how it works inside a processor. In early CHERI designs, capabilities were in different architectural processor registers from integers.

In recent CHERI designs, the same register numbers are used for capabilities and integer data. A micro-architecture could be designed to have either all registers be capability registers with the tag bit, or use register renaming to separate integer and capability registers.

I suppose a CHERI MCU for embedded systems with small memory could theoretically have tag pages in separate SRAM instead of caching main memory, but I have not seen that.
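The behaviour described above can be illustrated with a toy model: capabilities live in ordinary, software-visible main memory, while the one-tag-bit-per-granule state lives in separate storage that software cannot address directly, and any ordinary store into a granule clears its tag. This is a minimal sketch of the general idea, not any real CHERI implementation; all names and the 16-byte granule size are illustrative assumptions.

```python
GRANULE = 16  # bytes per capability; one tag bit covers one granule (assumed size)

class TaggedMemory:
    """Toy model of CHERI-style tagged memory (illustrative only)."""

    def __init__(self, size):
        self.data = bytearray(size)  # normal, software-visible main memory
        self.tags = set()            # out-of-band tag storage, not addressable

    def store_cap(self, addr, cap_bytes):
        # Capability store: writes the bytes and sets the granule's tag.
        assert addr % GRANULE == 0 and len(cap_bytes) == GRANULE
        self.data[addr:addr + GRANULE] = cap_bytes
        self.tags.add(addr // GRANULE)

    def store_data(self, addr, b):
        # Ordinary data store: any write into a granule clears its tag,
        # so a forged capability bit pattern is never dereferenceable.
        self.data[addr:addr + len(b)] = b
        for g in range(addr // GRANULE, (addr + len(b) - 1) // GRANULE + 1):
            self.tags.discard(g)

    def load_cap(self, addr):
        # A load only yields a valid capability if the tag is still set;
        # otherwise the same bytes come back as plain, untrusted data.
        cap = bytes(self.data[addr:addr + GRANULE])
        valid = (addr % GRANULE == 0) and (addr // GRANULE in self.tags)
        return cap, valid

mem = TaggedMemory(64)
mem.store_cap(0, b"\xAA" * GRANULE)
print(mem.load_cap(0)[1])   # True: tag intact, capability is valid
mem.store_data(4, b"\x00")  # ordinary store into the same granule
print(mem.load_cap(0)[1])   # False: tag cleared, bytes are now just data
```

The point of the sketch is that the capability bytes themselves sit in `data`, the same memory as everything else; only the tag bits are architecturally out of band.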


So something like having built-in RAM for the page tables that isn't part of the normal pool? That way, no matter what kind of attack you come up with, user space cannot pass a pointer to it?


10 years late is better than never.


Can anyone comment on how fast it is on macOS compared to Safari or Brave? On browserbench speedometer I'm getting 29.3 on Orion and 7.70 on Safari. I'd appreciate more comparisons.


macOS 15.6 / M4 Max

  Orion (0.99.135.0.1-beta) - 22.8
  Orion RC (0.99.135.0.1-rc) - 22.6
  Firefox 141.0 - 30.7
  Chrome (138.0.7204.184) - 36.7
  Safari (18.6) - 41.6

Orion feels as snappy as Safari though, even as a beta. Firefox feels like the slowest here :shrug:


> VLIW is not practical in time-sharing systems without completely rethinking the way cache works

Just curious as to how you would rethink the design of caches to solve this problem. Would you need a dedicated cache per execution context?


That's the simplest and most obvious way I can think of. I know the Mill folks were deeply into this space and probably invented something more clever but I haven't kept up with their research in many years.
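A dedicated cache per execution context amounts to statically partitioning the cache sets by context ID, so one context's accesses can never evict another's lines (which is what destroys a VLIW compiler's static timing assumptions). Here is a minimal sketch of that idea under assumed parameters; the names, set counts, and direct-mapped layout are hypothetical, not from the Mill or any shipped design:

```python
NUM_SETS = 64                 # total sets in the shared cache (assumed)
NUM_CONTEXTS = 4              # hardware execution contexts (assumed)
SETS_PER_CTX = NUM_SETS // NUM_CONTEXTS

class PartitionedCache:
    """Toy direct-mapped cache, statically partitioned by context ID."""

    def __init__(self):
        self.lines = {}  # set index -> stored tag

    def access(self, ctx, addr):
        # Each context indexes only into its own slice of the sets,
        # so no other context can evict its lines.
        base = ctx * SETS_PER_CTX
        idx = base + (addr % SETS_PER_CTX)
        tag = addr // SETS_PER_CTX
        hit = self.lines.get(idx) == tag
        self.lines[idx] = tag  # fill on miss, refresh on hit
        return hit

cache = PartitionedCache()
print(cache.access(0, 100))  # False: cold miss
print(cache.access(0, 100))  # True: hit in context 0's partition
cache.access(1, 100)         # context 1 touches the same address...
print(cache.access(0, 100))  # True: ...but lands in a different slice
```

The trade-off is obvious from the sketch: each context gets deterministic timing within its slice, but only 1/NUM_CONTEXTS of the total capacity.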


> those that bought into it wholeheartedly couldn’t take the criticisms.

"invested"



I really hope they do it well. Some of the things that were new in Neuromancer are tropes these days, e.g. the payphone ringing, in The Matrix and, more relevantly, in Person of Interest.

It's going to be very hard to stay faithful to the book while still having it feel fresh.

That, and the inherent difficulty of taking Gibson's prose to the screen. Maybe it will be done by voiceover.


Voiceover worked surprisingly well for Murderbot, so I hope we'll see more of that.


The only cipher there is AES; the rest are not ciphers.

