> conversely, running a firewall on something like ZFS also sounds like too much.
This makes no sense. Firewalling does not touch the filesystem very much, if at all.
What FS is being used is essentially orthogonal to firewall performance.
If anything, having a copy-on-write filesystem like ZFS on your firewall/router means you have better integrity in case of configuration mistakes and OS upgrades (just roll back the dataset to the previous snapshot!).
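To make the rollback idea concrete, here's a minimal sketch of the snapshot-before-change workflow, assuming the firewall config lives on a ZFS dataset I'm calling zroot/etc (the dataset name, labels, and helper functions are hypothetical; only `zfs snapshot` and `zfs rollback` are real commands):

```python
import subprocess
from datetime import datetime

DATASET = "zroot/etc"  # hypothetical dataset holding the firewall/router config


def zfs(*args):
    """Run a zfs(8) subcommand, raising if it fails."""
    subprocess.run(["zfs", *args], check=True)


def snapshot_before_change(label: str) -> str:
    """Take a named snapshot so a bad rule edit or upgrade can be undone."""
    name = f"{DATASET}@{label}-{datetime.now():%Y%m%d-%H%M%S}"
    zfs("snapshot", name)
    return name


def rollback(snapshot_name: str) -> None:
    """Revert the dataset to the snapshot; -r discards newer snapshots too."""
    zfs("rollback", "-r", snapshot_name)


if __name__ == "__main__":
    snap = snapshot_before_change("before-pf-edit")
    print(f"Edit the rules; if the box misbehaves, call rollback({snap!r})")
```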
My point was that if a hardware vendor were to approach this problem, they'd probably have two (prev, next) partitions that they write firmware images to, plus separate mounts for config and logs, rather than a kitchen-sink CoW FS.
The author argues that values long embedded in Ruby culture (testing, readability, design) are very useful for collaborating with AI, and gives an example of asking Claude to follow TDD.
> What's different about this qualcomm laptop that makes it inappropriate?
Everything else around the CPU. Apple systems are entirely co-designed: the CPU is designed to work with the rest of the components, and everything together is designed to work with macOS.
While I'd love to see MacBook-level quality from other brands (looking at you, Lenovo), tight hardware+software co-design (and co-development) yields much better results.
Microsoft is pushing hard for UEFI + ACPI support on PC ARM boards. I believe the Snapdragon X2 is supposed to support it.
That still leaves the usual UEFI + ACPI quirks Linux has had to deal with for aeons, but it is much more manageable than (non-firmware) DeviceTree.
The dream, of course, would be an open-source HAL (which UEFI and ACPI effectively are). I remember that certain Asus laptops had a microstutter caused by a non-timed loop doing an insane amount of polling. Someone debugged it by reverse engineering, posted it on GitHub, and it still took Asus more than a year to respond and fix it, and only after it blew up on social media (including here). With an open-source HAL, the community could have introduced a fix overnight.
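For illustration only, this is the general shape of the bug being described (a hot loop with no pacing) and the trivial fix, sketched in Python; the real issue lived in Asus's driver/firmware, and `read_ec_register` here is a made-up stand-in:

```python
import time


def read_ec_register() -> int:
    """Hypothetical stand-in for polling an embedded-controller register."""
    return 0


def busy_poll():
    # The reported behaviour, in spirit: poll as fast as the CPU allows,
    # burning a core and periodically starving other work (microstutter).
    while True:
        read_ec_register()


def paced_poll(interval_s: float = 0.1):
    # The obvious fix: pace the loop so the same information arrives at a
    # sane rate and the CPU is yielded between reads.
    while True:
        read_ec_register()
        time.sleep(interval_s)
```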
I get that Linux support is lacking, but what about Windows? Most serious work happens on Windows, and their SoCs seem to have much better support there.
Apple's hardware+software design combo is nice for things like power efficiency, but in my experience so far, a MacBook and a similarly priced Windows laptop seem to be about equal in terms of weird OS bugs and actually getting work done.
I’m getting about 2 hours with current macOS on an ARM MacBook Pro. I used to get 4-5 last year.
This is out of the box. With obvious fixes like ripping busted background services out, it gets more than a day. There’s no way normal users are going to fire up Console.app and start copy-pasting “nuke random Apple service” commands from “is this a virus?” forums into their terminal.
Apple needs to fix their QA. I’ve never seen power management this bad under Linux.
It’s roughly on par with noughties Windows laptops loaded with corporate crapware.
That's unfortunate; perhaps your particular MacBook is having a hardware problem?
As a point of comparison, I daily two ARM Macs (work M4 14 + personal M3 14), and I get far better battery life than that (at least 8 hours of "normal" active use on both). Also, anecdotally, the legion of engineers at my office with Macs are not seeing battery life issues either.
That said, I have yet to encounter anyone who is in love with macOS Tahoe and its version of Liquid Glass.
The current issue is that iOS 26.1’s wallpaper renderer crashes in a tight loop if the default wallpaper isn’t installed, and under Xcode it isn’t.
I have macOS crash reporting turned off, but crashreport still pins the CPU for a few minutes on each iOS wallpaper renderer crash. I always have the iOS simulator open, so two hours of battery, max.
I killed crashreport and it spun the CPU on some other thing.
In macOS 25, there’s no throttle for mds (Spotlight), and running builds at a normal developer pace produces about 10x more indexing churn than the Apple silicon can handle.
Sorry, thought I had posted, but it didn't go through. It's a T480 with the 72 Wh and the 24 Wh batteries, running FreeBSD. The screen has also been replaced with a low-power screen, which helps a lot in saving battery while still giving good brightness.
Most of the time I am running StumpWM with Emacs on one workspace and Nyxt in another. So just browsing and coding mostly.
OpenBSD gets close, but FreeBSD has a slight edge battery-wise. To be fair, that is on an old CPU that still has homogeneous cores. More modern CPUs would probably benefit from a scheduler that understands heterogeneous cores.
Or they just got one of the 'good' models and tuned Linux a bit. I have a couple of Lenovos and it's hit or miss, but my 'good' machine has an AMD chip which, after a bit of tuning, idles with the screen on at 2-3 W, and with light editing/browsing/etc. sits at about 5 W. With the 72 Wh battery that is >14 h, maybe over 20 if I were just reading documentation. Of course it's only 4-5 hours if I'm running a lot of heavy compiles/VMs, unless I throttle them, in which case it's easily over 8 h.
One of my 'bad' machines is more like 10-100 W and I'm lucky to get two hours.
Smaller efficient CPU + low power sleep + not a lot of background activity + big battery = very long run times.
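The arithmetic behind those runtimes is just battery capacity divided by average draw. A quick sanity check using the figures above (the ~15 W heavy-load number is my own guess, not something the comment states):

```python
def runtime_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Idealized runtime: watt-hours of capacity over average watts drawn."""
    return capacity_wh / avg_draw_w


BATTERY_WH = 72
print(runtime_hours(BATTERY_WH, 5))   # light editing/browsing at ~5 W  -> 14.4 h
print(runtime_hours(BATTERY_WH, 3))   # mostly reading at ~3 W          -> 24.0 h
print(runtime_hours(BATTERY_WH, 15))  # heavy compiles/VMs at ~15 W     ->  4.8 h
```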
For this to happen we would need to see a second company that controls both the hardware and the software, and that's not realistic economically. You can't just jump into that space.
You could argue that is exactly what Tuxedo is doing. In this case, they could not provide the end-user experience they wanted with this hardware so they moved on.
System76 may be an even better example as they now control their software stack more deeply (COSMIC).
When I say "control the software", what I mean is that we need another company that can say "hey, we are moving to architecture X because we think it's better" and, within a year, most developers rewrite their apps for the new arch, because it's worth it for them.
There needs to be a huge, healthy ecosystem/economic incentive.
For end users, it's all about the software. I don't care what brand or OS it is, or how much it costs. I want to have the most polished software and I want to have it on release day.
Right now, it's Apple.
Microsoft tries to do this but is held back by the need for backward compatibility (enterprise adoption), and Google cannot do this because of Android fragmentation. I don't think anyone is anywhere near trying this with Linux.
Almost everything on regular Fedora works on Asahi Fedora out of the box on Apple Silicon.
You can get a full Ubuntu distribution for RISC-V with tens of thousands of packages working today.
Many Linux users would have little trouble changing architectures. For Linux, the issue is booting and drivers.
What you say is true for proprietary software, of course. But there is FEX to run x86 software on ARM, and Felix86 to run it on RISC-V. These work like Rosetta. Many Windows games run this way, for example.
The majority of Android apps ship as Dalvik bytecode and should not care about the arch. Anything using native code is going to require porting, though. That includes many games, I imagine.
Not to mention, Proxmox does not officially support running Docker in an LXC container (of course many users still do it). It is not a supported configuration as of now.
Also, looking at the link you posted, it looks like Incus can only do a fraction of what Proxmox can do. Is that the case, or is that web UI a limiting factor?