> If you can confuse or buffer overflow the FS process by sending it messages, you can then edit state inside that process you weren't supposed to be able to access, and as that process controls the security system for everything it's game over.

The assumption here is that the FS is the root of trust for the kernel. (A claim I consider dubious, but what do I know about knowing things?) It's another way to say that if you don't harden your root of trust, you're SOL. Which, ok, fair enough. But that's frankly irrelevant because hardening the root of trust is table stakes. The system cannot be secured without it, regardless of the threat model.

All of the concerns about a definition of "getting hacked" fall out of ignoring the hardening of the root of trust. I don't wish to put words in your mouth, but my interpretation of the argument is essentially, "we can't have nice things because the root of trust cannot be hardened sufficiently to prevent all intrusions."

Iff the FS is the root of trust, and it is not possible to confuse the FS by sending it messages, then there is no game over. You have a root of trust that cannot be broken.
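
To make "not possible to confuse it by sending messages" concrete, here's a toy sketch in Python (the message layout and limits are invented for illustration): every field of an untrusted message is validated before any of it is used, so a malformed or malicious message is rejected instead of corrupting server state.

    import struct

    MAX_PATH = 256  # illustrative limit, not from any real kernel

    def parse_open_msg(buf: bytes) -> str:
        """Parse a hypothetical 'open' request from an untrusted client.

        Layout (invented): 4-byte little-endian path length, then the
        path bytes. Every field is checked before use.
        """
        if len(buf) < 4:
            raise ValueError("truncated header")
        (path_len,) = struct.unpack_from("<I", buf, 0)
        if path_len > MAX_PATH or len(buf) != 4 + path_len:
            raise ValueError("length field disagrees with message size")
        return buf[4:].decode("utf-8")  # strict decode: bad UTF-8 also rejects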

> Microkernels have no way to stop this, which is one reason very few operating systems move the core FS out into a separate process.

My reading of the history reaches a very different conclusion. First, the primary reason that very few operating systems in practice use a microkernel design is because Linus Torvalds believed it was too slow for early 90's hardware [1]. And everyone else just does whatever Linux is doing.

Second, security through surface area reduction (and more broadly, defense-in-depth) was always the point of the microkernel design [2]. Trivially, the principle of least privilege is how one arrives at a secure system. Monolithic kernels, to this very day, continue to prove that they cannot be secured in any practical manner. I can only assume we need things to get worse before kernel developers will tighten up and take security seriously.
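
As a toy illustration of least privilege in a capability style (all names invented, and a real system needs the kernel to enforce this; symlinks alone defeat the toy version): the only authority a component holds is a handle to one directory, so even a fully compromised component cannot reach anything else.

    import os

    class DirCapability:
        """Toy capability: holding this object *is* the permission to
        read one directory. There is no ambient authority to escalate.
        Unix-only and purely illustrative."""

        def __init__(self, root: str):
            self._fd = os.open(root, os.O_RDONLY | os.O_DIRECTORY)

        def read_file(self, name: str) -> bytes:
            # Reject anything that could escape the directory.
            if os.sep in name or name in (".", ".."):
                raise PermissionError("capability is scoped to one directory")
            fd = os.open(name, os.O_RDONLY, dir_fd=self._fd)
            try:
                return os.read(fd, 1 << 20)  # read up to 1 MiB
            finally:
                os.close(fd)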

> So you might as well just run it in-kernel and reap the performance benefits.

There's that same mentality: "speed at all costs", the willful trading of security for performance. That position is just as flawed as trading essential liberty for temporary safety [3]. It doesn't matter how fast the thing is when the slightest bump always causes it to explode, killing everyone on board.

[1]: https://web.archive.org/web/20040210002251/http://people.flu...

[2]: https://www.cosy.sbg.ac.at/~clausen/PVSE2006/linus-rebuttal....

[3]: https://old.reddit.com/r/todayilearned/comments/k0c8o6/til_b...


Ah, I'm not saying we can't have nice things or build more secure software. I think we can build more secure software! But the argument I'm responding to is one that I've seen many times over the years on HN and elsewhere, which is some form of "capability based programming languages fix everything". It's always posited as obvious and easy, as if merely saying "capability based language" is the only explanation required and somehow the entire software industry just missed the memo. Microkernels sometimes come along for the ride, but not always.

You're completely right that the root of trust has to be secured. I argue that the core filesystem is indeed part of the root of trust (ROT), which is why e.g. Apple has put so much effort into making it immutable and fully tied to a cryptographic root hash that's checked by the secure boot process. Moving the FS out of the core kernel wouldn't change much, though: if you have a bug in your FS code at runtime then you're just SOL, even if everything is arranged in a Merkle tree.
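
For anyone unfamiliar with the construction, a minimal Merkle-root sketch (unpadded and illustrative only): the secure boot chain pins a single trusted hash, and the sealed volume's blocks can be checked against it. As noted above, this protects data at rest; it does nothing for the FS code's runtime state.

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(blocks: list[bytes]) -> bytes:
        """Fold a list of volume blocks into a single root hash."""
        assert blocks, "sketch assumes a non-empty volume"
        level = [sha256(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2:  # duplicate the last node on odd levels
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    # Boot-time check (sketch): trusted_root would come from the
    # secure boot chain, blocks from the sealed system volume.
    #   assert merkle_root(blocks) == trusted_root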

The argument being made by josephg in the sibling comment is that in seL4 or similar the page cache would be separated from the crypto code. And maybe he's right, but the better way to get the same outcome is to move IPsec out of the kernel rather than the core FS, as the latter is a ROT and IPsec isn't.

I disagree that the question of what "getting hacked" means is a reformulation of trust roots. A threat model isn't the same thing as a root of trust. The argument over what appears to be minor semantics is important because it scopes your goals and effort. One of the most common failure modes I've seen in security projects is not defining a threat model up front, often leading to an automatic fallback to "the threat model contains everything" followed by despondency and failure when it turns out to be impossible.

I don't think Apple or Microsoft care much about Linus' opinions, tbh. Both NeXT/macOS and Windows NT started out as microkernel designs, and both have oscillated back and forth over the years. The original concept was indeed far too slow and a lot of functionality went back to being monolithic. Then over time some functionality got lifted back out, e.g. the GUI subsystem on Windows. The core FS remains in-kernel in every OS, though, as the cost/benefit ratio of moving it is so poor.


Systems thinking is severely underrepresented in HN comments.

The biggest problem with my phone is that it took too long to find one that isn't comically large (I have an iPhone 13 Mini). The second biggest problem is that the battery is not what it used to be. It lasts 2 days on a full charge instead of 3. The battery will need to be replaced in a few years.

I feel like I will be using this phone until it crumbles to dust. Apple shows no interest in making decently sized phones. I would support the EU enacting legislation requiring at least one phone in each lineup to be no bigger than 60 mm x 125 mm. (The iPhone Mini is ok, but it's still bigger than what I prefer.)

Smaller and lighter phones are an accessibility concern. Miniaturization has been the goal for computers since they were invented. It is incomprehensible that designers and manufacturers are reversing course. My options right now are basically do nothing or replace my phone with a watch.


True, but carpenters using hand tools are a niche.

If you are implying that programmers who hand code are going the way of carpenters using hand tools, I think I can agree.


I do... but I also think all programmers need to know how to hand code, and all carpenters need to know how to use hand tools.


I agree also.


I have to point out that having "high personal standards" is its own fatal flaw. The worst quality code I've seen comes from developers with little self awareness or humility. They call themselves artisans and take no responsibility for the minefield of bugs and security vulnerabilities left in their wake. The Internet is held together with bubblegum and baling wire [1] [2] because artisans reject self improvement.

These same artisans complain about how bad AI generated code is. The AI is trained on your bad artisan code. It's like they are looking in the mirror for the first time and being disgusted by what they see.

[1]: https://techcrunch.com/2014/03/29/the-internet-is-held-toget...

[2]: https://krebsonsecurity.com/2021/11/the-internet-is-held-tog...


A sufficiently detailed spec is not code. It's documentation containing a wealth of information that the code cannot capture. Code describes how a product works, not what it is supposed to do. That is the job of the specification [1] [2]. Notably, the specification omits implementation details. That is the job of the code.

Confusing the *how* and the *what* is very common when discussing specifications, in my experience. Programmers gravitate toward pseudocode when they have trouble articulating a functional requirement.
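
A toy Python example of the distinction (entirely invented): the spec states *what* must hold about the output and says nothing about *how* it is produced, so any number of implementations can satisfy it.

    def satisfies_spec(inp: list[int], out: list[int]) -> bool:
        """The *what*: output is a non-decreasing permutation of the
        input. No implementation detail appears here."""
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        permutation = sorted(inp) == sorted(out)
        return ordered and permutation

    def insertion_sort(xs: list[int]) -> list[int]:
        """One possible *how*. The spec doesn't care that insertion
        sort was chosen; quicksort would satisfy it equally well."""
        out: list[int] = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    assert satisfies_spec([3, 1, 2], insertion_sort([3, 1, 2]))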

> Specifications were never meant to be time-saving devices.

Correct. Anyone selling specifications as a way to save time does not understand the purpose of a specification. Unfortunately, neither does the article's author. The article is based on a false premise.

LLMs experience the same problems as humans when provided with underspecified requirements. That's a specification problem.

[1]: https://en.wikipedia.org/wiki/Software_requirements_specific...

[2]: https://en.wikipedia.org/wiki/Formal_specification


Absolutely not. GPL is freedom for the authors. The end users have conditions they must meet to use the software. Those conditions are restrictions. That is precisely the opposite of freedom for end users.

To anticipate objections, the conditions keep the software "free for everyone", which is true. But that's still explicitly freedom for the authors. The conditions preemptively eliminate end users who would otherwise find the software valuable. Because it is not freedom for end users.


This is irrelevant over the long run because the environment changes even if nothing else does. A compiler from the 1980s still produces identical output given the original source code, if you can run it. Some form of virtualization might be in order, but the environment is still changing while the deterministic subset shrinks.

Having faith that determinism will last forever is foolish. You have to upgrade at some point, and you will run into problems. New bugs, incompatibilities, workflow changes; whatever the case, the determinism property becomes moot.
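
As a sketch of how much "the environment" covers (the build command below is hypothetical): an output hash is only meaningful alongside a fingerprint of everything that produced it, and that fingerprint drifts over time.

    import hashlib
    import platform
    import subprocess

    def build_fingerprint(cmd: list[str]) -> dict[str, str]:
        """Hash a build's stdout alongside the environment that
        produced it. `cmd` is a hypothetical build invocation, e.g.
        ["cc", "-S", "-o", "/dev/stdout", "main.c"]. Determinism only
        holds while every environment input stays fixed."""
        out = subprocess.run(cmd, capture_output=True, check=True).stdout
        env = " ".join([platform.system(), platform.release(),
                        platform.machine(), platform.python_version()])
        return {
            "output_sha256": hashlib.sha256(out).hexdigest(),
            "environment": env,
        }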


I don't know; all that completely pointless, time-wasting staring at hex dumps and assembly language in my youth was a pretty darned good lesson. I say it's a worthwhile hobby to be a compiler.

But your point stands. There is a point beyond which doing more than learning the fundamentals just becomes toil.


This is an excellent observation and puts into words something I have barely scratched the surface of. Along with specifications, formal verification is another domain that received the "just automate it" treatment in the before times.

And because formal verification with LLMs is an active area of open research, I have some hope that the old idea of automated formal verification is starting to take shape. There is a lot to talk about here, but I'll leave a link to the 1968 NATO Software Engineering Conference [1] for those who are interested in where these thoughts originated. It goes deeply into the subject of "specification languages" and other related concepts. My understanding is that the historical split between computing science and software engineering has its roots in this 1968 conference.

[1]: http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PD...

