Hacker News | tombert's comments

I have thought about doing this. I have no idea what would be involved, but given how many DOS-compatible OSes were around in the 1980s, it must be comparatively easier to build than other operating systems.

I remember playing with Caldera OpenDOS and Concurrent DOS years ago, and it always seemed like it might be fun to make my own.


I mean, DOS is essentially just the framework between the software and the BIOS/hardware. Once code is running, the OS isn't doing much.

In that sense DOS programs run without any guard rails. Video memory is just a memory address where you can throw data in and it shows up on screen, but it also means that any kind of pointer bug can overwrite almost anything in RAM. It was trial and error to ensure everything worked, and since machines were rarely online, there was much less risk of security issues being exploited.
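
A purely illustrative sketch of how thin that layer is (using Haskell's raw-pointer API; a real DOS program would do this from assembly or C, and on a modern protected-memory OS this write would simply fault):

    import Data.Word (Word8)
    import Foreign.Ptr (Ptr, wordPtrToPtr)
    import Foreign.Storable (pokeByteOff)

    -- Under real-mode DOS, color text-mode video memory starts at 0xB8000:
    -- even offsets hold character codes, odd offsets hold attribute bytes.
    -- Nothing stops a stray pointer from writing here, or anywhere else.
    main :: IO ()
    main = do
        let vram = wordPtrToPtr 0xB8000 :: Ptr Word8
        pokeByteOff vram 0 (0x48 :: Word8)  -- 'H' appears in the top-left cell
        pokeByteOff vram 1 (0x07 :: Word8)  -- light-gray-on-black attribute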


I use Typst a lot now (which this reminds me of), and the equation support is generally very good, but the thing that gives me pause is the fear that something will be missing, or worse than the LaTeX equivalent.

LaTeX has been the industry standard in the mathematical world for decades, and as a result it has had the most work put into adding new notation and nicer formatting.

For example, I needed to do a proof tree recently. Typically I would use bussproofs in LaTeX (a minimal example is below), but I was using Typst, and while there is a package for handling proof trees in Typst [1], I think they're not very pretty, and as a result I ended up porting the document over to Pandoc markdown and doing the rest of my work there (which is annoying because Typst renders ~1000x faster and has better tooling).

[1] https://github.com/SkiFire13/typst-prooftree
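
For reference, a minimal bussproofs tree in LaTeX looks something like this (an and-introduction rule, just to show the shape):

    \usepackage{bussproofs}  % in the preamble

    \begin{prooftree}
      \AxiomC{$\Gamma \vdash A$}
      \AxiomC{$\Gamma \vdash B$}
      \BinaryInfC{$\Gamma \vdash A \land B$}
    \end{prooftree}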


I remember using curryst for my proof trees (a few months ago) and they looked fine, if I recall correctly. But I agree that using Typst often means searching for a package that may not exist or may not work correctly, since the ecosystem is not very mature yet.

Hadn't seen curryst. Looking at the examples it looks ok. Maybe I should have used that and stuck with Typst.

"Composition" is a word that can mean several things, and without having read the original source I never really understood which version they mean. As a rule, I've always viewed "composition" as "gluing together things that don't know necessarily know about each other", and that definition works well enough, but that doesn't necessarily eliminate inheritance.

So then I start thinking in less-useful, more abstract definitions, like "inheritance is vertical, composition is horizontal", but of course that doesn't really mean anything.

And at some point, it seems like I just end up defining "composition" to mean "gluing together in a way that's not inheritance". Again, not really a useful definition.


I find the Monoid/Semigroup typeclass pretty concisely captures what is generally meant by "composition" in the minimal sense.

> As a rule, I've always viewed "composition" as "gluing together things that don't necessarily know about each other"

The extension to this definition given the context of Monoids would be "combining two things of the same type such that they produce a new thing of the same type". The most trivial example of this is adding integers, but a more practical example is function composition, where two functions can be combined to create a new function. You can also think of an abstraction that lets you combine two web components to create a new one, combining two AI agents to make a new one, etc.
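
Concretely, in Haskell both of those examples are instances of the same two-method interface; a minimal sketch using the stock Sum and Endo wrappers from base:

    import Data.Monoid (Endo (..), Sum (..))

    -- integers under addition: (<>) is (+), mempty is 0
    five :: Sum Int
    five = Sum 2 <> Sum 3

    -- functions under composition: (<>) is (.), mempty is id
    step :: Endo Int
    step = Endo (+ 1) <> Endo (* 2)  -- \x -> (x * 2) + 1

    main :: IO ()
    main = do
        print (getSum five)      -- 5
        print (appEndo step 10)  -- 21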

> "inheritance is vertical, composition is horizontal", but of course that doesn't really mean anything.

This can actually be clearly defined: what you're hinting at is the distinction between sum types and product types, the latter of which describes inheritance. The problem with restricting yourself to only product types is that you can only add things to an existing thing, but in real life that rarely makes sense, and you will find yourself backed into a corner. Sum types give you much more flexibility, which in turn makes it easier to implement truly composable systems.
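
A minimal Haskell illustration of that distinction (the type names here are made up for the example):

    -- product type: every value has ALL of these fields at once
    data Window = Window { width :: Int, height :: Int, title :: String }

    -- sum type: every value is exactly ONE of these alternatives
    data Input = Clicked Int Int | KeyPressed Char | Closed

    -- adding a new alternative to Input leaves existing constructions
    -- alone, whereas adding a field to Window forces every construction
    -- site to supply it
    describe :: Input -> String
    describe (Clicked x y)  = "click at " ++ show (x, y)
    describe (KeyPressed c) = "key " ++ [c]
    describe Closed         = "closed"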


I actually knew most of that (I've done a lot of Haskell). I don't really disagree with what you said, but I feel like you eliminate a lot of stuff that people would consider "composition" but that isn't as easily classified into tidy categories.

For example, a channel-based system like what Go or Clojure has; to me that is pretty clearly "composition", but I'm not 100% sure how you'd fully express something like that with categories. You could use something like a continuation monad, but I think that loses something, because the actual "channel" object has intrinsic value of its own.

In Clojure, there's a "compose" function `comp` [1], which is regular `f(g(x))` composition, but let's suppose instead I had functions `f` and `g` running in separate threads that synchronize on a channel (using core.async)? Is that still composition? (A sketch of both shapes is below.) There are two different things that can result in a very similar output, both of which are considered by some to be composition. So which one of these should I "prefer" instead of inheritance?

Of course this is the realm of Pi Calculus or CSP if you want to go into theory, but I'm saying that I don't think that there's a "one definition to rule them all" for composition.

[1] https://clojuredocs.org/clojure.core/comp
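
To make the contrast concrete, here is a rough Haskell sketch of both shapes (Chan standing in for a core.async channel; `f` and `g` are arbitrary placeholder functions):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)

    f, g :: Int -> Int
    f = (* 2)
    g = (+ 1)

    main :: IO ()
    main = do
        -- ordinary composition, Clojure's (comp g f):
        print ((g . f) 10)  -- 21

        -- the "same" pipeline, except f runs in its own thread and the
        -- stages synchronize on a channel instead of a call stack
        ab <- newChan
        bc <- newChan
        _  <- forkIO (readChan ab >>= writeChan bc . f)
        writeChan ab 10
        y  <- readChan bc
        print (g y)         -- also 21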


I think there's still a category theoretic expression of this, but it's not necessarily easy to capture in language type systems.

The notion of `f` producing a lazy sequence of values, `g` consuming them, and possibly that construct being built up into some closed set of structures (e.g. sequences, or trees, or, if you like, DAGs).
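
A minimal Haskell sketch of that producer/consumer shape, where laziness itself plays the role of the channel:

    -- `produce` yields an unbounded lazy stream; `consume` forces only
    -- as much of it as it actually needs
    produce :: [Int]
    produce = map (* 2) [1 ..]

    consume :: [Int] -> Int
    consume = sum . take 5

    main :: IO ()
    main = print (consume produce)  -- 30, after evaluating five elements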

I've only read a smattering of pi calculus, but if I remember correctly it concerns itself more with the behaviour of `f` and `g`, and more generally with bridging between local behavioural descriptions of components like `f` and `g` and the global behaviour of a heterogeneous system composed of some arbitrary graph of those sending messages to each other.

I'm getting a bit beyond my depth here, but it feels like the pi calculus leans more towards operational semantics for reasoning about asynchronicity, while category theory / monads / arrows and related concepts lean more towards reasoning about combinatorial algebras of computational models.


The thing about inheritance is it limits you to one relation. Composition is not a single relation but an entire class of relations. The user above mentioned monoids. That is one very common composition that is omnipresent in computation and yet completely glossed over in most programming languages.

But there are other compositions. In particular, for something like process connection, the language of arrows or Cartesian categories is appropriate to model the choices. The actual implementation is another story
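
For instance, Haskell's Control.Arrow packages one such small algebra for wiring stages together, sequentially with (>>>) and fanned out with (&&&); a toy sketch, not a full process model:

    import Control.Arrow ((&&&), (>>>))

    -- run one stage, then feed its result to two downstream consumers
    pipeline :: Int -> (Int, Int)
    pipeline = (+ 1) >>> ((* 2) &&& subtract 3)

    main :: IO ()
    main = print (pipeline 4)  -- (10, 2)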

In general, when you want to model something, you first need to decide on the objects and then you need to decide on the relations between those objects. Inheritance is one such relation, and there's no need for it to be treated specially. You will find, though, that very few objects actually fit any model of inheritance, while many have obvious algebras that are more natural to use.


"Gluing together in a way that's not inheritance" is useful enough by itself. Most class hierarchies are wrong, and even when they're right people tend to implement th latest and greatest feature by mucking with the hierarchy in a way which generates wrongness, mostly because it's substantially easier, given a hierarchy, to implement the feature that way. Inheritance as a way of sharing code is dangerous.

The thing composition does differently is to prevent the effects of the software you're depending on from bleeding further downstream and to make it more explicit which features of the code you're using you actually care about.

Inheritance has a place, but IME that place is far from any code I'm going to be shackled to maintaining. It's a sometimes-necessary evil rather than a go-to pattern (or, in some people's books, that would make it a pattern like "go-to").


I don't think that it really is a useful enough definition. There are lots of ways to glue things together that aren't inheritance and that are very different from each other.

I could compose functions together like the Haskell `.`, which does the regular f(g(x)), and I don't think anyone disputes that that is composition, but suppose I have an Erlang-style message passing system between two processes? This is still gluing stuff together in a way that is not inheritance, but it's very different than Haskell's `.`.


But both of those avoid the pitfalls of inheritance. "Othering" is a common phenomenon, and I think it's useful when creating an appropriate definition of composition.

But I don't think it's terribly useful; there are plenty of things that you could do that the people who coined the term would definitely not agree with.

Instead of inheritance, I could just copy and paste lots of different functions for different types. This would be different than inheritance but I don't think it would count as "composition", and it's certainly not something you should "prefer".


That's fair. I'd agree that isn't composition. I'm not sure the thing you describe is worse than inheritance.... It's not composition though.

> Most class hierarchies are wrong

One of the most damaging things is when they teach inheritance with the "a Circle is a Shape, a Rectangle is a Shape, a Square is a Rectangle" kind of thing. The problem is that the real world is exceedingly rarely truly hierarchical. Too many people see inheritance as a way to model their domain, and this is doomed to failure.

Where it works is when you invent the hierarchy, like a GUI toolkit or games. It's hierarchical because you made it hierarchical. In my experience, the applications where it really works you can count on one hand, whereas the vast majority of code written is business software, for which it doesn't really work.


I have always heard "prefer composition to inheritance" also referred to as "has a" instead of "is a." Meaning:

    class Dog : public Animal {};  // inheritance: a Dog is an Animal

    class Car {
        Wheels wheels;             // composition: a Car has Wheels
    };

Yep. "Composition" has many meanings, but in the context of "inheritance vs. composition" it's just referring to "x has a y".

I still find it baffling that Trump managed to convince people that, somehow, "China" is paying these tariffs, despite the fact that that makes no fucking sense. These companies in China aren't charities; even if the exporter were the ones directly paying the tariffs, it would simply be baked into the price of the good being sold.

This really isn't hard. It's literally something I learned on the first day of high school economics.


> "China" is paying these tariffs

What is China doing when they intentionally/greatly devalue the RMB?


All costs will be baked into the price of the good. No free lunches and whatnot.

> I still find it baffling that Trump managed to convince people that, somehow, "China" is paying these tariffs, despite the fact that that makes no fucking sense.

Right. In one forum where someone said that, I posted a U.S. Customs and Border Protection tariff bill. There really is a tax bill, and it has to be paid before you get the stuff.

Usually, there are customs brokers, and shippers acting as customs brokers, to buffer the end user from dealing with CBP directly on small shipments. You still pay.


It's scary how many things Trump says, claims roughly equivalent to "1 + 2 = 4", that people just eat up as truth!

It feels like all the deficits in our education system (going back decades) are finally coming home to roost... so many people lack critical thinking skills and media savvy/awareness. I think it's too late to fix it, now that a sufficient majority of people are susceptible to this deception, and a sufficient chunk of politicians are willing to deceive them... there is too much motivation to keep them dumb. The hatred towards the "Intellectual Elite" is scary, and really is reminiscent of Pol Pot.


It reminds me more of the Cultural Revolution than anything.

Education isn't the problem with Trump voters, stupidity is.

Remember that this is his second term. Every American voter had the benefit of four years of "education" regarding Trump's character and competence. A majority of them responded by freely and enthusiastically demanding four more such years.


There's an outright attack on education as well though.

The fact that you can become a teacher in Florida without a degree, just by being a veteran [1], comes as a result of an exodus of teachers, because there has been an outright attack on educational institutions. The reasons why have been debated, but ultimately it is abundantly clear that the issue isn't just stupidity but an attack on the very act of education itself.

There has been a concerted effort by conservatives to tell the public that modern universities are giant "woke propaganda" factories. I was an adjunct in 2022 and 2023, and I must have missed the memo where I was told to instruct every student to get a sex change, but this didn't stop my idiotic grandmother from saying that "even computer science students get woke indoctrination".

But of course, this isn't a recent thing. I remember when I was younger, conservatives would spend a lot of time taking scientific studies out of context to talk about how we're wasting grant money [2].

The "treadmills for shrimp" is a good example: anyone who spent ten seconds actually doing research on this would see that the "treadmills" actually aren't stupid and they're measuring metabolism, and I am quite confident that the people who started spreading this propaganda knew this, so it's an active lie. I'm sure they think they're doing it for good reasons, but this isn't about "stupidity" at this point, it is an outright attack on research and science.

[1] https://apnews.com/article/fact-check-florida-law-education-...

[2] https://www.npr.org/2011/08/23/139852035/shrimp-on-a-treadmi...


My main game console right now is one of those little gaming boxes you can buy on Amazon for about $400, where I have installed NixOS + Jovian to get the "SteamOS" interface.

I really like it. It really does feel like a "game console"; usually when I've made my own console using Linux, it always feels kind of janky. For example, RetroPie on the Raspberry Pi is pretty cool, but it doesn't feel like a proper commercial product, it feels like a developer made a GUI to launch games.

I have like 750 games on Steam that I have hoarded over the years, in addition to the Epic Games Store and GOG, which can be installed with Heroic, and the fact that I can play them on a "console" instead of a computer makes it much easier to play in my living room or bedroom. It even works fine with Xbox One controllers; I use the official Microsoft USB dongle to minimize latency, and it works great.

I think there actually is a chance that Valve could be a real competitor here, if not a winner.


That sounds interesting because with NixOS it should be very easy to move your config to the next thing, and honestly I prefer NixOS over Arch.

What I wanted to ask you: have you converted the device into an STB as well, or is that still standalone?


I'm afraid I don't know what an STB is.

Set top box [1], like an Nvidia Shield TV [2], to play (digital) TV and other programs. My point being, would you be able to have this machine function as such (not so much portable, rather standalone with a remote control). Because that way, you take the hardware STB out of the equation, saving you (on the Nvidia Shield TV Pro) a good 200 EUR.

[1] https://en.wikipedia.org/wiki/Set-top_box

[2] https://en.wikipedia.org/wiki/Nvidia_Shield_TV


Ah, sorry, didn't know the acronym.

I tried making it my main media center, but I couldn't figure out a way to get good-quality streaming from the main services, since they limit the quality and bitrate pretty heavily for browsers. I thought I could just live with it, but it was bad enough to really bother me. After a lot of effort trying to get things working with different emulators and user-agent settings, I was unable to get the quality to a tolerable level.

I haven't fully given up on that dream, but right now I'm just using an Nvidia Shield TV that I already had.


Which box is that? I personally have an Nvidia Shield with Steam Link to stream games from my gaming computer to my TV. I connected an Xbox controller and it works pretty well. I also use an old iPad for streaming games that don't lend themselves well to a controller.

It's obviously not a direct replacement since it still relies on my gaming machine, which not everyone has, but it gets a pretty good console experience, and it's portable.


I installed NixOS + Jovian on my Steam Deck and it works great as well.

Nix support is built into SteamOS already, btw; I used that to set up Ship of Harkinian, for example.

Could you elaborate? Does steamOS ship the nix binary & mountpoints?

I meant they have set up the Nix directory so you can write to it without having to mess around with bind mounts, overlayfs, etc. because the system is normally read-only.

So all you have to do is install Nix as a user.


IIRC, the nix package manager can run entirely user-level on any distro. It doesn't ship on the Deck, but it's the same process there as anywhere else.

Do you have a link to the little gaming box?

Yep! https://www.amazon.com/dp/B0D733JFML?th=1

The one I ordered had 32 gigs of memory; this was more than a year ago so I'm sure there are better ones now, but I have to say that I feel like this thing "punches above its weight" in that it does seem to run a lot more stuff than I thought it would at a decent framerate.


These are the Beelink boxes.

They got very popular when they released a video of the manufacturing process. https://www.youtube.com/watch?v=ohwI3V207Ts


Wow, these are much better made than I thought.

Hm, does it work well for games? I have a NUC I could use...

I have one of the higher-end beelinks. Super small, quiet, doesn't get hot and I can play modern AAA titles on it, driving my huge screen TV in my living room.

Sorry, I more meant "does Jovian work well?". I have a Beelink ME and love it, but I want a gaming "appliance" OS.

Jovian works pretty great actually. Once I got it set up it pretty much worked exactly as I expected.

Been pretty happy with my SER8 myself... though there are better models available now.

Can you quantify this? Which Beelink? Are you powering a 4K TV? When you talk about playing modern AAA games, which ones, and what settings do you run at?

Fortnite, Cyberpunk, Starfield, probably others I'm forgetting

I believe the TV is 4K, yeah.

It's the Beelink SER9 AMD Ryzen AI 9 HX 370 12core/24thread AI PC Turbo Freq 5.1GHz


Which is more than double the $400 box we are talking about

Very very cool! Every time I look at building a hot gaming PC I think to myself that I'm just going to play SNES games and Elden Ring, so what's the point? Something like this would be great.

Nearly everyone here probably knows someone who has done free labor and "worked for exposure", and most people acknowledge that this is a scam, and we don't have a huge issue condemning the people running the scam. I've known people who have done free art commissions because of this stuff, and this "exposure" never translated to money.

Are the people who got scammed into "working for exposure" required to work for those people?

No, of course not, no one held a gun to their head, but it's still kind of crappy. The influencers that are "paying in exposure" are taking advantage of power dynamics and giving vague false promises of success in order to avoid paying for shit that they really should be paying for.


I've grown a bit disillusioned with contributing to Github.

I've said this on here before, but a few months ago I wrote a simple patch for LMAX Disruptor, which was merged in. I like Disruptor, it's a very neat library, and at first I thought it was super cool to have my code merged.

But after a few minutes, I started thinking: I just donated my time to help a for-profit company make more money. LMAX isn't a charity, they're a trading company, and I donated my time to improve their software. They wouldn't have merged my code if they didn't think it had some amount of value, and if they think it has value then they should pay me.

I'm not very upset over this particular example since my change was extremely simple and didn't take much time at all to implement (just adding annotations to interfaces), so I didn't donate a lot of labor in the end, but it still made me think that maybe I shouldn't be contributing to every open source project I use.


I understand the feeling. There is a huge asymmetry between individual contributors and huge profitable companies.

But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents many multiple developer-years that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.

> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.

This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.

Neither framing quite captures what's happening. You're not in an exchange with LMAX but maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library so why feel bad about this?


Now count how many libraries you use in your day to day paid work that are opensource and you didn't have to pay anything for them. If you want to think selfishly about how awful it is to contribute to that body of work, maybe also purge them all from your codebase and contact companies that sell them?

Maybe those people shouldn’t be doing free labor to give me free libraries either.

Maybe such sociopathic ideas should be shunned in any healthy society.

I dispute heavily being called a “sociopath” because I feel people should be paid for their labor and that it shouldn’t be taken for granted.

One of many reasons I left Apple. My manager's manager would say stuff like this all the time, and then when I actually made my PR he would basically have me redesign stuff from scratch. It made me dread working on projects because I knew that no matter what I did I would be forced to rewrite it from scratch anyway.

"Second-guessing works by forcing someone to reverse acts of destruction. If I delegate a decision to you, you quickly spin up a set of relevant mental models, work to get a lot of momentum into them, pay the cost of killing many possible worlds, and experience the relief of a lightened load to carry. Then, by second guessing, I suddenly demand that you resurrect dead models, so I can validate or override your decision. Next time, you won’t put so much momentum in to begin with." (Venkatesh Rao, Tempo: timing, tactics and strategy in narrative-driven decision-making)

One of the many reasons I'm still at Apple. My manager honors my decisions (sometimes, let's be honest, with gritted teeth).

("People don't leave jobs, they leave managers")


Yep. There are many, many teams at Apple. Your manager makes all the difference in the world. Hated working on the Photos team at Apple, loved all the other teams I worked on. (So I left the Photos team to go work on a team where the manager was cool. I was able to stay at Apple, just move about.)

I don't dispute that. I wish I had been on a better team. My team had a famously high turnover rate, so it wasn't just me. I liked my direct manager just fine, he's a decent dude, but I thought his manager, who I had to deal with a lot, was kind of a dumbass and I did not enjoy working with him at all.

I tried joining other teams but without going into elaborate detail it didn't pan out.


One tactic is forming a group that bullies a manager out of their job. It's depressingly effective, and rife within the professional public sector.

I'm in public service, teach me your ways XD

It only works versus good managers.

I normally move within a company when I want to quit a manager. It's much easier than getting an entirely new job usually. And you have a lot more information about the potential role.

It's also a good way to get into areas you have no experience of.


I tried that, multiple times actually. My options were already pretty limited because I didn't want to move to California, and without going into elaborate detail, the interviews for other teams just didn't work out.

Yuck.

The attitude I like to have is that the author can choose to do the design first (doc + approval, or live discussion, or some other kind of buy-in), or go straight to the PR.

If the design is agreed on first, reviewers would need a really good reason to make the author go back and rethink the design; it happens, sometimes a whole team misses something, but it should be very rare. Usually there are just implementation things, ones that are objective improvements or optional. (For project style preferences, there should be a guide to avoid surprises.)

If the author goes straight to a PR, it's an experiment, so they should be willing to throw it away if someone says "did you think about this completely different design (that might be simpler/more robust/whatever)".

This is not the approach suggested by this article, and I'm okay with that. I tend to work on high reliability infrastructure, so quality over velocity, within reason.


I like this - and I think it’s a natural reality. When trust is low (for many reasons, including joining a new team), it may reduce risk to start with a design doc.

There are a lot of reasons anyway I like to have the design doc around. A few:

* I think the designs are often better when people write down their goals, non-goals, assumptions, and alternatives rather than just writing code.

* Reading previous designs helps new people (or even LLMs I guess) understand the system and team design philosophy.

* It helps everyone evaluate if the design still makes sense after goals change.

* It helps explain to upper management (or promotion committee in a large company) the work the author is doing. They're not gonna dig into the code!

...so it's usually worth writing up even if not as a stage before implementation starts. It can be a quick thing. If people start using a LLM for the writing, your format is too heavy-weight or focused on style over substance.

There's definitely a negative side to approval stages before shipping, as this article points out, but when quality (reliability, privacy/security, ...) is the system's most important attribute, I can't justify having zero. And while getting the design approved before starting implementation isn't necessary, it should avoid the bad experience tombert had of having to redo everything.


Any chance you were in one of those infamous orgs?

Yeah, my org had a pretty high turnover rate. I didn't enjoy it, I wish my transfer had gone through because I suspect I would have enjoyed the team I was applying for considerably more. Not how it worked out though, and after a certain point I couldn't take it anymore.

"Just decide for yourself and be self-reliant... no, not like that!"

I feel like collaboration can work great with a group of exactly two people. It's not terribly hard for two people to partition work and actively help each other. With two people working on a project, both people can realistically understand most of the codebase, and can competently review each other's pull requests.

I feel collaboration suffers from combinatorial complexity, though: with n people there are n(n-1)/2 pairwise communication channels, so two people share one channel while five share ten, and I feel any number bigger than two ends up doing more harm than good. Once you have more than two people, the codebase starts becoming more segmented, it becomes really difficult to agree on decisions, and the project becomes a lot harder than it needs to be.

If I ever get into management, I think I will try and keep this in mind and try and design projects around two-people teams.


If each person can eat half a pizza, you've reinvented the Amazon approach: https://martinfowler.com/bliki/TwoPizzaTeam.html

I've been pretty happy with my Garmin Instinct Crossover. It looks like a regular watch, so if Garmin decides to drop support for it, it still works as a regular watch with about a two-month battery.

fwiw I bought a two-year old Garmin model on clearance like 5 years ago and they continued to support it all the way up until I bought an Instinct 2 this year. So I think you’ll be happy with the support you get.

With Gadgetbridge they can drop as much support as they want.

Gadgetbridge has very limited aGPS support, right? Without aGPS updates, any GPS device is going to have terribly long lock times.

Ideally, watches should do what Garmin's do: mount as mass-storage devices via USB, and let the user download activity data and upload updates or routes.


For my Instinct 3 Solar, Gadgetbridge can, as far as I can see, deliver aGPS files.

Settings -> Location -> AGPS

