The problem is a management pattern: removing people and organizational slack because they don't generate immediate profit, and then expecting the knowledge to still be there when it's needed.

Short-term cost cutting leads to fewer junior hires and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred. What remains is documentation and automation.

But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge, and eventually, declining productivity.

AI is following the same pattern. What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What's being sold is workforce reduction.

The West has seen this before, especially in the case of General Electric. GE pursued aggressive short-term financial optimization: cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains. The same mindset is visible today.

The core problem is that decision-makers, often far removed from actual engineering work, believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot. Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.
AGI isn't going to happen within the next 30 years, so this is moot. The actual researchers have said so many times; it's only the business people and laypeople whooping about AGI always being imminent.

You cannot get real, actual AGI (the same ability to perform tasks as a human) without a continuous cycle of learning and deep memory, which LLMs cannot do. The best LLM "memory" is a search engine and document summarizer stuffed into a context window (which is like having someone take an entire physics course and write down everything they learn on post-it notes; then you ask a different person a physics question, and that different person has to skim all the post-it notes and write a new post-it note to answer you). To learn, it would need RL (which requires specific novel inputs) and retraining (so that it can retain and compute answers with the learned input). This would all take too much time and careful input/engineering, along with novel techniques. So AGI is too expensive, time-consuming, and difficult for us to achieve without radically different designs and a whole lot more effort.

Not only are LLMs not AGI, they're still not even that great at being LLMs. Sure, they can do a lot of cool things, like write working code and tests. But tell one "don't delete files in X/", and after a while it will delete all the files in "X/", whereas a human would likely remember it's not supposed to delete some files and go check first. It also does fun stuff like following arbitrary instructions from an attacker found in random documents, which most humans wouldn't do. If they had real memory and real-time RL, they wouldn't have these problems. But we're a long way away from that.
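The post-it-note analogy can be made concrete. Below is a toy sketch of retrieval-based "memory" (crude keyword overlap standing in for embedding search); every function name and limit here is made up for illustration, not taken from any real system:

```python
# Toy sketch: LLM "memory" as retrieval, not learning. Past notes are
# searched and the best matches are pasted into the next prompt's context
# window. Nothing is ever learned; the notes just get re-read every time.

def score(query: str, note: str) -> int:
    """Crude keyword overlap; real systems use embeddings, same idea."""
    return len(set(query.lower().split()) & set(note.lower().split()))

def build_prompt(query: str, notes: list[str], context_limit: int = 200) -> str:
    """Pick the best-matching 'post-it notes' that fit in the window."""
    ranked = sorted(notes, key=lambda n: score(query, n), reverse=True)
    context, used = [], 0
    for note in ranked:
        if used + len(note) > context_limit:
            break                      # window is full; the rest is forgotten
        context.append(note)
        used += len(note)
    return "Context:\n" + "\n".join(context) + "\nQuestion: " + query

notes = [
    "force equals mass times acceleration",
    "the user prefers dark mode",
    "energy is conserved in a closed system",
]
prompt = build_prompt("what is the relation between force and mass", notes)
```

Everything the model "remembers" has to be re-fetched and re-read on every single question, which is exactly the different-person-skimming-post-its problem.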
Even better than earlyoom is systemd-oomd[0] or oomd[1].
systemd-oomd and oomd use the kernel's PSI[2] information, which makes them more efficient and responsive, while earlyoom just polls.
earlyoom keeps getting suggested even though we have PSI now, simply because people got used to running and recommending it back before the kernel had cgroups v2.
Interesting read! If I had not picked Elixir + Godot for the multiplayer game I'm making, then I would've gone with Rust for the whole thing. The old naive version of me would've tried doing it in C++ + Unreal but I knew better this time around.
I think multiplayer game devs are sleeping on Elixir! It has made the network side of things so much easier and more intuitive, with fast prototyping and built-in monitoring - so many lifetime issues are easily found and addressed. I'm pairing Elixir with Godot, with Godot used for the frontend game client. And it's crazy, because I thought the game client part would be the "hard" part, since it would be a new skillset to learn, but Godot makes the actual game part very easy. GDScript is easy to learn, and the way Godot uses "signals" is very similar to the backend code in Elixir with message passing, so it's not a huge cognitive shift to switch between server/client code.
I get that BEAM doesn't lend itself well to highly computational tasks, but I just don't see how that's an issue for many types of multiplayer games. If you aren't making some crazy simulation game, then most of the backend computation is more around managing client state and doing "accounting" every tick as inputs are processed. The most computational task I've had is server-side NPC pathfinding, which I was able to quickly offload onto separate processes that chug along at their own rhythm.
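The per-tick "accounting" pattern described above can be sketched in a few lines. This is a language-neutral illustration in Python (the actual server is Elixir, and all names here are made up): inputs queue up between ticks, and each tick drains the queue and applies them to shared state.

```python
# Minimal sketch of a tick-based game server loop: client inputs are queued
# as they arrive and applied to state once per tick. Heavy work (pathfinding,
# etc.) would run in separate processes at its own rhythm.

from collections import deque

class GameServer:
    def __init__(self):
        self.state = {}          # player_id -> (x, y) position
        self.inbox = deque()     # inputs queued between ticks

    def receive(self, player_id: str, dx: int, dy: int) -> None:
        """Called whenever a client input arrives; just queue it."""
        self.inbox.append((player_id, dx, dy))

    def tick(self) -> None:
        """Runs at a fixed rate: drain queued inputs, update state."""
        while self.inbox:
            player_id, dx, dy = self.inbox.popleft()
            x, y = self.state.get(player_id, (0, 0))
            self.state[player_id] = (x + dx, y + dy)

server = GameServer()
server.receive("p1", 1, 0)
server.receive("p1", 0, 2)
server.tick()
```

In Elixir the inbox is simply each process's mailbox and the tick is a timed `receive` loop, which is why the model maps onto BEAM so naturally.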
> What's worse is there's no good reason for it, it's just a power play by execs who are kind of bad at their jobs.
In the end it's more a financial thing IMO. If you entered a 10 or 15 year lease right before Covid hit, you're now stuck paying 7 years for an office you're not even using - that's objectively bad for any company's financials (and in some cases, for decision-makers' bonuses tied to stuff like "cost per employee").
The other side of rentals, the entire REIT industry, has it even worse; they're in pure panic mode. There are a lot of projects that were completed shortly before, during, or after Covid... and that now lack renters, while interest rates for refinancing keep going up as the free money ran out and central banks imposed serious rate hikes.
And if that's not enough, cities also have a problem: when people don't commute by car, they don't pay road tolls, endangering their financial calculations. When they don't commute by public transport and stop buying subscription tickets, the transport authorities get into financial trouble. When the storefronts of bakeries, coffee shops, and other ancillary services for office drones get boarded up because no one is in the office any more to buy all that stuff, blight sets in.
Side industries are also hit: when people demand fast Internet at home to work efficiently, or use more water, electricity, and trash service, utility providers have to invest serious amounts of money to expand their long-outdated, barely adequate infrastructure. Car insurance plans tied to distance driven per year lose money. The list of economic activities directly tied to an economy of people commuting from their homes to their offices is really fucking long.
Hence, there are massive financial interests putting up serious efforts, both political and in the media, to push for people to come back to the office, even though largely only the already uber-rich profit from that.
> Nobody should start to undertake a large project. You start with a small trivial project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly over-designed. And don't expect people to jump in and help you. That's not how these things work. You need to get something half-way useful first, and then others will say "hey, that almost works for me", and they'll get involved in the project.
A quote from Linus Torvalds that someone posted on HN and I saved almost a year ago.
Cross-compiling to different targets with the `create-exe` command is a very intriguing idea.
> In Wasmer 3.0 we used the power of Zig for doing cross-compilation from the C glue code into other machines.
> This made [it] almost trivial to generate a [binary] for macOS from Linux (as an example).
> So by default, if you are cross-compiling we try to use zig cc instead of cc so we can easily cross compile from one machine to the other with no extra dependencies.
> Using the wasmer compiler we compile all WASI packages published to WAPM to a native executable for all available platforms, so that you don't need to ship a complete WASM runtime to run your wasm files.
Likewise. I've been running my own Jellyfin server and listening to media using Sonixd and FinAmp since the beginning of this year. It was surprisingly easy to set up on my Raspberry Pi, and I'm really enjoying having no subscription and no dark patterns in my music apps.
It really bothers me how companies convert people to subscription services, then twist the knife in ever deeper with scummy dark patterns. I would have been happy with Spotify's old app from 2015 or so pretty much indefinitely... but they just have to shove podcasts in every crevice of the app in the name of profit. And the app continues to get laggier and buggier every month (oh, how I grew to loathe that spinning green circle).
I understand that they're trying to survive, but when I can literally run a more reliable music streaming stack from my living room with FOSS, I question their technical prowess. And boy does it make me wonder what those thousands of engineers are up to.
Yeah, I don't agree with 100% of their political positions (though, to be fair, it is rare that I agree with 100% of anyone's political positions, including my own), but they are really trying to change the way people consume their products. And, many of their products are truly built to last. I have two jackets (one insulated, one a fleece) that are 15+ years old and have been dragged through the dirt, mud, rain, and snow all over the U.S.
Honestly: don't upload unencrypted content to anyone, for exactly this reason.
I have cloud backups of family photos, but they're all through restic or rclone with the crypt filter applied. Privacy is about the right to put yourself in context.
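For illustration, client-side encryption with rclone's crypt backend looks roughly like this (remote names and bucket paths here are made up, and the passwords must be generated with `rclone obscure`, never stored in plain text):

```ini
; ~/.config/rclone/rclone.conf (illustrative sketch)

[cloud]                       ; the unencrypted cloud remote
type = s3
provider = AWS
region = us-east-1

[cloud-crypt]                 ; encryption layered on top of it
type = crypt
remote = cloud:my-backup-bucket/photos
filename_encryption = standard
password = <output of rclone obscure>
password2 = <output of rclone obscure>
```

Backing up to `cloud-crypt:` then encrypts both file contents and file names before anything leaves your machine, so the provider only ever sees ciphertext.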
I'm going to be downvoted to hell for this... but the more I see the level of complexity and amount of engineering going into this, the more I miss Rails and how simple things are there, given most of us are just building CRUD apps anyways.
I'm really glad to see an article like this. I've worked in the space for a while (Fluid Framework) and there's a growing number of libraries addressing realtime collab. One of the key things that many folks miss is that building a collaborative app with real time coauthoring is tricky. Setting up a websocket and hoping for the best won't work.
The libraries are also not functionally equivalent. Some use OT, some use CRDTs, some persist state, some are basically websocket wrappers, fairly different perf guarantees in both memory & latency etc. The very different capabilities make it complicated to evaluate all the tools at once.
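To make the CRDT side of that distinction concrete, here is a toy last-writer-wins register: because merge is commutative, replicas converge regardless of delivery order, with no central transform step the way OT requires. This is a minimal sketch only; real libraries like Yjs and Automerge use far richer structures, and all names here are invented.

```python
# Toy LWW (last-writer-wins) register CRDT. Each write is stamped with a
# (logical clock, replica id) pair; merging keeps the higher stamp, so any
# two replicas that exchange state end up identical.

class LWWRegister:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.value = None
        self.stamp = (0, replica_id)   # replica id breaks clock ties

    def set(self, value, clock: int) -> None:
        self.value = value
        self.stamp = (clock, self.replica_id)

    def merge(self, other: "LWWRegister") -> None:
        """Commutative merge: keep whichever write has the higher stamp."""
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

a = LWWRegister("a"); a.set("hello", clock=1)
b = LWWRegister("b"); b.set("world", clock=2)
a.merge(b)   # merging in either order picks the same winner
```

Even this tiny example shows the perf trade-off mentioned above: CRDTs buy convergence by carrying metadata (the stamps) alongside every value, which is where the memory differences between libraries come from.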
Obviously I'm partial to the Fluid Framework, but not many realtime coauthoring libraries have made it as easy to get started as Replicache. Kudos to them!
A few solutions with notes...
- Fluid Framework - My old work... service announced at Microsoft Build '21 and will be available on Azure
- Yjs - CRDTs. Great integration with many open source projects (no service)
- Automerge - CRDTs. Started by Martin Kleppman, used by many at Ink & Switch (no service)
- Replicache - Seen here, founder has done a great job with previous dev tools (service integration)
- Codox.io - Written by Chengzheng Sun, who is super impressive and wrote one of my fav CRDT/OT papers
- Chronofold - CRDTs. Oriented towards versioned text. I'm mostly unfamiliar
- Convergence.io - Looks good, but I haven't dug in
- Liveblocks.io - Seems to focus on live interactions without storing state
- derbyjs - Somewhat defunct. Cool, early effort.
- ShareJS/ShareDB - Somewhat defunct, but the code and thinking is very readable/understandable and there are good OSS integrations
- Firebase - Not the typical model people think of for RTC, but frequently used nonetheless
I should add... I talk to many folks in the space. People are very welcoming and excited to help each other. Really fun space right now.