I share the same sentiment. I've had the same Arch install running since ~2016 and have been using Arch since about 2013, and the number of times I've needed to chroot from a live image is under 10. Most of those were related to systemd breaking things during an update, which is pretty much a non-issue these days.
Compared to Windows-land, where nuking and reinstalling the entire OS is a routine maintenance task, checking the Arch news for any manual intervention and running `pacman -Syu` is all I really ever think about.
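For the curious, the whole routine is roughly this (a sketch; the orphan check is just my habit, not required):

    # skim https://archlinux.org/news/ for manual-intervention notes first
    sudo pacman -Syu        # sync package databases and upgrade everything
    pacman -Qtdq            # optionally list orphaned packages to review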
I think this is a very interesting observation, because my experience has been pretty much the opposite. Disclaimer: I grew up with Windows.
Yet I've never had to reinstall Windows on any of my devices, ever, and I've never had things behave in unusual or unpredictable ways.
Meanwhile, a touchpad-gestures utility that's highly recommended (on Reddit, SE/SO, and even a few distro forums) borked my GNOME setup. (Uninstalling it, as you might have guessed from my story and tone, did diddly squat.)
Just today I manually flushed my cached dnf packages (or cleared them? Not sure of the terminology). In the past, I had to debug manually because the default timeout in Fedora was tripping on my few hundred ms of internet latency. That was a fun rabbit hole: "why can't I install an app that's only available via dnf install?" "Oh, because Fedora assumes you have good internet. But don't worry, if you have Ubuntu you won't have these issues!"
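For reference, the knobs involved were roughly these - from memory, so double-check `man dnf.conf` before copying anything (the values below are illustrative):

    # clear cached packages and metadata (the "flushing" above)
    sudo dnf clean all

    # /etc/dnf/dnf.conf - make dnf more tolerant of high-latency links
    # timeout = seconds before a mirror connection is considered dead (default 30)
    # minrate = bytes/sec below which a connection is dropped (default 1000)
    [main]
    timeout=60
    minrate=100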
...I've never even been made aware of what download timeouts Windows has. As it should be, for a user.
I could go on and on. My Windows partition goes nearly months at a time on nothing but sleep, typically only rebooting if I run out of battery or want to install an update. Linux... doesn't have hibernate yet. Fortunately, it doesn't matter! ...Because some odd memory leak (and GPU driver stuff, perhaps?) forces me to shut down every so often. Oh well.
I'm not sure where you got the idea that Linux doesn't have hibernate - there's both the userspace systemd-hibernate path and the kernel's swsusp, and both work well (although you may need to make sure you have a swap partition large enough for it to function).
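A minimal sanity-check sketch (the usual rule of thumb is swap at least as large as RAM, plus a `resume=` kernel parameter pointing at it):

    free -h                    # how much RAM has to fit into swap
    swapon --show              # is there swap, and is it big enough
    # kernel cmdline needs something like resume=UUID=<swap partition UUID>
    sudo systemctl hibernate   # suspend-to-disk via the systemd path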
Also, the other issues you're describing do sound frustrating, but I think they're a byproduct of an entirely different culture. Exposing user-configurable timeouts, and making you aware of them during troubleshooting, enables you to deeply understand your system and how it's configured. In Windows, even if there are defaults for things like that, they're likely not exposed to the user or configurable at all. If the default settings are bad, you're just stuck with them, and you aren't expected or intended to modify anything to better suit your needs.
My experience with Arch is mostly down to having been a fairly proficient Linux user before switching over and being very comfortable reading the wiki or the bbs and tinkering to find solutions. A lot of my prior experiences with Debian and other "friendly" distros put me through the wringer too, and I've found that Arch's rolling release fits my preferred workflow much better than Ubuntu, Debian, Fedora, or the other "batteries included" distros.
As an early adopter (I signed up for the matrix.org Riot instance some time in 2016) and someone who has run a homeserver on and off for nearly a decade, my primary issue with Matrix these days is that homeserver development still feels largely stagnant, because the spec often seems to follow features from Synapse instead of the other way around.
It seems like a lot of MSCs are implemented as experimental in Synapse while they're under active development, but it sometimes takes months or years for the MSC to be ratified in a form stable enough for other homeserver implementations to pick up. Examples that immediately come to mind are sliding sync, threading, and spaces. In the case of sliding sync, the proxy deployment helped, but I think Synapse is still the only server that actually supported it (or maybe currently supports it?), and in the case of threads, it was more of a client-side issue of actually parsing and rendering m.thread events.
My feeling maybe isn't backed up by reality or the actual development data, but it makes building on the ecosystem feel difficult.
The other real blocker to being a Discord killer, imo, is the permissions model. Having power levels 0-100 is a lot less flexible than the RBAC-style model Discord uses. Once Spaces rolled out, a feature that would have been nice is restricting access to certain spaces, or to rooms that are children of a space, based on a role, which afaik still isn't possible with the current permissions implementation.
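For context, a room's entire ACL boils down to a single `m.room.power_levels` state event - something like this sketch, with illustrative values and hypothetical server/user names:

    # $ACCESS_TOKEN and $ROOM_ID are placeholders; the room ID must be URL-encoded
    curl -X PUT \
      -H "Authorization: Bearer $ACCESS_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
            "users_default": 0,
            "users": { "@admin:example.org": 100, "@mod:example.org": 50 },
            "events_default": 0,
            "state_default": 50,
            "events": { "m.room.topic": 50, "m.room.power_levels": 100 },
            "ban": 50, "kick": 50, "redact": 50, "invite": 0
          }' \
      "https://matrix.example.org/_matrix/client/v3/rooms/$ROOM_ID/state/m.room.power_levels"

Every permission is just a number on that one flat scale, which is exactly why role-scoped or space-scoped permissions don't really fit.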
The book that's really stood out to me is Kernighan and Pike's "The Practice of Programming"; it steered me in a really good direction when I was first learning to write code.
I really wish they'd do a revised 2nd edition using Go as the base language instead of C, but otherwise it still holds up really well.
But with a containerized app image you can reduce the blast radius of a poorly maintained app compared to running it bare metal on a host with other services. You can also still maintain base images to patch and try to reduce the vulnerability surface.
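A minimal sketch of what I mean (image and network names are made up) - drop privileges and wall the thing off rather than trusting the app:

    # one-time: give the app its own network instead of the default bridge
    docker network create isolated-net

    # read-only rootfs, no capabilities, no privilege escalation, and
    # memory/pid limits so a runaway process stays contained
    docker run -d --name sketchy-app \
      --read-only --tmpfs /tmp \
      --cap-drop ALL \
      --security-opt no-new-privileges:true \
      --network isolated-net \
      --memory 512m --pids-limit 200 \
      example/poorly-maintained-app:pinned-tag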
Rootkit anti-cheats can still often be bypassed using DMA and external hardware cheats, which are becoming much cheaper and increasingly common. There are still cheaters in Valorant and in CS2 on FACEIT, both of which have extremely intrusive anti-cheats that only run on Windows.
At the level of privilege you're granting just to play a video game, you'd need a dedicated gaming PC isolated from the rest of your home network, lest another CrowdStrike-level issue take place from a bad update to the ring 0 code these systems are running.
The big publishers already have their own launchers and platforms, and they're increasingly moving back onto Steam because they see higher PC player counts and sales when their games are there.
I view it as Valve doing me a favor by adding friction to installing a rootkit just to play video games.
There have also been numerous anti-cheats that work well while running in userspace (EAC, BattlEye, etc.) and that have been enabled for Linux/Proton users (including by EA with Apex Legends at one point). A lot of the missing Linux support comes down to developers/publishers not wanting to enable it, not to technical reasons.
On the other hand, you can't play any of the older Battlefields due to cheating (not "is he cheating?" cheating, but blatant "this guy is speedhacking and headshotting everyone" cheating that the server could easily detect if anyone cared).
There are hacks these days that sniff the PCIe bus with an FPGA to MITM the RAM, reconstruct the game state, and draw an overlay on top of the monitor output.
It's a crazy arms race, and I don't know if even kernel mode can compete at the end of the day.
I think this shift away from community-led multiplayer is approaching a dead-end with respect to this hacking arms race.
Player bans and votekicks used to be so easy to do. And while there were some badmins, I argue it still resulted in an overall healthier multiplayer ecosystem.
Of course, we know this shift is so the developer can control the game more tightly for monetization purposes. But I think the end result is more harm than good.
Sorry to hijack this thread to ask - but what is the current state of sliding sync? Does it still require a separate proxy service if you're self-hosting a homeserver, or is it upstreamed into Synapse? Also, is there a list of clients that are sliding-sync aware?
Not that many clients have actually adopted it though, because the MSC is still not 100% finalised - it's finally entering the final review stages over at https://github.com/matrix-org/matrix-spec-proposals/pull/418.... Right now Element X uses it (exclusively), and Element Web has experimental support for it; I'm not sure if any others actually use it yet.
> what happened to just having a single device and having it run the code directly and show you the result directly?
Having access to multiple computers/devices as a single user became cheap and common. If it were still the 2000s (or maybe early 2010s) and somebody used a single PC for most of their tasks, that'd make sense, but that's just not the reality most people live in anymore.
As far as I can tell, the price range for consumer PCs hasn't really moved since then. If anything it's worse now for people who expect to have a good quality graphics card. Owning a smartphone outright isn't cheap, either.
You can buy a mini PC for a couple hundred USD that is capable of most desktop tasks and can also handle server tasks. Good-quality integrated-graphics APUs are also plentiful and fairly easy to come by these days.
Looking through the commit graph for Omarchy is wild. It has 2000+ commits, most of which contain the kind of intermediate work pushed straight into the trunk that you'd expect from a junior who doesn't squash their local work.
There's also Omakub[0], which was sort of a precursor to Omarchy, and it gives users a
`wget -qO- <some url> | bash`
as the means of installation, where the install script is a thin wrapper around another
`eval "$(wget -qO- <some url>)"`
that then git clones a repository and executes a third script.
Those are definitely the kinds of patterns I'd expect a prolific software engineer to use, and to encourage complete Linux novices to be comfortable with: just piping arbitrary wgets into a shell.
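For contrast, the boring pattern you'd hope a beginner gets taught instead (a sketch; the filename is whatever you like):

    wget -qO omarchy-install.sh <some url>
    less omarchy-install.sh          # actually read what it's about to do
    sha256sum omarchy-install.sh     # compare against a published checksum, if any
    bash omarchy-install.sh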
It’s as if DHH is so confident in their own work that they skip all the foundational steps and best practices -- even on an operating system project.
The un-squashed commits are just the tip of the iceberg; the installation method is the most egregious part.
Omarchy's distribution and install is the kind of thing I'd expect to see from a college project, not a leader in the tech world. I don't see DHH as a leader in anything except controversy and clicks these days, though, so the cheerleading around Oma-anything confuses me.