I recently switched to native Linux. I have a dotfiles repo that installs and configures a number of CLI tools, complete with theme switching, that has worked for years on Arch / Debian / Ubuntu and macOS, with WSL 2 support too.
Now I've been piecing together a complete desktop environment using niri and many other tools. If you're using Arch, you can opt in to that part. There's an install script which can be run on a new or existing system, documentation on each package, and more at https://github.com/nickjj/dotfiles. I'm using it on 2 systems at the moment.
I also switched to Linux last month. It hasn't been a smooth experience with my GPU: I'm hitting memory leaks in popular compositors, and I get 150-200ms of keyboard input delay in all games under some compositors but not others. I documented as much as I could here https://nickjanetakis.com/blog/gpu-memory-allocation-bugs-wi....
Still, despite all of that, when it works it's better than Windows. It's just ironic that my Linux desktop is less stable than Windows 10, since I have to reboot 2-3 times a day from GPU memory leaks. Windows 10 was really stable on the same hardware and had no input delay in games; I only rebooted when the OS pushed an update, since I keep my machine on 24/7.
It would be fair to mention that this is happening for you with a decade old GPU using an EOL driver, which sucks, but is unlikely to be a common experience.
> It would be fair to mention that this is happening for you with a decade old GPU using an EOL driver, which sucks, but is unlikely to be a common experience.
The drivers are still being maintained by NVIDIA until August 2026, and they were only classified as "legacy" on paper 1 day before I installed them.
The compositor memory leak is affecting a lot of people. Since COSMIC and niri are both built on the same compositor library (smithay), there are threads on GitHub from people with modern GPUs, both NVIDIA and AMD, who experience it. There are a lot of replies across all of the different open issues.
The GPU allocation issue on Wayland (separate from the memory leak) also has hundreds of replies on the NVIDIA developer forums from people using new NVIDIA cards with the latest drivers.
The thing is, most people don't talk about either of them, because if you have 8+ GB of GPU memory and turn your computer off every night, you won't notice the problem: all GPU memory allocations get reset on shutdown. It's a more direct problem for me because I only have 2 GB of GPU memory, but that doesn't mean it isn't common; the root cause is still there. Even if I switched to an AMD GPU, the niri / smithay memory leak would still be present. With 4x the memory (8 GB instead of 2 GB), instead of rebooting twice a day I'd be rebooting every 2 days.
Since I opened that issue on GitHub, NVIDIA did acknowledge it and suggested I try their experimental egl-wayland2 library. I tried it; it hasn't fully fixed things, but it has made GPU memory allocations more stable and even fixed 1 type of leak in niri. As far as I know this library is decoupled from the drivers themselves: the same library could still be used with the 590 series, it's not 580 specific, so it doesn't depend on your GPU model.
> The compositor memory leak is affecting a lot of people. Since COSMIC and niri are both built on the same compositor library (smithay), there are threads on GitHub from people with modern GPUs, both NVIDIA and AMD, who experience it. There are a lot of replies across all of the different open issues.
But then this sounds like a bug in that particular compositor rather than the driver(s)?
fwiw, I have a modern nvidia card, and use the proprietary drivers, and Wayland (KDE/KWin), and that box has a few weeks of uptime.
> But then this sounds like a bug in that particular compositor rather than the driver(s)?
The NVIDIA GPU memory allocation issue affects all NVIDIA cards, at least based on that forum post, where a good number of people are replying with similar issues across a wide combination of cards and drivers.
You probably don't notice it because you rarely use all of your GPU's memory at once. Try running `watch nvidia-smi` and then open multiple copies of every hardware accelerated app you have. Once you get near max GPU memory, apps will either start crashing or fail to open, and if you're semi-unlucky your compositor (including KDE on Wayland) will crash when it's the one trying to allocate resources to render the window. I've had plasmashell or kwin hard lock or crash many times with just a small amount of testing.
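If you want to try reproducing it, something along these lines should do it (a sketch; alacritty is just an example, any hardware accelerated app works):

    # Terminal 1: watch GPU memory usage, refreshing every second.
    watch -n 1 nvidia-smi

    # Terminal 2: keep spawning hardware accelerated windows until
    # allocations start failing.
    for i in $(seq 1 30); do alacritty & sleep 1; done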
The expectation is that the driver falls back to allocating those resources in system memory instead of denying the app memory. That's how it works on X11, and with an AMD card it works correctly on both Wayland and X11.
The leak is separate and compositor specific, possibly related to NVIDIA driver bugs to some degree, but you wouldn't experience it unless you used this compositor. The leak is that the compositor never releases GPU memory after a window is closed, so simply opening and closing apps leaks memory. Combine this with the first problem and that's how you end up rebooting every few hours on a lower end GPU. Both AMD and NVIDIA are affected.
The post goes into all of these details, with reproducible tests and even demo videos showing the first GPU memory allocation problem in Plasma Wayland but not Plasma X11. It also links to all of the related GitHub issues I could find.
I can't speak for DAWs, but in 2019 when I tried switching to native Linux, my Scarlett 2i2 3rd gen USB audio interface did not play nicely with Debian at the time. I'd get endless crackles and pops during recording and playback, and I spent days going over tons of audio configs and tools (JACK, ALSA, PulseAudio, etc.). They weren't xruns, at least JACK wasn't reporting them as such, and buffer sizes were normal too.
The good news is the same interface today works fine with PipeWire, without needing to tweak anything. I am using Arch this time around.
Yeah, these days the Scarletts are fine (as is the aging FireWire-connected Edirol FA-101 I've got attached). Non-DAW things such as SuperCollider work fine, but I focus less on that sort of environment (which isn't a dis, SC is great!).
When I switched to zsh 5 years ago, I went with a stock setup too.
There are quite a few things the OP's post didn't mention about shell history that I think are really important:
    setopt HIST_IGNORE_ALL_DUPS # Never add duplicate entries.
    setopt HIST_IGNORE_SPACE    # Ignore commands that start with a space.
    setopt HIST_REDUCE_BLANKS   # Remove superfluous blanks from commands.
It's possible to roll your own prompt that does helpful things without using starship.
Also, you can roll your own zsh plugin manager in a few lines of shell script; `fast-syntax-highlighting` is a really useful plugin that gives real-time feedback as you type commands.
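A minimal version can be as small as this (a sketch; the clone directory under ~/.zsh-plugins is an assumption, and the plugin file layout happens to match `fast-syntax-highlighting`):

    # Clone a GitHub repo once, then source its plugin file on startup.
    plugin() {
      local dir="$HOME/.zsh-plugins/${1:t}"
      [[ -d "$dir" ]] || git clone --depth 1 "https://github.com/$1" "$dir"
      source "$dir/${1:t}.plugin.zsh"
    }

    plugin "zdharma-continuum/fast-syntax-highlighting"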
Most of those things are mentioned here https://nickjanetakis.com/blog/i-recently-switched-to-zsh-an.... The post is 5 years old, but just about all of it is still what I do today. Since then my setup has evolved and gotten better IMO, including a dedicated Vim-mode zsh plugin that's a big improvement over the default zsh key binds, plus another plugin that shows zsh's tab complete in fzf instead of zsh's own menu. The tab complete demo video is here https://nickjanetakis.com/blog/hooking-up-fzf-with-zsh-tab-c....
I'm happy to see this, and not because I wish Adam failure. I'm a Tailwind user myself and use it in all of my projects, and generally I'm a fan of Adam and respect his business.
The happy (in a bad way) part is seeing very successful projects like Tailwind get financially fucked by AI. It means it's not just me.
I'm a small tech course creator who was able to make a living for 10 years, but over the last 3 years it has tanked to the point where I make practically zero, almost all of it due to less traffic hitting my blog, which was the source of paid course purchases. I literally had to shift my entire life around after 25 years of being a successful contractor because of this.
I hope the world understands how impactful it is, in both good and bad ways, to have unchecked AI scrape the world's content and funnel everything directly through a monetized platform while content creators get nothing in return.
Out of curiosity, do you think the decrease in revenue for your tech course business is due to lack of demand (i.e. potential customers just ask an LLM rather than learn from a course now), or due to disruption in your acquisition channel (i.e. reduced traffic from SEO to your blog due to potential customers seeing Google's LLM answers at the top of the search results page)? Like for example, do you have other marketing channels such as social media, youtube or paid ads?
I think it's both, but either way the end result is the same: less traffic means fewer sales.
I don't have paid ads; everything has been organic, with the blog being the main funnel into everything. For quite a few years I tried creating a podcast, and I also have 5+ years of weekly YouTube videos, but the traffic back to the courses from those is close to nothing.
Conversion rates haven't changed; they've remained consistent.
Thank you for sharing, I really appreciate it! I've been working on my own tech course/education platform for the past couple years, and the landscape seems to be moving beneath our feet!
I discovered piracy of my content long ago, but it wasn't very acute before AI because only a small number of folks pirated this type of content. I ignored them and put 0% energy into it because I wanted to focus on the happy path of people who don't pirate the content.
If you think of AI as pirating media, it's providing that media to everyone in a context specific form, so yes, it's a pretty interesting analogy. Not quite a 1 to 1 match, but the end outcome is the same, and that's all that matters here.
I think there was also PlanetQuake. I loved these sites and frankly, they were beautifully designed. They show their age, but apart from not being accessible and not scaling to different screen sizes, the UI was really structured and easy to navigate. I miss these communities.
I still have around 20 PSDs of all of the different Quake clan / gaming ladder sites I put together back in the day.
I think that's why those sites all looked unique. The design started with a blank image in Photoshop, because back then you'd slice the PSD up into images and stitch them together in code afterwards.
Today you can easily design a site without ever touching an image editor, since it can all be done with CSS rules.
I didn't want to wipe everything and use a different distro unless it was a last resort.
I did try KDE Plasma (Wayland and X11), which produced different results than niri; that was enough to get me to realize niri's compositor has a memory leak. The KDE Plasma test was also enough to demonstrate that the GPU still had problems allocating system memory under Wayland but was fine on X11. I didn't stick with X11 because niri doesn't use it and it's much less smooth than Wayland. The post covers all of that.
> "People travel on public transport to get somewhere, not to interact with the ticketing system."
I really like this line because it applies to so many things we build.
Public transport is an interesting example of that idea. If you need to use it but can't depend on it, it's a huge stress creator and time waster: suddenly you need to pad your schedule by hours to make sure you don't miss your appointment.
Notice the wording there: "miss the appointment", not "miss the bus or train". The outcome is what matters, not the transport mechanism.
Or maybe you're traveling in a foreign country. Having every metro car digitally display the line, with the previous stops, current location and next stops in English, goes a long way toward eliminating doubt. Clear audio announcements in multiple languages are important too, because maybe you're sitting down with everyone standing in front of you and can't see the display. Having a non-digital map on the wall as a backup in case of hardware failure is a good idea too.
Thinking "no one needs any of that waste because they can just use their phone" is the wrong mode of thinking. Maybe there's no service because you're underground or maybe that person's eSIM isn't hooked up yet or isn't working. These are real problems.
In the grand scheme of things, the overall travel experience matters a lot. It can be the difference between a smooth trip and a questionable one, and between recommending the country to your friends and family or not. Suddenly it affects tourism rates at a global scale. Maybe not a lot, but it has an impact.
I don't think Wayland is fully ready, at least not with NVIDIA GPUs with limited GPU memory.
I have a 7,000 word blog post and demo videos coming out this Tuesday with the details, but I think I uncovered a driver bug after switching to native Linux a week ago on a card with low GPU memory (a 750 Ti).
Basically, on Wayland, apps that request GPU memory will typically crash if there's no more GPU memory to allocate, whereas on X11 those requests are transparently offloaded to system memory, so you can open as much as you want (within reason) and the system stays completely usable.
In practice this means opening a few hardware accelerated apps on Wayland, like Firefox and most terminals, will likely crash your compositor, or at the very least crash those apps. The compositor can crash or become unstable because if it's the one that gets an error allocating GPU memory while spawning the window, it will do whatever weird thing it was programmed to do in that scenario.
Some end users on the NVIDIA developer forums looked into it and determined it's likely a problem for everyone; it's just less noticeable if you have more GPU memory, and especially less noticeable if you reboot daily, since that clears the GPU memory leaks that are also apparent in a lot of Wayland compositors.
If you already write your posts in Markdown, it makes sense for sure.
About a year ago I converted my 500+ post Jekyll blog to Hugo. Overall it's been a net win, but boy did I find myself looking up syntax in the docs a lot. Thankfully not so much nowadays, but figuring out the templating syntax was rough at the time.
Jeff, you don't have to set draft to false. You can keep your drafts in a separate directory and use Hugo's cascade feature to handle it. You also don't have to update the date in your front matter if you prefix the file name with YYYY-MM-DD and configure Hugo to read the date from the file name.
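For anyone curious, here's roughly what both look like (a sketch based on Hugo's cascade and front matter date config; the drafts/ directory name is just an example):

    ---
    # content/drafts/_index.md: cascade draft status to everything under it
    cascade:
      draft: true
    ---

    # hugo.toml: read the date from a YYYY-MM-DD file name prefix
    [frontmatter]
      date = [":filename", ":default"]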
Just a heads up, you didn't mention this in your post, but Hugo adds a trailing slash for pretty URLs. I don't know if you had them before, but it's potentially new behavior and a canonical URL difference unless you adjust for that.
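If the trailing slashes are a problem, I believe Hugo's uglyURLs option emits `.html` files instead of directories, which sidesteps it (hedging here, I kept the pretty URLs myself):

    # hugo.toml: emit /blog/my-post.html instead of /blog/my-post/
    uglyURLs = true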
When I did the switch from Jekyll to Hugo, I wrote a 10,000 word post with the gory details and how I used Python and shell scripts to automate converting the posts, plus covered all of the gotchas I encountered. There are sections focused on the above things I mentioned too: https://nickjanetakis.com/blog/converting-my-500-page-blog-f...
> That gives near instant live reload when writing posts which makes a huge difference from waiting 4 seconds.
Mhm. Why? I can write all of my post and look at it only afterwards? Perhaps if there's a table or something tricky I want to check before. But normally, I couldn't care less about the reload speed.
> I use that plugin because it digests your assets by adding a SHA-256 hash to their file names. This lets me cache them with nginx. I can’t not have that feature.
> Mhm. Why? I can write all of my post and look at it only afterwards?
My site has a fixed max width, which is the width most tablets and desktops will view it at.
Sentence display width is something I pay attention to. For example, sometimes I don't want one hanging word to get its own full line (a "hanger") because it looks messy. Other times I do want it, because it helps break up a few paragraphs of similar length and makes them easier to skim.
Seeing exactly what my site looks like while writing lets me catch these things as I go, and a fast preview enables that. Waiting 4 seconds stinks.
> Why? [asset digesting and cache busting with nginx]
It helps reduce page load times for visitors and saves bandwidth for both the visitor and your server. If their browser already has the exact CSS or JS file cached locally, it can serve it straight from cache, skipping even the server side call to check whether the asset is fresh or needs an update.
The concept of digesting assets with effectively infinite cache header times isn't new or something I came up with. It's been around as a general purpose optimization for 10+ years.
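The nginx side is tiny once the file names are digested; something along these lines (a sketch, the extensions and max-age are examples):

    # Digested assets never change, so let browsers cache them "forever".
    location ~* \.(css|js)$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }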
> Sentence display width is something I pay attention to. For example, sometimes I don't want one hanging word to get its own full line (a "hanger") because it looks messy. Other times I do want it, because it helps break up a few paragraphs of similar length and makes them easier to skim.
Isn't your website responsive? If it is, for how many different resolutions do you check this? I think I obsess about details, but thankfully not about this!
You should be able to use `text-wrap: pretty;` to avoid orphans. If you sometimes want them on purpose and sometimes avoid them, that's just weird. I'm sure this is a lost fight anyway: it'll look different with different setups regardless. Different browser, different OS, different fonts... it's a lost battle.
> It helps reduce page load times for visitors and saves bandwidth for both the visitor and your server.
Mhm, I just use Apache's mod_cache_disk [0]. It works fine out of the box for all of my myriad websites. Yes, sometimes there's a new file and a browser is stuck on the old version, which requires a hard refresh; someone who doesn't know will see the old version. I don't particularly mind that.
It is, but the max width is at a point where most folks on a tablet or desktop will see that exact width, so I optimize for that. Mobile always looks OK in the end, but I don't think about how sentences wrap there.
> I guess you're more of a perfectionist than I am.
I try not to get hung up on it, but checking the live preview doesn't take much extra time. Same with asset digesting: once it's set up, it happens automatically without me thinking about it.
There's also the mental tax of knowing you're using something semi-annoying all the time; it bugs me when I know it can be improved.