I'm consistently blown away that stuff like this gets past testing.
Personally I think every software engineer should have a CPU meter of some sort running on their machine while developing. It's an essential element of seeing what you're doing. How can you write decent software without even that much visibility into what your computer is up to while your code runs?
For these huge CPU sinks to make it through to release, nobody can have so much as glanced at a CPU graph.
I think this thinking is exactly how it gets past testing. You could force every engineer to have a CPU meter, but that alone won't help: they'll ignore it, assume a coworker's change pushed the usage up, or just assume their CPU is busy because of the 30 Chrome tabs they have open while developing.
A better solution would be to have CI monitor CPU usage, so that increases and decreases can be tracked and reported over time, per commit.
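A per-commit CI gate along those lines can be surprisingly small. Here's a minimal sketch (Unix-only, since it relies on the stdlib `resource` module; the `CPU_BUDGET` threshold and the command under test are hypothetical placeholders): run the app for a fixed window, then compare CPU seconds consumed against wall-clock seconds elapsed.

```python
import resource
import subprocess
import sys
import time

CPU_BUDGET = 0.05  # hypothetical budget: fail the build above 5% average CPU


def measure_cpu_fraction(cmd, duration=5.0):
    """Run `cmd` for roughly `duration` seconds, then return its average
    CPU fraction (CPU seconds consumed / wall-clock seconds elapsed).

    Unix-only: uses resource.getrusage. RUSAGE_CHILDREN is cumulative
    across all reaped children, which is fine for a one-shot CI script.
    """
    start = time.monotonic()
    proc = subprocess.Popen(cmd)
    time.sleep(duration)
    proc.terminate()
    proc.wait()
    wall = time.monotonic() - start
    ru = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (ru.ru_utime + ru.ru_stime) / wall


if __name__ == "__main__":
    # An idle stand-in for "open the application and do nothing".
    frac = measure_cpu_fraction(
        [sys.executable, "-c", "import time; time.sleep(60)"], duration=2.0
    )
    print(f"average CPU: {frac:.1%}")
    if frac > CPU_BUDGET:
        sys.exit(1)  # non-zero exit fails the CI job, flagging the commit
```

Record the measured fraction per commit and the 13%-idle class of bug shows up as a step change in the graph, rather than waiting for a user to notice.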
> You could force every engineer to have a CPU meter,
No, you should force every developer to use a shitty Core 2 Duo CPU from 10 years ago. A lot of devs are working on shiny new i7s, and that hides a lot of performance problems because it's a top-end CPU. Do your testing on a cheap box, and if it's still smooth then you can ship your product.
> These huge CPU sinks making it through to release required nobody to even glance at a CPU graph.
Every bug looks egregious in hindsight; just because an app has one is no reason to assume the engineers who made it don't bother to test anything.
Also: that particular issue only hurt idle CPU usage, and it was something like 13% IIRC. It wasn't exactly the kind of thing that sets off klaxons.
Most bugs are of the form "do X then Y then it doesn't work". These idle CPU bugs are reproduced simply by opening the application.
I accept that 13% idle usage is invisible to most developers - but that's a bit disappointing. I notice stuff like this just by idly glancing at my CPU meter from time to time. An app that's sitting on high idle usage sets off klaxons for me.
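What a per-process CPU meter does under the hood is easy to replicate yourself. A minimal Linux-only sketch (it reads `/proc/<pid>/stat` directly, so it won't work elsewhere): sample a process's cumulative CPU ticks twice and convert the delta into a percentage over the interval - any already-running app sitting well above ~0% while "idle" is worth a look.

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # kernel clock ticks per second


def cpu_ticks(pid):
    """Total user + system CPU ticks consumed by `pid` so far."""
    with open(f"/proc/{pid}/stat") as f:
        # The comm field may contain spaces/parens, so split after the
        # final ") " rather than naively on whitespace.
        fields = f.read().rsplit(") ", 1)[1].split()
    return int(fields[11]) + int(fields[12])  # utime + stime


def cpu_percent(pid, interval=1.0):
    """Percent of one core that `pid` used over `interval` seconds."""
    before = cpu_ticks(pid)
    time.sleep(interval)
    after = cpu_ticks(pid)
    return 100.0 * (after - before) / CLK_TCK / interval


if __name__ == "__main__":
    # Sample our own process, which is mostly sleeping: should be near 0%.
    print(f"{cpu_percent(os.getpid()):.1f}%")
```

Point it at an app's PID right after launch and you get exactly the "opened it and it's burning 13% doing nothing" repro, with a number attached.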
You're missing my point. All software has bugs; something always "makes it past test". Pointing at whichever bug did make it into production and saying "gosh those developers must not be testing anything" is silly.
I assume that's been patched; I just tested it and VS Code sat at about 0.01-0.03% CPU with a blinking block cursor (VSCode Vim extension - maybe it's better than the default?).
I noticed a few moments after startup that the CPU usage jumped to the 5-7% range for a couple of seconds, then it notified me that an update was ready (I assume it was downloading/checking/processing that update). I don't know if this is Electron or just a general trend in desktop apps, but it seems to be getting a lot easier to update them. So when those pesky CPU/memory hog bugs are found, they can be quickly patched. As for the general trade-off that comes with running an extra Chromium process, I suppose it's up to users/developers as to whether it is worth it in each case.