Correct! Re: latency, as I just noted elsewhere, if you run your prod database using Crunchy Bridge or Supabase or another big provider (which you absolutely should for prod), that typically means that your db will be running within an AWS region. You would, in most cases, need to run your compute in the same region. So yeah, at that point, Hetzner would be out.
It's the other way round: at least some, if not all, of these screens were in KDE before they were released in Windows.
In general, KDE tends to be widely copied. Even macOS has borrowed a lot from KDE.
It has been over 10 years since I stopped being a KDE fanboy and became just a regular fan, but I remember that during my flame-war era, features from KDE would often appear later in Mac OS and Windows and their most popular applications (such as iTunes).
These days I don't care so much; I use KDE and I'm too old to switch.
To be fair, once your data has been stolen, it doesn't make sense to engage with the hackers. There is no way to guarantee that the stolen data won't be used.
What you must do immediately is notify the affected customers, bring down or lock the affected services, and contact the authorities.
There is no guarantee anywhere (strictly speaking, including in the legal market), but that doesn't mean that paying has no effect on the probability of the data being dumped.
Notification is an independent requirement.
If an attacker makes an extortion threat but then still follows through on the release/damage after being paid, then people are not incentivized to engage with you and will go into attack mode right away, making it riskier for you.
HOWEVER, if the attacker makes the extortion threat, takes payment, honors the agreement, and ends the transaction, then parties are more inclined to just pay to make the problem go away. They know that the upfront price is the full cost of the problem.
I've seen that there are 'ethical attackers' out there that move on after an attack, but you never know what kind you're dealing with :-/ "Never negotiate...."
There's no way to guarantee that I won't get in a car accident. So I pay for insurance. I may never need it, it may never come in handy, but it still makes sense to carry the policy.
The magic of bisect is that you rule out half of your remaining commits every time you run it. So even if you have 1000 commits, it takes at most 10 runs. An n-sect wouldn't be that much faster, and could even be slower, because a single test no longer lets you rule out half your commits.
The idea is, suppose I did a trisect, splitting the range [start,end) into [start,A), [A,B), and [B,end). At each step, I test commits A and B in parallel. If both A and B are bad, I continue with [start,A). If A is good and B is bad, I continue with [A,B). If both A and B are good, I continue with [B,end).
This lets me rule out two thirds of the commits, in the same time that an ordinary bisect would have ruled out half. (I'm assuming that the tests don't benefit from having additional cores available.) In general, for an n-sect, you'd test n - 1 commits in parallel, and divide the number of remaining commits by n each time.
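A quick sanity check of that arithmetic, as a sketch: each round of an n-sect tests n - 1 commits in parallel and divides the remaining range by n, so the number of rounds is ceil(log base n of the commit count). (`steps_needed` is my own illustrative helper, not a git command.)

```python
import math

def steps_needed(n_commits: int, n_way: int) -> int:
    """Rounds an n-way sect needs: each round tests n_way - 1 commits
    in parallel and divides the remaining range by n_way."""
    return math.ceil(math.log(n_commits, n_way))

print(steps_needed(1000, 2))  # ordinary bisect: 10 rounds
print(steps_needed(1000, 3))  # trisect: 7 rounds
print(steps_needed(1000, 4))  # 4-sect: 5 rounds
```

So a trisect with two parallel tests per round saves 3 of 10 rounds on a 1000-commit range, and a 4-sect saves half, which matches the "4x parallelism for a 2x speedup" observation below.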
No, unfortunately not. If your history is strictly linear, you could probably hack together something relatively simple on top of git rev-list. But git bisect does all sorts of magic to deal with merge commits and other funny situations, and generalizing that to an n-sect would take a fair bit of work.
Yes, you'd need 4x parallelism for a 2x speedup (16x for 4x, etc). But there's plenty of situations where that would be practical and worthwhile (think a build and test cycle that takes ~1 hour each and can't be meaningfully parallelised further).
Most probably, said ops folks have quite a few war stories to share about logs.
Maybe a JVM-based app went haywire, producing 500GB of logs within 15 minutes, filling the disk, and breaking a critical system because no one anticipated that a disk could go from 75% free to 0% free in 15 minutes.
Maybe another JVM-based app went haywire inside a managed Kubernetes service, producing 4 terabytes of logs, and the company's monthly Google Cloud bill went from $5,000 to $15,000, because storing bytes is only cheap when they stay bytes and don't become terabytes.
I completely agree that logs are useful, but developers often do not consider what to log and when.
Check your company's cloud costs. I bet you the cost of keeping logs is at least 10%, maybe closer to 25% of the total cost.
Agreed you need to engineer the logging system and not just pray. "The log service slowed down and our writes to it are synchronous" is one I've seen a few times.
On "do not consider what to log and when" .. I'm not saying don't think about it at all, but if I could anticipate bugs well enough to know exactly what I'll need to debug them, I'd just not write the bug.
Just saw this at work recently: 94% of log disk space for domain controllers was filled by logging which groups users were in (I don't know the specifics, but group membership is pretty static, and if a log-on fails I assume the missing group is logged as part of that failure message).
Sounds like really bad design choices here. #1, logs shouldn't go on the same machine that's running the app; they should be shipped to another server, and if you want local logs, then properly set up log rotation. Both would be good.
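For the "properly set up log rotation" part, here's a minimal sketch of capping local log growth inside a Python app (the file name and size limits are my own example values; system-level tools like logrotate do the same job for arbitrary processes):

```python
import logging
from logging.handlers import RotatingFileHandler

# Cap local logs at ~50 MB total: the active file plus 4 backups,
# 10 MB each; the oldest backup is dropped when a new one is created.
handler = RotatingFileHandler(
    "app.log", maxBytes=10 * 1024 * 1024, backupCount=4
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("startup complete")
```

With a hard cap like this, a haywire app can still spam its own logs, but it can no longer take the disk from 75% free to 0% free and bring the rest of the box down with it.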
Or just validate the binary you download then none of this even matters—for this or any other sort of potential vulnerability, your updater will never end up running untrusted software with escalated privileges.
HTTPS provides confidentiality and authentication.
The confidentiality doesn’t really matter here. You’re distributing a software installer. There’s a good chance you’ll give a copy to anyone that visits your website and wants to use your software. And you’re not hiding what you’re downloading in any meaningful way.
The authentication is important. That prevents someone from, say, sending the user a completely different binary and having your software run it.
The authentication could just as easily be solved by signing the files you distribute and validating the signature of the downloaded update before running it.
(Hell, if you’re signing your installers (likely) it could be as simple as deferring to Windows’ WinVerifyTrust method and a check that the certificate used is actually your own.)
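As a sketch of the simplest form of that download validation, here's a checksum comparison against a published hash (function names are my own; note this only helps if the expected hash itself comes over an authenticated channel, and a real detached signature or Authenticode check, as suggested above, is the stronger option):

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Hash the downloaded file in chunks to avoid loading it all in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Refuse to run the update unless its hash matches the published one."""
    # compare_digest avoids timing side channels; cheap insurance here.
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

The updater would call `verify_download` before executing anything, and bail out (rather than fall back to running the file anyway) on a mismatch.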
Kate was one of the main reasons I switched to Linux in 2004/2005.
I had a lab in MySQL, and back then, the only option to develop in Windows was MySQL Workbench, which was as heavy as it got. Running an SQL statement was painfully slow, and iteration cycles were huge.
In Linux, you would write your SQL in Kate and run MySQL's CLI in the embedded terminal. Once ready, you would click the “pipe to terminal” button. Instant run. What took many minutes in Windows took less than 2 seconds in Linux. How can you not love this?
Another reason was Amarok, an (the) mp3 player. Do you like how Spotify and other providers automatically create infinite playlists, radios, etc. based on your tastes? Yes, KDE had this since 2002, I think? It was first copied by iTunes, then by Spotify, and is now considered a standard feature. :)
Also, k3b was amazing software for burning CDs back then; its interface easily rivaled contemporary proprietary software.
KDE 3.5 was one of the peaks (if not the peak) of graphical interfaces on GNU/Linux.
Experiencing KDE when I was used to the Windows XP interface felt amazing, and soon after, Vista's promised interface innovations were nothing compared to what could be done in Compiz (more on the GNOME 2 side).
KDE4 is a sad story. It ruined the reputation of KDE for a very long time. I loved KDE 3.5, but then came back to KDE only in 2021.
It's mostly just a problem of communication. KDE 4.0 should not have been marked as stable and should not have been shipped by distros. If they had baked KDE 4 for two more years while maintaining and developing 3.5 on the side, the transition might have been much smoother.
While there were some bugs in early KDE 4, those were not the main problem.
No amount of baking could have saved it.
The main problem was the completely different priorities of the new developers, who removed all the outstanding customization features of KDE 3.5.
For me, KDE 3.5 has indeed been the best graphical desktop that I have ever seen. Neither before nor after, and neither on Apple nor on Windows, have I encountered anything as good.
The main reason for this was that KDE 3.5 allowed extreme customization, so you could make your own desktop that did not resemble the default at all.
After the shock of experiencing the garbage that was KDE 4 (even though I had waited half a year before making the transition, hoping that any major bugs would be fixed), I reverted to KDE 3.5 for a few years, until it became much too painful to upgrade without breaking it.
Then I switched to XFCE, which does not provide as much as the old KDE 3.5, but at least it does not get in the way of your work with undesirable and hard-to-remove features. Moreover, any useful KDE applications, such as Kate, work perfectly fine on XFCE, together with any useful GNOME applications.
The same kind of developer philosophy, that users are dumb and they must be prevented from customizing the application, characterized the developers who converted the Mozilla browser into Firefox, another unwelcome change that I greatly resented.
> The same kind of developer philosophy, that users are dumb and they must be prevented from customizing the application
The more holistic view is that every customization path incurs a support burden.
If you have 3 options to support, you can engineer and test the heck out of each option. 15 options? Not so much.
So having those 12 extra options not only creates permanent extra workload; since dev time is finite, you've effectively made the 3 aforementioned options worse off.
Another problem with KDE 4 was the buzzword technologies pushed everywhere: the semantic desktop with Nepomuk (a nice research project, but not fit for normal use), Plasma applet UI added to applications (why?), activities replacing virtual desktops but not really.
That version thing was frustrating because it was an unforced error. Surely someone, at some point, brought up that people would expect the version number to mean that it was ready for use. But they chose to proceed with using their idiosyncratic version scheme, and unsurprisingly suffered a reputation hit for it.
Agreed! I recently switched from a very custom Linux setup with a tiling window manager and all kinds of bells and whistles. The Plasma 6 release, in combination with running NixOS, which makes trying things out both easy and safe, convinced me to give it a shot, and I simply haven't left. It took some setup, of course, but Plasma is wonderfully configurable and has everything I wanted available with some tweaking.
Whereas GNOME and others required extensions -- which are often out of date or somewhat sketchy -- before I could set things up how I like.
Exactly what happened to me too! I'd been using a Sway setup on NixOS for many, many years, and I was just curious to try out Plasma 6. After a small config change, I had the desktop up and running, and I was impressed by how it felt. You can even use plasma-manager to store your KDE settings in a Nix configuration, which makes it easy to have a unified configuration across different computers.
kde6 has lost the ability to change the window manager :( I have a wonderful xmonad + kde5 setup on my work laptop but had to stick with mate on my personal machine (not worth fighting with my distro to downgrade to kde5)
I had a love-hate relationship with k3b because years ago it was the only cd burner program that was both somewhat stable and otherwise not terrible on Linux, but also it was the only KDE program I just had to have on my XFCE Gentoo system, which meant compiling allllll of kde libs and qt and losing a bunch of disk space to them.
Yes! When I started using Kate on Linux ca. 2005, I was coming from Notepad on Windows and couldn’t believe how nice it was. I believe it was my first experience of syntax highlighting.
And Amarok! I haven’t thought of that in a while. Losing Amarok was my single biggest regret when I became a Mac user. I’ve not used anything since that came close.
What about Clementine or Strawberry? Clementine was a fork of Amarok 1.4, and Strawberry a fork of Clementine.
I recently discovered Strawberry would play music off of my Subsonic server (Navidrome really) and was thrilled to have something for music that didn't feel like a web app.
My editor of choice back in the Windows days was EditPlus, from 1998 (and looks like it's still maintained). I think it had syntax highlighting from the start too.
I think people who got excited about streaming (one low price and you get whatever music you want on tap) have started to realize it isn't as nice as it seems (music disappears, they push music you don't want, and don't support artists like CDs do) and so developers are coming back to local clients that you control managing music you own.
You forget: on Windows, we had WAMP with phpMyAdmin, so we could run queries in our browser. We couldn't do them within an IDE until around 2001, with Dreamweaver and Microsoft InterDev…
Kate is cool but it wasn’t the first to have this.
Haha, indeed! I was pretty frustrated with configuring WAMP, though. Once I started spending more time on Linux and noticed that Linux was using the slash instead of the backslash for directories and all other OS differences, suddenly, the WAMP configuration made a lot of sense and became one more reason to switch permanently to Linux.
Interesting. I think it was a couple of years earlier than that(?) when I tried using Kate, but it was so buggy and crashy as to be unusable.
Since I tried it pretty close to its initial release, I'm certain those problems were resolved. However, I developed work habits that didn't include it and so I still don't use it to this day.
This is a link to a bug that, according to their public communications, was fixed years ago. Except, of course, that it's still present. But other than that, it was fixed years ago.
I mean, it says quite clearly that no one has built an up-to-date version of Amarok for Windows since 2013, but the bug is fixed in the source code and has been for a long time. That no one has stepped up to package Amarok for Windows in that time isn't a bug.