> It feels like a huge trade-off of GCs is almost completely gone.
FWIW, the tradeoff of low-latency GC is usually paid in throughput.
That is definitely the case for Go, whose collector can lag well behind allocations: if your allocation pattern is bad enough, the heap will keep growing even though the live heap is stable, because the GC can't clear the dead heap fast enough to keep pace with new allocations.
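A minimal sketch of observing that from inside a Go process, assuming a tight loop of short-lived 1 MiB allocations (the `churn` helper is purely illustrative): reading `runtime.MemStats` afterwards shows the heap the allocator actually touched, which under pressure sits well above the tiny live set.

```go
package main

import (
	"fmt"
	"runtime"
)

// churn allocates n short-lived 1 MiB buffers, none of which survive
// the loop, then returns the runtime's HeapAlloc reading. Illustrative
// only: real allocation-pressure pathologies involve many goroutines.
func churn(n int) uint64 {
	for i := 0; i < n; i++ {
		b := make([]byte, 1<<20) // 1 MiB that dies immediately
		b[0] = byte(i)           // touch it so the allocation isn't elided
	}
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc
}

func main() {
	fmt.Printf("HeapAlloc after churn: %d bytes\n", churn(256))
}
```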
throughput can be fixed by adding compute; latency cannot. always optimize for latency with GC.
and no, the heap will not keep growing in golang. if the GC falls behind, it forces allocating goroutines to assist with marking, thereby reducing the allocation rate and speeding up the collection.
Random Go dev suddenly more of an expert than the literal best-in-class GC experts that have been working on G1GC, ZGC, Shenandoah, Parallel, ConcMarkSweep and others.
GCs are a matter of tradeoffs. Always optimizing for latency is Go's solution, but there are reasons for everything. It's the very reason why the JVM has so many knobs. Yes, it requires a PhD to know what to tune, but there are many parameters for the thousands of different situations that can arise.
*rolls eyes* I've tuned java's GC for highly available and massive throughput systems.
pretty familiar with the trade-offs. java's problem isn't GC (in general); its problem is memory layout and the fact that it can't avoid generating a metric shit ton of garbage.
G1GC was a good improvement and I stopped paying attention at that point because I no longer had to deal with its problems (left the ecosystem).
I'm not asserting java hasn't improved or that its GC implementations aren't modern marvels. fundamentally they're just a self-inflicted wound.
golang wouldn't benefit anywhere near as much as java has from these implementations, because guess what... they've tried GCs that operate under similar assumptions (young-generation allocations, compacting patterns, etc.) and failed to find an improvement that simply turning up the heap size knob on the current GC wouldn't match.
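For reference, the one pacing knob being alluded to is GOGC, which is also settable at runtime through `debug.SetGCPercent`. A hedged sketch (`heapKnob` is a hypothetical wrapper name, not a real API):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// heapKnob adjusts Go's GC pacing target via the runtime API and
// returns the previous setting. GOGC=100 (the default) lets the heap
// grow to roughly double the live set before the next collection;
// raising it trades memory for less GC work. A sketch, not a tuning
// recommendation.
func heapKnob(percent int) int {
	return debug.SetGCPercent(percent)
}

func main() {
	old := heapKnob(200) // allow the heap to grow to ~3x the live set
	fmt.Println("previous GOGC:", old)
	heapKnob(old) // restore
}
```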
> it can't avoid generating a metric shit ton of garbage.
IMHO this is an often-overlooked aspect of Java, and I get flak for pointing out that Java programs are often full of defensive copying because the Java style encourages it. The JDK is very trashy and highly biased against reusing objects or doing anything in-place without creating garbage. You couldn't, for example, parse an integer out of the middle of a string without allocating a substring first (an index-based `Integer.parseInt` overload only arrived in Java 9).
15 years ago when I wrote a lot more Java, I could get away with avoiding JDK libraries to write tight and small Java code (yeah, omg, mutable non-private fields). That was how I got my AVR microcontroller emulator to beat its C competitors. Nowadays if you write that kind of code you get a beatdown.
JVMs work really hard to deal with trashy Java code. Generics make everything worse, too; more hidden casts and adapter methods, etc.
Only in some kinds of apps, like web servers where all the heavy lifting is being done by the database anyway.
Consider a compiler. It's not infinitely scalable to multiple cores. It may not even be multi-threaded at all. It also doesn't care about pause times: for that workload you want the throughput-oriented Parallel GC.
depends on the compiler and language. golang seems to counter your position quite handily, having one of the fastest compile times while being highly concurrent.
if your application is so simple that it doesn't use concurrency, then 99% of the time you can completely remove the need for GC by preallocating slabs.
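A minimal sketch of the slab idea in Go, assuming fixed-size chunks carved out of one preallocated buffer (`Slab`, `NewSlab`, and `Get` are hypothetical names; a real allocator would also handle freeing and reuse):

```go
package main

import "fmt"

// Slab hands out fixed-size chunks from one big preallocated buffer,
// so the steady state performs no heap allocation at all.
type Slab struct {
	buf  []byte
	next int
	size int
}

// NewSlab preallocates room for the given number of fixed-size chunks.
func NewSlab(chunks, size int) *Slab {
	return &Slab{buf: make([]byte, chunks*size), size: size}
}

// Get returns the next chunk, or nil when the slab is exhausted.
// The full slice expression caps the chunk so callers can't grow it
// into a neighbor's region.
func (s *Slab) Get() []byte {
	if s.next+s.size > len(s.buf) {
		return nil
	}
	c := s.buf[s.next : s.next+s.size : s.next+s.size]
	s.next += s.size
	return c
}

func main() {
	s := NewSlab(4, 8)
	a := s.Get()
	fmt.Println(len(a), cap(a)) // 8 8
}
```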
The Go compiler is fast because it doesn't do very much, not because Go's GC is good for throughput oriented jobs.
Go has repeatedly tied itself in knots over the years because they have a goal of not making the compiler slower, yet the GC needs of the compiler are diametrically opposed to those of the HTTP servers Go is normally used for.
"As you can see if you have ROC on and not a lot of sharing, things actually scale quite nicely. If you don’t have ROC on it wasn’t nearly as good ... At that point there was a lot of concern about our compiler and we could not slow down our compilers. Unfortunately the compilers were exactly the programs that ROC did not do well at. We were seeing 30, 40, 50% and more slowdowns and that was unacceptable. Go is proud of how fast its compiler is."
For a compiler, the GC strategy that makes most sense is to never deallocate. The compiler is going to die shortly. No need to do the bookkeeping of figuring out how to deallocate before that happens.