
This will be optimized away. You'll just end up doing more.


It already has: if you're not visiting the Instagrammable places, your travels aren't worth it.


Happily, you can stay ignorant of all that and just do your thing if you're not on Instagram.


lol, Linux Mint with the latest KDE is WAY snappier and quicker to start than Win10 on my laptops


Exactly. Nowadays, most web services are run in a GC'ed runtime. That VM will walk pointers all over the place and reach into swap all the time.


Depends entirely on the runtime.

If your GC is a moving collector, then absolutely this is something to watch out for.

There are, however, a number of runtimes that will leave memory in place. They are effectively just calling `malloc` for the objects and `free` when the GC algorithm detects an object is dead.

Go, Ruby, Python, and Swift all fit in this category. The JVM has a moving collector, and I believe the CLR and Node's V8 do as well.


Every garbage collector has to constantly sift through the entire reference graph of the running program to figure out what objects have become garbage. Generational GCs can trace through the oldest generations less often, but that's about it.

Tracing garbage collectors solve a single problem really, really well: managing a complex, possibly cyclic reference graph, which is inherent to some problems (making GC irreplaceable there). On just about every other system-level or performance-related measure, they're terrible.


> Every garbage collector has to constantly sift through the entire reference graph of the running program to figure out what objects have become garbage.

There's a lot of "it depends" here.

For example, an RC garbage collector (like Swift and Python?) doesn't ever trace through the graph.

The reason I brought up moving collectors is that, by their nature, they take up a lot more heap space, at least 2x what they need. The advantage of non-moving collectors is that they are much more prompt about returning memory to the OS. The JVM in particular has issues here because its objects are pretty chunky.


> The reason I brought up moving collectors is that, by their nature, they take up a lot more heap space, at least 2x what they need.

If the implementer cares about memory use, it won't. There are ways to compact objects that are a lot less memory-intensive than copying the whole graph from A to B and then deleting A.


Modern garbage collectors have come a long way.

Even not-so-modern ones: have you heard of generational garbage collection?

But even in, e.g., Python, they've introduced 'immortal objects', which the GC knows not to bother with.


It doesn't matter. The GC does not know which heap allocations are in RAM vs. swap, and since you don't write applications thinking about that, running a VM with a moving GC on swap is a bad idea.


A moving GC can make sure to separate hot and cold data, and then rely on the kernel to keep hot data in RAM.


Yeah, but in practice I'm not sure that really works well with any GC today. I've tried this with modern JVM and Node VMs, and it always ended up with random multi-second lockups. Not worth the time.


MemBalancer is a relatively new analysis paper that argues having swap allows maximum performance: it absorbs small excesses, which avoids having to over-provision RAM instead. The kind of GC doesn't matter, since data spends very little time in that state; on the flip side, most of the time the application has access to twice as much memory to use.


Python's not a mover, but the cycle breaker will walk through every object in the VM.

Also, since the refcounts are inline, merely adding a reference to a cold object will write to that object. IIRC Swift has the latter issue as well (unless the heap object's refcount was moved to the side table).


A moving GC should be better at this, because it can compact your memory.


A moving collector has to move objects somewhere, and generally, by its nature, it's constantly moving data all across the heap. That's what makes it end up touching a lot more memory while also requiring more memory. On minor collections it'll move memory between two different locations, and on major collections it'll end up moving the entire old gen.

It's that "touching" of all the pages controlled by the GC that ultimately wrecks swap performance. But there's also the fact that moving collectors like to hold onto memory, as downsizing is pretty hard to do efficiently.

Non-moving collectors are generally built on C allocators, which are fairly good at avoiding fragmentation. Not perfect, and not as fast as a moving collector, but fast enough for most use cases.

Java's G1 collector would be the worst example of this. It's constantly moving blocks of memory all over the place.


> It's that "touching" of all the pages controlled by the GC that ultimately wrecks swap performance. But there's also the fact that moving collectors like to hold onto memory, as downsizing is pretty hard to do efficiently.

The memory that's now not in use, but still held onto, can be swapped out.


It's still extremely slow and can cause very unpredictable performance. I have swap set up with swappiness=1 on some boxes, but I wouldn't generally recommend it.


HDDs are much, much slower than SSDs.

If swapping to SSD is 'extremely slow', what's your term for swapping to HDD?


‘Hard reboot’ (not OP)


Needs a faster database


Oh wow, had no idea Bastion was made with MonoGame and C#!


Up until midway through Hades 1's development, Supergiant was a C# shop with their own engine built on top of libraries (the fact that they rewrote their core tech in the middle of a game project that was already available to the public remains insane to me).


Amazing how much faster it tends to be than my indexed search in IntelliJ.


Yeah, the rechargeables always do, but most single-use ones don't. Kind of weird that way.


The alkaline discharge curve slopes dramatically, so those batteries provide dramatically different voltages when new vs. used (an alkaline cell starts around 1.5 V and sags toward 1.0 V as it drains). Devices may or may not work at the lower voltages, so remaining battery life is difficult to estimate.

Rechargeables have a very flat discharge curve, providing mostly the same voltage throughout (NiMH holds near 1.2 V for most of its charge), so the steep drop-off at the end clearly signals the end of the battery's life.


Zig has a pretty great type system, and languages like Rust and C++ are sometimes not great at preventing accidental heap allocations. Zig and C make allocation very explicit, and it's great to be able to handle allocation failures in robust software.
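
A minimal sketch of what that looks like (the size is arbitrary; std.heap.page_allocator is the standard library's page-backed allocator):

  const std = @import("std");

  pub fn main() void {
      const allocator = std.heap.page_allocator;
      // Failure is an ordinary error value, not a hidden abort:
      const buf = allocator.alloc(u8, 64 * 1024 * 1024) catch {
          std.debug.print("allocation failed, degrading gracefully\n", .{});
          return;
      };
      defer allocator.free(buf);
      std.debug.print("got {d} bytes\n", .{buf.len});
  }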


What's great about its type system? I find it severely limited and not actually useful for conveying and checking invariants.


That is the usual fallacy, because it assumes everyone has full access to the whole source code and is tracking down all the places where the heap is being used.

It also assumes that the OS doesn't lie to the application when allocations fail.


Zig makes allocations extremely explicit (even more than C) by having you pass around the allocator to every function that allocates to the heap. Even third-party libraries will only use the allocator you provide them. It's not a fallacy; you're in total control.
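
A sketch of the pattern (repeat here is a made-up stand-in for library code):

  const std = @import("std");

  // "Library" code: it can only allocate through the allocator
  // the caller hands it, so the caller sees every allocation.
  fn repeat(allocator: std.mem.Allocator, s: []const u8, n: usize) ![]u8 {
      const out = try allocator.alloc(u8, s.len * n);
      for (0..n) |i| @memcpy(out[i * s.len ..][0..s.len], s);
      return out;
  }

  test "caller chooses the allocator" {
      const r = try repeat(std.testing.allocator, "ab", 3);
      defer std.testing.allocator.free(r);
      try std.testing.expectEqualStrings("ababab", r);
  }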


> pass around the allocator to every function that allocates to the heap.

What prevents a library from taking an allocator, stashing it somewhere hidden, and using it silently?


Nothing, but it would be bad design (unless there is a legitimate, documented reason for it). Then it's up to you as the developer to exercise your judgment about which third-party libraries you depend on.


authors of the library


Why, are you going to abort if too many calls to the allocator take place?


You can if you want. You can write your own allocator that never actually touches the heap and just distributes memory from a big chunk on the stack, as sketched below. The point is you have fine-grained (per-function) control over the allocation strategy, not only in your codebase but also in your dependencies.
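
For example, the standard library's FixedBufferAllocator hands out memory from a buffer you give it, which can live on the stack (a minimal sketch; the sizes are arbitrary):

  const std = @import("std");

  test "allocating from a stack buffer" {
      // Everything below is carved out of this stack array;
      // nothing ever reaches the OS heap.
      var buf: [256]u8 = undefined;
      var fba = std.heap.FixedBufferAllocator.init(&buf);
      const allocator = fba.allocator();

      const a = try allocator.alloc(u8, 200); // fits
      defer allocator.free(a);

      // Only ~56 bytes remain, so this fails with an error value
      // rather than silently growing the heap:
      try std.testing.expectError(error.OutOfMemory, allocator.alloc(u8, 200));
  }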


Allocation strategy isn't the same as knowing exactly when allocations take place.

You missed the point that libraries can have their own allocators and don't expose customisation points.


Sure they can. But why would they choose to?


Because the language doesn't prevent them, and they own their library.


That's not really an argument. What prevents the author of a library in any language from acting in bad faith and using antipatterns? That's not a problem unique to the Zig language.


The question remains: why would they choose to?


Because they decided to do so, regardless of people like yourself thinking they are wrong.


They can, and they wouldn't necessarily be wrong.

However, if the library is trying to be as idiomatic, general-purpose, and good a citizen as possible, then it should strongly consider not doing that and only using the user-provided allocator (unless there is a clear, documented reason not to).

I don't think it would make sense to restrict this at the language level. As a developer, it's up to you to exercise your judgement when examining which libraries you choose to depend on.

I appreciate that it's a common design pattern in Zig libraries, and I also appreciate that I'm not forced to do it if I don't want to. If it matters to me, I'll consider libraries designed that way; if it doesn't, I can consider libraries that don't support this.


So why should they be forced not to? They can cook up their own thing outside of what most people using this language will do, others will just not use it, and nothing's wrong with that.


> It also assumes that the OS doesn't lie to the application when allocations fail.

Gotta do the good ol'

  echo 2 >/proc/sys/vm/overcommit_memory
and maybe adjust overcommit_ratio as well to make sure the memory you allocated is actually available.


An OS-specific hack, and unrelated to C.


Your comment was also OS-specific because Windows doesn't lie to applications about failed allocations.


Not at all; rather, there is no guarantee that the C abstract machine described in ISO C actually returns NULL on memory allocation failure, as some C advocates without ISO C legalese expertise seem to claim.


That doesn't require multithreading.


If it does almost nothing, maybe not. Otherwise you'll be doing something in the main thread that takes time, unless you also squeeze concurrency (i.e., multitasking) into one thread; but then again, why not use multiple threads already?


At a high enough resolution, especially with 5K-6K displays, a single-threaded, software-only compositor is absolutely going to have horrible performance. Even at Full HD it's actually quite noticeable.

