
All CPUs commit in order and take exceptions precisely, because most other options are insane, or would drive you to it. However: single-thread commit order != observability order.

Observability order of memory operations --- which are the only operations that matter --- is governed by the memory consistency model of the architecture. x86 has what's generally referred to as strong ordering of memory operations.

On x86, part of what that means is that stores from the same core cannot be observed out of order with respect to each other, nor can loads.

So assuming the compiler does not move the `tail++` up, or move the assignment out of the if-statement (both of which can be prevented by marking them `volatile`), the code should actually work on x86. The `tail++` change cannot be observed before the write to the queue, and the reading from the queue cannot be observed before the reading of the `tail` and `head` variables.
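
For concreteness, here's a minimal sketch (in C, with hypothetical names; the article's actual code may differ) of the kind of single-producer/single-consumer queue under discussion:

    #include <stddef.h>

    #define QSIZE 256   /* power of two so the free-running indices wrap cleanly */

    /* Hypothetical SPSC ring buffer. `volatile` only keeps the compiler from
       caching the counters in registers or reordering the volatile accesses
       against each other; it adds nothing at the CPU level. */
    static int buf[QSIZE];
    static volatile size_t head;      /* written only by the consumer */
    static volatile size_t tail;      /* written only by the producer */

    int push(int value) {             /* producer side */
        if (tail - head == QSIZE)
            return 0;                 /* full */
        buf[tail % QSIZE] = value;    /* fill the slot...   */
        tail++;                       /* ...then publish it */
        return 1;
    }

    int pop(int *value) {             /* consumer side */
        if (tail == head)
            return 0;                 /* empty */
        *value = buf[head % QSIZE];   /* read the slot...     */
        head++;                       /* ...then hand it back */
        return 1;
    }

Nothing in the C above asks for cross-core ordering; on x86 it falls out of the hardware rules, with `volatile` merely keeping the compiler honest.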

On RISC-V and Arm, you need more as they have substantially weaker memory consistency. The RISC-V specs have some examples of interesting outcomes you can have. Some of it involves time-travel.

But in the end: yes, the reordering done by the CPU is the issue. The compiler can and does reorder stuff when it thinks that it'll unlock more instruction-level parallelism, but no amount of `volatile` is going to make that queue universally usable on RISC-V, no matter what the compiler does. Even if it perfectly preserves the single-thread semantics of the code and doesn't reorder a single instruction, the CPU can still move stuff around in terms of observability. The alternative is that the compiler inserts a barrier/fence after every instruction.
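
For comparison, here's a sketch of what a portable version can look like with C11 `<stdatomic.h>` (hypothetical names again): a release store when publishing and an acquire load when consuming express exactly the ordering needed, rather than fencing every instruction.

    #include <stdatomic.h>
    #include <stddef.h>

    #define QSIZE 256

    static int buf[QSIZE];
    static atomic_size_t head, tail;

    int push(int value) {             /* producer side */
        size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
        size_t h = atomic_load_explicit(&head, memory_order_acquire);
        if (t - h == QSIZE)
            return 0;                 /* full */
        buf[t % QSIZE] = value;
        /* Release: a consumer that observes the new tail also observes the slot write. */
        atomic_store_explicit(&tail, t + 1, memory_order_release);
        return 1;
    }

    int pop(int *value) {             /* consumer side */
        size_t h = atomic_load_explicit(&head, memory_order_relaxed);
        /* Acquire: pairs with the producer's release store of tail. */
        size_t t = atomic_load_explicit(&tail, memory_order_acquire);
        if (t == h)
            return 0;                 /* empty */
        *value = buf[h % QSIZE];
        /* Release: the producer may only reuse this slot once we're done reading it. */
        atomic_store_explicit(&head, h + 1, memory_order_release);
        return 1;
    }

On x86 the acquire/release operations compile down to plain loads and stores (the ordering is already guaranteed), while on Arm and RISC-V they become load-acquire/store-release instructions or fences only where actually needed.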

There are trade-offs. Poorly written code for x86 can absolutely tank performance because ordering violations require code to be replayed, though that can sometimes be a problem under weaker consistency models as well.


Valid points, although I have another perspective on this bit:

> But in the end: yes, the reordering done by the CPU is the issue

I think from a programmer perspective, the CPU side of things is mostly beside the point (unless you're writing assembly), and this contributes to the misunderstanding and air of mystery surrounding thread safety.

At the end of the day the CPU can do anything, really. I'd argue this doesn't matter because the compiler is generating machine code, not us. What does matter is the contract between us and the compiler / language spec. Without language-level synchronisation the code is not valid C/C++ and we will likely observe unexpected behaviour - whether due to CPU reordering or compiler optimisations doesn't matter.
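
As a tiny illustration of the compiler half of that (hypothetical names), even on a strongly ordered CPU, code with no language-level synchronisation can break through perfectly legal optimisation:

    /* Shared flag, no language-level synchronisation. */
    static int ready = 0;

    void wait_for_ready(void) {
        /* This is a data race, i.e. undefined behaviour in C and C++. The
           compiler may legally load `ready` once, keep it in a register and
           turn this into an infinite loop - no CPU reordering involved. */
        while (!ready) {
            /* spin */
        }
    }

Making `ready` `_Atomic` (or `std::atomic<bool>` in C++) restores the contract; `volatile` would force the reload but still doesn't make the program well-defined.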

I think the article somewhat misses the point by presenting the case as if the compiler were not part of the equation. It often seems like people think they know how to do thread safety because they know, e.g., what reorderings the CPU may do: "Just need to add volatile here and we're good!" (probably wrong). In reality they need to understand how the language models concurrency.

We could translate that queue code into another language with a different concurrency model - e.g. Python - and now the behaviour is different despite the CPU doing the same fundamental reorderings.


This is true, but in practice it's pretty common to find that this sort of code seems to work fine on x64, because the compiler doesn't actually reorder things, and then sometimes blows up on ARM (or PowerPC, though that's less commonly encountered in the wild these days).


This is why I switched to Helix. The configuration is practically non-existent, and it has default configs for all the language servers I could ever want to interact with. I just put the language server binary in my path and I'm ready to go with autocomplete and all the other features.


Helix is cool, but it doesn't have Vim keybinds, which is unfortunate. I find its own system to be a little worse than Vim's (see my other comment about it in this thread).



I've also switched to Helix recently, and I get the feeling it's not emphasized enough how much of it is about having default integrations for modern standards like LSP, Tree-sitter, and DAP. It's amazing how much functionality you get with just that, without any plugins or complex configuration.


I actually dread the day Helix eventually gets the WebAssembly plugin system that's been floated around. The current Helix culture of "you get a Kakoune-like editor that can do three categories of 'bonus' things, and NOTHING ELSE" encourages a more manageable pace of development (and maybe more importantly: slower updates for those of us exhausted by software churn constantly breaking our stacks), and discourages feature creep/bloat. I love where Helix is at currently. After some tinkering and adjusting a few keybinds, and after a few releases for them to fix various bugs I'd been dealing with, it's quickly become my new favorite editor and has nearly fully replaced NeoVim. Hats off to Helix.


I couldn't agree more with this! I actively don't want plugins for Helix either, as tempting as it sounds. Nvim is already a great plugin-based modal editor with a rich ecosystem, so Helix's no-config philosophy is kinda its whole selling point.


I never got the impression that the developers had an attitude. The discussion I've seen has been mostly people making demands or saying Helix will fail if it doesn't support AI tool integration right now, but none of the devs are interested in using that kind of tooling with Helix, so they don't implement it. When they have poked fun at someone, it has been because that someone is very quick to demand the feature, but unwilling to submit a PR.

There is an open PR for getting copilot support, though it's currently just a hotfix and likely won't be accepted into core. You can still patch your own version and compile it.


My understanding is that the devs think that AI-assistance should be plugins, as there's no open spec for communication with Copilot and others:

https://github.com/helix-editor/helix/discussions/4037#discu...

And the plugin system is not done/released yet.


This is just true for a whole lot of industry tooling. Xilinx Vivado is a bloated piece of crap that'll crash all the time unless you have half a terabyte of RAM. Same goes for lots of other EE tooling in general. The L and B in MATLAB stand for Legacy and Bloat. People still write PLC programs in Ladder, where programs cannot be ported between vendors, or even between different products from the same vendor.

All the companies that produce anything invent their own language for the thing and write their own compiler for it. These compilers are clearly not written by compiler experts.

I don't blame EEs for building bad software. They weren't trained to do it and aren't paid to do it. I blame the "if it works, it works" culture that the industry seems to have. Never go back to refine anything, just keep pushing more plugins, more software; create a patchwork of programs until you get the job done.


Having lived on both sides of the fence, I feel comfortable blaming the EEs. Many are allergic to basic scripting, don't bother to learn how their tools work, and have an elitist attitude towards other design responsibilities (ex. layout, verification).

P.S. Set up Vivado in scripted mode with either an in-memory project or non-project mode. It works like a champ.


> Many are allergic to basic scripting, don't bother to learn how their tools work, and have an elitist attitude towards other design responsibilities (ex. layout, verification).

In my very limited experience, most people don't learn new stuff until they're forced to, either by their employer, by their university, or by needing it for something they want to accomplish. This is why you'll have self-taught developers go for years using strings as enums or linear-searching huge sorted arrays, because it works, so why would they seek out anything else? I think the solution is to introduce more software development in EE education; forcibly expose them to it. My EE bachelor's degree contained a whopping ONE class that was focused entirely on Python programming. The rest just used cobbled-together C code for microcontrollers, or arcane languages with dumb IDEs for PLCs.

I'll take your hint on Vivado, thanks!


> Set up Vivado in scripted mode

The tried and true :) It's funny seeing people complain about Vivado bugs when I haven't run into any in years. Sure, the IDE may be absolute garbage, but I've thankfully never run into any bugs in the actual synthesis and routing parts of the package, which is all that really matters.


It isn't EEs building the likes of Vivado; they hire software engineers to do that stuff.


This reminds me of an article I read a while ago: https://alastairreid.github.io/mrs-at-scale/

MRSs would enable much more of the formal verification parts to be generated.


I can also highly recommend a visit to The National Museum of Computing.

https://www.tnmoc.org/

While you're there, take a stroll around Bletchley Park as well. Last time I checked, you get a discount when visiting both, and they're right next to each other.


This weekend (5/20-21) they’re hosting an Acorn Econet LAN party!

https://www.tnmoc.org/events/2023/5/20/econet-lan-party


Oh man, I wish I was still in the UK, I would so be there.


It seems you are slightly misunderstanding the point of 'unsafe' as a concept.

And no, memory safety is a huge deal, it is just that the borrow checker cannot verify the soundness of certain code, meaning you have to provide the guarantees normally given to you outside 'unsafe' blocks.

Yes, this means that a few data structures require 'unsafe', but you should be creating safe wrappers around these structures; 'unsafe' won't propagate up your code and poison everything.


> it is just that the borrow checker cannot verify the soundness of certain code

And I was just providing examples of such code for someone who asked. Honestly, some Rust folks get so defensive that it makes them very prone to misinterpreting simple factual statements about Rust as criticism.

Apparently you don’t disagree with any of the factual statements that I’m making. You just have some vague unsubstantiated feeling that I don’t ‘get’ Rust.


I'm not fighting your claim that the borrow checker has perfectly reasonable situations it can't deal with. That's why 'unsafe' exists. I've already said that.

You're adding other claims and statements that make me question if you actually understand the thing you are criticising.


If you'd say what those claims and statements were, then we could have a conversation. It's not conducive to a good discussion to reply just by saying "you're wrong and you don't get it".


You don't have access to the C standard library in the Linux kernel either, so what is your point? IME, the core library of Rust is much nicer to work with than the absolute bare-bones landscape that is C when compiled with -nostdlib.

Boxes, cells, arcs, etc. exist for different purposes and if you'd take two hours to actually read about them and their uses, you'd understand why they exist.

