* Barriers are useful when you have a bunch of threads and you want to synchronise across all of them at once, for example to do something that operates on all of their data at once.
* Latches are useful if you have a bunch of work items and you want to know when they've all been handled, and aren't necessarily interested in which thread(s) handled them.
To add to this: I don't get why C++ distinguishes a latch from a barrier. Both of them are effectively a counter with the ability to decrement that counter (in a thread-safe way) and the ability to wait until the value reaches zero (or return immediately if it is already zero). The only difference is that a latch does those things separately from different threads while a barrier does them both at once in every thread. That's even reflected in the similar method names: .count_down() and .wait() on latch, and .arrive_and_wait() on barrier. Why not make all three methods be methods of one class, which simplifies the API and potentially allows other use cases?
How could either of them be implemented with a counting semaphore? It seems like it waits in the opposite situation from the one barrier and latch need, but maybe I'm just not being imaginative enough.
I wasn't following the C++20 development like I usually do. A combination of no longer working with C++ and feeling a bit disillusioned by the long wait for Concepts and Modules. I did read a summary of all language proposals which made it in, but this made me realize I didn't read up on the changes to the standard library.
Thanks, I wrote the sample code for the std::latch and std::barrier pages. [Unfortunately you can't run them in place, because the compiler backing it is too old. I am wrong! Now you can.] But you can paste them into Godbolt.org and run there, given a bit of compiler link-line fu.
std::barrier has extra knobs that I didn't discover a good use for, or would have used them in the sample. Enlightenment welcome.
This might be a good place to mention: cppreference.com is curated by ISO Standard C++ committee participants. (Cplusplus.com, by contrast, is very poorly maintained.) If cppreference disagrees with the Standard, it is likely that the Standard will soon be amended to match.
Speaking of Godbolt: it enforces a limit of three spawned threads, but the GCC and Clang thread sanitizers use up a thread. So if you want to try the samples on Godbolt under the thread sanitizer, you have to reduce the problem sizes to two. Learned that the hard way.
FWIW, I did a lot of C++ in the 1990s, then went to Java/Go for about 15 years. I had to come back to C++ briefly about 5 years ago. cppreference was a total godsend!
> Unfortunately you can't run them in place, because the compiler backing it is too old.
Are you sure you've not changed the dropdown? I just made some changes to the example, hit "run this code", hit "run", and the output updated accordingly.
I have made a point of using uint16_t, uint32_t, int16_t, int32_t, etc. to be explicit. Some don't like the look, but it is explicit and helpful for me.
I'm surprised that negative numbers are needed here. But I don't know C++ and my knowledge about latches in Java is spotty at best. Don't know anything about barriers.
Wow, looking at the code in the example, this looks almost nothing like the C++ I used to write 15 years ago. The language really has evolved massively and it seems in a good direction.
string product{"not worked"} is initializing the string product to "not worked".
It's the same as std::string product; product = "not worked";
[&](job& my_job) { } is a lambda expression. The & in the capture list captures the enclosing variables by reference. my_job is the parameter being passed, which is a reference to a job.
> It's the same as std::string product; product = "not worked";
It's not exactly the same. The original called the converting constructor std::string::string(const char*). Your example calls the std::string default constructor, then the assignment std::string::operator=(const char*). Maybe you didn't mean literally the same, but were trying to illustrate the rough meaning; however, the parent commenter said they were familiar with older versions of C++ so I think they'd already be familiar with converting constructors.
It might be more enlightening to say that all of the following are equivalent:
(I'm 90% sure about the last one but can't find documentation for it at the moment.) None of them call the copy constructor std::string::string(const std::string&) or copy assignment operator std::string::operator=(const std::string&), although before C++17's guaranteed copy elision the copy-initialization forms (the last two) required an accessible copy (or move) constructor even though it wasn't actually called.
For other combinations of types, these different syntaxes are not equivalent. For example, uniform initialisation (with the braces) won't allow narrowing conversions, such as short to int or double to float.
In practical terms, none. They are the same, and the same as using the curly braces.
From the POV of the standard, those are different kinds of initializations.
C++ has like 12 different ways of initializing variables, and the differences are quite confusing, but in practical terms, in my experience, I never had to care too much besides making sure native types are initialized to some specified value.
You can probably find some talks on YouTube about C++ initialization, and 1h30m is probably not enough to cover all the details :')
For this particular example direct initialization, i.e. std::string product("not worked");, would be preferred over default-constructing and then assigning, as you end up with one constructor call instead of two operations: a default construction followed by an assignment.
What you need to search for is "list initialization" (eg https://en.cppreference.com/w/cpp/language/list_initializati...). I understand that it is not the obvious thing to search for. It was one of the features introduced in C++11. Lambda expressions were also introduced in C++11, but they are found in other languages too, so they are easier to understand for non-C++ people.
It's called uniform initialization syntax (aka "brace initialization" aka list-initialization [1]). tl;dr is everyone loves it due to the terseness, but I recommend against it. It's kind of like a forced cast/coercion in some ways (and it also looks ugly). Also be careful with parentheses since C++20: parenthesized initialization can now invoke aggregate initialization that would traditionally have needed {}, and I think you can run into similar casting issues there. I just ran into this one a couple days ago; I haven't narrowed it down yet, but it seemed to be due to this.
The second one is a lambda (which, if you're not familiar with the term, are anonymous functions/functors); the entries in the initial brackets define the captured variables (if any), and whether they're captured by reference (with ampersand) or by value (without).
It is in fact not a forced cast, unlike T() initialization that can indeed be a cast.
Uniform initialization is great, except for the unfortunate interaction with initializer_lists (yes, we can't have nice things), but if you do not have an initializer_list constructor in your class you do not have to worry.
Why do you say this when you can explicitly see I specifically made sure to avoid claiming it is in fact a forced cast, and instead I said it is kind of like a forced cast? Clearly I meant something other than that it was actually a forced cast, right?
> if you do not have an initializer_list constructor in your class you do not have to worry.
Which is precisely an example of my point about it being problematic. How would you go about this when you don't know everything about a class? Like, say, in a template? And how do you prevent your code from silently breaking if a class you use later adds an initializer_list constructor?
> Uniform initialization is great
I disagree. It's awful. It's just a minor cosmetic change (which we can call an "improvement" for the sake of argument, though I think that's also dubious and the syntax is just ugly) that introduces pitfalls in the actual semantics of your program. That's not a great trade-off; it's a terrible one.
> a minor cosmetic change (which we can call an "improvement" for the sake of argument, though I think that's also dubious and the syntax is just ugly) that introduces pitfalls in the actual semantics of your program
std::int64_t i64 { 44 * 44 * 44 };
std::int8_t i8 = i64; // perfectly fine
std::int8_t i8_2 { i64 }; // refuses to narrow the int
uniform initialization resolves lots of headaches, like T() being ambiguous at times with functions, narrowing, etc.
I don't see failure on i64 -> i8 as beneficial here; if anything it's harmful. Integer conversions already happen in all sorts of constructs, which is why compilers have warnings for them. If you don't like them, you can enable the warnings (or even turn them into errors if you want to outright prohibit them). If you think they should happen implicitly, then there's nothing particularly interesting about construction sites versus assignments or anything else; there's no reason integer conversions should be treated specially at construction sites. If anything, that special-casing makes you feel safer than is actually warranted.
With T() being ambiguous, you can already syntactically disambiguate. With extra parentheses or whatever other construct the case may warrant. As people have been doing all these years. Again, it's just a minor syntactic inconvenience.
Integer narrowing is dangerous and should always be explicit. The fact that it is still implicitly possible at assignment doesn't mean it should be tolerated at type construction - personally I have saved hundreds of hours of bug fixing since I started consistently initializing integers with {}.
Yes, C++ has got quite a lot more fun to use. C++20 is almost as different from C++11 as C++11 was from C++98.
If you make a point to always use the newest features, where there is a choice, and really put the type system to work for you, programs generally run right the first time, once the compiler is satisfied. In the past ten years I have spent more time filing compiler bug reports than debugging C++ memory usage errors.
It is amazing how many people are all up-to-date on what is new. I think people who complain about C++ on HN must be complaining about a much older version of the language.
Speaking from my experience, there is the C++ I use on hobby projects and see at talks like C++Now, and there is the C++ I get to see in the wild when plugging native libs into our Java/.NET code bases.
Even Android NDK and some subsystems are good examples of this second C++ flavour.
I was looking to use this feature, alongside some other C++20, but I was disappointed to see that it wasn't supported yet by my compiler. So hopefully that gets supported soon. Until then a condition variable and an atomic<bool> will do the job.
I am looking forward to C++23 and beyond. Hopefully they will eventually add fixed-point arithmetic support.
So many threading primitives have different names on different platforms. This is basically Go's WaitGroup, except the C++ latch can't be incremented (it's otherwise identical). I wonder why that is?
/// ----------------------------------------------------------------------------
/// latches are great for multi-threaded tests with the following sequence of steps
/// 1. set up test data
/// 2. create a latch
/// 3. create test threads, each decrementing the latch count
/// 4. wait on the latch; once every thread has counted down, waiters are unblocked
///
/// slightly verbose commented code to illustrate this
void foo()
{
    /// --------------------------------------------------------------------
    /// create a latch with a non-zero count
    unsigned const thread_count=...;
    std::latch done(thread_count);
    my_data data[thread_count];
    std::vector<std::jthread> threads;
    /// --------------------------------------------------------------------
    /// each thread counts down once its data is ready
    for(unsigned i=0;i<thread_count;++i)
        threads.push_back(std::jthread([&,i]{
            data[i]=make_data(i);
            done.count_down();
            do_more_stuff();
        }));
    /// --------------------------------------------------------------------
    /// the main thread waits; when the count reaches zero the latch is
    /// permanently signalled and any waiters are woken
    done.wait();
    process_data();
}
I realise this was just a quick example, but just to be clear, the latch is tied to the number of work items, not the number of threads. If you have a thread pool (or single thread) with a queue, you can happily push any number of work items on to it and use the latch to wait until they're all complete, regardless of which threads they're handled in (or even if the jobs are interleaved with other jobs from some other source).
Yep. C++ is not good at naming things somehow. But bringing up Golang in a discussion about naming is just ridiculous. Unless you want to show that C++ is not actually the worst.
Is Go worse than Rust at naming? It took me a while to get used to Cow and Arc... (not to mention "fn" which to me will always be the French far-right political party)
I like that Rust has optimized certain bits for ease of intermediate use. Sure those could be EitherOwnedOrBorrowed and AtomicReferenceCountedPointer, but that'd be annoying to type once you know them. They're also not hard to remember if you learn the acronyms (for me, at least).
> EitherOwnedOrBorrowed and AtomicReferenceCountedPointer, but that'd be annoying to type once you know them.
code is read more than it is written, so having those longer terms helps with reducing mental load and providing self-documenting code
and with most modern IDEs or even text editors providing autocompletion, (imho) i think abbreviations are not really necessary (though they can be fun in the case of kotlin ^^)
I disagree. Code should optimize for the things you want to look for being relatively the most prominent. This is why I think go is harder to review: the bits you actually care about are overwhelmed.
well, i think we can at least say there is a balance for sure, so there is a limit to verbosity where it gets overwhelming, i'll definitely agree to that
To me, the parent poster didn't imply that golang's naming is better, they just wondered why there's a lack of naming conventions regarding threading in general.
It's the other way around. Latch is the standard accepted name for this concurrent construct. It is also used in Java, databases, concurrency papers, etc.
Usually such guarantees allow the implementation to use something other than atomics (e.g. memory barriers) for avoiding data races, which are more performant and can avoid contention/context switching/spinning.
I too would love to know the specific reasoning though.
I like these names. Maybe it's a pain if you're using something like vim to program, but in a typical Java IDE or VSCode you'd just write CDL, ctrl+space, enter.
Likely minimalism. It can be implemented more efficiently if it can only count down. It is a low-level synchronization primitive; you can build more complex stuff on top of it.
In short (copied from that answer):
* Barriers are useful when you have a bunch of threads and you want to synchronise across all of them at once, for example to do something that operates on all of their data at once.
* Latches are useful if you have a bunch of work items and you want to know when they've all been handled, and aren't necessarily interested in which thread(s) handled them.
[1] https://stackoverflow.com/a/62631294