Atomics are hardly “C”. They are a primitive exposed by many CPU ISAs to help navigate the complexity those same CPUs introduced with out-of-order execution and complex caches in a multi-threaded environment. Much like SIMD, atomics require extending the language through intrinsics or new types because they represent capabilities that did not exist when the language was invented. Atomics require this extra support in Java just as they do in Rust or C.
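JavaScript tells the same story, for what it's worth: atomics arrived late, bolted on as the `Atomics` object over `SharedArrayBuffer` rather than as ordinary language syntax. A minimal sketch (single-threaded here for brevity, but these are the same calls workers use to coordinate):

```typescript
// Atomics in JavaScript/TypeScript: like java.util.concurrent.atomic or
// C11's <stdatomic.h>, this is a bolted-on API, not core language syntax.
// A SharedArrayBuffer is the only memory Atomics may operate on.
const shared = new SharedArrayBuffer(4);        // 4 bytes = one Int32
const counter = new Int32Array(shared);

// Atomics.add returns the OLD value and performs the read-modify-write
// as a single indivisible operation -- even across worker threads.
const before = Atomics.add(counter, 0, 1);      // counter[0]: 0 -> 1
Atomics.add(counter, 0, 41);                    // counter[0]: 1 -> 42

// Atomics.load gives a sequentially consistent read.
console.log(before, Atomics.load(counter, 0)); // prints: 0 42
```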
Use HTTP server-sent events instead. Those can keep the connection open so you don't have to poll to get real-time updates and they will also let you resume from the last entry you saw previously.
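The resume feature mentioned here is baked into the SSE wire format itself: the server tags events with `id:` lines, and on reconnect the browser sends the last one back in a `Last-Event-ID` header. A minimal sketch of a parser for that text format (field names are from the spec; the sample payload is made up):

```typescript
// Minimal parser for the SSE wire format: events are blocks of
// "field: value" lines separated by a blank line. Only the fields
// relevant here (id, data) are handled; real parsers do more.
interface SseEvent { id?: string; data: string; }

function parseSse(stream: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of stream.split("\n\n")) {
    let id: string | undefined;
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("id:")) id = line.slice(3).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
      // lines starting with ":" are comments, often used as heartbeats
    }
    if (data.length > 0) events.push({ id, data: data.join("\n") });
  }
  return events;
}

// Hypothetical payload: two events plus a ": ping" heartbeat comment.
const sample = "id: 1\ndata: hello\n\n: ping\n\nid: 2\ndata: world\n\n";
const evts = parseSse(sample);
console.log(evts.length, evts[1].id); // prints: 2 2
```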
Yeah, but in real life, SSE error events are not robust, so you still have to send manual heartbeat messages and tear down and re-establish the connection when the user changes networks, etc. In the end, long-polling with batched events is not actually all that different from SSE with ping/pong heartbeats, and with long-polling you get the benefit of normal load balancing and other standard HTTP behavior.
“Normal load balancing” means “request A goes to server A”, “request B goes to server B”, and no state is held in the server; if there is a session, it's stored in a KV store or database which persists.
With SSE the server has to be stateful; for load balancing to work you need to be able to migrate connections between servers. Some proxies / load balancers don't like long-lasting connections and will tear them down if there has been no traffic, so you need to constantly send a heartbeat.
I have deployed SSE, and I love the technology, but I wouldn't deploy it if I didn't control the end devices and everything in between; I would just do long polling.
Your description of "normal load balancing" is certainly one way to do load balancing, but in no way is it the presumptive default. Keeping session data in a shared source of truth like a KV store or DB, and expecting (stateless) application servers to do all their session stuff thru that single source of truth, is a fine approach for some use cases, but certainly not a general-purpose solution.
> With SSE the server has to be stateful, for load balancing to work you need to be able to migrate connections between servers.
Weird take. SSE is inherently stateful, sure, in the sense that it generally expects there to be a single long-lived connection between the client and the server, thru which events are emitted. Purpose of that being that it's a more efficient way to stream data from server to client -- for specific use cases -- than having the client long-poll on an endpoint.
> Keeping session data in a shared source of truth like a KV store or DB, and expecting (stateless) application servers to do all their session stuff thru that single source of truth
What would be a scalable alternative?
A simple edge case shows why this is a reasonable approach. The load balancer sends a request to server A, server A sends the response and goes offline; now the load balancer has to send all requests to servers B–Z until server A comes back online. If the session data was stored on server A, all users who were previously communicating with server A have now lost their session data, probably re-prompting a sign-in, etc.
There's some state you can store in a cookie; hopefully said state isn't in any way meant to be trusted, since rule 1 of the web is that you don't trust the client. Simple case of a JWT for auth: you still need to validate that the JWT was issued by you and hasn't been invalidated, i.e. a DB lookup.
Exactly that you use a cookie which stores an id to a session stored in the KV/DB.
Moving the session data into a JWT stores some of it in the token, but then you need to validate the JWT on each request, which depending on your architecture might be less overhead. It still means some state stored in a KV/DB that cannot live on the server, same as with a session. It might legitimately be less state, just a JWT id of some sort and whether it has been revoked, but it cannot exist on the server; it needs to be persistent.
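The cookie-points-at-a-server-side-session pattern under discussion is small enough to sketch. The `Map` stands in for the real KV store or DB, and all names here are invented for illustration:

```typescript
// Sketch of the pattern: the cookie holds only an opaque session id;
// the actual session state lives in a shared KV store, so any stateless
// app server can handle any request. Map stands in for Redis / a DB.
interface Session { userId: string; revoked: boolean; }
const kvStore = new Map<string, Session>(); // shared source of truth

function createSession(userId: string): string {
  const sessionId = `sess-${Math.random().toString(36).slice(2)}`;
  kvStore.set(sessionId, { userId, revoked: false });
  return sessionId;                         // this id goes in the cookie
}

// Every request, on ANY server, resolves the cookie through the store.
function resolveSession(cookieSessionId: string): Session | undefined {
  return kvStore.get(cookieSessionId);
}

const cookie = createSession("alice");
console.log(resolveSession(cookie)?.userId); // prints: alice
```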
This take that SSE is stateful is so strange. If the server dies, the client reconnects to another server automatically (and no, you don't need ping/pong). It's only stateful if you make it stateful. It works with load balancing the same as anything else.
The SSE spec has an event id, and the spec says to send the last event id on reconnection. That is by its nature stateful. Now, you could store that in a DB/KV itself, but presumably you are already storing session data for auth and rate limiting, so now you have had to implement a different store just for events.
And I too naively believed there wouldn't be a need for ping/pong. Then my code hit the real world, and ping/pong with aliveness checks was in the very next commit, because not only do load balancers and proxies decide to kill your connection, they will do it without actually closing the socket for some timeout, so your server and client are still blissfully unaware the connection is dead. This may be a bug, but it's in some random device on the internet, which means I have to work around it.
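The workaround amounts to a client-side deadline that every message (heartbeats included) pushes forward, and that forces a reconnect when missed. A sketch with an injected clock so the logic is deterministic; all names here are invented:

```typescript
// Client-side aliveness check: if nothing (data or heartbeat) arrives
// within `timeoutMs`, assume the connection is silently dead and
// reconnect -- the socket itself may never report an error.
class AlivenessWatchdog {
  private lastSeen: number;
  constructor(private timeoutMs: number, private now: () => number) {
    this.lastSeen = now();
  }
  onMessage(): void { this.lastSeen = this.now(); } // any traffic counts
  isStale(): boolean { return this.now() - this.lastSeen > this.timeoutMs; }
}

// Simulated clock instead of Date.now() so the behavior is testable.
let fakeTime = 0;
const dog = new AlivenessWatchdog(30_000, () => fakeTime);

fakeTime = 10_000; dog.onMessage(); // heartbeat arrived at t=10s
fakeTime = 35_000;
console.log(dog.isStale());        // false: only 25s since last traffic
fakeTime = 70_000;
console.log(dog.isStale());        // true: 60s of silence -> reconnect
```

In a real client, `isStale()` would run on an interval, and a `true` result would tear down the `EventSource` and open a new one.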
Long polling might run into the same issues but in my experience it hasn’t.
I really do encourage you to actually implement this kind of pattern in production for a reasonable number of users and time, there’s a reason so many people recommend just using long polling.
This also assumes long-running servers; long polling would fall back to just boring old polling, and SSE would be more expensive if your architecture involves “serverless”.
Realistically, I still have SSE in production, but only on networks where I can control all the devices in the chain, because otherwise things just randomly break…
> The SSE spec has an event id and the spec states sending last event id on reconnection.
Last event ID is not mandatory. You may omit event IDs and not deal with last event ID headers at all.
More importantly, the client is sending the last event ID header. Not the server. The only state in the server is a list of events somewhere which you would have to have anyway if you want clients to receive events that occurred when they were not connected or if you allowed clients to fetch a subset of them like with long-polling.
So there is really no difference at all here with regards to long-polling
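That sameness is easy to make concrete: “the only state is a list of events” means SSE resume and long-polling are the same query over that list. A sketch (the event log and field names are made up):

```typescript
// Both SSE resume (via the Last-Event-ID header) and long-polling (via
// a ?since= parameter) reduce to the same stateless query over a
// shared event log, which you would need either way.
interface AppEvent { id: number; payload: string; }

// Stand-in for the event log (a DB/KV table in practice).
const eventLog: AppEvent[] = [
  { id: 1, payload: "created" },
  { id: 2, payload: "updated" },
  { id: 3, payload: "deleted" },
];

// Any server can answer this; no per-connection state required.
function eventsAfter(lastEventId: number): AppEvent[] {
  return eventLog.filter((e) => e.id > lastEventId);
}

// An SSE reconnect with "Last-Event-ID: 1" and a long-poll with
// "?since=1" hit the exact same code path.
console.log(eventsAfter(1).map((e) => e.id)); // -> [2, 3]
```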
Never had to use ping/pong with SSE. The reconnect is reliable. What probably happened is your proxy or server returned a 4XX or 5XX, which cancels the retry. Don't do that and you'll be fine.
SSE works with normal load balancing the same as regular request/response. It's only stateful if you make your server stateful.
Six tabs is the limit on SSE (browsers cap concurrent HTTP/1.1 connections per domain, and each SSE stream holds one open). In my opinion, Server-Sent Events as a concept are therefore not usable in real-world scenarios because of this limitation and the lack of error detection around it. Just use WebSockets instead.
So, let me get this straight.
You published your software under a free license that stipulates they can't remove the license and are otherwise free to do as they please.
They took you by your word and did exactly that.
What did you think a license is for? For artistic expression?
It's a contract. If you want to get paid, put that in your license.
I recommend AGPL 3. Then nobody will rip you off. And if they do, you can drag them to court over it.
The title is confusing.
He is not reimplementing the STL.
He is writing some C++ classes providing functionality that is also already implemented in STL.
I found that the uBlock Origin extension breaks the final result. To fix it, add adblock.turtlecute.org as an exception in uBlock rules.
Exactly the kind of belly laugh I needed right now. That site also falsely “measures” that my ad blocker lets all kinds of sites through when in fact my setup lets absolutely zero third-party sites through. Hilarious!
I wonder how many people fall for sites like that.
I find "this worked for me once at Etsy when we were a 20 person team" not a very convincing argument. That does not mean I think he's wrong. Just that the conclusion needs better arguments.
One argument that comes to mind is: If you treat people like children, they will start behaving like children. Treat people as adults if you want them to shoulder responsibility.
The main message of this talk is that when a designer tried to deploy a fix to production and it blew up production, they realized he had the wrong kind of permissions, and their solution was to give him full deployment permissions.
Well, great if that worked for you.
It might or might not work for others.
I would recommend not letting anybody deploy to production. You can deploy to staging, then tests are run, and only after those all pass can anyone deploy to production.
Also, the current process is not just the result of ego. It is also the result of evolution. We usually take steps to prevent things from happening because they have blown up in the past and we would like to not have that happen again.
Why can the tests not be run automatically, and why not let anyone deploy at any time? Like, if a super senior dev or a first-day intern deploys, I expect tests to run, the deployment to go out, systems to be automatically monitored, the deployment to be reverted if errors pop up, and a manual revert to be possible if a test missed something.
Absolutely, let the designer deploy. Let them have stage access first so they can play, sure. But let them not be blocked. And if they turn the site purple, consider revoking the trust. But default to trust.
That does not bode well for Microsoft. At least from the outside perspective it looks like he was the adult in the room, the driving force behind standards adoption and even trying to steer C++-the-language towards a better vision of the future.
If he is gone, MSVC will again be the unloved bastard child it has long been before Herb's efforts started to pay off. This is very disheartening news.
I'm happy he held out for this long even though he was being stonewalled every step of the way, like when Microsoft proposed std::span and it was adopted but minus the range checking (which was the whole point of std::span).
Now he has been pushing for a C++ preprocessor. Consider how desperate you have to be to even consider that as a potential solution for naysayers blocking your every move.
The rumor that has been widely circulating is that the MSVC backend is being reused as a code generator for the Rust compiler (because nobody really understands PDBs anymore, not even Microsoft, and especially LLVM doesn't. So rustc could be a MSVC frontend instead to reuse all the existing arcane logic.)
MSVC will continue to be used for many years, and especially the backend might see renewed effort. But I don't know about the C++ frontend specifically, I've seen complaints about more and more bugs on the cpp subreddit. It's possible MS will be investing a little less in C++.
Disregarding the rumor, it is quite public information that on the Azure side, C and C++ are now only allowed for existing code bases, or scenarios where nothing else is available.
Meanwhile on the Windows side, it was made official at Ignite that a similar decision is now to be followed on Windows as well.
Here is the official statement, so whatever happens to MSVC is secondary:
> in alignment with the Secure Future Initiative, we are adopting safer programming languages, gradually moving functionality from C++ implementation to Rust.
This seems like one hell of an initiative for the Windows OS. That is millions of lines of C++ code, often with parts from waaay back. A friend who works on one of the OS teams told me that his team got a boomerang hire that worked on Windows back in the 90s and he was still finding parts of his code in there!
I hope this corporate interest bodes well for Rust though. It seems like for C++ it really caused a schism over the ABI break issue where Chandler et al were basically rebuffed finding some timeline to break it, and then Google dropped all their support on the committee in favor of Carbon, Rust, etc.
Apple and Google focusing on their own stuff is one of the reasons why clang lost velocity in ISO C++ adoption. Most of the C++ compiler vendors that fork clang don't contribute frontend work, only LLVM, and with those two out, it took some time until new folks jumped in to replace their contributions.
Likewise, you will notice MSVC is no longer riding the wave in regards to C++23, after being the first to fully support C++20.
Then there are all those other compilers out there, lost somewhere between C++14 and C++17, and most likely never moving beyond that.
They've made statements like that for a long time now. But they've never escaped using C++ when performance matters. The game dev roles very clearly ask for C++, for example.
Rather, it seems that as computers have gotten faster, there's been more places where safety is preferable to performance.
The proof is in the pudding: how performance-critical do you consider Pluton firmware, or the network card firmware supporting Azure workloads?
Two examples of stuff publicly rewritten into Rust.
Games are special; they aren't what Windows security cares about in the first instance, given that TinyGlade is the first-ever commercial success using Rust.
Yet most games are done with Unreal and Unity, and yes, there is lots of C++ there, but it's mostly Blueprints, Verse, and C# on top that the large majority of studios reach for.
I have no magic window into Microsoft, but they've been saying they need to stop writing C++ for genuine decades now, and it's still prominent on their jobs site with new projects.
I'm aware they're trying, but I just don't believe their statements from the evidence available.
He has been showing it, but not pushing it. The difference is subtle but important. He shows a lot of “what ifs”, tries them, and pushes the useful ones back into the language. Reflection is on track for C++26 in large part because he inspired a lot of people with his metaclasses talk (a long time ago, but doing things right takes time).
Wait, why does std::span not do the range checking? We ran into that exact thing at work and were really confused why the hell it doesn't do it currently.
I believe this is because [] doesn't do bounds checking normally, so this is seen as consistency. I am not 100% sure, but I do remember it being a contentious decision.
It looks like he's staying on the committee and what not, just changing his day job. That's actually one of the benefits of having a committee & iso standardization process -- things aren't so reliant on a single engineer staying employed at a single company.
I'm sure it's never as clean a situation as anyone would like, but hey, world is a rough place sometimes.