I’ve been thinking that defaulting to durable execution over lower-level primitives like queues makes sense a lot of the time. What do you think?
A lot of the "simple queue" use cases end up needing extra machinery like a transactional‑outbox pattern just to be reliable. Durable‑execution frameworks (DBOS/Temporal/etc.) give you retries, state, and consistency out of the box. Patterns like Sagas also tend to get stitched together on top of queues, but a DE workflow gives you the same guarantees with far less complexity.
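For reference, the outbox machinery you end up hand-rolling on top of a "simple queue" looks roughly like this. It's only a sketch, with made-up db and queue interfaces rather than any particular framework's API:

    // Hypothetical minimal interfaces, just for illustration.
    interface Tx { insert(table: string, row: Record<string, unknown>): Promise<void>; }
    interface Db {
      transaction(fn: (tx: Tx) => Promise<void>): Promise<void>;
      query(sql: string, params?: unknown[]): Promise<any[]>;
    }
    interface Queue { publish(topic: string, payload: string): Promise<void>; }
    interface Order { id: string; total: number; }

    // 1. Write the business row and the outgoing message in ONE transaction,
    //    so a message is never lost and never refers to work that didn't commit.
    async function placeOrder(db: Db, order: Order): Promise<void> {
      await db.transaction(async (tx) => {
        await tx.insert('orders', order);
        await tx.insert('outbox', {
          topic: 'order.placed',
          payload: JSON.stringify(order),
          published_at: null,
        });
      });
    }

    // 2. A separate relay polls the outbox and publishes to the real queue,
    //    marking rows as sent only after the broker accepts them (at-least-once).
    async function relayOutbox(db: Db, queue: Queue): Promise<void> {
      const rows = await db.query(
        'SELECT id, topic, payload FROM outbox WHERE published_at IS NULL LIMIT 100');
      for (const row of rows) {
        await queue.publish(row.topic, row.payload);
        await db.query('UPDATE outbox SET published_at = now() WHERE id = $1', [row.id]);
      }
    }

A DE workflow collapses both halves into "call the step, and the engine records that it ran."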
The main tradeoff I can think of is latency: DE engines add overhead, so for very high throughput, huge fan‑out, or ultra‑low‑latency pipelines, a bare‑bones queue + custom consumers might still be better.
Curious where others draw the line between the two.
Drawing the boundary at high throughput, huge fan-out and ultra-low-latency is correct - I'd also add that MQs are often used for pub/sub and signaling.
MQs are heavily optimized for reducing E2E latency between publishers and consumers in a way that DE engines are not, since DE engines usually rely on an ACID-compliant database. Under load I've seen an order of magnitude difference in enqueue times (low single-digit milliseconds for the MQ p95 vs 10ms p95 for Postgres commit times). And AMQP has a number of routing features built in (e.g. different exchange types) that you won't see in DE engines.
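For a flavor of what those routing features buy you, here's a minimal topic-exchange setup with amqplib (the exchange and routing-key names are made up):

    import amqp from 'amqplib';

    async function main() {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();

      // Topic exchange: routing keys like "orders.created" or "orders.paid"
      // are matched against binding patterns such as "orders.*".
      await ch.assertExchange('events', 'topic', { durable: false });

      // Each subscriber gets its own queue bound to the pattern it cares about.
      const { queue } = await ch.assertQueue('', { exclusive: true });
      await ch.bindQueue(queue, 'events', 'orders.*');

      await ch.consume(queue, (msg) => {
        if (msg) {
          console.log(msg.fields.routingKey, msg.content.toString());
          ch.ack(msg);
        }
      });

      // Publishers fire at the exchange; the broker handles fan-out and routing.
      ch.publish('events', 'orders.created', Buffer.from(JSON.stringify({ id: 42 })));
    }

    main().catch(console.error);

Getting the same pattern-based fan-out on top of a DE engine means building the routing table yourself.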
Another way to think about it is that message queues usually provide an optional message durability layer alongside signaling and pub/sub. So if you need a very simple queue with retries _and_ you need pub/sub, I'd be eyeing an MQ (or a DE engine that supports basic pub/sub, like Hatchet).
It doesn't make sense to persist by default. If I send a message to my rendering thread's message queue that the window is fully occluded, I never want that message to be persisted. If the process crashes, the fact that the window was occluded back then has no relevance to the current instance of the process.
Trying to persist things has a performance cost that you don't want to pay every time you want a thread to communicate with another.
Do you think ArrayList or std::vector should be persisted by default?
Highly biased opinion here since I'm the CEO of DBOS:
It'll be rare that the overhead actually has an effect, especially if you use a library like DBOS, which only adds a database write. You still have to write to and read from your queue, which is about as expensive as a database write/read.
Tried writing an electrostatic particle simulator in Turbo Pascal 7 with BGI as a teen; it could only handle a handful of particles before it crawled. Then saw a galaxy collision sim on a CD-ROM magazine disc handling thousands of bodies smoothly. Thought it was assembly tricks... now I'm sure it's algorithmic (avoiding N**2 runtime) but never dug into the specifics. Are charges vs gravity sims essentially the same n-body problem?
There are two ways of doing it; I implemented both in my PhD and didn't have a ton of fun doing it.
(a) There's a method that works well for monopolar sources (gravitational + electrostatic particles) called the Barnes-Hut method. You effectively divide space up into a quadtree (2D) or octree (3D), and in each cell work out the center of mass / total charge. Particles in "nearby" cells (using a distance criterion that can be adjusted to speed up or slow down the simulation in a trade-off with accuracy) interact directly; for far-away cells you just use the cell's center of mass to work out the interaction between any given particle and everything in that cell. The method is O(N log N), but in practice this is 'good enough' for many applications (rough code sketch after (b) below).
(b) uses a more rigorous technique called the Fast Multipole Method, which is O(N): rather than just using the center of mass or sum of charges, you expand the potential from particles into higher-order components that capture the distribution of particles within each cell. This also means you can capture more complex potentials. The downside is that this is a nightmare to implement in comparison to the Barnes-Hut method. Each cell has its own multipole expansion, and it is 'transferred' to work out the additive contribution to every 'far' cell, calculating a 'local' expansion. Typically people use the most compact representation of these potential expansions, which uses Lagrange polynomials, but this is a pain.
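Here's the rough shape of (a), stripped to the bone: a 2D quadtree, gravity only, the usual theta opening criterion, and almost none of the real-world details (coincident particles aren't handled, nothing is tuned):

    interface Body { x: number; y: number; mass: number; }

    // A quadtree cell covering a square of side `size` centred on (cx, cy).
    class Cell {
      mass = 0;             // total mass inside this cell
      comX = 0; comY = 0;   // centre of mass of this cell
      body: Body | null = null;        // set while the cell holds exactly one body
      children: Cell[] | null = null;  // four sub-cells once subdivided

      constructor(public cx: number, public cy: number, public size: number) {}

      insert(b: Body): void {
        if (this.mass === 0) {
          this.body = b;                       // empty leaf: just keep the body here
        } else {
          if (this.children === null) this.subdivide();
          if (this.body !== null) {            // push the previously stored body down
            this.childFor(this.body).insert(this.body);
            this.body = null;
          }
          this.childFor(b).insert(b);
        }
        // Update the cell's aggregate mass and centre of mass incrementally.
        const m = this.mass + b.mass;
        this.comX = (this.comX * this.mass + b.x * b.mass) / m;
        this.comY = (this.comY * this.mass + b.y * b.mass) / m;
        this.mass = m;
      }

      private subdivide(): void {
        const h = this.size / 2, q = this.size / 4;
        this.children = [
          new Cell(this.cx - q, this.cy - q, h), new Cell(this.cx + q, this.cy - q, h),
          new Cell(this.cx - q, this.cy + q, h), new Cell(this.cx + q, this.cy + q, h),
        ];
      }

      private childFor(b: Body): Cell {
        const i = (b.x >= this.cx ? 1 : 0) + (b.y >= this.cy ? 2 : 0);
        return this.children![i];
      }

      // Acceleration on `b`: open a cell only if it is "near" relative to its size
      // (size / distance >= theta); otherwise treat the whole cell as a point mass.
      accel(b: Body, theta = 0.5, G = 1, eps = 1e-3): [number, number] {
        if (this.mass === 0 || this.body === b) return [0, 0];
        const dx = this.comX - b.x, dy = this.comY - b.y;
        const r = Math.sqrt(dx * dx + dy * dy) + eps;    // eps softens close encounters
        if (this.children === null || this.size / r < theta) {
          const a = (G * this.mass) / (r * r * r);       // far cell (or single body)
          return [a * dx, a * dy];
        }
        let ax = 0, ay = 0;                              // near cell: recurse
        for (const c of this.children) {
          const [x, y] = c.accel(b, theta, G, eps);
          ax += x; ay += y;
        }
        return [ax, ay];
      }
    }

Each frame you rebuild the tree, call accel for every body, and integrate (leapfrog works well); the rebuild plus the walks are the O(N log N) part. Swapping mass for charge (and the sign convention) gives the electrostatic version, with the usual caveat that mixed-sign charges can make the monopole approximation poorer.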
Oh, this brings back memories. I tried to create a little 3D→2D renderer in TP 6.0, but the precision was never enough to keep the nodes from falling apart, and an 80286 was too slow to render anything meaningful except maybe a cube.
>Are charges vs gravity sims essentially the same n-body problem?
The force falls off as the inverse square of distance in both cases. So they are essentially the same problem. Except that charge can attract or repel and gravity (as far as we know) only attracts.
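In code the two differ by a constant and a sign convention only, e.g. (SI constants):

    // Radial force between two point sources at distance r.
    // Convention: positive = repulsive, negative = attractive.
    const G = 6.674e-11; // gravitational constant, N*m^2/kg^2
    const K = 8.988e9;   // Coulomb constant, N*m^2/C^2

    const gravity = (m1: number, m2: number, r: number) => -G * m1 * m2 / (r * r); // always attractive
    const coulomb = (q1: number, q2: number, r: number) =>  K * q1 * q2 / (r * r); // sign follows the charges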
I haven't used ruby in more than a decade, but I remember there was always some controversy around the corner. Up next: Zed Shaw comes out from his cave, joins forces with the mummy of _why to combat DHH's anti-woke agenda.
Page Object Models trade off clarity for encapsulation. Concrete example [1]. They can make tests look "cleaner" but often obscure what's actually happening. For example:
await page.getStarted(); // what does this actually do?
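Versus spelling the steps out inline, something like this (illustrative only; the selectors are guesses, not the exact code from [1]):

    // Illustrative selectors, not the exact snippet from [1].
    await page.getByRole('link', { name: 'Get started' }).click();
    await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
    await expect(page.locator('nav li', { hasText: 'Writing tests' })).toBeVisible();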
The second version is explicit and self-documenting. Tests don't always benefit from aggressive DRY, but I've seen teams adopt POMs to coordinate between SDETs and SWEs.
Each of those lines is 3 to 20 lines of Playwright code. Aggressive DRY is bad, but Page Object Models are usually worth it to reduce duplication and limit churn from UI changes.
I am working on a tool[1] to try to make working with Playwright a bit more ergonomic and approachable. Sorry to shamelessly plug, but I'd love feedback on whether it is even a good idea/direction.
Hadn't considered the Page Object Model and will definitely have to think about how to incorporate it for those who want to do things that way.
A Page Object version is much more readable, and on typing "page." you will see which props on your PO are available.
Another note on your specific example: you are probably in the US and only have a single-language project. I am a frontend contractor in Europe, and for the past 10 years I haven't had a single project that was single-language, hence the "hasText" selector would always be off-limits. Instead, very often we used the react-intl identifier as the text content during test execution, which would make the above example look much more unreadable without POs, while with POs the code looks the same (just your PO looks different).
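Concretely, without a PO every spec ends up matching the rendered message ids directly, while the PO version keeps reading like English (the id and helper below are hypothetical):

    // Without a PO: the react-intl ids leak into every test file (hypothetical id).
    await expect(page.locator('nav li', { hasText: 'gettingStarted.toc.installation' })).toBeVisible();

    // With a PO: only the page object knows about the ids (hypothetical helper).
    await gettingStartedPage.expectTocItem('installation');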
It's not a trade-off of clarity just to save developers some extra typing. It's actually improving the clarity by bringing the thing you care about to the foreground: the getting started page having a table of contents with specific items.
...this way, one could vet packages one by one. The main caveat I see is that it’s very inconvenient to have to vet and publish each package manually.
It would be great if Verdaccio had a UI to make this easier, for example, showing packages that were attempted to install but not yet vetted, and then allowing approval with a single click.
This takes away so many of the criticisms I see in this thread. The issue with Tailwind, and my only minor criticism, is just long, unreadable, hard-to-delimit lists of classes. This very easily takes care of that (and I use it for more complex class lists daily).
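E.g. a long class list split into groups by concern already reads far better than one giant string (a sketch assuming React plus the clsx helper; the class names are arbitrary):

    import clsx from 'clsx';

    // Hypothetical card component: one group per concern, same order every time.
    const cardClasses = clsx(
      'p-4 md:p-6',                                  // spacing
      'text-sm font-medium text-gray-900',           // typography
      'rounded-lg border border-gray-200 bg-white',  // surface and border
      'transition-shadow hover:shadow-md',           // interaction
    );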
I do this to this day when I'm writing manual vanilla CSS. I group spacings, fonts, text, borders, etc. together so it is easier for me to debug without using too many tools.
There's been a lot of work in this area but no definitive answer, and few constructed-language speakers [1]. I imagine the wins may be marginal, similar to the wins of Dvorak vs QWERTY typing.