> If you are on AWS and AWS goes down, that's covered in the news as a bunch of billion dollar companies were also down. Customer probably gives you a pass.
Exactly - I've had clients say, "We'll pay for hot standbys in the same region, but not in another region. If an entire AWS region goes down, it'll be in the news, and our customers will understand, because we won't be their only service provider that goes down, and our clients might even be down themselves."
My guess is their infrastructure is set up through clickops, making it extra painful to redeploy in another region. Even if everything is set up through CloudFormation, there's probably umpteen consumers of APIs that have their region hardwired in. By the time you get that all sorted, the region is likely to be back up.
You can take advantage of this by declaring an unplanned service window every time a large cloud provider goes down. Then tell your client that you were the reason AWS went down.
Yes, this is by design. SQL is a great general purpose query language for read-heavy variable-length string workloads, but TigerBeetle optimizes for write-heavy transaction processing workloads (essentially debit/credit with fixed-size integers) and specifically with power law contention, which kills SQL row locks.
I spoke about this specific design decision in depth at Systems Distributed this year:
Depending on contention and RTT, as a specialized OLTP DBMS, TB can deliver roughly 1000-2000x the performance of a single-node OLGP DBMS (cf. the live demo in the talk above)… while preserving strict serializability. You don't need to sacrifice correctness or real-time resolution, and that's important if, for example, you need to do real-time balance checks.
Node count doesn't matter. You could use an embedded database and encounter the same problem. There is some time T between when you acquire a lock and release it. Depending on the amount of contention in your system, this will have some effect on total throughput (i.e. Amdahl's law).
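To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The 5us-of-lock-hold out of 50us-per-transaction figures are made-up assumptions for illustration, not measurements:

    # Amdahl's law applied to lock contention: the fraction of work done
    # while holding a contended lock caps total speedup.

    def max_speedup(serial_fraction: float, workers: int) -> float:
        """Amdahl's law: speedup = 1 / (s + (1 - s) / N)."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

    # Assume each transaction takes 50us, of which 5us is spent holding
    # a contended row lock (the serialized time T from the comment above).
    serial_fraction = 5 / 50  # 10% of the work is serialized

    for workers in (1, 8, 64, 1024):
        print(workers, round(max_speedup(serial_fraction, workers), 2))
    # -> 1 1.0 / 8 4.71 / 64 8.77 / 1024 9.91

No matter how many workers (or nodes) you add, throughput caps out at 1/0.10 = 10x, which is why a hot row under power-law contention hurts so much.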
Almost all commercial MVCC implementations (including Postgres) use row locks. Very few use OCC even though it's arguably a more natural fit for MVCC. Pessimistic locking is simply more robust over varied workloads.
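For anyone who hasn't seen the two styles side by side, here's a toy Python sketch of the difference (my own illustration of the general idea, not how any real DBMS implements it):

    import threading

    class PessimisticRow:
        """Row-lock style: writers block until the lock is theirs."""
        def __init__(self, balance: int = 0):
            self.balance = balance
            self._lock = threading.Lock()

        def credit(self, amount: int) -> None:
            with self._lock:       # writers queue up; no wasted work
                self.balance += amount

    class OptimisticRow:
        """OCC style: read a version, commit only if it hasn't changed."""
        def __init__(self, balance: int = 0):
            self.version = 0
            self.balance = balance
            self._commit = threading.Lock()  # stands in for an atomic CAS

        def credit(self, amount: int) -> None:
            while True:
                seen_version = self.version          # read version first
                new_balance = self.balance + amount  # compute off-lock
                with self._commit:
                    if self.version == seen_version:  # validate
                        self.balance = new_balance    # write value...
                        self.version += 1             # ...then bump version
                        return
                # conflict: someone committed first; retry (wasted work)

Under a skewed workload where one hot row takes most of the writes, the OCC loop keeps failing validation and redoing work, while the row lock just queues writers, which is roughly the robustness argument above.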
TB seems really awesome, but are there non-DebitCredit use cases where it can be applied effectively? I like trying to find off-label uses for cool technology.
Compared to Redis, TigerBeetle has strong durability, and an LSM storage engine to page incremental updates to disk, so it can support 10+ TiB data sets without running into snapshot stalls or OOM.
It helps to know what kind of data TigerBeetle handles. The data committed by its transactions is an immutable Transfer: id (128-bit), debit_account_id (128-bit), credit_account_id (128-bit), amount (128-bit), ledger (32-bit), code (16-bit), flags (16-bit bitfield), timestamp (64-bit), plus user_data_128, user_data_64, and user_data_32.
Transactions atomically process one or more Transfers while keeping Account balances correct. Accounts are also fixed-size records, with core fields such as debits_posted and credits_posted.
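To make that concrete, here's a loose Python sketch of the Transfer record shape described above. Field names and widths follow the parent comment; this isn't TigerBeetle's actual client API (which speaks a binary protocol with real fixed-width integers, not Python's arbitrary-precision ints):

    from dataclasses import dataclass

    @dataclass(frozen=True)  # Transfers are immutable once committed
    class Transfer:
        id: int                 # u128
        debit_account_id: int   # u128
        credit_account_id: int  # u128
        amount: int             # u128
        ledger: int             # u32: which ledger this transfer belongs to
        code: int               # u16: user-defined reason for the transfer
        flags: int              # u16 bitfield (linked, pending, ...)
        timestamp: int          # u64, assigned by the cluster
        user_data_128: int = 0  # opaque user metadata, three widths
        user_data_64: int = 0
        user_data_32: int = 0

Everything is a fixed-size integer, which is exactly the "no variable-length strings" point made upthread: records pack into fixed-width slots with no parsing or per-row allocation.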
This gives a good idea of what TigerBeetle might be good for and what it might not be. For anything where latency/throughput and accuracy really, really matter, it could be worth the effort to make your problem fit.
> what stops me is the sudden drop in corporate sponsorship of them.
That's true in two ways: not only are fewer companies paying to send their attendees to training, but fewer companies are paying to sponsor these events as well.
Even at AI research conferences the trend seems similar (fewer industry exhibits, too), though I'm not perfectly up to date on this; it might have turned around very recently. The reason seems to be that they are not hiring as much right now.
I think this is a shorter-term trend in the economy, though; it doesn't necessarily hold as much inertia as other factors. Unless the AI job replacement really works out the way many companies hope.
> I'd be curious to hear how often newbies showed up to these SQL events pre-COVID.
Large SQLSaturday events used to regularly get 300-400 attendees, and a good 10-20% of them were new to the field. I would regularly do a show-of-hands in my session asking how many of them were attending a SQL Saturday for the first time, and it wasn't unusual to see half the hands go up.
You're right - Twitter used to generate FOMO amongst those not attending, plus make it easier for attendees to coordinate after-hours events. Both of those factors are diminished.
Twitter used to be a huge conference backchannel. But, as far as I can tell, that's largely gone, and neither Mastodon nor Bluesky has really recreated the Twitter of old. I have accounts on all three, but I barely look at them, and many others I know are the same.
I have a hard time getting excited about this when they have such an atrocious record of handling pull requests in VS Code already: https://github.com/microsoft/vscode/pulls
Look at the first comment in the PR; it will have a badge: "This user is a member of the Microsoft organization". Alternatively, look at the release notes on the website; any non-Microsoft contributions are listed at the bottom.
And you'll see there are only 63 authors, and >90% of the merged PRs are from Microsoft (which... fair, it's their product).
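If you'd rather compute that ratio than eyeball it, here's a rough sketch against the GitHub REST API. It samples only the most recent closed PRs and runs unauthenticated (low rate limit), so treat the output as an approximation; author_association == "MEMBER" marks members of the org that owns the repo:

    from collections import Counter
    import requests

    URL = "https://api.github.com/repos/microsoft/vscode/pulls"
    tally = Counter()

    for page in range(1, 6):  # sample the ~500 most recently closed PRs
        resp = requests.get(URL, params={
            "state": "closed", "per_page": 100, "page": page,
        })
        resp.raise_for_status()
        for pr in resp.json():
            if pr["merged_at"]:  # count only PRs that actually got merged
                tally[pr["author_association"]] += 1

    total = sum(tally.values())
    for assoc, count in tally.most_common():
        print(f"{assoc}: {count} ({count / total:.0%})")

Pass a personal access token in an Authorization header if you want to scan more pages without hitting the rate limit.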
I think the signal is strong enough that you can legitimately reach the same conclusion by mk 1 eyeball.
NOTE: I'm not criticising; it's a Microsoft-driven project and I am fine with that. They _do_ merge things from "random" contributors (yay), but they are overwhelmingly things that a) fix a bug while b) being less than 5 lines of code. If that is your desired contribution, then things will go well for you, and arguably they do well at accepting such. But they are very unlikely to accept a complex or interesting new feature from an outsider. All of this seems totally reasonable and expected.
I hate this analogy. Just because something is open source doesn't mean its maintainers are forced to merge or comment on every pull request; that takes development time. If that notion really bothers you, you are free to fork VSCode and close all 600 pull requests on your fork.
It's a common theme across most (all?) Microsoft "Open Source" repos. They publish the codebase on GitHub (which implies a certain thing on its own), but accept very little community input or contributions, if any.
These repos will usually have half a dozen or more Microsoft employees with "Project Manager" titles and the like: extremely "top heavy". All development, decision making, roadmap planning, and more are done behind closed doors. PRs go dormant for months or years... Issues get some sort of cursory "thanks for the input" response from a PM... then crickets.
I'm not arguing all open source needs to be a community and accept contributions. But let's be honest: this is deliberate on Microsoft's part. They want the "good vibes" of being open source friendly, but corporate Microsoft still isn't ready to embrace open source. I.e., it's fake open source.
I've looked at a bunch of the popular JS libraries I depend on, and they are all the same story: hundreds of open PRs. I think it's just difficult to review work from random people who may not be implementing changes the right way at all. Same with the project direction/roadmap; I'd say the majority of open source repos are like that. People will suggest ideas and direction all day, and you can't listen to everyone.
Not sure about VSCode, but for .NET 9 they claim: "There were over 26,000 contributions from over 9,000 community members!"
I've had a lot of PRs merged. If you don't create an issue first, or the issue already says the change doesn't suit their vision, then it won't get merged. It also helps to update the PR in November/December, even if there are no merge conflicts, as that's when they "clean up" and try to close as many as possible.
Summary of the video: 24 hours before the fatal midair collision between an Army helicopter and an American Airlines CRJ, a similar event was prevented by TCAS (traffic collision avoidance system) because the plane was above 1,000 feet; TCAS resolution advisories are inhibited below that altitude.