
There's a reason that one of the big corporate skills books is StrengthsFinder - because fundamentally playing to your weaknesses isn't a good play; it's that you need to consistently challenge yourself to keep building whatever muscle you choose. You don't want to build strength by lifting 10,000 pounds all at once, but by increasing your load every day.

In most professions barely anyone is doing the continual education or paying attention to the "scene" for that profession; if you do that alone, you're probably already in the top 10%.


"A Specialist knows more and more about less and less until he knows absolutely everything about nothing.

A Generalist knows less and less about more and more until he knows absolutely nothing about everything."

Getting paid well for doing something you actually enjoy is key =3

https://stevelegler.com/2019/02/16/ikigai-a-four-circle-mode...


I would say quite the opposite - most businesses have little need for eventual consistency, and at a small scale it's not even a requirement for any database you would reasonably use; way more than 90% of companies don't need eventual consistency.


No. The real world is full of eventual consistency, and we simply operate around it. :-)

Think about a supermarket: If the store is open 24/7, prices change constantly, and some items still have the old price tag until shelves get refreshed. The system converges over time.

Or airlines: They must overbook, because if they wait for perfect certainty, planes fly half empty. They accept inconsistency and correct later with compensation.

Even banking works this way. All database books have the usual “you can’t debit twice, so you need transactions”…bullshit. But think of a money transfer across banks and possibly across countries? Not globally atomic...

What if you transfer money to an account that was closed an hour ago in another system? The transfer doesn’t instantly fail everywhere. It’s posted as credit/debit, then reconciliation runs later, and you eventually get a reversal.

Same with stock markets: Trades happen continuously, but final clearing and settlement occur after the fact.

And technically, DNS is eventually consistent by design. You update a record, but the world sees it gradually as caches expire. Yet the internet works.

Distributed systems aren’t broken when they’re eventually consistent. They’re mirroring how real systems work: commit locally, reconcile globally, compensate when needed.


These analogies (except for DNS, perhaps) aren't very illuminating on the difference between a CP system and an AP system in the CAP sense, though. In banking, there are multiple parties involved. Each of those parties is likely running a CP system for their transactions (almost guaranteed). Same with stock exchanges - you can look up Martin Thompson's work for a public glimpse of how these systems work (LMAX and Aeron are systems related to this).

These examples are closer to control loops, where a decision is made and then carried out or finalized later. This kind of "eventual consistency" is pervasive but also significantly easier to reason about than what people usually mean by that term when talking about a distributed database, for example.

To expand on the 24/7 grocery store example: if the database with prices is consistent, you will always know what the current price is supposed to be. If the database is eventually consistent, you may get inconsistent answers about the current price that have to be resolved in the code somehow. That's way harder to reason about than "the price changed, but the tag hasn't been hung yet". The first case, professional software engineers struggle to deal with correctly. The second case, anyone can understand.


None of the systems you describe are the 90% of businesses - grocery, airlines, banking, stock markets, DNS - they are all modeling huge systems with very active logistics compared to most businesses. I still don't agree with you at all.

Banks across countries - again, not a problem most businesses ever have to deal with.


> Even banking works this way. All database books have the usual “you can’t debit twice, so you need transactions”…bullshit. But think of a money transfer across banks and possibly across countries? Not globally atomic..

Banking is my "go to" analogy when it comes to eventual consistency because 1: we use banking in almost universally the same ways, and 2: we fully understand the eventual consistency it employs (even though we don't think about it).

Allow me to elaborate.

When I was younger we had "cheque books", which meant that I could write a cheque (or check, if you're American) and give it to someone in lieu of cash; they would take the cheque to the bank, and, after a period of time, their bank would deposit funds into their account and my bank would debit funds from mine - that delay is eventual consistency.

That /style/ of banking might be gone for some people, but the principle remains the same: this very second my bank account is showing me two "balances", the "current" balance and the "available" balance. Those two numbers are not equal, but they will /eventually/ be consistent.

The reason they are not consistent is that I have used my debit card, which is really a credit arrangement my bank has negotiated with Visa or Mastercard, etc. I have paid for some goods/services with my debit card, Visa has guaranteed the merchant that they will be paid (with some exceptions), and Visa has placed a hold on the balance of my account for the amount.

At some point - it might be overnight, it might be in a few days, there will be a reconciliation where actual money will be paid by my bank to Visa to settle the account, and Visa will pay the merchant's bank some money to settle the debt.

Once that reconciliation takes place to everyone's satisfaction, my account balances will be consistent.
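
To make the two balances concrete, here is a minimal SQL sketch (schema invented purely for illustration): the "current" balance is the sum of posted entries, while the "available" balance also subtracts the unsettled holds.

    -- Hypothetical sketch: a postings table of settled entries, plus a
    -- holds table of card authorizations that have not yet settled.
    SELECT
        (SELECT SUM(amount) FROM postings
         WHERE account_id = @accountId) AS current_balance,
        (SELECT SUM(amount) FROM postings
         WHERE account_id = @accountId)
        - (SELECT COALESCE(SUM(amount), 0) FROM holds
           WHERE account_id = @accountId AND settled = 0) AS available_balance;

When the overnight reconciliation settles a hold, the amount moves into postings and the two numbers converge.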


I have been working on payment systems and it seems that in almost all discussions about transactions, people talk about toy versions of bank transactions that have very little to do with what actually happens.

You don't even need to talk about credit cards to have multiple kinds of accounts (internal bank accounts for payment settlement etc.), multiple involved systems, batch processes, reconciliation etc. Having a single atomic database transaction is not realistic at all.

On the other hand, the toy transaction example might be useful for people to understand basic concepts of transactions.


I don't have a lot of payment experience, but AFAIK actual payment systems work in an append-only fashion, which makes concurrency management easier since you're just adding a new row with (timestamp, from, to, value, currency, status) or something similar. However, how can you efficiently check for overdrafts in this model? You'd have to periodically sum up transactions to find the sender's balance and compare it to a known threshold.

Is this how things are usually done in your business domain?


> how can you efficiently check for overdrafts in this model?

You already laid the groundwork for this to be done efficiently: "actual payment systems work in an append-only fashion"

If you can't alter the past, it's trivial to maintain your rolling sums to compare against. Each new transaction through the system only needs to mutate the source and destination balances of that individual transaction.

If you know everyone's balance as of 10 seconds ago, you don't need to consider any of the 5 million transactions that happened before 10 seconds ago.

(If your system allowed you to alter the past and edit arbitrary transactions in the past, you could never trust your rolling sums, and you'd be back to summing up everything for every operation.)
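
As a minimal SQL sketch (table names invented for illustration), the append and the two balance mutations can live in one small transaction:

    -- Append the new ledger row, then adjust exactly two cached balances;
    -- no re-summation of history is ever needed.
    BEGIN TRANSACTION;

    INSERT INTO ledger (ts, from_acct, to_acct, amount)
    VALUES (@ts, @from, @to, @amount);

    UPDATE balances SET balance = balance - @amount WHERE acct = @from;
    UPDATE balances SET balance = balance + @amount WHERE acct = @to;

    COMMIT;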


So you're saying each line records the new value of the source and destination balance, rather than just the sum that is being exchanged?


No.

At the beginning of time, all your accounts will have their starting value.

When the first transaction (from,to,value) happens, you will do one overdraft check, and if it's good, you will do 1 addition and 1 subtraction, and two of the accounts will have a new value.

On the millionth transaction, you will do one overdraft check, and if it's good, you will do 1 addition and 1 subtraction, and two of the accounts will have a new value.

At no point will you need to do more than one check & one add & one sub per arriving transaction.

(The append-only property is what allows this: the next state is only ever a single, cheap step from the current state. But if someone insists upon mutating history, the current state is no longer valid, because it no longer represents the history that led up to it, so it cannot be used to generate the next state - you need to throw it all away and regenerate the current/next states, starting from 0 and replaying every transaction again.)


Ok so basically you have a Transactions table as well as a separate Accounts table which stores balances, and every time Alice wishes to pay Bob, a (database) transaction appends an entry to the Transactions table and updates the balance in Accounts only if the sender's balance is ok? Something like an "INSERT INTO…SELECT"?
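
For instance, a guessed sketch (names invented):

    -- Append the transfer only if Alice can cover it, then update the
    -- cached balances, all in the same database transaction.
    BEGIN TRANSACTION;

    INSERT INTO Transactions (ts, from_acct, to_acct, amount)
    SELECT @ts, @alice, @bob, @amount
    FROM Accounts
    WHERE acct = @alice AND balance >= @amount;

    IF @@ROWCOUNT = 1
    BEGIN
        UPDATE Accounts SET balance = balance - @amount WHERE acct = @alice;
        UPDATE Accounts SET balance = balance + @amount WHERE acct = @bob;
    END
    -- (under concurrency this also needs appropriate locking/isolation)

    COMMIT;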


The rolling balance is a "projection".

Your bank statement has the event (a deposit or withdrawal) with its details, and to one side the bank will say: your balance after this event can be calculated to be $FOO.

The balance isn't a part of the event, it's a calculation based on the (cached) balance known from the previous event.

Further, your bank statements are (typically) for the calendar month, or whatever. They start with the balance brought forward from the previous statement (a snapshot).
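
As a sketch of that projection (schema invented again), the per-line balance is just a running sum over the events, seeded with the snapshot brought forward:

    -- The statement balance is derived from events, not stored on them.
    SELECT ts, amount,
           @broughtForward + SUM(amount) OVER (ORDER BY ts
               ROWS UNBOUNDED PRECEDING) AS running_balance
    FROM postings
    WHERE account_id = @accountId
      AND ts >= @statementStart AND ts < @statementEnd;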


> Is this how things are usually done in your business domain?

I don't know about "usually" and I cannot explain details. But many banks are migrating from batch-based mainframes to real-time systems. Maybe that answers your question about "efficiently".


And then they take that toy transaction model and think that they're on ACID when they're not.

Are you stepping out of SQL to write application logic? You probably broke ACID. Begin a transaction, read a value (n), do a calculation (n+1), write it back and commit: The DB cannot see that you did (+1). All it knows is that you're trying to write a 6. If someone else wrote a 6 or a 7 in the meantime, then your transaction may have 'meant' (+0) or (-1).
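
A minimal sketch of the difference:

    -- App-side read-modify-write: the DB only ever sees the final constant.
    --   n = SELECT counter FROM t WHERE id = 1;   -- app reads 5
    --   UPDATE t SET counter = 6 WHERE id = 1;    -- a concurrent +1 is silently lost

    -- Keeping the arithmetic inside SQL preserves the intent:
    UPDATE t SET counter = counter + 1 WHERE id = 1;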

Same problem when running at a reduced isolation level (which you probably are). If you do two reads in your 'transaction', the first read can be at state 1 and the second read can be at state 2.

I think more conversations about the single "fully consistent" db approach should start with it not being fit for purpose - even without considering that it can't address soft-modification (which you should recognise as a need immediately whenever someone brings up soft-delete) or two-generals (i.e. consistency with a partner - you and VISA don't live in the same MySQL instance, do you? Or to put it in moron words - partitions between your DB and VISA's DB "don't happen often" (they happen always!)).
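
One common shape for living with that (a sketch, not a prescription - names invented): record your side locally and atomically, and reconcile with the partner asynchronously.

    -- Outbox-style pattern: commit locally, settle with the card network later.
    BEGIN TRANSACTION;
    INSERT INTO orders (id, total) VALUES (@orderId, @total);
    INSERT INTO pending_charges (order_id, amount, status)
    VALUES (@orderId, @total, 'PENDING');   -- same local transaction
    COMMIT;

    -- A separate worker calls the network and records the outcome:
    UPDATE pending_charges
    SET status = 'SETTLED'   -- or 'REVERSED' after reconciliation
    WHERE order_id = @orderId;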


RE: "All it knows is that you're trying to write a 6. If someone else wrote a 6 or a 7 in the meantime, then your transaction may have 'meant' (+0) or (-1)."

This is not how it works at all. This is called a dirty write, and it is prevented by default by ACID-compliant databases, no matter the isolation level. The second transaction's commit will be rejected by the transaction manager.

Starting the transaction from your application does not change this either.



Postgres, as an example, is ACID compliant if you want it to be. All those databases that offer full serializability still use RC by default, which is enough to prevent dirty writes, and that was my original point.

Thanks for the link still, it was valuable!


I have no problem with ACID the concept. It's a great ideal to strive towards. I'm sure your favourite RDBMS does a fine job of it. If you send it a single SQL string, it will probably behave well no matter how many other callers are sending it SQL strings (as long as the statements are grouped appropriately with BEGIN/COMMIT).

I'm just pointing out two ways in which you can make your system non-ACID.

1) Leave it on the default isolation level (READ_COMMITTED):

You have ten accounts, which sum to $100. You know your code cannot create or destroy money, only move it around. If no other thread is currently moving money, you will always see it sum to $100. However, if another thread moves money (e.g. from account 9 to account 1) while your summation is in progress, you will undercount the money. Perfectly legal in READ_COMMITTED. You made a clean read of account 1, kept going, and by the time you reach account 9, you READ_ what the other thread _COMMITTED. Nothing dirty about it, you under-reported money for no other reason than your transactions being less-than-Isolated. You can then take that SUM and cleanly write it elsewhere. Not dirty, just wrong.
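
(The fix, for what it's worth, is to run the summation under an isolation level that gives a stable snapshot - a sketch, with syntax varying by RDBMS:)

    -- SQL Server (requires ALLOW_SNAPSHOT_ISOLATION on the database);
    -- Postgres gets a similar per-transaction snapshot from REPEATABLE READ.
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    SELECT SUM(Balance) FROM MyAccounts;  -- one point in time: always $100
    COMMIT;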

2) Use an ORM like EF/LINQ. (Assume FULL ISOLATION - even though you probably don't have it)

If you were to withdraw money from the largest account, split it into two parts, and deposit it into two random accounts, you could do it ACID-compliantly with this SQL snippet:

    DECLARE @bigBalance money, @part1 money, @part2 money;
    SELECT @bigBalance = Max(Balance) FROM MyAccounts;
    SELECT @part1 = @bigBalance / 2;
    SELECT @part2 = @bigBalance - @part1;
    ..
    -- Only showing one of the deposits for brevity
    UPDATE MyAccounts
    SET Balance = Balance + @part1
    WHERE Id IN (
        SELECT TOP 1 Id
        FROM MyAccounts
        ORDER BY NewId()
    );
Under a single thread it will preserve money. Under multiple threads it will preserve money (as long as BEGIN and COMMIT are included ofc.). Perfectly ACID. But who wants to write SQL? Here's a snippet from the equivalent C#/EF/LINQ program:

    // Split the balance in two
    var onePart = maxAccount.Balance / 2;
    var otherPart = maxAccount.Balance - onePart;

    // Move one half
    maxAccount.Balance -= onePart;
    recipient1.Balance += onePart;

    // Move the other half
    maxAccount.Balance -= otherPart;
    recipient2.Balance += otherPart;
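
    // (The ORM's SaveChanges call, elided here, is what actually emits the
    // UPDATE statements - by then each value is just a precomputed constant.)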
Now the RDBMS couldn't manage this transactionally even if it wanted to. By the final lines, 'otherPart' is no longer "half of the balance of the biggest account", it's a number like 1144 or 1845. The RDBMS thinks it's just writing a constant and can't connect it back to its READ site:

    info: 1/31/2026 17:30:57.906 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command) 
        Executed DbCommand (7ms) [Parameters=[@p1='a49f1b75-4510-4375-35f5-08de60e61cdd', @p0='1845'], CommandType='Text', CommandTimeout='30']
        SET NOCOUNT ON;
        UPDATE [MyAccounts] SET [Balance] = @p0
        WHERE [Id] = @p1;
        SELECT @@ROWCOUNT;


Regarding example 1: let's be clear about what we are doing.

If you are running in RC isolation and perform a select sum() from table, you are reading values committed by other threads BEFORE the select statement began (at least in MVCC databases like Postgres, where each statement sees a consistent snapshot); you are not picking up other threads' commits mid-select, and you are not breaking ACID.

If you are suggesting that running a simple BEGIN; select sum() from table; COMMIT breaks ACID at the default RC level, you are wrong, and you would do best to avoid commenting on isolation levels in RDBMSs online, so as not to confuse people further.

If you are, however, suggesting that we are breaking ACID by doing app-side stupidity such as:

    value1 = (BEGIN; SELECT value FROM table WHERE id = 1; COMMIT)
    value2 = ...
    ...
    sum = value1 + value2 + ... + value10

Then yes, obviously it's not ACID, but nobody in their right mind should be doing that. Even juniors quickly learn that this is incorrect code.

If you are suggesting we do repeatable reads in RC, then yes, it's obviously not ACID, but your example does not mention repeated summations, only a single one.


The point is to show people who don't realise it that they have been dealing with eventual consistency all along - that it's right there in their lives, and they already understand it.

You're right that I went into too much detail (maybe I got carried away with the HN audience :-) and you are right that multiple accounts are something people generally already understand, and they demonstrate further eventual-consistency principles.


I wasn't criticizing you, just making the point that when people talk about toy-example bank transactions, they usually just want to introduce the basic understanding. And I think that's OK, but I would prefer that they also mention that the REAL operations are complex.

I modified my comment above: by multiple types of accounts I meant that banks have various accounts for settlement with other banks etc., even in the common payment case.


My bad, I didn't mean to sound too upset, but I do get a bit "trigger happy" from time to time.


No, this is confusing how the financial institutions operate as a business with how the data store that backs those institutions operates as a technology.

You can certainly operate your financial system with a double entry register and delayed reconciliation due to the use of credit and the nature of various forms of promissory notes, but you're going to want the data store behind the scenes to be fully consistent with recording those transactions regardless of how long they might take to reconcile. If you don't know that your register is consistent, what are you even reconciling against?

What you're arguing is akin to arguing that because computers store data in volatile RAM and that data will often differ from what is on disk, that you shouldn't have to worry about file system consistency or the integrity of private address spaces. After all, they aren't going to match anyways.


No.

I clearly state:

> Banking is my "go to" analogy when it comes to eventual consistency because 1: we use banking in almost universally the same ways, and 2: we fully understand the eventual consistency it employs (even though we don't think about it)

The point is, you understand that your bank account is eventually consistent, and I have given examples of eventual consistency that you already know and understand.

You make the mistake of thinking about something else (the back-end storage, the double-entry bookkeeping).


Depends on when you stop calculating, and how exactly you value the work.

By 1900 the United States had 215 thousand miles of railroads: https://www.loc.gov/classroom-materials/united-states-histor...

Depending on how you value land, mileage, and work, this could easily be north of $1T in modern dollars.


Land value underneath railroad tracks is an interesting subject. Most land value is reasonably calculated by width * length, and maybe some airspace rights. And that makes sense to our human brains, because we can look at a parcel of land and acknowledge it might be worth $10^x for some x given inflation.

But railroads kind of fail with this because you might have a landowner who prices the edge of their parcel at $1,000,000,000,000 because they know you need that exact piece of land for your railroad, and if the railroad is super long you might run into 10 of these maniacs.

Meanwhile the vast majority of your line might be worth less than any adjacent farmland, square foot by square foot, especially if it’s rocky or unstable etc.

Having a continuous line of land for many miles also has its own intrinsic value, much more than owning any particular segment (especially as it allows you to build a railroad hah).

Anyway, suffice to say, I don’t think “land value underneath railroads from the 19th century” is something that’s easily estimated.


Don't forget -

layer 0 - how you stored the data was wrong.

layer -1 - your understanding of modeling the behavior was wrong before you ever created a table.

layer -2 - your fundamental business process was wrong and all your information is lies.

This is why instead of a central source of truth I call it the central source of lies.


Jobs fucked over a lot of people and respected the machines. Woz dealt with the machines and respected the people.


>Jobs fucked over a lot of people

Oft repeated, and not untrue, but very incomplete.

Jobs also made a lot of people. A lot of fortunes in SilVal only exist because of Steve Jobs.

He also virtually single-handedly, and without much fanfare at the time or credit in the history books, created the employee compensation model that came to define SilVal success, with workaday employees and especially engineering contributors receiving stock options to reward them and keep them invested in the company's success.


I don't disagree with what you say, but I have literally never seen or heard "SilVal". Is this a common shorthand? I hear "the Valley" and see "SV" but never this halfcronym.


You are correct that Jobs made a ton of people - and not just wealthy; he created an entire ecosystem around Apple, which made a large number of people vast sums of money.

That last part, however, is not actually true - Fairchild Semiconductor did it, and did it long before Apple did. I'd also point to Intel (and a ton of others) doing the same thing.


Sure, but he was cruel for no reason to many people who did not deserve it; I don't even care about his tech problems. Nobody should park in the handicapped stalls, or drive without a license plate because he keeps leasing new cars.


No - the ones on broadcast television news, where they go scene by scene breaking down the claims of Alex being at fault, showing them to be the bogus lies that you are now repeating.


Many people on Hacker News have a reason to care about the United States government's position on Signal and its evolving efforts relating to civil rights.


You mean people like this - The COVID vaccine “has been proven to have negative efficacy.”

https://www.politifact.com/factchecks/2023/jun/07/ron-johnso...

This is called disinformation that will get you killed, so yeah, probably not good to have on YouTube.

- After saying he was attacked for claiming that natural immunity from infection would be "stronger" than the vaccine, Johnson threw in a new argument. The vaccine "has been proven to have negative efficacy," he said. -


Unfortunately it's not disinformation; it's going to take a while for people to discover how many things they were lied to about.


https://www.wpr.org/health/health-experts-officials-slam-ron...

Extraordinary claims require extraordinary evidence, not just BS posted on Rumble.


Wrong: https://www.americanhistory.si.edu/explore/exhibitions/ameri...

There have been many ways to stop you from voting: contesting your vote, calling your registration into question, instituting tests that are impossible to pass in order to "validate" that you are intelligent enough to vote, etc.

Spend some time educating yourself on how voter suppression has worked historically and you won't sound so ignorant.


While a required literacy test may be a form of voter suppression, it is not "harassment", which is what we are discussing.


To be clear, no, it is not, because of the opportunity cost of all the other slop. That's what this is all about.


Then no bug reports and no fixes. Sounds good enough.


Of course there are still bug reports and fixes without financial compensation. The proof is all of open-source, including cURL.


They'll still get bug reports and fixes from people who actually give a shit and aren't just trying to get some quick money.

