I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on their customers to the point that its services will become untenable for consumers and enterprise customers alike.
This is where he misunderstands. Enterprise companies will absolutely pay 10x the cost for Claude. Meta and Apple are two large customers; you think they won't pay $500 a month per employee? $1000 a month per employee? Neither of those is outrageous to imagine if it increases productivity by 10%.
I'm much more skeptical about this for two major reasons.
Firstly, a huge amount of the labour that LLMs can accelerate falls into the "bullshit jobs" category: you can make someone faster at writing emails, but the emails themselves don't really contribute much value. The majority of LLM use I see falls into this category. Many people can speed up parts of their job, but you can add as much efficiency as you want without actually impacting the bottom line -- and for various reasons that aren't tractable right now, even with LLMs, businesses can't bring themselves to remove these roles.
Secondly, the median company is incapable of doing anything that isn't driven entirely by hype or political promises made by executives. We still live in the universe where they'd rather let their staff attrition out over withheld raises, then end up paying the same amount for a bunch of folks with no knowledge of the business when they inevitably have to replace their best talent.
With all that said, I'm sure a few savvier places would happily drop $1000/month per head if the value is there, but in the average case I really think this would be more about marketing than any logic. People still buy Informatica in 2025 for much more money than they spend on LLMs.
You don't need to remove roles if it makes everyone 10% better at their job. You think companies like paying for Office 365? It's barely a value add compared to the free versions, but every company on the planet forks over money for it.
To clarify, I'm saying that a 10% boost to a mostly useless job still adds 0% value.
And secondly (maybe not an important point), the paid version of O365 is something even my team has been forced to buy. The free versions paywall very important features: the free Word can't delete section breaks!
Also, spend will drop dramatically if the models level out a bit more. Training is the compute-heavy part; if you aren't having to retrain every month, and can use things like Skills to stay competitive, your costs will drop a lot.
I suppose that's the pessimistic-on-AI side. On the other hand, once you create God little things like money are meaningless.
It's too easy to switch providers when there's a billion dollars a year at stake. If you're an Apple exec who sees that the company is spending $10k per employee per year, why wouldn't you start an initiative that cuts that spend to $1k per employee per year? Then you can go to the board, say you personally saved them all that money, and get a big promotion.
> So I feel that there are many people like me who are confused and kind of unsure on how to proceed.
Don't let AI write the code for you and send diffs when you're a newbie.
Use it to understand, to ask questions; use it like a better Stack Overflow/Google, but don't copy/paste chunks of code.
If you do have it generate more than a single line, mess with it, change it around, type it in but change the way it works, see if there are other method calls that would do what you're doing, see if you can refactor it.
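For example, say it hands you something like this (a toy Python sketch, every name here is made up):

    # What the AI generated: it works, but pasting it teaches you nothing.
    def get_active(users):
        result = []
        for u in users:
            if u["status"] == "active":
                result.append(u["email"])
        return result

    # Your rework: same behavior, but you typed it yourself, renamed
    # things, and swapped the loop for a comprehension you had to look up.
    def active_emails(users):
        return [u["email"] for u in users if u["status"] == "active"]

Neither version is "better" per se; the point is that rewriting it forces you to understand what it actually does.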
Basically, don't just get into a copy/paste loop. The same thing happened when Stack Overflow became big: you had a whole generation of code monkeys who could copy-paste something sorta working from Stack Overflow/Googling, but when something broke, they had no clue how to fix it.
Copy-pasting here (or having it send diffs) is the evil part, not the AI. AI can really help you learn new tech. Have it do code reviews, have it brainstorm ideas, have it even find the right APIs for you. Just don't copy-paste!
Also, you can ask the AI to review your code, and it won't give you grief like the Internet would. You can ask questions without the need for asbestos underwear.
Agree with both of the above. Two things I would add:
- Translate the problem you are trying to solve into the most generic terms possible, and then translate the AI response back into the problem you are trying to solve (see the sketch after this list). AI suggests the tools for the job; you decide (and understand) if and how they get used.
- Read the docs on whatever features it is suggesting, or use AI to help you understand the docs. Once you've learned syntax, the two "technical" parts of coding are algorithms and features, both of which are documented. AI is really good at reading docs (hence the natural language part of natural language processing). Use it to help you read the docs.
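To make the first point concrete, a made-up Python example: your actual problem is "flag customers who share a billing email", and the generic version you ask the AI about is "group items by a key and keep the groups with more than one item":

    from collections import defaultdict

    # Generic problem, as you'd phrase it to the AI: group items by a
    # key and keep only the groups that have more than one item.
    def groups_with_duplicates(items, key):
        groups = defaultdict(list)
        for item in items:
            groups[key(item)].append(item)
        return {k: v for k, v in groups.items() if len(v) > 1}

    # Translated back into your domain: customers sharing an email.
    customers = [
        {"name": "A", "email": "x@example.com"},
        {"name": "B", "email": "x@example.com"},
        {"name": "C", "email": "y@example.com"},
    ]
    print(groups_with_duplicates(customers, key=lambda c: c["email"]))

The generic framing gets you a reusable tool; the translation back is where you confirm you actually understood the answer.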
Dude, go back in time and try to use Bluetooth in 2003 and tell me things are worse. Try figuring out how to deploy code to multiple servers and how smooth that process was...
> We need a ton more electricians but it's nearly impossible to get into schools. Or you need union apprenticeships and there are not enough slots.
This got solved in tech 20 years ago. You don't look at credentials; instead, you design a very arduous interviewing process that selects for both high IQ and a willingness to study and work at it.
Then you provide lots of training. 20 years ago at Google (or Apple, etc.) there were tons of well-educated non-CS hires who were given good training and became exceptional software engineers.
Google spends a shitton on their employees, and a large majority of that is on training. Much of that is because what you know about computers from the outside world isn't useful at Google. At the lowest level, Google's computers are still the same as everyone else's, but because of all the layers of automation that have been built up around them, you need to learn all the Google systems for interacting with them, especially at Google scale. Some of that training is applicable elsewhere, but the trope in our industry (and I'm as guilty of this as the next Xoogler, try as I might not to be) is to say "well, at Google we did...", and for it to not be useful in the current job's context, because the current job isn't Google and doesn't have that kind of resources or culture.
Their "solution" rides on an unfathomably large tsunami of money. Which is great for them (and by extension, my bank account while I was there), but how do we accomplish that when there isn't one?
>but how do we accomplish that when there isn't one?
I worked for a small (~300 engineers), well-run org in Verizon before I went to Google, and I was surprised at how good their training was, even with smaller budgets, compensation a tier below, and less interviewing rigor.
This org supported being deliberate about hires: getting people who were experts and liked coaching/training, along with newbies with aptitude and desire. There was good documentation, good shared culture, and lots of safeguards like linting, excessive testing, and example projects, if not quite the full-fledged codelabs Google has.
But part of it, both for Verizon and Google, was signaling that "good people work here and are well rewarded" (comparatively).
Hire, then train them for a long period of time? That is an apprenticeship. It's what they do in the trades already. There aren't enough slots (union or not).
You need a diploma, a smattering of algebra, a driver's license, and the physical ability to do the work. Everything else you will be taught on the job, while being paid.
Do you like working? If you like working and solving problems, it sounds like you have some time to learn new skills and find a company that would value your contributions.
There are small companies/startups that would presumably trade your experience and a desire to not be overworked for a moderate salary.
You obviously can't act like you're looking for a 'chill' job in the interview, but you can be on the lookout for companies that are a bit more relaxed, and if you aren't trying to maximize your comp, they'll probably be OK if you don't put in 'startup hours'.
>I mean, where is the money hungry corporation in this story?
In the staffing and service provider companies the nonprofit funnels its money into. And let's not even mention the cost of medical devices and medicine.
There are almost certainly physicians groups operating out of that hospital that are for-profit and likely owned by PE. A large part of your bill is going to come from them.
I don't find it convincing that the tech isn't the moat. If the tech wasn't a moat, you'd see Microsoft spinning up its own competitor, you'd see Amazon, Apple, Meta, Oracle all have SoTA frontier models as well.
We don't see that; we see three established players in the frontier model space, and a lot of folks fighting in the second tier.
> If the tech wasn't a moat, you'd see Microsoft spinning up its own competitor, you'd see Amazon, Apple, Meta, Oracle all have SoTA frontier models as well.
Rather: these companies consider it a really bad business idea to spend many billions building a new state-of-the-art model that will be obsolete half a year later.
> Are you under the impression they aren't burning money trying to make their own foundational models?
Indeed, I think they burn money on that, but not as much as they would if they were putting all of their eggs into one basket, as the "AI companies" like OpenAI and Anthropic do (or at least do to a much greater degree).
There exist multiple (not mutually exclusive) explanations for that:
- Limiting the "money burn rate" on AI is a political compromise made between the various decision makers at these companies
- The companies hope that, in the end, even a non-state-of-the-art AI model might offer business opportunities
- Perhaps such a model might give you a better "bang per buck" ratio (cost of training, cost of running)
- These companies want to get experience with AI, so they currently burn a lot of money on it, but will pivot when their AI models have been out-competed
- Such a pivot could be going from "state-of-the-art models" to "models that are insanely cheap to run, while still being powerful"
- Perhaps the decision makers at the respective companies believe in the (not implausible) scenario that AI could, from a technical perspective, keep improving a lot, but that these models will get disproportionately expensive to run, i.e. in the near future AI models won't improve much anymore because no one will be able to pay for it. In such a case, having a slightly worse model is much less of a disadvantage.
> but not as much as they would if they were putting all of their eggs into one basket, as the "AI companies" like OpenAI and Anthropic do (or at least do to a much greater degree).
Microsoft invested over $15 billion in OpenAI.
You think they wouldn't have rather spent a fraction of that to have a SoTA model?
> Limiting the "money burn rate" on AI is a political compromise made between the various decision makers at these companies
Meta has no one to compromise with, as Zuck is the majority shareholder (voting-wise), and Apple has so much cash they don't know what to do with it (cars, VR, etc.).
This argument doesn't hold up for these cash rich top companies.
> The companies hope that, in the end, even a non-state-of-the-art AI model might offer business opportunities
I find it hard to believe that OpenAI, Anthropic, and Google all aren't optimizing for 'bang for buck' here.