Fully disagree. OpenAI has 800 million active users and has effectively democratized cutting-edge AI for an amazing number of people everywhere. It took the Internet, and the mobile Internet, much longer to have that kind of impact.
And it's up to a $1bn+ monthly revenue run rate, with no ads turned on. It's the first major consumer tech brand to launch since Facebook. It's an incredible business.
I'd propose oAI is the first one likely to enter the ranks of Apple, Google, and Facebook, though that's just a guess. FWIW, they are already at 3x Uber's MAU.
Spotify goes back and forth between barely profitable and losing money every quarter. They have to give roughly 70% of their revenue to the record labels, and that doesn't count operating expenses.
As Jobs said about Dropbox, music streaming is a feature, not a product.
So so so happy about the "no ads" part, and I really hope there's a paid option to keep it ad-free forever. And hopefully the paid subscriptions keep the ads off the free plans for those who aren't privileged enough to pay for it.
My hot take is that it will probably follow the Netflix model of pricing once the VC money wants to turn on the profit switch.
Originally Netflix was a single tier at $9.99 with no ads. As ZIRP ended and investors told Netflix its VC-like honeymoon period was over, ads were introduced at $6.99, the basic no-ads tier went to $15.99, and Premium went to $19.99.
Currently Netflix ad-supported is $7.99, ad-free is $17.99, and Premium is $24.99.
Mapping that onto OpenAI pricing: ChatGPT will be ~$17.99 for ad-supported, ~$49.99 for ad-free, and ~$599 for Pro.
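For what it's worth, here's a toy version of that mapping. The Netflix prices are from above; the ChatGPT base price is pure guesswork, and the $599 Pro guess deliberately sits outside these tier ratios (Pro is its own thing):

```python
# Toy mapping of Netflix's tier ratios onto a guessed ChatGPT base price.
# All inputs are assumptions from the comment above, not real pricing.
netflix = {"ads": 7.99, "ad_free": 17.99, "premium": 24.99}
chatgpt_base = 17.99  # hypothetical ad-supported ChatGPT price

for tier, price in netflix.items():
    ratio = price / netflix["ads"]  # premium of each tier over the ad tier
    print(f"{tier}: {ratio:.2f}x -> ~${chatgpt_base * ratio:.2f}")
# ads: 1.00x -> ~$17.99; ad_free: 2.25x -> ~$40.51; premium: 3.13x -> ~$56.27
```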
Netflix has lots of submarine (product placement) ads that you get even on ad-free plans. I expect OpenAI to follow that model too, except it'll be much worse.
I don't completely agree. Brand value is huge. Product culture matters.
But say you're correct, and follow the reasoning from there: posit "All frontier model companies are in a Red Queen's race."
If it's a true Red Queen's race, then some firms (those with the worst capital structures/costs) will drop out. The remaining firms will trend toward ~10% net income, just above cost of capital, basically.
Do you think inference demand and spend will stay stable, or grow? Raw profits could increase from here: if inference demand grows 8x while margins fall from 80% to 10%, oAI would keep making $10bn or so a year in FCF at current spend; they'd decide whether to put that into R&D, just enjoy it, or acquire smaller competitors.
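A quick sanity check of that 8x arithmetic (the revenue figure is a placeholder I made up so the 80%-margin case comes out to $10bn; nothing here is oAI's actual financials):

```python
# Demand growth offsetting margin compression, with invented numbers.
revenue_today = 12.5e9                 # hypothetical annual inference revenue
profit_today = revenue_today * 0.80    # 80% margin -> $10bn

revenue_future = revenue_today * 8     # demand grows 8x
profit_future = revenue_future * 0.10  # commodity 10% margin -> still $10bn

print(profit_today / 1e9, profit_future / 1e9)  # 10.0 10.0
```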
Things you'd have to believe for it to be a true Red Queen's race:
* There is no liftoff - AGI and ASI will not happen; instead we'll just incrementally get logarithmically better.
* There is no efficiency edge possible for R&D teams to create/discover that would make for a training / inference breakaway in terms of economics
* All product delivery will become truly commoditized, and customers will not care what brand AI they are delivered
* The world's inference demand will not be a case of Jevons paradox as competition and innovation drive inference costs down, and therefore we are close to peak inference demand.
Anyway, based on my answers to the above, oAI seems like a nice bet, and I'd make it if I could. Even the most "inference doomerish" scenario (capital markets dry up, inference demand stabilizes, R&D progress stops) still leaves oAI in a very, very good position in the US, in my opinion.
The moat, imo, is mostly the tooling on top of the model. ChatGPT's thinking and deep research modes are still superior to the competition. But as the models themselves get more and more efficient to run, you won't necessarily need to rent them, or rent a data center to run them. Alibaba's Qwen mixture-of-experts models are living proof that you can get GPT levels of raw inference on a gaming computer right now. How are these AI firms going to adapt once someone can run ~90% of raw OpenAI capability on a quad-core laptop at 250-300 watts max?
I think one answer is that they'll have moved farther up the chain; agent training is this year, agent-managing-agents training is next year. The bottom of the chain inference could be Qwen or whatever for certain tasks, but you're going to have a hard and delayed time getting the open models to manage this stuff.
Futures like that are why Anthropic and oAI put out stats like how long the agents can code unattended. The dream is "infinite time".
Huge brand moat. Consumers around the world equate AI with ChatGPT. That kind of recognition is an extremely difficult thing to pull off, and also hard to unseat as long as they play their cards right.
"Brand moat" is not an actual economic concept. Moats indicate how easy/hard it is to switch to a competitor. If OpenAI does something user-adversarial, it takes two seconds to switch to Anthropic/Gemini (the exception being Enterprise contracts/lock-in, which is exactly why AI companies prioritize that). The entire reason that there are race-to-the-bottom price wars among LLM companies is that it's trivial for most people to switch to whatever's cheapest.
Brand loyalty and users not having sufficient incentive by default to switch to a competitor is something else. OpenAI has lost a lot of money to ensure no such incentive forms.
Moats, as noted in Google's "We Have No Moat, and Neither Does OpenAI" memo that made the discussion of moats relevant in AI circles, have a specific economic definition.
Trivial switching costs only make sense to talk about for fully online businesses. The "switching cost" for McDonald's depends heavily on whether there's a Burger King nearby. If there isn't, then your "switching cost" might be a 30-minute drive, which is very much a moat.
That's not entirely true. They have an 'infinite' product moat - no one can reproduce a Big Mac. Essentially every AI model is now 'the same' (cue debate on this). The only way they can build a moat is by adding features beyond the model that lock people in.
The concept of 'moat' comes out of marketing - it was a marketing concept for decades before Warren Buffett coined the term "economic moat." Brand moat has been part of marketing for years and is a fully recognized and researched concept. It's even been studied with fMRIs.
You may not see it, but OpenAI’s brand has value. To a large portion of the less technical world, ChatGPT is AI.
Nokia's global market share was ~50% in smartphones back in 2007. Remember that?
Comparing "brand moat" for a real-world restaurant vs. online services, where there's no actual barrier to changing service, is silly. Doubly silly when they're free users, so they're not even customers. (And then there are also end-users when OpenAI is bundled or embedded, e.g. in dating/chatbot services.)
McDonald's has lock-in and inertia through its franchisees occupying key real-estate locations, media and film tie-ins, promotions, etc. Those are physical moats, way beyond a conceptual "brand moat" (though I can't check how Hamilton Wright Helmer's book characterizes those).
I wouldn’t necessarily say so.
I guess that's why they're trying to "pulse" people and "learn" from you instead of just providing decent, unbiased answers.
In Europe, most companies and governments are pushing for either Mistral or open-source models.
Most devs - who, if I understand correctly, are pretty much the only customers willing to pay $100+ a month - will switch in a matter of minutes if a better model comes along.
And they lose money on pretty much all usage.
To me, a company like Anthropic, which mostly focuses on a target audience and does research on bias, equity, and such (very leading research, but still), has a much better moat.
It has 20m paid users and ~ 780m free users. The free users are not at all sticky and can and will bounce to a competitor. (What % of free users converted to paid in 2025? vs bounced?) That is not a moat. The 20m paid users in 2025 is up from 15.5m in 2024.
Forget about the free tier users, they'll disappear. All this jousting about numbers on the free tier sounds reminiscent of Sun Microsystems chirpily quoting "millions and billions of installed base" back in the Java wars, and even including embedded 8-bit controllers.
For people saying OpenAI could get to $100bn revenue: that would need 20m paid users x $5,000/yr (roughly double the current $200/mth Pro tier), and it looks like they must be discounting even that currently. And that was before Anthropic undercut them on price. Or other competitors did.
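The arithmetic, spelled out (the target and user count are the hypotheticals from the comment above, not any company's guidance):

```python
# Back-of-the-envelope math behind the $100bn claim.
target_revenue = 100e9
paid_users = 20e6
per_user_year = target_revenue / paid_users  # $5,000 per paid user per year
pro_tier_year = 200 * 12                     # $2,400/year at the $200/mo Pro tier

print(per_user_year, per_user_year / pro_tier_year)  # 5000.0, ~2.08x Pro
```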
Free users are users. Google search is free, though ad-monetized. And there's nothing stopping OpenAI from monetizing free users with ads - in fact, they plan to.
>The free users are not at all sticky and can and will bounce to a competitor.
If you really believe this, that just shows how poor your understanding of the consumer LLM space is.
As it is, ChatGPT (the app) spends most of its compute on non-work messages (approx 1.9B per day vs 716M for work) [0]. Between ongoing conversations that users return to and the surfacing of specific past chat memories, these conversations have become increasingly personalized. Suddenly there is a lot of personal data that you rely on it having, which makes the product better. You cannot just plop over to Gemini and replicate this.
- for code-generation, OpenAI was overtaken by Anthropic
- your comment about lock-in for existing users only applies historically to existing users.
- Sora 2 is a major pivot that signals which segment OpenAI is/isn't targeting next: Scott Galloway was saying today it's not intended for the 99% of casual users, who are content consumers - only for content creators and studios.
> - for code-generation, OpenAI was overtaken by Anthropic
And that's nice for them.
> - your comment about lock-in for existing users only applies historically to existing users.
ChatGPT is the brand name for consumer LLM apps. They are getting the majority of new subscribers as well. Their competitors, Claude and Gemini, are nowhere near. chatgpt.com is the 5th most visited site on the planet.
Perhaps for as long as the base tier remains free and ad-free, they keep burning $8+ billion/year, and they can continue funding that with circular trades with public companies, such as this week's Nvidia and AMD deals.
You're aware they already announced they'll add ads in 2026.
And the circular trades are already rattling public markets.
How do they monetize users on the base tier, to any extent? By adding e-commerce? And once they add ads how do they avoid that compromising the integrity of the product?
>How do they monetize users on the base tier, to any extent? By adding e-commerce? And once they add ads how do they avoid that compromising the integrity of the product?
Netflix introduced ads and it quickly became their most popular tier. The vast majority of people don't care about ads unless it's really obnoxious.
You mean it's thanks to the incredible invention known as the Internet that they were able to "democratize cutting-edge AI to an amazing number of people"
OpenAI didn't build the delivery system; they built a chat app.
They changed the video game Dota 2 permanently. Their bots couldn't control a shared unit (the courier) among themselves, so bot matches against their AI had special rules, like every player having their own. Not long after, the game itself was changed that way forever.
As a player of over 20 years, this will be my core memory of OpenAI. Along with them not living up to the name.
Apple has physical stores that will provide you timely, top-notch customer service. While not perfect, their mobile App Store is the best available in terms of curation and quality. Their hardware lineup is not so diverse, so it's stable for long-term use. And they have mindshare in a way that is hard to move off of.
Let's say Google or Anthropic release a new model that is significantly cheaper and/or smarter than an OpenAI one; nobody would stick with OpenAI. There is nearly zero cost to switching, and it is a commodity product.
Their API product is easy to switch away from, but their consumer product (by far the biggest part of their revenue) has much better market share and brand recognition than the others. I've never heard anyone outside of tech mention Gemini or Copilot or xAI, while they all know ChatGPT.
Anecdata, but even in work environments I mostly hear complaints about having to use Copilot due to policy, and preferences for ChatGPT. Which still means Copilot is in a better place than Gemini, because as far as I can tell absolutely nobody even talks about that one, let alone uses it.
There is only a zero cost to switching if a company is so perfectly run that everyone involved comes to the same conclusion at the same time, there are no meetings and no egos.
The human side is impossible to cost ahead of time because it’s unpredictable and when it goes bad, it goes very bad. It’s kind of like pork - you’ll likely be okay but if you’re not, you’re going to have a shitty time.
Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one; nobody would stick with Apple. There is nearly zero cost to switching and it is a commodity product.
The AI market, much like the phone market, is not a winner take all. There's plenty of room for multiple $100B/$T companies to "win" together.
> Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one; nobody would stick with Apple.
This is not at all how the consumer phone market works. Price and "smarts" are not the only factors that go into phone decisions. There are ecosystem factors and messaging networks that add significant friction to switching. The deeper you are into one ecosystem, the harder it is to switch.
e.g. I am on iPhone and the rest of my family is on Android. The group chat experience is significantly degraded, my videos look like 2003 flip phone videos. Versus my iPhone using friends everything is high resolution.
> Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one; nobody would stick with Apple.
I don't think this is true over the short to mid term. Apple is a status symbol to the point that Android users are bullied over it in schools and on dating apps. It would take years to reverse that perception.
You’re aware that LLMs all have persistent memory now and personalize themselves to you over time right? You can’t transfer that from OAI to Anthropic.
Idk, it's a company with $4.5B in revenue in H1 2025.
Those aren't insane numbers, but they're not bad either. YouTube didn't hit those revenues until around 2018, 13 years after launching.
There's definitely a huge upside potential in openai. Of course they are burning money at crazy rates, but it's not that strange to see why investors are pouring money into it.
It is when no one knows they're dollar bills. Obviously, I take those dollar bills to the bank and make $0.95. Easy money. But how about when it's not a dollar bill, but a conversation with a robot? And that robot lies and kinda sucks? Why would anyone even pay nickels to see that show? Haven't they heard there's free porn on the Internet they can watch? Free! If people are getting $20 or even $200/month's worth of something out of a robot that's kinda dumb and lies, to the tune of $4.5B in 6 months ($750 million/month), then maybe our prior - that this robot is dumb and kinda sucks - isn't quite right, even if it does lie occasionally?
Okay, but is it so hard to consider the point: who thinks they can 5/10/20x their current revenues without seeing similar ballooning in costs long term?
It's an insane number considering how little they monetize it. Free users aren't even seeing ads right now and they already have $4.5B in revenue. I think $100B by 2029 is a very conservative number.
Sure but they're not selling ads because the lack of ads is the unique selling point of the consumer product. It's a loss leader to build the brand for the b2b / gov stuff.
If they junk up the consumer experience too much, users can just switch to Google, which is obviously the behemoth in the ad space.
Obviously there's money to be made there but they have no moat - I feel like despite the first mover advantage their position is tenuous and ads risk disrupting that edge with consumers.
Some of us remember when Google was just an upstart and anyone who wanted to do anything serious used AltaVista or MetaCrawler. Even the behemoths can get taken out.
I'm in awe they are still allowing free users at all. And I'm one of them. The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.
It's not that they are "allowing free users at all" - they are expanding their free offerings. Last year I paid $20/month for ChatGPT. This year I haven't paid anything, though my usage has only increased.
I've used them all, and they all have their place I guess.
ChatGPT is far and away my favorite for quick questions you'd ask the genius coworker next to you. For me, nothing else even comes close wrt speed and accuracy. So for that, I'd gladly pay.
Don't get me wrong, Claude is a marvel and DeepSeek punches above its weight, but neither compares on stuff like 'write me a SQL query that does these 30 things as efficiently as possible.' ChatGPT will output an answer with explanations for each line by the time Claude inevitably times out... again.
...not monetized yet: I can't find the post, but a previous HN post linked to an article showing that OpenAI had hired someone from Meta's ads leadership - so I took that to mean it's a matter of time.
It is hard though. Getting people to hand $4.5B to a company is difficult no matter how much money you are losing in the process.
I mean sure, you can get there instantly if you say "click here to buy $100 for $50", but that's not what's happening here - at least not that blatantly.
I get what you're saying, and it's especially interesting if revenue grows faster than costs, but for private entities it's harder to tell what the actual dynamics are. We don't really have the breakdown of the revenues, do we?
Even if that's the case, they have eaten multiple times that amount of other companies' lunch. Companies that currently run ads, whereas ChatGPT does not (but will).
As a reminder, even Apple didn't hit a $1T market cap until August 2018. We didn't get a second member of the 4-comma club until April 2019 with MSFT. Google joined in 2020 and Facebook in 2021.
And now we have 4 companies above 3T and 11 in the 4 comma club. Back when the iPhone was released oil companies were at the top and they were barely hitting 500B.
So yeah, I don't think anyone has really been displaced. Nvidia at the top, Broadcom at 7, and TSMC at 9 indicate that displacement might occur, but that's also not the displacement people are talking about.
I don't entirely know what to make of a very small number of companies' valuations going sky-high that fast (a few completely without any apparent connection to the fundamentals, or even to the best-plausible-case mid-term future of those fundamentals, like Tesla), but I can't help thinking it means something is extremely broken in the economy, and it's not going to end well.
Maybe we all should have been a little more proactively freaked out when dividends went from standard to all but extinct and nobody in the investor class seemed to mind... it seems the balance between "owning things that directly make money through productive activity" and "owning things that I expect to go up in value" has gotten completely out of whack in favor of the latter.
My guess? Hype. All the companies at the top have a lot of hype. I don't think that explains everything, but I believe it is an important factor. I also think with tech we've really entered a Lemon Market. It is very difficult to tell the quality of products prior to purchase. This is even more true with the loss of physical retail. I actually really miss stores like Sharper Image. Not because I want to buy the over priced stuff, but because you would go into those stores and try things.
I definitely think the economy has shifted and we can see it in our sector. The old deal used to be that we could make good products and good profits. But now "the customer" is not the person that buys the product, it is the shareholder. Shareholder profits should reflect what the market value of the product being sold to customers, but it doesn't have to. So if we're just trying to maximize profits then I think there is no surprise when these things start to diverge.
Ed... I wrote him a long note about how wrong his analysis of oAI was earlier this year. He wrote back and said "LOL, too long." I was like, "Sir, have you read your posts? Here's a short version of why you're wrong." (In brief: if you depreciate models over even, say, 12 months, they are already profitable. Given they still offer 3.5, three years is probably a fairer depreciation schedule. On those terms, they're super profitable.)
> I wrote him a long note about how wrong his analysis on oAI was earlier this year.
Why don't you consider posting it on HN, either as a response in this thread or as its own post? There's clearly interest in teasing out how much of OAI's unprecedented valuation is hype and/or justified.
The depreciation schedule doesn't affect long term profitability. It just shifts the profits/loss in time. It's a tool to make it appear like you paid for something while it's generating revenue. Any company would look really profitable for a while if it chose long enough depreciation schedules (e.g. 1000 years), but that's just deferring losses until later.
No, it would in fact be appropriate to match the costs of the model training (incurred over a few months) with the lifetime of its revenue. That's not some weird shifting - it helps you understand where the business is at. In this case, on a per-model basis: very profitable.
> it would in fact be appropriate to match the costs of the model training with the lifetime of its revenue
You're right. But this also doesn't mean singron is wrong.
Think about their example. If the depreciation is long-lived, then you are still paying those costs. You can't just ignore them.
The problem with your original comment is that it is too simple a model. You also read singron's comment as a competing model instead of as "your model needs to account for X".
You're right that it provides clues that the business might be more profitable in the future than naïve analysis would suggest, but you also need to be careful in how you generalize from that additional information.
I think we are, like you suggest, just talking past each other. Depreciation is supposed to conceptually be tied to the useful life of an asset; a 1000 year depreciation schedule might be reasonable for say a Roman bridge.
When we talk accrual basis profits we are trying as best we can to match the revenues and expenses even if they occur at different points in the useful life of the asset.
Almost zero kibitzers or journalists take this accrual mindset into account when they use the word profit - but that’s what profit is, excess revenue applied against a certain period’s fairly allocated expense.
What they generally mean is cashflow; oAI has negative cashflow and is likely to for quite a while. No argument there. I think it’s worth disambiguating these for people though because career and investment decisions in our industry depend on understanding the business mechanics as well as the financial ones. Right now financially simplistic hot takes seem to get a lot of upvotes. I worry this is harming younger engineers and founders.
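To make the accrual-vs-cash distinction concrete, here's a toy sketch with entirely invented numbers (the 80% inference margin mirrors the claim upthread; none of this is oAI's actual financials):

```python
# Toy accrual-vs-cash view of a single model's economics.
# Every figure is made up; only the accounting shape matters.
training_cost = 3.0e9      # one-time training spend, paid up front
useful_life_q = 12         # assume a 3-year (12-quarter) useful life
revenue_q = 1.0e9          # quarterly inference revenue
serving_cost_q = 0.2e9     # quarterly serving cost (~80% margin)

amortization_q = training_cost / useful_life_q            # $0.25bn/quarter
accrual_profit_q = revenue_q - serving_cost_q - amortization_q
cash_flow_launch_q = revenue_q - serving_cost_q - training_cost

print(f"accrual profit per quarter: {accrual_profit_q / 1e9:+.2f} bn")    # +0.55 bn
print(f"cash flow in launch quarter: {cash_flow_launch_q / 1e9:+.2f} bn") # -2.20 bn
```

Same model, same money; the accrual view spreads the training cost over its useful life, the cash view piles it into one quarter.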
The problem with this "depreciation" rationale is that it presumes all the cost is in training, ignoring that actually serving the models is also very expensive. I certainly don't believe they would be profitable, and vague gestures at some hypothetical depreciation schedule sound like accounting shenanigans.
Also, the whole LLM industry is mostly selling hype about a possible future where it is vastly more capable than it currently is. It's unclear whether it would still generate as much revenue without that promise.
Your brief doesn't make sense, maybe you need to expand?
They're only offering 3.5 for legacy reasons: pre-Deepseek, 3.5 did legitimately have some things that open source hadn't caught up on (like world knowledge, even as an old model), but that's done.
Now the wins come from relatively cheap post-training, and a random Chinese food delivery company can spit out a 500B-parameter LLM that beats what OpenAI released a year ago, for free, with an MIT license.
Also, as you release models you're enabling both distillation of your own models and more efficient creation of new ones (as the capabilities of the LLMs themselves are increasingly useful for building, data labeling, etc.).
I think the title is inflammatory, but the reality is if AGI is really around the corner, none of OpenAI's actions are consistent with that.
Utilizing compute that should be catapulting you towards the imminent AGI to run AI TikTok and extract $20 from people doesn't add up.
They're on a treadmill with more competent competitors than anyone probably expected grabbing at their ankles, and I don't think any model that relies on them pausing to cash in on their progress actually works out.
Longcat-flash-thinking is not super popular right now; it doesn't appear in the top 20 on OpenRouter. I haven't used it, but the market seems to like it a lot less than Grok, Anthropic, or even oAI's open model, gpt-oss-20b. Like I said, I haven't tried it.
And to your point, once models are released open, they will be used in DPO post-training / fine-tuning scenarios, guaranteed, so it's hard to tell who's ahead by looking at an older open model vs a newer one.
Where are the wins coming from? It seems to me like there's a race to get efficient good-enough stuff in traditional form factors out the door; emphasis on efficiency. For the big companies it's likely maxing inference margins and speeding up response. For last year's Chinese companies it was dealing with being compute poor - similar drivers though. If you look at DeepSeek's released stuff, there were some architectural innovations, thinking mode, and a lottt of engineering improvements, all of which moved the needle.
On treadmills: I posit the oAI team is one of the top 4 AI teams in the world, and it has the best fundraiser and lowest cost of capital. My oAI bull story is this: if capital dries up, it will dry up everywhere, or at the least it will dry up last for a great fundraiser. In that world, pausing might make sense, and if so, they will be able to increase their cash from operations faster than any other company. While a productive research race is on, I agree they shouldn't pause. So far they haven't had to make any truly hard decisions though -- each successive model has been profitable and Sam has been successful scaling up their training budget geometrically -- at some point the questions about operating cashflow being deployed back to R&D and at what pace are going to be challenging. But that day is not right now.
You're arguing a different point than the article, and in some ways even agreeing with it.
The article is not saying OpenAI must fail: it's saying OpenAI is not "The AGI Company of San Francisco". They're in the same bare knuckle brawl as other AI startups, and your bull case is essentially agreeing but saying they'll do well in the fight.
> In fact, the only real difference is the amount of money backing it.
> Otherwise, OpenAI could be literally any foundation model company, [...] we should start evaluating OpenAI as just another AI startup
Any startup would be able to raise with their numbers... they just can't ask for trillions to build god-in-a-box.
It's going to be a slog because we've seen that there are companies that don't even have to put 1/10th their resources into LLMs to compete robustly with their offerings.
OpenRouter doesn't capture 1/100th of open-weight usage, but more importantly, the fact that Longcat is legitimately, robustly competitive with SOTA models from a year ago is the actual signal. It's a sneak peek of what happens if the AGI case doesn't pan out and OpenAI tries to get off the treadmill: within a year, a lot of companies catch up.
I agree with you. Even Anthropic's CEO said EXACTLY this. He said, if you actually look at the lifecycle of each model as its own business, then they are all very profitable. It's just that while we're making money from Model A, we've started spending 10x on Model B.
> Perhaps at some point we'll say "this model is profitable and we're just gonna stick with that".
I don't follow it that closely but my perception is that's already happened. Various flavors of GPT 4 are still current products, just at lower prices.
That is not what GP was saying. They were saying that a foundation model company would say "$CURRENT_MODEL is so good that it is no use training $NEXT_MODEL right now", which I don't think is the current stance of any of those companies.
Given that they are all constantly spending money on R&D for the next model, it does not really matter how long they get to offer some of the older models. The massive R&D spend is still incurred all the time.
I don’t quite understand your wording here. Do we have to “pay valuations between 150 and 500b” to access the data that supports the justification of the valuation or can you just link to it?
> Given they still offer 3.5, three years is probably a more fair depreciation schedule.
But usage should drop considerably as soon as the next model is released. Many startups down the line exist in hope of a better model. Many others can switch to a better/cheaper model quite easily. I'd be very surprised if usage of 3.5 is anywhere near what it was before the release of the next generation, even given all the growth. New users just use the new models.
OpenAI expects multi-year losses before turning consistently profitable, so saying they are already profitable based solely on an aggressive depreciation assumption overstates the case
That's disappointing to hear. I've generally liked Ed's writing but all his posts on AI / OAI specifically feel like they come from a place of seething animosity more than an interest in being critical or objective.
At least one of his posts repeated the claim that essentially all AI breakthroughs in the last few years are completely useless, which is just trainwreck hyperbole no matter where you lie on the spectrum as far as its utility or potential. I regularly use it for things now that feel like genuine magic, in wonder to the point of annoying my non-technical spouse, for whom it's just all stuff computers can do. I don't know if OpenAI is going to be a gazillion dollar business in ten years but they've certainly locked in enough customers - who are getting value out of it - to sustain for a while.
That's not remotely surprising, though, if you've known anyone like how Ed presents himself through his writing and videos. I don't know him personally, so I can't speak to what he's actually like, but there's something in his writing style and how he comes off in his videos: a conviction that he's right, that he's got the secret truth, and why won't everybody believe him. Cassandra's curse, perhaps, but naming it doesn't convince me.
It's a rude response from someone whose public persona is famously rude and abrasive. It's also worth considering the difference between publishing 10000 words to an audience of subscribers, and sending 10000 words unsolicited to a stranger.
There’s no proof that this exchange ever happened. Happy to eat crow if proven wrong. I’ve disagreed and argued with Ed on his subreddit, he never banned me or acted hostile.
Especially rude given, if he was feeling it was too long, he could've had an AI summarize it.
But this shows a certain intellectual laziness/dishonesty and immaturity in the response.
Someone's taken the time to write a response to your article, you can choose to learn from it (assuming it's not an angry rant), or you could just ignore it.
In fact, that completely dismisses this stupid article for me.
If you spend more money training the model and offering it as a service (with all the costs that that entails) than you earn back directly from that model's usage, it can only be profitable if you use voodoo economics to fudge it.
Luckily we live in a time period where voodoo economics is the norm, though eventually it will all come crashing down.
You're right, but that's not what's happening. Every major model trained at Anthropic and oAI has been profitable. Inference margins are on the order of 80%.
That’s true, but OpenAI and its proponents say each model is individually profitable so if R&D ever stops then the company as a whole will be profitable.
The problem with this argument is that if R&D ever stops, OpenAI will not be differentiated (because everyone else will be able to catch up), so their pricing power will disappear, and they won't be able to charge much more than the inference costs.
You're missing that they're pricing the value of models progressing them towards AGI, and their own use of that model for research and development. You can argue the first one, and the second is probably not fully spun up yet (though you can see it's building steam fast), but it's not total fantasy economics, it's just highly optimistic because investors aren't going to buy the story from people who're hedging.
LLMs by themselves are not going to lead to AGI, agreed. However, there's solid reasons to believe that LLMs as orchestrators of other models could get us there. LLMs have the potential to be very good at turning natural language problems into "games" that a model like MuZero can learn to solve in a superhuman way.
If you've been around, imgur, basically. All the image hosting solutions before it (imageshack) and all the file hosting solutions before them. Yahoo.
"Crashing" in this context doesn't mean something goes completely away, just that its userbase dwindles to 1-5-10% of what it once was and it's no longer part of the zeitgeist (again, Yahoo).
The only way I can see that happening here (userbase dwindling to 1-5-10% of what it once was) is if another, better service came along - but I don't think that's what @bayarearefugee meant by 'come crashing down'?
I wouldn't write them off yet - but if their funding dries up and there's no more money to support their spending habits, this will seem like a great prediction. Giving away stuff that's usually expensive for free is a great way to get numbers - it worked for Facebook, Uber, and many others - but it doesn't mean you'll become a profitable company.
Anyone with enough money can buy users - example they could start an airline tomorrow where flights are free and get a lot of riders - but if they don't figure out how to monetize, it'll be a very short experiment.
It's this and it's really funny to see users here argue about how the revenue is really good and what not.
OpenAI is only alive because it's heavily subsidizing the actual cost of the service they provide using investor money. The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
> The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.
You have hit the nail on the head.
To me, it is natural for investor money to dry up, as nobody should believe that things will always go the right way. Yet it seems that OpenAI and many others are living right on the edge... so really it's a matter of when, not if.
So in essence this is a time bomb - tick tock, the timer starts now - and they might be desperate because of it, as the article notes.
Inevitably it will get jammed with ads until barely profitable. Instead of being able to just cut and paste the output into your term paper, you're going to have to comb through it to remove all the instances of "Mountain Dew is for me and you!" from the output.
Open-weights models exist too, so there's no moat if OpenAI does this. Then again, maybe most people wouldn't be able to figure out anything AI-related except ChatGPT; but for most people it would definitely be possible to just use, let's say, any other provider that isn't enshittified, if that ever comes true.
It would also lose them the API business, but I assume you're saying they'd have good ad-free models on the API and ad-riddled models in the free tier.
The funny thing is that maybe we already have that, but it's just subtler. Who knows - food for thought :)
I thought it was starting when Ilya said, about a year ago, that scaling had plateaued. Now confirmed with GPT-5. Now they'll need to sell a pivot from AGI to productization of what they already have, with a valuation that implies reaching AGI?
Reminds me of MoviePass or early stage Uber. Everything is revolutionary and amazing when VCs are footing the bill. Once you have to contend with market pricing things tend to change.
MoviePass had to close, but Uber runs a profit these days. The days of $1 Ubers were clearly unsustainable, but unless you're inside OpenAI/Anthropic, we're all just guessing at how much inference costs them to run. Some people have more detailed analyses than others. Most aren't quite so rude and angry as Mr Zitron, though, preferring to let their work speak for itself rather than trying to get you to go along with their analysis by telling you what you want to hear while yelling at you.
I don't believe this for a second. Inference margins are huge; if they stopped R&D tomorrow they would be making an incredible amount of money, but they can't stop investing because they have competitors.
I don't know if the site is just broken for me at the moment, but this was used to track how much people were costing the companies (based on tokens per $, if I remember correctly): https://www.viberank.app/
Plenty of people are able to blow through resources pretty quickly, especially with non-deterministic output where you have to hit "retry" a few times, or go back and forth with the model until you get what you want, with each request adding to the total tokens used in the interaction.
AI companies have been trying to clamp down, so far unsuccessfully, and it may never be completely possible without alienating all of their users.
This is unrelated to the original assertion: "If they charged users what it actually costs to run their service, almost nobody would use it."
5 million paying customers out of 800 million overall active users is an absolutely abysmal conversion rate. And that's counting the bulk deals with extreme discounts (like 2.50 USD/month/seat), which can only be profitable if a significant number of those seats never use ChatGPT at all.
>the only real difference is the amount of money backing it
Judging by how often Sam Altman makes appearances in DC, it's not just money that sets OpenAI apart. It's likely also a strategically important research and development vehicle with implicit state backing, like Intel or Boeing or Palantir or SpaceX. The losses don't matter, they can be covered by a keystroke at the Fed if necessary.
One consequence of this: is there actual economic growth here? If it becomes a browser / social network / workplace productivity etc. company, it's basically becoming another Google/Microsoft. While great for OpenAI, is there a lot of space for them to find new money, rather than just take it from Google/Microsoft/Facebook?
Most things written about this subject are already polarizing.
I'd believe this if there were more internal company data behind it, rather than just an outsider using the same secondary data that OpenAI seemingly manipulates, to draw conclusions with so many logical holes in them they wouldn't hold half a litre of water for 5 minutes.
Three inline subscription CTAs, a subscription pop-up, and a subscribe-wall a few paragraphs in.
Oof!
Reacting to what I could read without subscribing: turns out profitably applying AI to status-quo reality is way less exciting than exploring the edges of its capabilities. Go figure!
It's a shame; I generally agree with most of what Ed has to say, and I think his arguments come from a good place, but the website is pretty irritating and I find his delivery breathless and melodramatic to the point of cliche (not befitting the serious nature of the topics he argues). I had to stop listening to his podcast because of the delivery; it's not an uncommon situation for other CZM podcasts, but at least some of them handle their editorial content with a little more maturity (shout out to Molly Conger's Weird Little Guys podcast).
I hate to make the comparison between two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works well when you have a whole team of writers behind you.
I love almost all of the CZM shows, and even I have a hard time making it all the way through a full-on rant from Ed :/ and I agree with him. Sorry, Ed.
> Edward Benjamin Zitron (born 1986 or 1987) is an English technology writer, podcaster, and public relations specialist. He is a critic of the technology industry, particularly of artificial intelligence companies and the 2020s AI boom.
I, personally, think that OpenAI is way overhyped and will never deliver on that promise. But... it might not matter.
There is going to be an awful lot of disruption to the economy caused by displacing workers with AI. That's going to be a massive political problem. If these people get their way, in the future AI will do all the work but there'll be no one to buy their products because nobody is employed and has money.
But I just don't see one company dominating that space. As soon as you have an AI, you can duplicate it. We've seen with efforts like DeepSeek that replicating it once it's done is going to require significantly less effort. So that means you just don't have the moat you think you do.
Imagine the training costs get to $100M and require thousands of machines. Well, within a few years it's going to be $1M or less.
So the question is: can OpenAI (or any other company) keep advancing to outpace Moore's Law? I'm not convinced.
But here's why it might not matter: Tesla. Tesla should not be a trillion dollar company. No matter how you value it on fundamentals, it should be a fraction of that. Value it as a car maker, an energy company or whatever and it gets nowhere near $1T. Yet, it has defied gravity for years.
Why? IMHO because it's become too large to fail and, in part, it's now an investment in the wealth transfer that is going on and will continue from the government to the already wealthy. The government will make sure Tesla won't fail as long as it's friendly to the administration.
As much as AI is hyped, it's still incredibly stupid and limited. We may get to the point where it's "smart" enough to displace a ton of people but it's so expensive to run it's cheaper to employ humans.
This is not a direct response to this piece, but I wrote a short post about the egregious errors Zitron is comfortable pushing in order to make things sound as bad as possible.
Your piece is well-written, but embeds an important assumption that leads to your conclusion being different from Ed's.
> And how can TV retailers make money in this situation? Did they expect to keep charging $500 for a TV that’s now really worth $200, and pocket the $300 difference?
> Why, then, is Ed Zitron having such a hard time when it comes to LLM inference? It’s exactly the same situation!
The AI situation is not analogous to one where the TVs initially cost $450 to manufacture and the stores were selling them for $500, and then the manufacturing cost went down.
The equivalent TV analogy is that we're selling $600-cost TVs for $500 hoping that if people start buying them, the cost will drop to $200 so we can sell them for $300 at a profit. In that situation, if people keep choosing the $600-cost/$500-price unprofitable TVs, the existence of the $200-cost/$300-price profitable TVs that people aren't buying don't tell us anything about the market's future.
---
In the AI scenario that prompts all the conversations about the "cost of inference", the reason that we care about the cost is that we believe that it's currently *ABOVE* what the product is being sold for, and that VC money is being used to subsidise the users to promote the product. The story is that as the cost drops, it will eventually be below the amount that users are willing to pay, and the companies will magically switch to being profitable.
In that scenario, anything which forces the cost above the revenue is a problem. This applies to customers choosing to switch to more expensive models, customers using more of the service (due to reasoning) while paying fixed rates, or customers remaining on free plans rather than switching to affordable profitable paid plans.
The AI Hype group believes that the practical cost of providing inference services to users will drop enough that the $20/month users are profitable.
The AI Hype group's argument is that because the cost per token is coming down, that means we're on a trajectory to profitability.
The AI Bubble group believes that the practical cost of providing inference services to users is not falling fast enough.
Ed's argument is that despite the cost per token coming down, the cost per request is not coming down (because requests now require more advanced models or more tokens per request in order to be useful), so we are not on a trajectory to profitability.
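A toy illustration of that per-token vs. per-request disconnect, with invented prices and token counts (the point is the shape, not the figures):

```python
# How per-request cost can rise even as per-token cost falls
# (the "reasoning tokens" effect). All numbers are hypothetical.
price_per_mtok_old = 30.0   # $/1M tokens, assumed older model
price_per_mtok_new = 10.0   # $/1M tokens, assumed newer model

tokens_per_request_old = 500    # short direct completion
tokens_per_request_new = 5000   # long reasoning trace, same user question

cost_old = price_per_mtok_old / 1e6 * tokens_per_request_old  # $0.015
cost_new = price_per_mtok_new / 1e6 * tokens_per_request_new  # $0.050

print(cost_old, cost_new)  # per-token fell 3x, per-request rose ~3.3x
```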
The AI Bubble group (of which I'm most likely a member) also believes that the current added value of AI is obscured by anthropomorphization (regular people WANT the AI to be as smart as a human being), insane levels of marketing, and FOMO from executives/shareholders/capitalists - all basically hoping to get rid of the cost of labor.
Outside of coding, the current wave of AI is:
* a slightly more intuitive search but with much "harder" misfires - a huge business on its own but good luck doing that against Google (Google controls its entire LLM stack, top to bottom)
* intuitive audio/image/video editing - but probably a lot more costly than regular editing (due to misfires and general cost of (re-)generation) - and with rudimentary tooling, for now
* a risky way to generate all sorts of other content, aka AI slop
All those current business models are right now probably billion dollar industries, but are they $500 billion/year industries to justify current spending? I think it's extremely unlikely.
I think LLM tech might be generating $500 billion/year worth of revenues across the entire economy, but probably in 2035. Current investors are investing for 2026, not 2035.
What would be a balanced perspective? Perhaps that oAI may now be another "boring" startup in that it is no longer primarily about moving the technology frontier, but about further scaling while keeping churn low, with margins (in the broader sense, i.e. for now prospective margins) becoming increasingly important?
Their only moat is that they started the 'AI revolution'. More shockwaves like the DeepSeek release are still to come. Not to mention that an LLM->AGI transition in the near future is a moot point. They're riding the wave, but for how much longer?
How long until disciples of Zitron realize that he is just feeding them sensationalist doomer slop in order to drive his own subscription business. Maybe never!
I find he exhibits the same characteristics that drove people like Red Letter Media in the early aughts to be "successful". Make something so long and tedious that arguing with its points would require something twice as long, such that motioning to an uncontested 40-minute longread becomes a surrogate for any actual argument. Said differently, it's easy for AI skeptics to share this as a way of backing up their own point. It's 40 minutes long, how could it be wrong!
- Ed has no insider information on the accounting or strategy of these AI companies, and primarily reacts to public rumors and/or public announcements. He has no education in the field or any special credentials relating to it.
- The people with full information are intelligent, and are continually pouring a shit-tonne of money into it at huge valuations
To agree with his arguments you have to explain how the people investing are being fooled.. which is never brought up
The “arguments” I see about that are always some variation of “they were wrong about WeWork!”, and leave it at that. Obviously smart people can be wrong, obviously dumb money exists, but the entire VC model is that the vast majority of your ultra-risky investments will fail, so pointing to failures proves nothing.
If the vast majority of a VC’s ultra-risky investments fail (this is generally true, though usually somewhat less true at the “late-stage investments of billions” stage, hence WeWork being an interesting example), that would imply that there’s little reason to assume VCs are any good at reading financials; it won’t impact their business that much.
WeWork is IMO fairly strong evidence that SoftBank is, or at least was, either incompetent here or simply not looking at all.
> To agree with his arguments you have to explain how the people investing are being fooled
The people with insider knowledge are also the people who are financially invested in AI companies, and therefore incentivized to convince everyone else that growth will continue.
I don't disagree that OpenAI is desperate - this is a fierce competition, and Google has a pretty huge head start in a lot of ways - but I wonder at what point the people who constantly dismiss LLMs and AI will change their tune. I understand hating it and wishing we could all agree to stop; I do too. But if you can't find any uses for it at this point, it's clear you're not trying.
Ads are the obvious path; they just haven't had time to pull it off yet. Plus it's going to be hard to pull off without weakening the experience, so they'd like to push that out as long as possible, similar to how Google has only eroded its experience slowly over time. Their biggest competitor is Google.
They don’t really need to make much money on ads. They just need to weaken the free user experience to convert as many as possible into paid subscribers, then shake off the rest.
I like all the “This article is wrong because he is very rude. Obviously they could easily make tons of money if they wanted to” responses here that ignore the eight completely different and unrelated businesses listed at the top of the post that OpenAI purports to be operating
I'm not even saying he's "wrong"; I wouldn't want to be long OpenAI (I don't think they're doomed but that's too much risk for my blood). But I would bet all my money that Zitron has no idea what he's talking about here.
Yeah, I'm also not saying that, I'm not "OMG AGI tomorrow" either. I think he was one of the first to voice concerns about the financial situation of AI companies and that was valuable, but if you look at his blog he's basically written the same post nonstop for two years. How many times do you need to say that?
Mind sharing why you think that (genuinely curious)?
I think Ed hit some broad points, mostly (i) there were some breathless predictions (human level intelligence) that aren't panning out; (ii) oh wow they burn a ton of cash. A ton; (iii) and they're very Musky: lots of hype, way less product. Buttressed with lots of people saying that if AI did a thing, then that would be super useful; much less showing of the thing being done or evidence that it's likely to happen soon.
None of which says these tools aren't super useful for coding. But I'm missing the link between super useful for coding and a business making $100B / year or more which is what these investments need. And my experience is more like... a 20% speed improvement? Which, again, yes please... but not a fundamental rewriting of software economics.
Yeah; if this ends up as “some people and companies find this to be a useful programming tool and will pay a moderate amount for it”, then you have essentially recreated Borland, or Jetbrains, business-model-wise. The current valuations are based on something else altogether.
I absolutely would not trust an AI CEO either, though all of them understand the basics of technology more than Zitron does; Zitron has said some really cringe things (he's not a technologist by background, and it shows).
I have been strongly tempted to make Zitron and Marcus GPTs... but every time I think about getting started, I realize a simple shell script would work better.
Oh wait Claude did a better job than I would have:
That's why I wanted to share that session specifically - usually I provide a heck of a lot more context than that. But... they were so well-known in personality-space that that's all it took, which was its own kind of funny.
GPT5 completely demolishes every other AI model for me. With minimal prompting it repeatedly and correctly makes massive refactors for me. All the other models pump out garbage on similar tasks.
Yup, what a pack of desperate losers. They should already be at $50 bill revenue, 90% gross margin, 60% operating margin, no capex. Unit economics can't possibly change; they have eked out every last ounce of efficiency in training, inference, and caching, and hit every use case possible. It's all just really terrible.
It would be awesome if this blog post was made by an OpenAI [investor / stakeholder / whatever that non profit has] in order to drive up engagement for defending or hyping up OpenAI's efforts.
Do people really buy this nonsense? I mean just this week Sora 2 is creating videos that were unimaginable a few months ago. People writing these screeds at this point to me seem like they’re going through some kind of coping mechanism that has nothing to do with the financials of AI companies and everything to do with their own personal fears around what’s happening with machine intelligence.
So, wait, you're saying that these guys just aren't impressed by the AI technology, and that is blinding them to the fact that the AI companies' economics look really good?
That is a laughable take.
The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.
World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.
I’m saying that seeing dubious economics is blinding people from accepting what’s actually going on with neural networks, and it leads to them having a profoundly miscalibrated mental model. This is not like analyzing a typical tech cycle. We are dealing with something here that we don’t really understand and transcends basic models like “it’s just a really good tool.”
I've followed the human-level intelligence stuff for about 45 years, back before it was called AGI, and the basic thesis is kind of anti-religious. It's that human intelligence is basically the result of a biologically constructed computing device, not some god-given spirit, and as human-built computing devices continue their Moore's-law-like progression, they will overtake it at some point.
It's been true and kind of inevitable since Turing et al. started talking about it in the 1950s and Crick and Watson discovered the DNA basis of life. It's not religious, not a mania, not far-fetched.
The angle currently is the opposite. They're positing that the machine has some sort of spirit - see the other poster talking about "unexplained emergent intelligence".
Saying we don’t understand why LLMs are intelligent is both true and completely unrelated to religion. You inserted the word “spirit” so perhaps you are the one conflating the two.
Well it's sort of true in that people stick these LLMs together and they produce intelligent seeming outputs in ways that the people building them don't fully understand. Kind of like how evolution stuck a bunch of biological neurons together without needing to fully understand how it works.
It’s not a religious angle, we literally don’t know how or why these models work.
Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.
It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.
Correct, my opinions have nothing to do with financials. If I was around for the discovery of fire I wouldn’t be wondering about the impact on the bottom line.
(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)
Ed's newsletter, on HN, unflagged?! Maybe the bubble really is about to pop.
I've read pretty much all his posts on AI. The economics of it are worrying, to say the least. What's even more worrying is how much the media isn't talking about it. One thing Ed's spot on about: the media loved parroting everything Sam and Dario and Jensen had to say.
Speaking of boring and desperate: if you browse the posts on this "newsletter" for more than two minutes, it's clear that the sole author is a giant bozo who also happens to be in love with himself.
The intensely negative reaction to GPT-5 is a bit weird to me. It tops the charts in an elaborate new third-party evaluation of model performance at law, medicine, etc. [0]. It's true that it was a somewhat incremental improvement over o3, but o3 was a huge leap in capabilities over GPT-4 - a completely worthy claimant to be the next generation of models.
I will be the first person to say that AI models have not yet realized the economic impact they promised - not even close. Still, there are reasons to think that there's at least one more impressive leap in capabilities coming, based on both frontier model performance in high-level math and CS competitions, and the current focus of training models on more complex real-world tasks that take longer to do and require using more tools.
I agree with the article that OpenAI seems a bit unfocused and I would be very surprised if all of these product bets play out. But all they need is one or two more ChatGPT-level successes for all these bets to be worth it.
I think a lot of it is a reaction to the hype before the launch of GPT-5. People were sold on, and expecting, a noticeable step change (akin to GPT-3.5 to GPT-4), but in reality it's not that much noticeably better for the majority of use cases.
Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.
Yeah, that is fair. I admit to being a bit bummed out as well. One might almost say that if o3 was effectively GPT-5 in terms of performance improvement, then we were all really hoping for a GPT-6, and that's not here yet. I am pretty optimistic, based on the information I have, that we will see GPT-6-class models that are correspondingly impressive. Not sure about GPT-7 though.
Honestly, I’m skeptical of that narrative. I think AI skeptics were always going to be shrill about how it was overhyped and thus this proves how right they were! Seriously, how good would GPT5 have had to be in order for Ed to NOT write this exact post?
I’m very happy with GPT5, especially as a heavy API user. It’s very cost effective for its capabilities. I’m sure GPT6 will be even better, and I’m sure Ed and all the other people who hate AI will call it a nothing burger too. So it goes.
> based on both frontier model performance in high-level math and CS competitions
IMO the only takeaway from those successes is that RL for reasoning works when you have a clear reward signal. Whether this RL-based approach to reasoning can be made to work in more general cases remains to be seen.
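To make "clear reward signal" concrete, here's a minimal sketch (purely illustrative, not any lab's actual pipeline) of the kind of verifiable reward used for RL on math problems - the final answer either matches a known ground truth or it doesn't, so the training signal is unambiguous:

    # Hypothetical verifiable-reward function for RL-on-reasoning.
    # Competition math has a checkable ground truth, so the reward is
    # binary and noise-free -- the "clear signal" that makes RL work.
    def reward(model_output: str, ground_truth: str) -> float:
        answer = model_output.split("Final answer:")[-1].strip()
        return 1.0 if answer == ground_truth else 0.0

    # Open-ended tasks ("review this design doc") have no such checker,
    # which is exactly why generalizing this approach is an open question.

(Real setups presumably normalize the answer and check equivalence rather than raw string equality, but the point stands.)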
There is also a big disconnect between how these models do so well on benchmark tasks like these, which they've been specifically trained for, and how easily they still fail at everyday tasks. Yesterday I had the just-released Sonnet 4.5 fail to properly do a units conversion from radians to arcseconds as part of a simple problem - it was off by a factor of 3. Not exactly PhD-level math performance!
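For reference, the conversion it flubbed is a one-liner - 1 rad = (180/π)° = (180/π) × 3600 ≈ 206,265 arcsec - so a factor-of-3 error suggests mangled arithmetic, not a missing concept. A trivial sanity check (hypothetical snippet, not my actual prompt):

    import math

    # 1 radian = (180/pi) degrees, and 1 degree = 3600 arcseconds.
    RAD_TO_ARCSEC = (180.0 / math.pi) * 3600.0  # ~206264.8

    def rad_to_arcsec(rad: float) -> float:
        return rad * RAD_TO_ARCSEC

    print(rad_to_arcsec(1.0))  # 206264.80624709636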
I mean, I agree. There is not yet a clear path/story as to how a model can provide a consistently expert-performance on real-world tasks, and the various breakthroughs we hear about don't address that. I think the industry consensus is more just that we haven't correctly measured/targeted those abilities yet, and there is now a big push to do so. We'll see if that works out.
I agree. I mean, I can get o3 right from the API if I choose, but 5-Thinking is better than o3, and 5-Research is definitely better than o3-pro in both ergonomics and output quality. If you read Reddit threads about 4o, the main group complaining seems to be the one that formed a parasocial relationship with 4o and relied on its sycophancy. Interesting from a product-market-fit perspective, but not worrying with respect to "Is 5, on the whole, significantly better than 4 / o1 / o3?" It is. (Well, 5-mini is a dumpster fire, and awful. But I don't use it, and I'm sure it's super cheap to run.)
Another way to think about oAI's business situation: are customers using more inference minutes than a year ago? I most definitely am, for multiple reasons: agent round-trip interactions, multimodal parsing, parallel Codex runs...
I feel like people speculating on the unsustainability of their losses probably value what they know more than what they don't know. In this case, however, what you don't know is more relevant than what you do know.
Despite the author's command of publicly available information, I believe there is more the author isn't aware of that might sway their argument. Most firms keep a lot of things under wraps. Sure, they're making lots of noise - everyone does.
The numbers don't add up, and there are the typical signs of the Magnificent 7 engaging in behavior to keep the financials and economics off their official balance sheets and away from investors.
PE firms and the M7 are teaming up to create SPACs, which then build and operate the data centers.
By the wonders of regulation and financial alchemy, that debt and expenditure then doesn't need to be reported as infrastructure investment on their books.
It's the subprime mortgage mix all over again, only this time it's about selling lofty future promises to enterprises, who are going to be left holding the bag on outdated chips or compute capacity with no path to ROI.
And there are multiple financial-industry analysts besides Ed Zitron raising the same concerns.
Enterprises are always selling lofty future promises.
And your subprime mortgage reference - suggesting they are manipulating information to inflate the value of the firm - doesn't cleanly apply here. For once, here is a company that seems to have faithfully represented its obscene losses, and here we are already comparing it to the likes of Enron. Enron never reported financial data that could be categorized as losses.
I see lots of people speculating about these losses, and I really wish someone investing in OpenAI would come out and say something, however vague, about why they are investing.
Once again - I need not tell you - the information available to the general public is not the same as what's available to anyone who has invested a significant amount in OpenAI.
So, once again, rein in your tendency to draw conclusions from the obscene losses they have reported - especially since I'm positive you do not have the right context to properly evaluate whether those losses make sense or not.
Well, the one thing you're right about is that OpenAI is a private enterprise, so obviously its information is not public.
But then, while you sure want to sound authoritative, you are speculating just as much as I am.
And to reiterate: professional financial-industry analysts from major banks, PE funds, and career investors - as well as, now, Jeff Bezos, and Sam Altman before him - are all speaking of a bubble and raising warnings.
Furthermore, there is public information out there on the publicly traded companies that engage with OpenAI, and there's a clear observable trend: they're seeking alternative means of financing, outside the public markets, to fund these endeavors - effectively obfuscating their spend.
Pair that with studies from MIT, IBM, McKinsey and so forth finding that there is hardly any ROI in enterprise AI projects, with failure rates above 90%.
You're welcome to draw your own conclusions from this, but I'd suggest you not lecture others about how they interpret publicly available data while building your entire argument on "nobody (including me) knows anything."
So, "boring"? Definitely not.