There is one key way in which I believe the current AI bubble differs from the TMT bubble. As the author points out, much of the TMT bubble money was spent building infrastructure that kept benefiting us for decades afterwards.
But in the case of AI, that argument is much harder to make. The cost of compute hardware is astronomical, and the pace of improvement means it ages very quickly. In other words, a million dollars of compute today will be technically obsolete (or surpassed on a performance-per-watt basis) much faster than the fiber-optic cables laid by Global Crossing were.
And the AI data centers specialized for Nvidia hardware today may not necessarily work with the Nvidia (or other) hardware five years from now—at least not without major, costly retrofits.
Arguably, any long-term power generation capacity built out for today's data centers would benefit the data centers of tomorrow, but I'm not sure much of that investment is really being made. There's talk of this and that project, but my hunch is that much of it will end up being small-scale local power generation from gas turbines and the like, which is harmful to the local environment and would be quickly dismantled if the data center builders or operators hit the skids. In other words, if the bubble bursts I can't imagine who would be first in line to buy a half-built AI data center.
This leads me to believe this bubble has generated much less lasting value for the future than the TMT bubble did. The inference capacity we are building today is too expensive and ages too fast. So the fall will be that much more painful for the hyperscalers.
Ok, so I'm not a programmer, just a knuckle-dragger who is vibe coding myself a digital assistant to help me prioritize emails and do scheduling. And because I may use a cloud AI's API rather than local processing, I already consider the following to be essential: token estimation, agent state persistence, cost monitoring and rate limiting, circuit breakers, retry logic, context caching, and deadlock detection.
Those are just some of the requirements for AI agent deployment that this article mentions. And hell, I'd want some of them even if I were running the agents on my own GPU...
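Just to show what I mean, even the dumbest possible wrapper around a cloud model call ends up needing retries with backoff, a circuit breaker, and some crude cost accounting. Here's a rough Python sketch of that idea; `call_model`, the pricing number, and the characters-per-token heuristic are all placeholders I made up, not any real provider's API:

```python
# Rough sketch: retries with backoff, a simple circuit breaker, and crude
# cost tracking around a cloud model call. call_model and the pricing are
# placeholders, not any real provider's API.

import random
import time


class CircuitOpenError(Exception):
    """Raised when the circuit breaker has tripped and calls are blocked."""


class ModelClient:
    def __init__(self, max_retries=3, failure_threshold=5, cooldown_s=60.0,
                 usd_per_1k_tokens=0.01):
        self.max_retries = max_retries
        self.failure_threshold = failure_threshold   # failures before tripping
        self.cooldown_s = cooldown_s                 # how long the circuit stays open
        self.usd_per_1k_tokens = usd_per_1k_tokens   # placeholder pricing
        self.consecutive_failures = 0
        self.open_until = 0.0
        self.spend_usd = 0.0                         # running cost estimate

    def estimate_tokens(self, text: str) -> int:
        # Very rough heuristic: roughly 4 characters per token for English text.
        return max(1, len(text) // 4)

    def complete(self, prompt: str) -> str:
        # Circuit breaker: refuse to call while the circuit is open.
        if time.monotonic() < self.open_until:
            raise CircuitOpenError("circuit breaker open; skipping call")

        for attempt in range(self.max_retries + 1):
            try:
                reply = call_model(prompt)  # placeholder for the real API call
            except Exception:
                self.consecutive_failures += 1
                if self.consecutive_failures >= self.failure_threshold:
                    self.open_until = time.monotonic() + self.cooldown_s
                    raise CircuitOpenError("too many failures; circuit opened")
                if attempt == self.max_retries:
                    raise
                # Exponential backoff with jitter before retrying.
                time.sleep((2 ** attempt) + random.random())
            else:
                self.consecutive_failures = 0
                tokens = self.estimate_tokens(prompt) + self.estimate_tokens(reply)
                self.spend_usd += tokens / 1000 * self.usd_per_1k_tokens
                return reply


def call_model(prompt: str) -> str:
    # Stand-in for a real HTTP call to a hosted model; swap in your provider's SDK.
    return "stub reply to: " + prompt
```

And that still leaves out agent state persistence, context caching, and deadlock detection, which is exactly my point.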
How does anyone have the brass stones to go to production without at least the precautions that I found necessary within half a day of thinking about it?
Your 2006 MacBook predates Retina (a.k.a. high-resolution) displays, though. Any kind of smearing effect probably improved the perceived image because it masked the very visible pixels in the LCD.
AI and vibe coding do not prevent the creation of good, robust, and durable code. All it takes is for the coder to think carefully about the functions and not give in to the temptation to make the LLM add a bunch of fluff and features "just because they can".
I agree, but the problem is that the average developer uses an LLM precisely to avoid doing that. I know we’re all carefully examining the LLM output here on HN, but that’s not how I see a lot of developers work in practice.
It’s a matter of degree. It all adds up. Surely you’ve heard of the Great Smog of London? That’s an extreme example, but all major cities had appalling air pollution. Bad air alone killed untold numbers of people every year.