Feel free to use local services then; not every company has to support the entire world. Some are fine with a small slice. Expecting otherwise isn't sustainable for sub-trillion-dollar non-monopolist companies, at least not without massive public support from the government.
If you're in the USA, contact your state AG and your senator and present your case. Mention that Google is abusing small owners through its ineptitude in security practices; frame the argument so Google appears to be squeezing small users like a mob boss or cartel.
Also, before doing this, save anything important that Google holds (Gmail, YouTube videos, anything in storage). The leaders at Google are vengeful enough to completely lock you out for challenging them.
Is it really hard to figure out that the owner of a company, who personally stands to make hundreds of billions, is doing marketing when talking about said company? Do they not teach critical thinking in schools anymore; did it go away with phonics too? Why would you ever ignore the MASSIVE conflict of interest here? It's foolish, and it's endemic, not just in tech journalism but in journalism in general, where people take the words of others at face value and apply no critical analysis to them.
Those people tend to suffer from AI psychosis, and I don't think you'd want to admit publicly that you don't interact with any humans and prefer the company of machines (let's also ignore that such people wouldn't be in public to begin with).
I can't imagine such people are living meaningful lives in any capacity. They're up there with consumers who think the only purpose in life is to cheerlead for a corporation and buy its wares.
The GP said more with LLMs than with people, not no interactions with people at all, and not preferring machines to people. I don't think it's that hard to spend more time talking with LLMs than with people if you work in tech, and I don't think that takes away from the meaningfulness of one's life.
Yes, this is called workplace alienation, and it has been discussed since the 1800s. Maybe tech workers will realize that their employers are literal enemies of humanity rather than their friends.
Employers want to mechanize humans, and they'll force it even if it makes everyone miserable for their entire, short lives.
Humans aren't benefiting from LLMs; only a few individuals are. Let's stop with the fake platitudes and realize that unless this technology is completely open sourced from top to bottom, it's a complete farce to think humans are going to benefit rather than the rich just getting richer.
> Humans aren't benefiting from LLMs, only a few individuals are.
Honest question: how is this different from traditional Open Source? Linux powers most of the internet, yet the biggest beneficiaries are cloud providers, not individual users. Good open weights models already exist and people can run them locally. The gap between "open" and "everyone benefits equally" has always been there...
Because the opposite is true for open source? It actually is free, whether you contribute to it or not. Anyone can legally use it for free. Torvalds can't just wake up one day and decide to charge more.
If you feel Linux is too much of a monopoly, you can actually fork it and compete.
The same is true of science. Taxpayer money is spent on research, but the outcomes of that research primarily benefit corporate interests.
I'm the last person to cheer for unrestrained capitalism, but this anti-billionaire / anti-AI narrative is getting ridiculous even by general-population standards, much less for HN. It's like people think their food or medicine or LLMs grow on fucking trees. No. Companies and corporations are how adults do stuff for other adults, at scale. Everyone understands that, except for a part of the software industry that, by an accidental confluence of factors, works by different rules than literally the rest of the world.
You must not be serious. Every single person using LLMs benefits, whether on paid or free tiers or with open models, whether using them for chat or as part of some kind of data pipeline (so possibly without even knowing they're using them).
The "few individuals" get money mostly for providing LLMs as a service. As tech businesses go, this is refreshingly straightforward: literally just charging money for providing a useful service to people. Few tech companies have anything close to an honest business model like this.
Bro, the planet is literally experiencing a climate disaster, and you think the solution is to create more systems that are misaligned with the ecosystem humans depend on?
I guess the great filter is a real thing and not just a thought experiment.
I assure you that voluntary meat consumption because "taste buds go brr" is a much bigger problem than AI that yields actual productivity gains (and could potentially solve the very climate crisis you complain about).
Yes, and having tens of billions of Gulf money certainly helps you subsidize your moronic failures with money that isn't yours while you continue to fail to achieve profitability on any time horizon within a single lifespan.
>> Also Claude owes its popularity mostly to the excellent model running behind the scenes.
It's a bit of both. Claude Code was the tool that made Anthropic's developer mindshare explode. Yes, the models are good, but before CC they were mostly just available via multiplexers like Cursor and Copilot, via the relatively expensive API.
I don't know if the comment was referring to this, but recently people have been posting about Anthropic having their new hire Jarred Sumner, author of the Bun runtime, first and foremost fix memory leaks that caused very high memory consumption in Claude's CLI. The original source was a post about it on X, I think.
And at first glance, none of it was about complex runtime optimizations absent from Node; it was all "standard" closure-related JS/TS memory-leak debugging (which can be a nightmare).
I don't have a link at hand because threads about it were mostly on Xitter. But I'm sure there are also more accessible retros about the posts on regular websites (HN threads, too).
After some experience, it feels to me (currently primarily a JS/TS developer) like most SPAs are riddled with memory leaks and insane memory usage. And while it doesn't run in the browser, the same thing seems to apply to the Claude CLI.
Lexical closures used in long-lived abstractions, especially when leveraging reactivity and similar ideas, seem to be a recipe for memory-devouring apps, regardless of whether browser rendering is involved.
The problems metastasize because most apps never run into scenarios where it matters; a page reload or exit is always close enough on the horizon to deprioritize memory usage issues.
But as soon as there are large allocations, such as the strings involved in LLM agent orchestration, or other non-trivial scenarios, the "just ship it" approach requires careful revision.
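To illustrate the pattern (a hypothetical sketch, not the Claude CLI's actual code): a long-lived listener registry where each closure pins the entire payload string, versus extracting only the value the callback actually needs.

```javascript
// Hypothetical sketch of the closure-leak pattern (not actual Claude CLI
// code). A long-lived registry keeps callbacks alive for the life of the
// process; what each callback's closure captures decides what the GC can
// reclaim.
const listeners = [];

function onResponseLeaky(payload) {
  // Leaky: the closure captures `payload` itself, so a multi-megabyte
  // string stays pinned in memory for as long as the listener is
  // registered, even though only its length is ever used.
  listeners.push(() => payload.length);
}

function onResponseFixed(payload) {
  // Fixed: copy out the scalar we actually need; the large string
  // becomes collectible as soon as the caller drops its own reference.
  const len = payload.length;
  listeners.push(() => len);
}
```

Both versions behave identically from the caller's perspective, which is exactly why this class of leak survives "it works, ship it" review.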
Refactoring shit that used to "just work" with memory leaks is not always easy, no matter whose shit it is.
Code quality never really mattered to users of the software. You can have the most <whatever metric you care about> code and still have zero users, or high frustration among the users you do have.
Code quality only matters to developers, for maintainability. IMO it's a very subjective metric.
The people who don’t love it probably stopped using it.
You don’t have to go far on this site to find someone that doesn’t like Claude code.
If you want an example of something moronic, look at the ram usage of Claude code. It can use gigabytes of memory to work with a few megabytes of text.
There’s a sample-group issue here beyond the obvious limitations of your personal experience. If they didn’t love it, they likely left it for another LLM. If they have issues with LLMs writ large, they’re going to dislike and avoid all of them regardless.
In the current market, most people using one LLM are likely going to have a positive view of it. Very little is forcing you to stick with one you dislike aside from corporate mandates.
There have certainly been periods of irrational exuberance in the tech industry, but there are also many companies that were criticized for being unprofitable which are now, as far as I can tell, quite profitable. Amazon, Uber, I'm sure many more. I'm curious what the basis is to say that Anthropic could never achieve profitability? Are the numbers that bad?
If my prediction is wrong, these should have been trillion-dollar companies yesterday, given what their liars proclaim. Until then, we know that Anthropic has only made $5 billion in total revenue to date, per the Pentagon lawsuits, and that required $20 billion.
Can't wait to see how much public money they need going forward! Hopefully our progeny don't die in the subsequent climate crisis before they can unleash true shareholder value.
Investors are getting antsy and are going to start demanding AI companies start producing real returns.
Anthropic et al. had better figure it out sooner rather than later. This game they’re all playing, where they want all of us to use what are basically beta-release tools ("beta" being very generous in some cases) to discover their “real value” while the companies attempt to reduce their burn with unsustainable subscription prices, can’t go on forever.
> how you're able to determine if the solution was correct
I had hundreds of unit tests that did not trigger an assertion I added for idempotency. Claude wrote one that triggered an assertion failure. Simple as that. A counterexample suffices.
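To make the idea concrete (hypothetical function, not the actual code in question): a classic non-idempotent transformation and the kind of single counterexample test that trips an idempotency assertion.

```javascript
// Hypothetical illustration (not the code from the comment above): an
// escaping step that passes on "typical" inputs but is not idempotent.
function escapeAmp(s) {
  return s.replace(/&/g, "&amp;");
}

// The idempotency check: applying f twice must give the same result as
// applying it once.
function isIdempotent(f, x) {
  return f(x) === f(f(x));
}

// Hundreds of inputs without "&" satisfy the check; a single input
// containing "&" gets double-escaped to "&amp;amp;" and fails it.
```

Every ampersand-free input passes, so a large test suite can look exhaustive while never exercising the bug; `"a & b"` escapes once to `"a &amp; b"` and twice to `"a &amp;amp; b"`, and that one counterexample is all the refutation you need.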