Hacker News | shimman's comments

Feel free to use local services then; not every company has to support the entire world. Some are fine with a small slice. Expecting otherwise isn't sustainable for the sub-trillion-dollar non-monopolist companies, at least not without massive public support from the government.

If you're in the USA, contact your state AG and Senator and present your case. Mention that Google is abusing small owners through its ineptitude in security practices, and construct the argument so that Google appears to be squeezing small users like a mob boss or cartel.

Also, before doing this, save anything important that Google holds (Gmail, YouTube videos, anything in storage). The leaders at Google are vengeful enough to completely lock you out for challenging them.


Is it really hard to figure out that the owner of a company, who personally stands to make hundreds of billions, would be doing marketing when talking about said company? Do they not teach critical thinking in schools anymore? Did it go away with phonics too? Why would you ever ignore the MASSIVE conflict of interest here? It's just really foolish, but it's endemic, not just in tech journalism but in journalism in general, where people just take the words of others and don't apply any critical analysis to them.

It's all access journalism now, waste of time.


Those people tend to suffer from AI psychosis, and I don't think you'd want to admit publicly that you don't interact with any humans and prefer the company of machines (let's also ignore that such people wouldn't be in public to begin with).

I can't imagine such people are living meaningful lives in any capacity. They're up there with consumers who think the only purpose in life is to cheerlead for a corporation and buy its wares.


The GP said more with LLMs than with people, not no interactions with people at all, and not preferring machines to people. I don't think it's that hard to spend more time talking with LLMs than with people if you work in tech, and I don't think that takes away from the meaningfulness of one's life.

Yes, this is called workplace alienation, and it has been discussed since the 1800s. Maybe tech workers will realize that their employers are literal enemies of humanity rather than their friends.

Employers want to mechanize humans, and they'll force it even if it makes everyone miserable for their entire, short lives.

Amazon is a good example of this.


Humans aren't benefiting from LLMs; only a few individuals are. Let's stop with the fake platitudes and realize that unless this technology is completely open sourced from top to bottom, it's a complete farce to think humans are going to benefit rather than just the rich getting richer.

> Humans aren't benefiting from LLMs, only a few individuals are.

Honest question: how is this different from traditional Open Source? Linux powers most of the internet, yet the biggest beneficiaries are cloud providers, not individual users. Good open weights models already exist and people can run them locally. The gap between "open" and "everyone benefits equally" has always been there...


Because the opposite is true for open source? It is actually free, whether you contribute to it or not. Anyone can legally use it for free. Torvalds can't just wake up one day and decide to charge more.

If you feel like Linux is too much of a monopoly, you can actually fork it and compete.


The same is true of science as well. Taxpayer money is spent on research, but the outcomes of that research primarily benefit corporate interests.

I'm the last person to cheer for unrestrained capitalism, but this anti-billionaire / anti-AI narrative is getting ridiculous even by general-population standards, much less for HN. It's like people think their food or medicine or LLMs grow on fucking trees. No. Companies and corporations are how adults do things for other adults, at scale. Everyone understands that, except for a part of the software industry that, by an accidental confluence of factors, works by different rules than literally the rest of the world.


You must not be serious. Every single person using LLMs, whether on paid or free tiers or with open models, and whether using them for chat or as part of some kind of data pipeline (so possibly without even knowing they're using them), benefits.

"Few individuals" get money mostly for providing LLMs as a service. As far as tech businesses go, this is refreshingly straightforward: literally just charging money for providing a useful service to people. Few tech companies have anything close to an honest business model like this.


Gemma4 is apache2 licensed.

I am unsure about the openness of the training data itself. That too should be required for an LLM to be considered 'open'.

Open source is the only way forward, I agree.


Yes, it's nice to have a trillion+ dollar monopoly able to subsidize loss leaders to put your competitors out of business.

Bro the planet is literally experiencing a climate disaster and you think the solution is to create more systems that are misaligned with the planet's ecosystem for humans?

I guess the great filter is a real thing and not just a thought experiment.


I assure you that voluntary meat consumption because "taste buds go brr" is a much bigger problem than AI that delivers actual productivity gains (and could potentially help solve the very climate crisis you complain about).

Completely agree. Meat should be priced to include externalities. People can get used to beans. Beans are great!

Yeah, git reset --hard is something I do like once a week! lol

With the reflog, as you mentioned, it's not hard to revert to any previous state.
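For anyone following along, a minimal sketch in a throwaway repo (hypothetical file and commit names, not anyone's actual project) of how the reflog undoes an accidental `git reset --hard`:

```shell
# Build a disposable repo with two commits.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you
echo one > file.txt && git add file.txt && git commit -qm "first"
echo two > file.txt && git commit -qam "second"

git reset --hard -q HEAD~1      # oops: "second" is gone from the branch
grep -q one file.txt            # working tree is back at "first"

git reflog | head -n 2          # entry 0 is the reset, entry 1 is pre-reset HEAD
git reset --hard -q 'HEAD@{1}'  # jump back to where HEAD was before the reset
grep -q two file.txt            # "second" is restored
```

`HEAD@{1}` reads "where HEAD was one reflog entry ago", which is exactly the state before the destructive reset.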


Yes, that plus having tens of billions of Gulf money certainly helps you subsidize your moronic failures with money that isn't yours, while you continue to fail to achieve profitability on any time horizon within a single lifespan.

Also Claude owes its popularity mostly to the excellent model running behind the scenes.

The tooling can be hacky and of questionable quality yet, with such a model, things can still work out pretty well.

The moat is their training and fine-tuning for common programming languages.


>> Also Claude owes its popularity mostly to the excellent model running behind the scenes.

It's a bit of both. Claude Code was the tool that made Anthropic's developer mindshare explode. Yes, the models are good, but before CC they were mostly just available via multiplexers like Cursor and Copilot, via the relatively expensive API.


Huh what moronic failure did Anthropic do? Every Claude Code user I know loves it.

I don't know if the comment was referring to this, but recently people have been posting about Anthropic requiring their new hire Jarred Sumner, author of the Bun runtime, to first and foremost fix memory leaks that caused very high memory consumption in Claude's CLI. The original source was posts about the matter on X, I think.

And at first glance, none of it was about complex runtime optimizations missing from Node; it was all "standard" closure-related JS/TS memory-leak debugging (which can be a nightmare).

I don't have a link at hand because threads about it were mostly on Xitter. But I'm sure there are also more accessible retrospectives on regular websites (HN threads, too).


Ah, I believe Codex has similar issues. Terrible code quality, but it goes to show it doesn't really matter in the end.

Yes that was pretty much my own takeaway, too.

After some experience, it feels to me (currently primarily a JS/TS developer) like most SPAs are riddled with memory leaks and insane memory usage. And while it doesn't run in the browser, the same thing seems to apply to the Claude CLI.

Lexical closures used in long-lived abstractions, especially when leveraging reactivity and similar ideas, seem to be a recipe for memory-devouring apps, whether browser rendering is involved or not.

The problems metastasize because most apps never run into scenarios where it matters; a page reload or exit is always close enough on the horizon to deprioritize memory usage issues.

But as soon as there are large allocations, such as the strings involved in LLM agent orchestration, or in other non-trivial scenarios, the "just ship it" approach requires careful revision.

Refactoring shit that used to "just work" with memory leaks is not always easy, no matter whose shit it is.


> it doesn't really matter in the end

if you have one of the top models in a disruptive new product category where everyone else is also sprinting, sure...


Code quality never really mattered to users of the software. You can have the most <whatever metric you care about> code and still have zero users, or high frustration among the users you do have.

Code quality only matters for maintainability, to developers. IMO it's a very subjective metric.


It's not subjective at all. It's not art.

Code quality = fewer bugs long term.

Code quality = faster iteration and easier maintenance.

If things are bad enough it becomes borderline impossible to add features.

Users absolutely care about these things.


Okay, but I meant how you measure is subjective.

How do you measure code quality?

> Users absolutely care about these things.

No, users care about you adding new features, not about your ability to add new features or how much it costs you to add them.



Recently there was a bug where CC would consume day/week/month quota in just a few hours, or hundreds of dollars in API costs in a few prompts.

The people who don’t love it probably stopped using it.

You don’t have to go far on this site to find someone that doesn’t like Claude code.

If you want an example of something moronic, look at the RAM usage of Claude Code. It can use gigabytes of memory to work with a few megabytes of text.


I've used and hate it, it's garbage.

There's a sample-group issue here beyond the obvious limitations of your personal experience. If they didn't love it, they likely left it for another LLM. If they have issues with LLMs writ large, they're going to dislike and avoid all of them regardless.

In the current market, most people using one LLM are likely going to have a positive view of it. Very little is forcing you to stick with one you dislike aside from corporate mandates.


There is right now another HN thread where a lot of users hate Claude Code.

To be fair, their complaints are about very recent changes that break their workflow, while previously they were quite content with it.


There have certainly been periods of irrational exuberance in the tech industry, but there are also many companies that were criticized for being unprofitable which are now, as far as I can tell, quite profitable. Amazon, Uber, I'm sure many more. I'm curious what the basis is to say that Anthropic could never achieve profitability? Are the numbers that bad?

your prediction is going to be wrong, even with all those caveats

Maybe. Maybe not.

But if this party is sustainable, what’s the deal with the rate limits that everyone is going on about?


If my prediction is wrong, these should have been trillion-dollar companies yesterday, going by what their liars proclaim. Until then, we know thanks to the Pentagon lawsuits that Anthropic has only made $5 billion in total revenue to date, and that required $20 billion.

Can't wait to see how much public money they need going forward! Hopefully our progeny don't die in the subsequent climate crisis before they can unleash true shareholder value.


Investors are getting antsy and are going to start demanding AI companies start producing real returns.

Anthropic et al. had better figure it out sooner rather than later, because this game they're all playing, where they want all of us to use basically beta-release tools (very generous ones, in some cases) to discover the "real value" of these tools while they attempt to reduce their burn with unsustainable subscription prices, can't go on forever.


This says more about you than about the "intellect" of these nondeterministic probability programs.

Can you provide actual context to what was beyond your ability and how you're able to determine if the solution was correct?

I'm finding that all these comments referencing a "magical incantation" tend to be full of hot air. Maybe yours is different.


> how you're able to determine if the solution was correct

I had hundreds of unit tests that did not trigger an assertion I added for idempotency. Claude wrote one that triggered an assertion failure. Simple as that. A counterexample suffices.

