Hacker News | wg0's comments

I don't understand the threat from a CLI that is useless without AI models, of which Anthropic's could be just one?

Switching models is too easy and the models are turning into commodities. They want to own your dev environment, which they can ultimately charge more for than for access to their model alone.

They’re afraid their customers will switch model provider in the future, so instead they made them switch now.

They want to be the next JetBrains.

I think the focus on OpenCode is distorting the story. If any tool tried to use the CC API instead of the regular API they’d block it.

Claude Code as a product doesn't use their pay-per-call API, but they've never sold the Claude Code endpoint as a cheaper way to access their models without paying for the normal API.


This is where the EU needs to put its weight, at least in Europe: if you sell something but aren't willing to support it, you should have to open source it - client, server, and device software alike.

While I'm a shameless freeloader with mostly backend skills, Adam has my utmost respect for out-of-the-box innovation.

I did buy some of his books. Not the Tailwind UI though.

Adam, you gotta pay bills too. I understand that. And I respect that.

The day a product of mine starts making money, I'll come knocking on your door.

Thank you.


Well, TBH, for some this may be the wrong question to ask, but I have been wondering: where did those 250 billion tokens go? What tools/products/services came out of that?

EDIT: Typos


This has been my biggest question through this whole AI craze. If AI is making everyone X% more productive, where is the proof? Shouldn't we expect to see new startups now able to compete with large enterprises? Shouldn't we be seeing new amazing apps? New features being added? Shouldn't bugs be a thing of the past? Shouldn't uptime be a solved problem by now?

I look around and everything seems to be... the same? Apart from the availability of these AI tools, what has meaningfully changed since 2020?


AI coding improved a lot over 2025. In early 2025 LLMs still struggled with counting. Now they are capable of tool calling, so they can just use a calculator. Frankly, I'd say AI coding may as well have not existed before mid-2025. The output wasn't really that good. Sure, you could generate code, but you couldn't rely on a coding agent to make a 2-line edit to a 1000-line file.
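
To make the tool-calling point concrete: the model doesn't do the arithmetic itself, it emits a structured request that the host program executes and feeds back on the next turn. A minimal sketch of that dispatch step in Python; the JSON shape and the calculator tool are purely illustrative, not any specific vendor's API:

  import json

  # Tool registry: functions the host program exposes to the model.
  def calculator(expression: str) -> str:
      # Evaluate simple arithmetic; a real host would sandbox this properly.
      return str(eval(expression, {"__builtins__": {}}, {}))

  TOOLS = {"calculator": calculator}

  def handle_model_turn(model_output: str) -> str:
      # If the model emitted a tool call, run it and return the result to feed
      # back into the next model turn; otherwise pass the text through unchanged.
      try:
          msg = json.loads(model_output)
      except json.JSONDecodeError:
          return model_output  # plain-text answer, no tool call
      return TOOLS[msg["tool"]](**msg["arguments"])

  # Simulated model turn: instead of "counting", the model asks for the tool.
  fake_call = json.dumps({"tool": "calculator", "arguments": {"expression": "1234 * 5678"}})
  print(handle_model_turn(fake_call))  # -> 7006652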


I don't doubt that they have improved a lot this year, but the same claims were being made last year as well. And the year before that. I still haven't seen anything that proves to me that people are truly that much more productive. They certainly _feel_ more productive, though.

Hell, the GP spent more than $50,000 this year on API calls alone and the results are... what again? Where is the innovation? Where are the tools that wouldn't have been possible to build pre-ChatGPT?

I'm constantly reminded of the Feynman quote: "The first principle is that you must not fool yourself, and you are the easiest person to fool."


Does something similar exist for Postgres?


Antigravity is a flop. I mean, it uses Gemini under the hood.

But you cannot use it with an API key.

If you're on a Workspace account, you can't have a normal individual plan.

You have to get the team plan at $100/month, or nothing.

Google's product management tiering is beyond me.


OK, but Gmail, Google Maps, Google Docs, Google Search, etc. are ubiquitous. "Google" has even become a verb. Google might take a shotgun approach, but it certainly does create widely used products.


I will add that there's also Gemini in Chrome. With Chrome being the largest browser by market share, that's a powerful de facto default.


  > With Chrome being the largest browser by market share, that's a powerful de facto default.
where art thou anti-trust enforcement...


Every personal computer user except Chromebook users went out of their way to download Chrome. What exactly do you want “anti trust” to do?


Maybe not allow Google to bundle Gemini with Chrome?


So should we also not allow OpenAI to bundle the OpenAI models with the ChatGPT app?

Absolutely no one besides ChromeOS users is forced to use Chrome.


Anti-trust doesn't have to involve force; it's about monopolistic behavior.

Google has spent over a decade advertising Chrome on all their properties and has an unlimited budget and active desire to keep Chrome competitive. Mozilla famously needs Google’s sponsorship to stay solvent. Apple maintains Safari to have no holes in their ecosystem.

Stop being silly defending trillion dollar companies that are actively making the internet worse, it’s not productive or funny.


And poor little under-capitalized Microsoft and Apple couldn't compete?


That doesn't negate my original point.


AI usage verboten, or allowed?


Fake Brian Cox and Richard Feynman videos are in abundance.

Imagine 50 years down the road, when it's impossible to tell which things Richard Feynman really said in his lectures and which are made up.


I just blocked a fake Brian Cox one a couple of hours ago. You can waste a fair bit of time looking at these things before figuring out they're rubbish.

YouTube should give you more options than just "block" or "don't show me this". You should be able to click 'AI fake of a real person' so these don't get inflicted on others unless they like that stuff.


Unfortunately it's not possible to block a channel on YouTube, or I'd have blocked thousands. All you can do is tell it not to recommend a channel.


I'm working on a short story that explores this! Different organizations producing fake videos about a real person/event, to nudge you in their desired direction.


Why not use SQLite, then, as the database for package managers? A local copy could be replicated easily with delta fetches.
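
A rough sketch of what I mean, using Python's sqlite3 and a purely hypothetical packages schema with a revision watermark that the client sends to fetch only newer rows:

  import sqlite3

  # Hypothetical local mirror of a package index kept in a single SQLite file.
  conn = sqlite3.connect("package-index.db")
  conn.execute("""
      CREATE TABLE IF NOT EXISTS packages (
          name     TEXT NOT NULL,
          version  TEXT NOT NULL,
          sha256   TEXT NOT NULL,
          revision INTEGER NOT NULL,  -- monotonically increasing change counter
          PRIMARY KEY (name, version)
      )
  """)

  def last_revision():
      # Watermark the client sends so the server returns only newer entries.
      (rev,) = conn.execute("SELECT COALESCE(MAX(revision), 0) FROM packages").fetchone()
      return rev

  def apply_delta(rows):
      # Merge the rows the server returned since our last known revision.
      conn.executemany(
          "INSERT OR REPLACE INTO packages (name, version, sha256, revision) "
          "VALUES (?, ?, ?, ?)",
          rows,
      )
      conn.commit()

  # Example: the server sent two entries changed since our last sync.
  apply_delta([("left-pad", "1.3.0", "abc123", 42), ("lodash", "4.17.21", "def456", 43)])
  print(last_revision())  # -> 43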


Could you elaborate on the storage engine and processing pipeline, if it's not confidential?

