openrouter.ai does exactly that, and it lets you use models from OpenAI as well. I switch models often using openrouter.
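
For anyone who hasn't tried it: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so switching models is literally one string. A minimal sketch (the model IDs are examples and the key is a placeholder):

    # Sketch: OpenRouter is OpenAI-compatible, so the stock openai
    # client works; only base_url and the model string change.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
    )

    # Example model IDs; swap in whatever you want to compare.
    for model in ("openai/gpt-4o", "anthropic/claude-3.5-sonnet"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Hello!"}],
        )
        print(model, "->", resp.choices[0].message.content)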

But talk to almost any non-developer and you'll find that 1/ they mostly use only ChatGPT, and sometimes have never even heard of any other option, and 2/ in the rare case they did switch to something else, they don't want to go back; they're gone for good.

Each provider's moat is its number of daily users, and although it's a little annoying to admit, OpenAI's moat is the biggest of them all.



Non-developers using chatbots, and willing to pay for them, will never be as big a market as enterprises or Big Tech using AI in the background.

I would think that Gemini (the model) will add profit to Google, through leverage across its existing business, long before OpenAI ever becomes profitable.

Why would I pay for openrouter.ai and add another dependency? If I'm using Amazon Bedrock hosted models, I can use the AWS SDK, change the request format slightly based on the model family, and abstract that away in my own library.
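
For what it's worth, here's a rough sketch of that kind of abstraction using boto3 (the per-family payloads and model ID prefixes are illustrative; check the Bedrock docs for each family's exact schema):

    # Sketch of a per-family request builder over Bedrock's invoke_model.
    # Payload fields and model ID prefixes are illustrative, not exhaustive.
    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime")

    def build_body(model_id: str, prompt: str) -> dict:
        if model_id.startswith("anthropic."):
            # Anthropic models on Bedrock use the Messages API schema.
            return {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 256,
                "messages": [{"role": "user", "content": prompt}],
            }
        if model_id.startswith("meta."):
            # Llama models take a flat prompt payload instead.
            return {"prompt": prompt, "max_gen_len": 256}
        raise ValueError(f"unsupported model family: {model_id}")

    def invoke(model_id: str, prompt: str) -> dict:
        resp = bedrock.invoke_model(
            modelId=model_id,
            body=json.dumps(build_body(model_id, prompt)),
        )
        return json.loads(resp["body"].read())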


You don't need openrouter if you already have everything set up in your own AWS environment. But if you don't, openrouter is extremely straightforward: open an account and you're done.


All Google needs to do is bite the bullet on the cost, flip core search to AI, and immediately dominate the user count. They can start by focusing on the questions people already ask in Google search. Boom


Core search has been using “AI” since they basically deprioritized PageRank.

I think the combination of AI overviews and a separate “AI mode” tab is good enough.


How is the number of users a moat when you are losing money on every user?


Inference is cash positive: it's research that takes up all the money. So, if you can get ahold of enough users, the volume eventually works in your favour.



A moat involves switching costs for users. It's not related to profitability.


A moat defends your business; if you lose more money the more users you have, the number of users is not a moat.


The idea is a war of attrition: as your potential competitors run out of money and entry costs too much for newcomers, you raise your prices to profitability and/or enshittify your product.


Right, but unlike with social products (where the network of users is essential) or transportation/food delivery (where providers will follow the user volume), I just don't see any stickiness benefit for OpenAI. A user's conversation history is the only potentially valuable bit, but I think most users treat their ChatGPT history like their Google search history: disposable.


Do you use the thinking functionality of these models? Does every model have its own syntax for its API?


This is the documentation for using Amazon Bedrock hosted models from Python.

https://docs.aws.amazon.com/code-library/latest/ug/python_3_...

Every model family has its own request format.
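
And the response shapes differ just as much. A rough sketch of normalizing the output text across two families (the keys shown are the documented ones for these families, but treat them as something to verify against the docs):

    # Sketch: each family also returns a different response shape,
    # so the library needs per-family extraction too.
    def extract_text(model_id: str, response: dict) -> str:
        if model_id.startswith("anthropic."):
            # Messages API responses carry a list of content blocks.
            return "".join(
                block["text"]
                for block in response["content"]
                if block.get("type") == "text"
            )
        if model_id.startswith("meta."):
            # Llama responses return a single "generation" string.
            return response["generation"]
        raise ValueError(f"unsupported model family: {model_id}")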

When I said it was "trivial" to write a library, I should have been more honest: "It's trivial to point ChatGPT at the documentation and have it one-shot a Python library for the models you want to support."



