Hacker News

I've really got to refactor my side project, which I tailored to use only OpenAI API calls. The Anthropic APIs are a bit different, so I never put in the energy to support them. I think I remember reading that there are tools to simplify this kind of work and support multiple LLM APIs? I'm sure I could do it manually, but how do you all support multiple API providers whose APIs differ in design?



I built LLMRing (https://llmring.ai) for exactly this. Unified interface across OpenAI, Anthropic, Google, and Ollama - same code works with all providers.

The key feature: use aliases instead of hardcoding model IDs. Your code references "summarizer", and a version-controlled lockfile maps it to the actual model. Switch providers by changing the lockfile, not your code.
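The alias idea can be sketched generically (this is a hypothetical illustration of the pattern, not LLMRing's actual API or lockfile format): code asks for a role like "summarizer", and a checked-in lockfile resolves it to a concrete provider/model pair, so switching models is a one-line lockfile edit.

```python
import json

# Hypothetical lockfile contents; in practice this would be read from a
# version-controlled file, and the model names here are just examples.
LOCKFILE = json.loads("""
{
  "summarizer": {"provider": "anthropic", "model": "claude-3-5-haiku"},
  "extractor":  {"provider": "openai",    "model": "gpt-4o-mini"}
}
""")

def resolve(alias: str) -> tuple[str, str]:
    """Map a task alias to the (provider, model) pair pinned in the lockfile."""
    entry = LOCKFILE[alias]
    return entry["provider"], entry["model"]

provider, model = resolve("summarizer")
```

Application code only ever mentions "summarizer"; repointing it at a different provider never touches the call sites.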

Also handles streaming, tool calling, and structured output consistently across providers. Plus a human-curated registry (https://llmring.github.io/registry/) that I keep updated with current model capabilities and pricing - helpful when choosing models.

MIT licensed, works standalone. I am using it in several projects, but it's probably not ready to be presented in polite society yet.


OpenRouter, Glama (https://glama.ai/gateway/models/claude-sonnet-4-5-20250929), and AWS Bedrock all give you access to many AI models through an OpenAI-compatible API.
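Because these gateways speak the OpenAI wire format, the same request body works against any of them; only the base URL and the model name change. A minimal sketch (the OpenRouter URL and model id are assumptions based on its docs; request sending is left out):

```python
# Base URLs for OpenAI-compatible endpoints. The OpenRouter URL is taken
# from its public docs; verify before relying on it.
GATEWAYS = {
    "openai":     "https://api.openai.com/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}

def chat_request(gateway: str, model: str, prompt: str) -> dict:
    """Build a provider-agnostic chat-completions request (not sent here)."""
    return {
        "url": f"{GATEWAYS[gateway]}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Example model id is illustrative; gateways each have their own catalog.
req = chat_request("openrouter", "anthropic/claude-sonnet-4.5", "Hello")
```

Switching providers then means changing one dict entry and a model string, not rewriting client code.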



LiteLLM is your friend.


or Vercel's AI SDK


Why don't you ask an LLM to do it for you?


I use LiteLLM as a proxy.


> I think I remember reading that there are tools to simplify this kind of work, to support multiple LLM APIs

Just ask Claude to generate a tool that does this, duh! And tell Claude to make the changes to your side project, and then to have sex with your wife too, since it's doing all the fun parts.



