
I just hit that error a few minutes ago. I build my llama.cpp from source because I use CUDA on Linux. So I made the mistake of trying to run Gemma4 on an older version I had, and I got the same error. It's possible brew installs an older version which doesn't support Gemma4 yet.
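For reference, building from source with CUDA enabled is roughly this (a sketch assuming the CUDA toolkit is already installed; the GGML_CUDA flag is from llama.cpp's build docs):

    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_CUDA=ON       # enable the CUDA backend
    cmake --build build --config Release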


Ah it was indeed just that!

I'm now on:

    $ llama --version
    version: 8770 (82764d8)
    built with GNU 15.2.0 for Linux x86_64

(From Nix unstable)

And this works as advertised: nice chat interface, but no OpenAI API I guess, so no opencode...


Check on the same port, there is an OpenAI-compatible API: https://github.com/ggml-org/llama.cpp/tree/master/tools/serv...
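Something like this should work against llama-server (assuming the default port 8080; the prompt is just a placeholder):

    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Hello"}]}'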


Good stuff, thanx!


And that's exactly why llama.cpp is not usable by casual users. They follow the "move fast and break things" model. With Ollama, you just have to make sure you're getting/building the latest version.


It's not possible to run the latest model architectures without "moving fast". The only thing broken here is that they were trying to use an old version with a new model.


And Ollama suffers the same fate when you want to try new models.


What fate?


The impedance mismatch between when models are released and when Ollama and other servers gain the capability to run them.


I'm a bit unsure what that has to do with someone running an outdated version of the program while trying to use a model that is supported in the latest release.



