I'm curious what kind of prompting or context you are providing before asking for a Liquidsoap script, or whether you've tried using Cursor and including Liquidsoap documentation as part of a larger context. My guess is that these kinds of things get the models to perform much better; I have seen this work with internal APIs / best practices / patterns.
Yes, I used Cursor and tried providing both the whole Liquidsoap book and the URL to the online reference, in case the book was too large for the context window or was triggering some sort of RAG.
Not successful.
It's not that it didn't do what I wanted: most of the time the script didn't even run. Iterating on the error messages just led to progressively dumber non-solutions and running in circles.
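For reference, the kind of script I was asking for is nothing exotic, roughly the quickstart-level playlist-to-Icecast example from the Liquidsoap docs (the host/port/password/mount values here are just placeholders):

    # Read a playlist, fall back to silence if the source fails,
    # and stream it as MP3 to a local Icecast server
    radio = playlist("~/music")
    radio = mksafe(radio)
    output.icecast(%mp3,
      host = "localhost", port = 8000,
      password = "hackme", mount = "radio",
      radio)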
I'm on the Pro two-week trial, so I tried a mix of mainstream premium models (including reasoning ones), plus letting Cursor route me to the "best" model or whatever they call it.