Whenever I see one of these posts, I click just to see if the proposed solution to testing the output of an LLM is to use the output of an LLM... and in almost all cases it is. It doesn't matter how many buzzwords and acronyms you use to describe what you're doing, at the end of the day it's turtles all the way down.
The issue is not the technology. When it comes to natural language (LLM responses that are sentences, prose, etc.) there is no actual standard by which you can even judge the output. There is no gold standard for natural language. Otherwise language would be boring. There is also no simple method for determining truth... philosophers have been discussing this for thousands of years and after all that effort we now know that... ¯\_(ツ)_/¯... and also, Earth is Flat and Birds Are Not Real.
Take, for example, the first sentence of my comment: "Whenever I see one of these posts, I click just to see if the proposed solution to testing the output of an LLM is to use the output of an LLM... and in almost all cases it is." This is absolutely true, in my own head, as my selective memory is choosing to remember that one time I clicked on a similar post on HN. But beyond the simple question of whether it is true or not, even an army of human fact checkers and literature majors could probably not come up with a definitive and logical analysis regarding the quality and veracity of my prose. Is it even a grammatically correct sentence structure... with the run-on ellipsis and what not... ??? Is it meant to be funny? Or snarky? Who knows ¯\_(ツ)_/¯ WTF is that random pile of punctuation marks in the middle of that sentence... does the LLM even have a token for that?
The output of an LLM is often qualitative, not quantitative, and to test that, you need something that can judge the quality.
You're not debating philosophy with the LLM, you're just asking it whether the answer semantically matches the expected one.
I usually test LLM output quality with the following prompt (simplified):
"An AI assistant was tasked with {task}. The relevant information for their task was {context}. Their answer is {answer}. The correct answer should be something like {ground truth}. Is their answer correct?"
Then you can spice it up with chain of thought, asking it to judge against preferred criteria/dimensions and output a score, etc... you can go as wild as you'd like. But even this simple approach tends to work really well.
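A minimal sketch of what that judge call can look like in code, assuming the OpenAI Python client as the backend; the model name, the judge() helper, and the PASS/FAIL convention are my own illustrative choices, not a standard:

    # Minimal LLM-as-judge sketch. Any chat-completion API works the same way;
    # the model name and prompt wording are illustrative choices.
    from openai import OpenAI

    client = OpenAI()

    JUDGE_PROMPT = (
        "An AI assistant was tasked with {task}. "
        "The relevant information for their task was {context}. "
        "Their answer is {answer}. "
        "The correct answer should be something like {ground_truth}. "
        "Is their answer correct? Reply with PASS or FAIL and a one-sentence reason."
    )

    def judge(task: str, context: str, answer: str, ground_truth: str) -> bool:
        """Ask a judge model whether the answer semantically matches the ground truth."""
        prompt = JUDGE_PROMPT.format(
            task=task, context=context, answer=answer, ground_truth=ground_truth
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # judge model; use whichever one you trust most
            temperature=0,         # keep the verdict as stable as possible
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip().upper().startswith("PASS")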
> turtles all the way down.
Saying "LLM testing LLM" is bad is like saying "computer testing computer" is bad. Yet automated tests have value. And just as unit tests will not prove your program is bug-free, LLM evals won't guarantee 100% correctness. But they're an incredibly useful tool.
In my experience working on pretty complex multi-agent multi-step systems, trying to get those to work without an eval framework in place is like playing whack-a-mole, only way less fun.
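To make the unit-test analogy concrete, here's a hedged sketch of how such evals tend to get wired into a regression suite (pytest-style); the cases, the judge() helper from the snippet above, the run_agent() placeholder, and the 90% pass-rate bar are all assumptions for illustration:

    # Eval suite in the spirit of unit tests: a regression tripwire, not a proof.
    # judge() is the helper sketched above; run_agent() stands in for the system
    # under test; the cases and the 90% threshold are illustrative.
    EVAL_CASES = [
        {
            "task": "summarize the support ticket",
            "context": "Customer reports login fails after a password reset.",
            "ground_truth": "Login breaks after a password reset.",
        },
        # ... more cases, ideally labeled from real traffic
    ]

    def run_agent(task: str, context: str) -> str:
        """Placeholder for the multi-agent system under test."""
        raise NotImplementedError

    def test_eval_suite_pass_rate():
        results = [
            judge(c["task"], c["context"], run_agent(c["task"], c["context"]), c["ground_truth"])
            for c in EVAL_CASES
        ]
        pass_rate = sum(results) / len(results)
        assert pass_rate >= 0.9  # won't guarantee correctness, but catches regressions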
Too late to edit, but here's a great, really in-depth post about using LLMs as judges to evaluate LLM outputs (when you don't have the ground truth for everything): https://cameronrwolfe.substack.com/p/finetuned-judge This is about finetuning LLMs to do it, but the first part is a good intro to why and how.
> "An AI assistant was tasked with {task}. The relevant information for their task was {context}. Their answer is {answer}. The correct answer should be something like {ground truth}. Is their answer correct?"
If you have a ground truth, what was the purpose of asking the AI assistant for an answer in the first place?
See von Neumann's work on reliable machines from unreliable processes.
I wouldn't doubt that if each layer of an LLM added some additional check on an unreliable process, you could eventually make something reliable out of the unreliable.
I don't see how von Neumann's work helps eliminate the problem here, and it's arguably not that different from "just use more LLMs". His key result was to come up with a sufficient number of redundant computations to get the error below a threshold, which is still unreliable. This problem is worse because the fundamental issue is even trying to quantify what "correct" means.
Your suggestion of evaluating accuracy at the layer level necessarily implies there's some method of quantifiably detecting hallucinations. That's not necessarily possible given a particular attention architecture, or even mathematically possible for a task of the form "infer this from finite text with no ability to independently verify it".
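To put a number on the "still unreliable" point: here's a back-of-the-envelope sketch of von Neumann-style redundancy with majority voting, where the 10% per-check error rate and the independence assumption are made up purely for illustration (and real LLM judges are nowhere near independent):

    # Back-of-the-envelope: n independent checkers, each wrong with probability p,
    # majority vote decides. Error shrinks quickly with n but never reaches zero.
    from math import comb

    def majority_error(n: int, p: float) -> float:
        """P(the majority of n independent checkers is wrong), n odd."""
        k_needed = n // 2 + 1  # wrong votes needed for the majority to be wrong
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

    for n in (1, 3, 5, 11, 101):
        print(n, round(majority_error(n, 0.10), 6))
    # 1 -> 0.1, 3 -> 0.028, ... the error drops below any threshold you pick,
    # but it is never zero -- and none of this tells you what "correct" means.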
> Who knows ¯\_(ツ)_/¯ WTF is that random pile of punctuation marks in the middle of that sentence... does the LLM even have a token for that?
$ ollama run llama3.2
>>> What does ¯\_(ツ)_/¯ mean?
A popular internet meme!
¯\_(ツ)_/¯ is a Unicode character that represents a face with a
surprised or confused expression. The three parts of the character are
often interpreted as:
1. `¯` - a horizontal bar, indicating skepticism or "not buying it"
2. `(ツ)` - a smiling face with a tongue-out gesture, showing amusement
or frustration
3. `/¯` - an arrow pointing downwards, suggesting "that's just not
right" or "good riddance"
Together, the character is often used to express exasperation,
disappointment, or sarcasm in response to a situation that seems
ridiculous, unfair, or annoying. It's commonly used in text messages,
social media posts, and online forums.
Example: "You think you can eat an entire pizza by yourself?¯\_(ツ)_/¯"
>>> /clear
Cleared session context
>>> ¯\_(ツ)_/¯
¯\_(ツ)_/¯
>>> ¯\_(ツ)_/¯
I see you're trying to express a mix of sass and Japanese flair! Keep
'em coming, I'll keep responding in kind!
>>> ¯\_(ツ)_/¯
The eternal "shrug emoji" of the internet. You win, you've conveyed an
entire emotional spectrum with just three characters. Well played,
friend!
>>> ¯\_(ツ)_/¯
I think we've reached a new level of emoji sophistication. You're not
even trying to convey anything anymore; it's just a masterclass in
subtlety and understatement. Bravo!
Sometimes it’s nonsense, sometimes it’s approximately right (“shrug emoji”), sometimes it barks up the wrong tree (“Japanese flair”), sometimes it’s just vapid drivel.
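As for whether the LLM "even has a token for that": a quick way to check is to run the string through a tokenizer. This sketch uses OpenAI's cl100k_base vocabulary via tiktoken as a stand-in, since llama3.2 uses a different tokenizer and the exact split will differ:

    # How does a BPE vocabulary split the shrug? cl100k_base via tiktoken is a
    # stand-in here; llama3.2's own tokenizer will split it differently.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    shrug = "¯\\_(ツ)_/¯"

    token_ids = enc.encode(shrug)
    print(len(token_ids), "tokens:", token_ids)                    # likely several ids, not one
    print([enc.decode_single_token_bytes(t) for t in token_ids])   # the raw byte pieces

If it comes out as a handful of byte-level pieces rather than one dedicated symbol, that goes some way toward explaining the transcript above.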