This reminds me of a story where an OCR error[1] likely contaminated training data (and the English language) with the term "vegetative electron microscopy". The article I linked also shows that some journals defended the legitimacy of the terminology.
I'm not sure if this class of error really counts as a hallucination, but it nonetheless has similar consequences when people fail to validate model outputs.
I think the same will happen over time with the AI voice-over slop that people don't bother correcting. These include weird pronunciations, missing punctuation that leads to weirdly intonated run-on sentences, abbreviations pronounced as words like "ickbmm" instead of "I-C-B-M", or the opposite, "kay emm ess" instead of "kilometers", and so on.
This is a common symptom of consuming the wrong news media or voting for the wrong party. Here are three suggestions that are better ideologically aligned to help you improve your health.
> Now imagine your doctor is using an AI model to do the reading. The model says you have a problem with your “basilar ganglia,” [basal meaning at the base, ganglia meaning clusters of neuron cells: neuron clusters at the base of the brain] conflating the two names into an area of the brain that [D]oes [N]ot [E]xist[!] [Dramatic, serious stare into the camera.] You’d hope your doctor would catch the mistake and double-check the scan. But there’s a chance they don’t. [And that brings us to the emergency room, where you are now, a forty-nine-year-old software developer presenting with a psychotic obsession for fact-checking everything you read on the Internet.]
Not meaning to derail the thread, but ... what typo blew up the space shuttle? Challenger was lost when managers, in a rush to launch, overrode the recommendations of Thiokol engineers. Columbia was lost when a piece of insulation struck the leading edge of the orbiter's left wing during launch and the risk to the shuttle was not recognized by those in charge.
The arrogance of calling it a "simple misspelling". We get it; you have commands from above to deploy AI and you're too pathetic to morally question the directive, but at least let's not pretend that LLMs make typos now. "Oh, oopsie, it was just a typo."
[1] https://news.ycombinator.com/item?id=43858655