How much software is safety-critical in general, let alone software that uses deep learning? Very, very little. I'd actually be amazed if you could name a single case where someone has deployed a language model in a safety-critical system. That's why your examples are all what-ifs.
There are no actual safety issues with LLMs, nor will there be any in the foreseeable future, because nobody is using them in any context where such issues could arise. That's why you're forced to rely on absurd hypotheticals, like doctors blindly relying on LLMs for diagnostics without checking or thinking about the outputs.
There are honesty/accuracy issues. There are not safety issues. Conflating "safety" with unrelated topics, like whether people feel offended or whether something counts as misinformation, is a quirk of a very specific subculture in the USA; it's not a widely recognized or accepted redefinition of the word.
> I'd actually be amazed if you could name a single case where someone has deployed a language model in a safety-critical system. That's why your examples are all what-ifs.
AI safety is not a near-term project; it's a long-term one. The what-ifs are exactly the class of problems that need solving. Like it or not, current and next-generation LLMs and similar systems will be used in safety-critical contexts, such as predictive policing, which is already a thing.
Edit: and China is already using these systems widely to monitor its citizens, identify them in surveillance footage, and more. I actually find the claim that nobody is using LLMs or other AI systems in at least some limited safety-critical contexts today pretty implausible.