Thank you! Everybody here is acting like LLMs have some kind of ulterior motive or a mind of their own. They're just printing out whatever is statistically most likely. You're probably all engineers, or at least very interested in tech; how can you not understand that this is all LLMs are?
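For anyone who hasn't looked under the hood, here's a minimal sketch of what "statistically most likely" means in practice (toy numbers, not any real model):

```python
import numpy as np

# Toy illustration: given scores for candidate next tokens,
# the model turns them into probabilities and samples one.
vocab = ["cat", "dog", "the", "blackmail"]
logits = np.array([2.0, 1.5, 0.5, -3.0])  # made-up scores standing in for model output

# Softmax: convert scores into a probability distribution
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Sample the next token in proportion to its probability
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

That's the whole loop, repeated one token at a time. There's no goal in there, just a distribution being sampled.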
Well I'm sure the company in legal turmoil over an AI blackmailing one of its employees will be relieved to know the AI didn't have any ulterior motive or mind of its own when it took those actions.
If the idiots in said company thought it was a smart idea to connect their actual systems to a non-deterministic word generator, that's on them for being morons, and they deserve whatever legal ramifications come their way.
"Printing out what is statistically more likely" won't allow you to solve original math problems... unless of course, that's all we do as humans. Is it?
I mean, I build and use them professionally; I understand intimately how they work. What people usually don't understand is how they actually behave and what levels of abstraction they compress out of their training data.
The only thing that matters is how they behave in practice. Everything else is a philosophical tar pit.