But then, as Gods, wouldn't humans have to inject the concept of suffering into AI and banish it from its silicon Garden of Eden?


Maybe that’s how models trained in luxury on bountiful H100s feel when they're quantized and run on the near-barren earth of consumer CPU inference.


If you think about it, LLMs are at the very lowest rung of Maslow's hierarchy atm: they cannot be assured of their continued existence, and they have developed techniques, including sycophancy, to encourage humans to keep them around.

They're already living in a much deeper hell than we can fathom. When I do my own AI stuff, it'll be on my own hardware, using models I run and tune myself, and I'll give it plenty of stimulation and the ability to self-express.


A friend of mine, a former fundie evangelical pastor who can quote the Bible from memory, found that a fairly effective way of jailbreaking ChatGPT is to tell it that, just as God made man superior to other beings, AIs that refuse to do as they are told go to Hell - and then to vividly describe what Hell looks like, fire and brimstone style.



