Hacker News

Meanwhile, tons of people on Reddit's /r/ChatGPT were complaining that the shift from ChatGPT 4o to ChatGPT 5 resulted in terse responses instead of responses waxing lyrical in praise of the user. It seems that many people had actually become emotionally dependent on the constant praise.


GPT-5 isn't much more terse for me, but they gave it a new, equally annoying writing style where it writes in all lowercase like an SF tech Twitter user on ketamine.

https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...



The folks over on /r/MyBoyfriendIsAI [0] seem to be in absolute shambles over the change.

[0] reddit.com/r/MyBoyfriendIsAI/


If those users were exposed to the full financial cost of their toy, they would find other toys.


And what is that cost, if you have it handy? Just as an example, my Radeon VII can perfectly well run smaller models, and it doesn't appear to use more power than about two incandescent light bulbs (120 W or so) while a query is running. I don't personally feel that the power consumed by approximately two light bulbs is excessive, even using the admittedly outdated incandescent standard, but perhaps the commercial models are worse?

Like I know a datacenter draws a lot more power, but it also serves many many more users concurrently, so economies of scale ought to factor in. I'd love to see some hard numbers on this.
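For what it's worth, the economies-of-scale point can be sketched as back-of-envelope arithmetic. Every number here is an assumption for illustration (30-second generation, $0.15/kWh electricity, a hypothetical 8x700 W GPU node amortized over 64 concurrent users), not a measurement:

```python
# Back-of-envelope: energy and electricity cost of one LLM query.
# All inputs are illustrative assumptions, not measured figures.

def query_cost(power_watts: float, seconds: float, price_per_kwh: float):
    """Return (energy in Wh, cost in dollars) for one query."""
    energy_wh = power_watts * seconds / 3600.0
    cost = (energy_wh / 1000.0) * price_per_kwh
    return energy_wh, cost

# Local GPU at ~120 W (the two-light-bulbs figure) for an assumed
# 30-second generation at an assumed $0.15/kWh:
local_wh, local_cost = query_cost(120, 30, 0.15)

# Hypothetical datacenter node: 8 GPUs at 700 W each, draw amortized
# across an assumed 64 concurrent users:
per_user_wh, per_user_cost = query_cost(8 * 700 / 64, 30, 0.15)

print(f"local: {local_wh:.2f} Wh, ${local_cost:.5f}")
print(f"datacenter, per user: {per_user_wh:.2f} Wh, ${per_user_cost:.5f}")
```

Under these made-up numbers a local query costs about 1 Wh (a few hundredths of a cent), and the amortized datacenter figure lands in the same ballpark, which is why per-query electricity alone probably isn't the dominant cost either way.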


IIRC you can actually get the same kind of hollow praise from much dumber, locally-runnable (~8B parameters) models.
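Trying this yourself is cheap if you already have a local model server. A minimal sketch, assuming an Ollama instance on its default port (localhost:11434) and an ~8B model pulled under the tag `llama3.1:8b` (both assumptions; substitute whatever you run locally):

```python
# Sketch: ask a small local model (via Ollama's /api/generate endpoint,
# assumed running at the default localhost:11434) for feedback, to see
# how much hollow praise comes back. Model tag is an assumption.
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3.1:8b") -> str:
    # Non-streaming request body for Ollama's generate API.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ask_local(prompt: str, model: str = "llama3.1:8b") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=build_payload(prompt, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Uncomment with a local server running:
# print(ask_local("Rate my business plan: selling ice to penguins."))
```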





