
https://chatgpt.com/share/68e366b2-0fdc-800f-9bf3-86974703b6...

GPT-5 Instant (no thinking) spirals wildly. Poor bot



Tagging on for something irrelevant but very silly:

https://chatgpt.com/share/fc175496-2d6e-4221-a3d8-1d82fa8496...

4o spirals incredibly when asked to write a Prolog quine. For an added bonus, ask it to "read it aloud" via the "..." menu - it will read the text, and then descend into absolute word salad when trying to read the code. Fascinating stuff.
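
For reference, a working Prolog quine can be a one-liner along these lines (a sketch; it assumes format/2 accepts an atom as the format string and that ~q prints the atom quoted, which holds in SWI-Prolog but may differ on other systems):

    :- X = ':- X = ~q, format(X, [X]), nl.', format(X, [X]), nl.

The directive binds X to its own body as an atom, and ~q reprints that atom with its quotes, so consulting the file prints the file back.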


Very neat! A lot of small LLMs have a similar failure mode where they get stuck repeating a single token, or loop over the same 2-3 tokens, until they hit the max message size cutoff. Very ironic that it's about a quine.


You mean an e-quine?


GPT-5 can't handle 2 things: an esoteric quine or an aquatic equine


You get the "more clever than GPT5" award today!


Mine spammed checkmark emojis at the end and gave up: https://chatgpt.com/share/68e36a84-0eb4-8010-af81-cf601f1dcf...


I think the funnier part is how it keeps pretending it's doing that on purpose, saying things like "just kidding", "Alright, for real this time", "okay… Enough stalling"


It reminds me of Janet malfunctioning in the TV show The Good Place.


IIRC this is what drove Bing Sydney insane - it had a filter on top that added emojis, and its output was fed back to it, which meant it was constantly out of distribution.


I got the same: pages of checkmark emojis at the end of a frantic search. Poor chat


https://chatgpt.com/share/68e3674f-c220-800f-888c-81760e161d...

With thinking enabled it spirals internally, runs a Google search, and then works it out.


I love how it says "stop" multiple times after outputting the dragon emoji, as if it's actually getting annoyed and angry at its own lm_head that keeps printing the wrong thing


That's unreal, I have never seen GPT-5 confused this hard


This is hilarious



