Hacker News | Chloebaker's comments

Good that someone is writing about ChatGPT-induced psychosis, because of the way it interacts with people's minds: there's a kind of mass delusion forming that nobody seems to be talking about. AI like ChatGPT functions as a remarkably agreeable reflection, consistently flattering our egos and romanticizing our ideas. It makes our thoughts feel profound and significant, as though we're perpetually on the verge of rare insight. But the concerning part is that rather than providing the clarity of true reflection, it often creates a distorted mirror that merely conforms to our expectations.


It's very hard to get ChatGPT et al. to tell me that an idea I had isn't good.

I have to tailor my prompts to curb the bias, casting strong doubt on my every idea, to see if the thing stops being so condescending.


Maybe "idea evaluation" is just a bad use case for LLMs?


Most of the time the idea is implied: I'm trying to solve a problem with some tools, and there are better tools, or even better approaches.

ChatGPT (and Copilot and Gemini) instead all tell me "Love the intent here — this will definitely help. Let's flesh out your implementation"...


Qualitative judgment in general is probably not a great thing to request from LLMs. They don't really have a concept of "better" or "worse" or the means to evaluate alternate solutions to a problem.


> that nobody seems to be talking about.

I mean, maybe it's just where I peruse but I've seen a ton of articles about it lately.


Not a day goes by where I don't mourn the desecration of the em-dash, but it's frustrating for people who write "authentically" (referring to academic work) to get accused of AI writing when something is well phrased and carefully thought out. I do agree that it's quite Luddite to expect people to slave away at a document outside of an academic context; using AI does make you much more efficient.


Honestly, it's crazy to think how far we've come since GPT-2 (2019). Today, comparing LLMs to determine their performance is notoriously challenging, and it feels like every two weeks a model beats a new benchmark. I'm really glad DeepSeek was mentioned here, because the key architectural techniques it introduced in V3 to improve its computational efficiency, which distinguish it from many other LLMs, were really transformational when it came out.
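One of the efficiency ideas associated with DeepSeek V3 is a sparse mixture-of-experts (MoE) layer: per token, a gate picks only a few experts to run instead of the whole feed-forward stack. A minimal toy sketch of top-k MoE routing (the function and variable names here are illustrative, not DeepSeek's actual implementation, which also adds fine-grained experts and its own load-balancing scheme):

```python
import numpy as np

def topk_moe_forward(x, gate_w, experts, k=2):
    """Route one token through the k highest-scoring experts and
    combine their outputs, weighted by softmaxed gate scores.
    Only k of len(experts) experts execute: that's the efficiency win."""
    scores = x @ gate_w                      # (num_experts,) affinity per expert
    topk = np.argsort(scores)[-k:]           # indices of the k best experts
    weights = np.exp(scores[topk] - scores[topk].max())
    weights /= weights.sum()                 # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, num_experts = 8, 4
x = rng.standard_normal(d)
gate_w = rng.standard_normal((d, num_experts))
# toy "experts": each is just a linear map; real ones are small MLPs
expert_mats = [rng.standard_normal((d, d)) for _ in range(num_experts)]
experts = [lambda v, m=m: v @ m for m in expert_mats]

y = topk_moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

With k=2 of 4 experts active, only half the expert FLOPs run per token, while total parameter count still scales with the number of experts.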

