
It being everywhere worries me a lot. It outputs a lot of false information and the typical person doesn’t have the time or inclination to vet the output. Maybe this is a problem that will be solved. I’m not optimistic on that front.


The same can be said about the results that pop up on your favorite search engine, or about asking other people questions.

If anything, advances in AI & search tech will do a better job of providing citations that agree & disagree with the results given. But this can become a turtles-all-the-way-down problem.


There’s a real difference in scale and perceived authority: false search results already cause problems, but many people have also been learning not to blindly trust the first hit and to check things like which site is hosting it.

That’s not perfect, but I think it’s a lot better than what building these tools into Word will be. There’s almost no chance that people won’t trust suggestions there more than random web searches, and the quality of the writing will make people more inclined to think it’s authoritative.

Consider what happened earlier this year when Professor Tyler Cowen wrote an entire blog post around a fake citation. He certainly knows better, but it’s so convenient to use the LLM emission rather than do more research…

https://www.thenation.com/article/culture/internet-archive-p...


No, it won't, and random search popup results are already a massive societal problem (and they're not even used the way people are attempting to use AI: to make decisions over other people's lives in insurance, banking, law enforcement, and other areas where abuse is common when unchecked).


Low-quality blogs etc. stand out as low quality; LLMs can eloquently state truths with convincing-sounding nonsense sprinkled throughout. It's a different problem, and many people already take low-quality propaganda at face value.


I think this is a failure in how we fine-tuned and evaluated them in RLHF.

"In theory, the human labeler can include all the context they know with each prompt to teach the model to use only the existing knowledge. However, this is impossible in practice." [1] Therefore causing and forcing some connections that are not all there for the LLM. Extrapolate that across various subjects and types of queries and there you go.

[1] https://huyenchip.com/2023/05/02/rlhf.html
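
To make that concrete, here's a toy sketch in Python (everything here is hypothetical, not taken from the linked post) of why preference labeling can reward confident-sounding answers: the labeler only sees the prompt and the candidate responses, not which facts the model can actually verify, so the fluent, specific answer tends to win the comparison.

    # Toy sketch of an RLHF preference record (hypothetical names).
    # The labeler ranks responses to a bare prompt, with no view into
    # which facts the model can actually verify.
    from dataclasses import dataclass

    @dataclass
    class PreferencePair:
        prompt: str    # all the context the labeler sees
        chosen: str    # response ranked higher
        rejected: str  # response ranked lower

    pair = PreferencePair(
        prompt="Who proved the Poincare conjecture, and where?",
        chosen="Grigori Perelman, in his 2002-2003 arXiv preprints.",
        rejected="I'm not certain.",
    )

    def toy_reward(response: str) -> float:
        # Stand-in for a learned reward model: specific, detailed
        # answers score higher whether or not they are grounded.
        return float(len(response.split()))

    assert toy_reward(pair.chosen) > toy_reward(pair.rejected)

Trained on many such comparisons, the reward model gets no signal separating "correct and specific" from "plausible and specific", which is the gap the quote is pointing at.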



