Hacker News

Which troubles me a bit, as 'bad' does not have the same definition for everyone.


How is this any different from a search engine choosing how to rank any other content, including penalizing SEO spam? I may not agree with all of their priorities, but I would welcome the search engine filtering out low quality, low effort spam for me.


Yes, that's why we'll publish a blog post on this subject in the coming weeks. We've been working on this topic since the beginning of summer, and right now our focus is on exploring report patterns.

Matt also shared insights about the other signals we use for this evaluation here https://news.ycombinator.com/item?id=45920720

And we are still exploring other factors:

1/ is the reported content ai-generated?

2/ is most content in that domain ai-generated (+ other domain-level signals) ==> we are here

3/ is it unreviewed? (no human accountability, no sources, ...)

4/ is it mindlessly produced? (objective errors, wrong information, poor judgement, ...)
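The four factors above read like stages of a scoring pipeline. A minimal sketch of how such signals might combine into a single score — all names, weights, and thresholds here are invented for illustration, not the actual ranking logic:

```python
# Hypothetical sketch: combining the four signals above into one
# "low-quality AI content" score. Weights and field names are assumptions.

from dataclasses import dataclass

@dataclass
class PageSignals:
    ai_generated_prob: float  # 1/ is the reported content AI-generated?
    domain_ai_ratio: float    # 2/ share of the domain that is AI-generated
    unreviewed: bool          # 3/ no human accountability, no sources
    mindless_errors: int      # 4/ count of objective errors found

def badness_score(s: PageSignals) -> float:
    """Combine the signals into a score in [0, 1]; higher = worse."""
    score = 0.4 * s.ai_generated_prob + 0.4 * s.domain_ai_ratio
    if s.unreviewed:
        score += 0.1
    score += min(s.mindless_errors, 5) * 0.02  # cap the error penalty
    return min(score, 1.0)

page = PageSignals(ai_generated_prob=0.9, domain_ai_ratio=0.8,
                   unreviewed=True, mindless_errors=3)
print(round(badness_score(page), 2))  # → 0.84
```

The domain-level signal (factor 2, "we are here") carrying the same weight as the per-page signal reflects the comment's point that they are currently evaluating whole domains, not just individual pages.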


There’s a whole genre of websites out there that are a ToC and a series of ChatGPT responses.

I take it to mean they’re targeting that shit specifically and anything else that becomes similarly prevalent and a plague upon search results.


A simple definition would be: it's bad if it isn't labeled as AI content, or if there is no mechanism that allows you to filter out AI content.


That's fine.



