
They don't explicitly say they generate summaries at any point in the article. In fact, I read it and thought this was just some fancy RSS aggregator. The way they describe the "daily briefing" is extremely ambiguous.


OK, but I'd like to repeat my question here: Why do you care how the summary was generated?


I'm not the person you asked, but it's useful to know if the summary was generated using a method prone to inaccuracy.


That's all methods, though. Have you seen humans?


In this situation, humans are more accurate, for now, so it's good information to have.

Same as I would like to know whether a study about how well humans drive relied on their self-assessments or on empirical evidence. Humans just aren't good at that task, so it would be good to know coming in.

Just call it Kagi Vibes instead of Kagi News, as news has a higher bar (at least for me).


I'm not sure I agree that humans are more accurate at summarizing, but I don't have data, so I'll take your word for it.


I'd point to Wikipedia. You can say the content is "wrong". But the links go to the right place.


In my experience with Claude research, the links ~always go to the right place.


different kinds of inaccuracy


I find it more distasteful that they weren't transparent about their method than that the method is AI.


But they have updated the text now, which is nice!


Sure, but it's useful to know what kind of inaccuracies to look for.


Someone needs to coin a name for the fallacy where, whenever anyone criticises LLMs, the speaker retorts with "but are humans any better?"

I've seen it so many times it definitely needs a name. As an entity of human intelligence, I am offended by these silly thought-terminating arguments.


It's called "the perfect world fallacy".


At the very least, I want to know that it's a summary and not the actual content of an article.



