
AFAICS this has nothing to do with "open-source personal AI engines".

The recorded history is stored in a SQLite database and is quite trivial to examine[0][1]. A simple script could extract the information and feed it to your indexer of choice. Developing such a script is hardly a job for a browser engineering team.
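For illustration, here's a minimal sketch of what such a script might look like against Firefox's places.sqlite (schema as in current Firefox profiles; copy the file out of the profile directory first, since the live database is usually locked while the browser runs):

    # Pull visit history out of a copy of Firefox's places.sqlite.
    import sqlite3
    from datetime import datetime, timezone

    DB_PATH = "places.sqlite"  # copied from your Firefox profile directory

    def iter_history(db_path=DB_PATH):
        con = sqlite3.connect(db_path)
        try:
            rows = con.execute(
                "SELECT url, title, last_visit_date FROM moz_places "
                "WHERE last_visit_date IS NOT NULL "
                "ORDER BY last_visit_date DESC"
            )
            for url, title, last_visit_us in rows:
                # last_visit_date is microseconds since the Unix epoch
                visited = datetime.fromtimestamp(
                    last_visit_us / 1_000_000, tz=timezone.utc
                )
                yield {"url": url, "title": title or "", "visited": visited.isoformat()}
        finally:
            con.close()

    if __name__ == "__main__":
        for record in iter_history():
            # hand each record to whatever indexer you prefer instead of printing
            print(record["visited"], record["url"])

Chrome's History database is similar in spirit (an "urls" table), just with a different timestamp epoch.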

The question remains whether the indexer would really benefit from real-time ingestion while browsing.

[0] Firefox: https://www.foxtonforensics.com/browser-history-examiner/fir...

[1] Chrome: https://www.foxtonforensics.com/browser-history-examiner/chr...



Due to the dynamic nature of the Web, URLs don't map to what you've actually seen. If I visit a URL at a certain time, the content I see may differ from the content you see, or from what I'd see if I visited the same URL again later. For example, if we want to verify that the tweets I'm seeing are the same as the tweets you're seeing and haven't been subtly modified by an AI, how do we do that? In an age where AIs are programming people, this will be important.


I'm confused; do you want more than the browser history, then? Something like Microsoft's Recall? Browsers currently don't store the content they've rendered, and for good reason. I was with you for a sec, but good luck convincing Mozilla to propagate rendered pages to other processes!


Being able to index and own your data changes the model of the Web.



