
It will struggle with understanding small parts of big programs without seeing the full context, though maybe you could get around that by making it generate some sort of summary for itself.


My tests on small parts of big programs suggest that, if anything, it does far better than I expected, but you're probably right that it would struggle with many things if you tried turning it into a bigger tool, and having it generate summaries is probably essential. While we can "fake" some level of memory that way, I really would like to see how far LLMs can go if we give them a more flexible form of memory...
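
To make the summary idea concrete, here is a rough sketch of the shape I have in mind. Everything in it is hypothetical: llm() is just a stand-in for whatever model API you call, and the prompts and helper names are made up for illustration.

    from pathlib import Path

    def llm(prompt: str) -> str:
        """Stand-in for a call to whatever model/API you actually use."""
        raise NotImplementedError

    def summarize_file(path: Path) -> str:
        # Ask the model for a short summary of one file in isolation.
        source = path.read_text()
        return llm("Summarize this file in a few sentences, listing the "
                   "key functions and what they do:\n\n" + source)

    def build_project_summary(root: str) -> str:
        # Concatenate per-file summaries into a compact "memory" of the codebase.
        parts = []
        for path in sorted(Path(root).rglob("*.py")):
            parts.append(f"{path}:\n{summarize_file(path)}")
        return "\n\n".join(parts)

    def ask_about_snippet(project_summary: str, snippet: str, question: str) -> str:
        # Prepend the project summary so the model sees the small part of the
        # big program together with a cheap stand-in for the full context.
        return llm("Project overview:\n" + project_summary + "\n\n"
                   "Code under discussion:\n" + snippet + "\n\n"
                   "Question: " + question)

The summaries only get regenerated when files change, so the "memory" stays cheap to keep around, but it is still a fixed, lossy snapshot rather than real memory.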




