
Would just caching LLM responses work here?


Which responses? For how long? With what level of detail? Those are the questions we are all trying to figure out now, and the performance of your agent depends heavily on the answers.
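
To make those questions concrete, here's a minimal, hypothetical sketch of a naive response cache in Python (none of these names come from the thread). The open questions show up directly as design knobs: what goes into the key ("which responses"), the TTL ("for how long"), and how much of the context gets hashed ("what level of detail").

    import hashlib, json, time

    class ResponseCache:
        # Naive LLM response cache. The thread's open questions are the knobs:
        # which responses (what goes into the key), for how long (ttl_seconds),
        # and with what level of detail (how much of the context is hashed).
        def __init__(self, ttl_seconds=3600):
            self.ttl = ttl_seconds
            self.store = {}  # key -> (timestamp, response)

        def _key(self, model, messages, temperature):
            blob = json.dumps(
                {"model": model, "messages": messages, "temperature": temperature},
                sort_keys=True,
            )
            return hashlib.sha256(blob.encode()).hexdigest()

        def get(self, model, messages, temperature=0.0):
            key = self._key(model, messages, temperature)
            hit = self.store.get(key)
            if hit and time.time() - hit[0] < self.ttl:
                return hit[1]
            return None  # miss: caller falls back to a real LLM call

        def put(self, model, messages, response, temperature=0.0):
            key = self._key(model, messages, temperature)
            self.store[key] = (time.time(), response)

Hashing the full message list means any change in context is a miss, which is exactly why plain response caching rarely helps an agent whose context shifts on every step.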


You mean like https://www.anthropic.com/news/prompt-caching

or just saving LLM chat message history?

If the latter: saved chat history is useless without some snapshot of the environment in which that conversation happened. Muscle Mem is an environment cache more than it is an LLM cache.
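
For contrast, here's a hypothetical sketch of what "environment cache" means here (this is not Muscle Mem's actual API, just the idea from the comment): record the actions the agent took, keyed on a fingerprint of the environment they were taken in, and replay them only when the current environment matches. Prompt caching, by comparison, just reuses a long prompt prefix server-side to save tokens and latency.

    import hashlib, json

    class EnvironmentCache:
        # Hypothetical illustration of an environment cache: the cached thing is
        # a trajectory of tool/UI actions, and the cache key is a fingerprint of
        # the state of the world those actions were recorded in.
        def __init__(self):
            self.trajectories = {}  # env fingerprint -> recorded actions

        def fingerprint(self, env_snapshot):
            # env_snapshot might hold the current URL, visible UI elements,
            # working directory, etc. -- whatever defines "same situation".
            blob = json.dumps(env_snapshot, sort_keys=True)
            return hashlib.sha256(blob.encode()).hexdigest()

        def record(self, env_snapshot, actions):
            self.trajectories[self.fingerprint(env_snapshot)] = actions

        def replay_or_none(self, env_snapshot):
            # Hit only if the environment matches a recorded snapshot; saved
            # chat history alone can't make this check.
            return self.trajectories.get(self.fingerprint(env_snapshot))

On a miss the agent runs the LLM as usual and records the new trajectory for next time.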



