THEY SEND THE WHOLE CONTEXT EVERY TIME? Man, that seems... not great. Sometimes it will go off and spin on something... seems like it would be a LOT better to roll back than to send a corrective message. Hmmm... this article is nerd-sniping on a massive scale ;D
The whole context you send with each user message is essentially all the model remembers of the conversation, i.e. the model itself is entirely stateless.
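In practice that means the client keeps the history and resends all of it on every turn. A rough sketch using the OpenAI Python SDK (model name and loop are just for illustration):

```python
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful coding agent."}]

def send(user_msg: str) -> str:
    # The model has no memory of its own: every call gets the *entire*
    # conversation so far, and "memory" is just this growing list.
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

"Rolling back" is then just truncating or editing `history` before the next call.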
I've written some agents that have their context altered by another LLM to get them back on track. Say the agent is going off the rails: a supervisor agent will spot this and remove the messages where it went wrong from the context, or rewrite them with correct information (rough sketch after this comment).
Really fun stuff but yeah, we're essentially still inventing this as we go along.
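Very roughly, the supervisor pass looks something like this. It's a sketch, not production code: the review prompt and the index-parsing are deliberately naive, and `gpt-4o-mini` is just a stand-in for whatever cheaper model you'd use as the supervisor.

```python
from openai import OpenAI

client = OpenAI()

def supervise(history: list[dict]) -> list[dict]:
    """Ask a second model to flag messages where the agent went off the rails,
    then drop them from the context before the next turn."""
    transcript = "\n".join(
        f"[{i}] {m['role']}: {m['content']}" for i, m in enumerate(history)
    )
    review = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Return the indices (comma-separated) of messages that are "
                       "off-task or factually wrong, or 'none':\n\n" + transcript,
        }],
    )
    answer = review.choices[0].message.content.strip()
    if answer.lower() == "none":
        return history
    bad = {int(i) for i in answer.split(",") if i.strip().isdigit()}
    # Keep only the messages the supervisor didn't flag.
    return [m for i, m in enumerate(history) if i not in bad]
```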
In the Responses API you can implicitly chain responses with `previous_response_id` (I'm not sure how old a conversation you can resurrect that way). But I think Codex CLI actually sends the full context every time? And keep in mind that sending the whole context gives you fine-grained control over what does and doesn't appear in your context window.
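For what it's worth, the two styles look roughly like this with the OpenAI Python SDK (a sketch; I haven't checked how long the server actually keeps old responses around):

```python
from openai import OpenAI

client = OpenAI()

# Chained: the server threads the conversation via previous_response_id,
# so each call only carries the new turn.
first = client.responses.create(model="gpt-4o", input="Summarize this repo's layout.")
followup = client.responses.create(
    model="gpt-4o",
    input="Now focus on the test directory.",
    previous_response_id=first.id,
)

# Stateless: resend everything yourself, which also lets you edit or drop
# earlier turns before each call.
followup_stateless = client.responses.create(
    model="gpt-4o",
    input=[
        {"role": "user", "content": "Summarize this repo's layout."},
        {"role": "assistant", "content": first.output_text},
        {"role": "user", "content": "Now focus on the test directory."},
    ],
)
```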