To have a good mental model of modern AI agents you have to understand both the LLM itself and the scaffolding built up around it. OP is correct about the behavior of the bare LLM, and that's valuable to keep in mind. Then you layer on top of that the understanding that some agent implementations will sometimes automatically feed search results into context, depending on whether you asked for it, whether you're paying for an advanced tier, or whatever the extra qualifications are for your particular tool.
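A rough sketch of that two-part setup, in Python. The function names, message format, and tool-call shape here are made up for illustration, not any particular vendor's API; the point is only where grounding does and does not enter the loop.

```python
def call_llm(messages, tools):
    """Toy stand-in for the bare model. If a search tool is offered and no
    tool output is in context yet, it asks to search; otherwise it answers
    from whatever it has, i.e. the statistical patterns it was trained on."""
    has_tool_output = any(m["role"] == "tool" for m in messages)
    if "search" in tools and not has_tool_output:
        return {"tool": "search", "query": messages[-1]["content"]}
    return {"content": "grounded answer" if has_tool_output
            else "best guess from training data"}

def web_search(query):
    """Stub for the retrieval layer bolted on around the model."""
    return f"[search results for: {query}]"

def run_agent(user_question, search_enabled=True):
    messages = [{"role": "user", "content": user_question}]
    tools = ["search"] if search_enabled else []
    while True:
        reply = call_llm(messages, tools)
        if reply.get("tool") == "search":
            # The agent layer grounds the model by feeding results into context.
            messages.append({"role": "tool", "content": web_search(reply["query"])})
            continue
        # No search happened (the model didn't ask, or the tier doesn't allow it):
        # the answer below rests entirely on the training data.
        return reply["content"]

print(run_agent("Who won the race yesterday?"))                        # grounded path
print(run_agent("Who won the race yesterday?", search_enabled=False))  # ungrounded path
```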
If you skip this two-part understanding, you run the risk of missing the cases where the agent decided not to search for some reason and its answer is therefore resting entirely on statistical patterns in the training data. I've personally seen people without this mental model take an LLM at its word when it was wrong, because they'd gotten used to it looking things up for them.