To be clear, I don't think the guy in the room will end up understanding whichever Chinese language this thought experiment is being conducted in, either.
You have put your finger on the fundamental problem of the argument: Searle never gave a good justification for the tacit, question-begging premise that if the human operator did not understand the language, then nothing would. (There is a second tacit premise at work here: that performing the room's task requires something to understand the language. LLMs arguably suggest that the task can be performed without anything resembling a human's understanding of language.)
Searle's attempt to justify this premise (the 'human or nothing' one) against the so-called 'systems reply' was to have the operator memorize the rule book, so that the human is the whole system. Elsewhere [1] I have explained why I don't buy this.
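To make the setup concrete, here is a deliberately crude sketch of the room's job as pure symbol manipulation. This is my illustration, not Searle's formulation: the phrases are hypothetical placeholders, his rules operate on symbol shapes, and no finite table could pass a real Turing test. The point is only that every step is blind lookup-and-copy.

    # Toy "rule book": input symbols mapped to output symbols.
    # Hypothetical phrases chosen for illustration only.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
        "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
    }

    def operate(slip: str) -> str:
        # Match the incoming squiggles, copy out the prescribed reply.
        # Nothing in this step requires understanding Chinese.
        return RULE_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(operate("你好吗？"))  # -> 我很好，谢谢。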
"there is a second tacit premise at work here: that performing the room's task required something to understand the language. LLMs arguably suggest that the task could be performed without anything resembling a human's understanding of language."
Yeah, I used to assume that. But it's much less obvious now; it might even just be false.
It's actually kind of spooky how well Searle captured, even foreshadowed, something about LLMs decades ago. No part of the system seems to understand much of anything.
My theory is that Searle came up with the CR while complaining to his wife (for the hundredth time) about bright undergrads who didn't actually understand anything. She finally said "Hey, you should write that down!" Really she just meant "holy moly, stop telling it to me!" But he misunderstood her, and the rest is history.
[1] https://news.ycombinator.com/item?id=45664129