No, I don't agree with this formalization. It's more that (some) humans have a "theory" of the program (in the same sense used by Ryle and Naur); let's take for granted that if one has a theory, then they have understanding; thus (some) humans have an understanding of the program. It's not equivocating between B and B', but rather observing that B implies B'.
Thus, if an LLM lacks understanding (Searle), then it doesn't have a theory either.
> LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code. Whether that internal representation satisfies your criteria for a theory doesn't change that fact.
The entire point of Naur's paper is that the activity of programming, of software engineering, is not just "manipulating source code." It is, rather, building a theory of the software system (which implies an understanding of it), in a way that an LLM or an AI cannot, as posited by Searle.
> let's take for granted that if one has a theory, then they have understanding
Leaving aside what is actually meant by "theory" and "understanding": could it not be argued that eventually LLMs will simulate understanding well enough that, for all intents and purposes, they might as well be said to have a theory?
The parallel I've got in my head is the travelling salesman problem. Yes, it's NP-hard, which means we are unlikely to ever get a polynomial-time algorithm to solve it exactly. But that doesn't stop us solving TSP instances near-optimally at industrial scale.
Similarly, although LLMs may not literally have a theory, they could become powerful enough that the edge cases in which a theory is really needed become vanishingly rare.
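To make the TSP point concrete, here's a minimal sketch (in Python, with made-up random city coordinates) of the classic nearest-neighbour construction plus 2-opt improvement pattern: a greedy tour followed by local edge swaps typically lands within a few percent of optimal on random Euclidean instances, even though exact solving is NP-hard. Industrial solvers (e.g. LKH, OR-Tools) are far more elaborate, but the principle is the same.

```python
# Sketch: nearest-neighbour tour + 2-opt local search for Euclidean TSP.
# Illustrative only; not an industrial-strength solver.
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour(points):
    """Greedy tour: always visit the closest unvisited city next."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda i: dist(points[last], points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(points, tour):
    """Reverse tour segments as long as doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Replace edges (a,b) and (c,d) with (a,c) and (b,d)
                # by reversing the segment between them, if shorter.
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if (dist(points[a], points[c]) + dist(points[b], points[d])
                        < dist(points[a], points[b]) + dist(points[c], points[d])):
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(100)]
tour = two_opt(cities, nearest_neighbour(cities))
length = sum(dist(cities[tour[k]], cities[tour[(k + 1) % len(tour)]])
             for k in range(len(tour)))
print(f"tour length: {length:.3f}")  # near-optimal, found in milliseconds
```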