The first paragraph of the Wikipedia article on TDD is a correct definition. Write a minimal test; see it fail; make it pass with a minimal code change; refactor to improve the design, keeping the tests passing. There are things that people do alongside TDD; there are common ways to perform TDD. But the red-green-refactor cycle is what "TDD" means.
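To make the cycle concrete, here's a hypothetical red-green-refactor pass in Python. The `total_price` function and its test are invented for illustration; they are not from any particular codebase.

```python
# Red: write a minimal failing test first. At this point total_price
# doesn't exist yet, so running the test fails.
def test_total_price():
    assert total_price([3, 4]) == 7

# Green: the smallest change that makes the test pass.
def total_price(prices):
    return sum(prices)

# Refactor: with the test green, improve the design freely
# (rename, extract helpers, etc.); the test keeps you honest.
```

Each trip around the loop is small: one failing test, one minimal change, one cleanup step.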
I don't know how many collaborators you mean by "several" in your composition paragraph. I'll assume 8, because that's enough collaborators that TDD will really start to hurt.

All software (not even most, but all) is composed of aggregations of trivially simple components. At the machine level, everything is built from instructions; in a fully OO language, everything is built from method calls; in some functional languages, everything is built from unary functions.
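As a toy illustration of that last point, here's everything built from unary functions and composition. The names (`compose`, `inc`, `double`) are invented for this sketch:

```python
# Compose two unary functions into a new unary function.
def compose(f, g):
    return lambda x: f(g(x))

inc = lambda n: n + 1      # trivially simple piece
double = lambda n: n * 2   # trivially simple piece

# An aggregate built purely from the pieces above.
inc_then_double = compose(double, inc)
assert inc_then_double(3) == 8  # (3 + 1) * 2
```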
Systems are built by aggregating the trivially simple primitives provided by the environment. We have full control of that aggregation process; it has no mind of its own. We can choose to aggregate four things at a time or eight things at a time. That's not a big difference. It's the difference between 8 and 4 + 4. Same result, different decomposition.
The "4 + 4" analogy is not a straw man: splitting an eight-object interaction into two four-object interactions aggregated together is a well-defined operation on the syntax tree. IDEs even automate it, and have for a decade or more; this is what an automated "extract method" refactoring is. Doing it well requires years of practice, but all software is made of aggregations of trivially simple components, and we have full control of the aggregation.
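Here's what that decomposition looks like in a toy Python example. All names (`checkout`, `compute_total`, `format_label`) are invented; the point is only that the second version is behaviorally identical to the first, produced by two "extract method" steps:

```python
# Before: one function interacting with eight collaborators at once.
def checkout(price, quantity, tax_rate, discount, shipping, name, street, city):
    total = price * quantity * (1 + tax_rate) - discount + shipping
    label = f"{name}, {street}, {city}"
    return total, label

# After: the same interaction split into two smaller aggregations,
# roughly four collaborators each -- the "4 + 4" decomposition.
def compute_total(price, quantity, tax_rate, discount):
    return price * quantity * (1 + tax_rate) - discount

def format_label(name, street, city):
    return f"{name}, {street}, {city}"

def checkout_v2(price, quantity, tax_rate, discount, shipping, name, street, city):
    total = compute_total(price, quantity, tax_rate, discount) + shipping
    return total, format_label(name, street, city)
```

Same result, different decomposition; each extracted piece is now small enough to test in isolation.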
Mocking is not required for TDD. It's often used to do isolated unit testing, which may be done within a TDD loop. But even isolated testing can be done without mocks. I did a talk called "Boundaries" about that topic. https://www.destroyallsoftware.com/talks/boundaries
So yes, you are missing something here: first, most people doing TDD aren't mocking, or are mocking very rarely. Second, mocking is not required for isolation; you can also isolate by structuring your software in certain ways, which is what Boundaries is about.
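A minimal sketch of isolating by structure rather than by mocking, in the spirit of the "functional core, imperative shell" idea from that talk. All names here are invented for illustration:

```python
# Pure core: decisions live in plain functions of values.
def parse_sizes(ls_output):
    """Turn 'SIZE NAME' lines into a list of integer sizes."""
    return [int(line.split()[0]) for line in ls_output.splitlines() if line]

def total_size(sizes):
    return sum(sizes)

# Thin imperative shell: the only place I/O happens. Tests don't need
# to touch it, so nothing needs to be mocked.
def report_disk_usage(run_ls):
    return total_size(parse_sizes(run_ls()))

# Isolated tests against the core, using plain values -- no mock library:
assert parse_sizes("12 a.txt\n30 b.txt") == [12, 30]
assert total_size([12, 30]) == 42
```

The isolation comes from the shape of the code, not from a mocking framework.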
Addressing each of the three responses you anticipate:
1) "Falsifiability": Nothing in software practices is falsifiable in practice. I've never seen a single piece of experimental literature that I considered sound. There's not even experimental evidence saying that "structured programming is better than willy-nilly GOTOs". And I've never even heard of a meta-analysis or an experiment being reproduced several times independently.
Sometimes you really are doing it wrong. If you try Haskell, can't figure out how to write to a file, and throw up your hands, that doesn't mean that Haskell "has failed" or "is unfalsifiable"; it means you don't know how to use it. Haskell and TDD are both particularly difficult to get your head around at first. Maybe you don't want to spend the effort. That's totally fine. This is why I don't actually know Haskell well.
2) "Your system is poorly written." This is a hugely subjective claim and you have to treat it as such. You have to silently append "by my standards of design" to the end of it. You also have to realize that everyone's standards of design are informed by the practices that they've used while doing the design.
If you primarily work on systems composed of functions with, say, ten collaborators (meaning a total of ten arguments and referenced globals/functions/etc.), then yes, I will say "your system is poorly designed". It doesn't mean that I think that you're a bad person; it does mean that I won't work with you to continue building your system in that way. Doing isolated unit testing on that system will be very difficult. Doing integrated unit testing, with or without TDD, will be less difficult. If we crank the collaborators up to 20 or 30, all programming tasks will be difficult, testing or not.
In the second part of your issue (2), you conflate TDD with testing, which is not correct. TDD is a loop of actions that produces tests. There are many other ways to produce tests.
3) "Your understanding is incomplete." If you want to understand these ideas, read "TDD By Example" by Beck to learn about the TDD process, then "Growing Object-Oriented Software Guided by Tests" by Freeman and Pryce to learn about TDD design feedback and the careful use of mocks. Yes, it'll take time. (But less time than learning Haskell.)
If you want to understand what I mean about isolation without mocks, watch my "Boundaries" talk (I'd recommend doing that after reading the two books above). If you want to see live examples of TDD, and the trade-offs inherent in TDD being made and discussed, watch my Destroy All Software screencasts.
However, if you don't want to do the work to tease these ideas apart, then I think that you should acknowledge that you're not willing to put the effort in. This is a different path than saying "people haven't said the right things to me for me to believe it works", which is the vibe I get right now. I learned TDD by doing it, incorrectly, and painfully, over and over again. You have the advantage of being able to read a couple of books to jump past my first year of learning. That's a huge efficiency gain, but the process can't be compacted into a comment on Hacker News that transmits the better part of a decade of experience.
(Finally, somewhat tangentially, I recommend that all programmers disabuse themselves of any belief that we have experimental evidence about programming practices by reading "The Leprechauns of Software Engineering".)