Not only that, but if an LLM can actually complete a code task, it's a good sign that the problem has a public code repository solving it (usually in a way that is better than what the LLM offers).
Having read the above conversation excerpt and the page you linked... how do you get to it feeling like plagiarism? Given a constrained set of information, there are only so many ways to present it. They roughly discuss the same data points, but the writing is different in both. Is this disallowed?
There's no reason LLMs don't do the same thing with code, by the way.
[1] https://emojipedia.org/seahorse-emoji-mandela-effect