The thing is, compilers are pretty amazing beasts. CPUs are pretty amazing beasts. "10,000" is a very small value of "N", given those two factors.
I've worked with a lot of engineers who considered anything O(n^2) to be a red flag, and half the time the actual performance profiling favored the naive method simply because the compiler optimized it better.
That means that if you actually care about performance, you've got to spend 30 minutes profiling for most real-world scenarios. Yeah, O(n^2) is obviously a crazy bad idea if you ever expect to scale to ten million records, but the vast majority of software being written is handling tiny 10K files, and a very large chunk of it doesn't care at all about performance because network latency eclipses any possible performance gain.
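If it helps, here's roughly what that 30-minute profiling session boils down to, as a minimal Python sketch: the data and the n=10_000 are made up for illustration, but the point is to time the naive quadratic check against the obvious hash-set version at the sizes you actually expect to see, rather than arguing about big-O in the abstract.

    # A quick profiling sketch (Python, made-up data): naive O(n^2) duplicate
    # check vs. a hash-set version, measured at the n you actually expect.
    import random
    import timeit

    n = 10_000
    records = list(range(n))
    random.shuffle(records)  # no duplicates, so both scans run in full

    def has_duplicates_quadratic(items):
        # Naive nested scan: compares every pair, O(n^2).
        count = len(items)
        for i in range(count):
            for j in range(i + 1, count):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_set(items):
        # Hash-set version: one pass, O(n) expected.
        seen = set()
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False

    print("quadratic:", timeit.timeit(lambda: has_duplicates_quadratic(records), number=1))
    print("hash set :", timeit.timeit(lambda: has_duplicates_set(records), number=1))

Whatever the numbers turn out to be on your data and your hardware is the answer; the sketch just shows how cheap it is to actually check.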
Unless you are in a very hot path and know with absolute certainty that n will remain very low, I'd say you are doing clear premature optimization by comparing and choosing the O(n^2).
I say very small because to me, n=10_000 sounds like a number that could easily and quickly grow higher, since you are past a basic enumeration of a few choices.
And this is how we end up with situations like GTA5 taking several minutes to parse a JSON file in production, because nobody actually tested it with real-world values of n (or at least nobody expected n to grow over the product's lifetime).
The right lesson to learn here is "manage and maintain your software as it's used in the real world, as that will tell you where you need to invest time" not "prematurely optimize every algorithm just in case it becomes a problem".
No, the right lesson here is that quadratic algorithms are a particularly nasty middle ground of seeming to work for the small n of development and initial rollout but falling over for the medium to large n of living in production.
Luckily they’re typically easy to notice, and better approaches are often as easy as sorting your input first or caching items in a hash table.
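To illustrate both fixes, here's a minimal Python sketch of a made-up accidentally-quadratic join (the record shapes and field names are invented for this example, not taken from anything above): index one side in a hash table, or sort it once and binary-search into it.

    # Sketch of the two usual fixes for an accidentally-quadratic join
    # (Python; the record shapes and field names are invented for illustration).
    import bisect

    orders = [{"user_id": i % 500, "total": i} for i in range(10_000)]
    users = [{"id": i, "name": f"user{i}"} for i in range(500)]

    # Quadratic: for every order, scan the whole user list.
    def join_naive(orders, users):
        out = []
        for o in orders:
            for u in users:
                if u["id"] == o["user_id"]:
                    out.append((u["name"], o["total"]))
                    break
        return out

    # Hash-table fix: index users by id once, then one pass over orders.
    def join_hashed(orders, users):
        by_id = {u["id"]: u for u in users}
        return [(by_id[o["user_id"]]["name"], o["total"]) for o in orders]

    # Sort-first fix: sort users by id once, then binary-search per order.
    def join_sorted(orders, users):
        su = sorted(users, key=lambda u: u["id"])
        ids = [u["id"] for u in su]
        out = []
        for o in orders:
            i = bisect.bisect_left(ids, o["user_id"])
            if i < len(ids) and ids[i] == o["user_id"]:
                out.append((su[i]["name"], o["total"]))
        return out

    assert join_naive(orders, users) == join_hashed(orders, users) == join_sorted(orders, users)

In practice the dict version is the default choice; the sort-plus-bisect one mostly earns its keep when the key isn't hashable or the data already arrives sorted.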
An engineer should therefore catch these things when writing them or in code review, and make sure a slightly better option is used in any situation where the input size has a reasonable chance of growing.
The cost is nearly nothing, and the benefit is not shipping code that virtually guarantees slow response times and bloated scaling costs at best, and humans being woken up at night for outages at worst.