It doesn't always, but I'll be blunt: if you find yourself frequently writing O(n^2) or O(m * n) algorithms where n and m aren't fundamentally small values, you are doing a massive disservice to your coworkers. There is very rarely any meaningful cost to using something better, and that better thing should come more or less instinctively in the overwhelming majority of cases by the time you have even a little experience as a software engineer. These kinds of algorithms should immediately jump out at you as at least an orange flag, if not a red one.
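To make that concrete, here is a minimal sketch (hypothetical names, not anything from the discussion) of the shape this usually takes: a membership check buried in a loop, scanning one list once per element of another.

```python
# Illustrative sketch only: the classic O(m * n) pattern that should
# register as an orange flag on sight. "orders" and "valid_ids" are
# hypothetical names for this example.
def flag_invalid_orders(orders, valid_ids):
    """Return the orders whose customer_id is not in valid_ids."""
    invalid = []
    for order in orders:                           # m iterations
        if order["customer_id"] not in valid_ids:  # O(n) scan when valid_ids is a list
            invalid.append(order)
    return invalid                                 # total work: O(m * n)
```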
If you aren't doing better because you don't care, you're inflicting the costs of your apathy on everyone else around you. If you aren't doing better because nested for loops are the most ergonomic solution for virtually every problem in your language of choice, you need to reevaluate your choices.
Everything doesn't have to be overengineered to death, but that's a far cry from being okay with regularly and intentionally choosing the actual worst solution that is all but guaranteed to blow up under any kind of growth.
I said “regularly and intentionally” because that combination is what I have a particular problem with.
And yes, a genuinely one-off business report is fine. As long as it runs and produces an accurate answer, that's fine. Which is why I was careful to say that if you find yourself regularly writing these kinds of algorithms without alarm bells going off in your head, that is almost certainly a problem.
I have never in 25 years encountered someone who spent noticeably too long optimizing running time for software that wasn’t performance critical. I have had to respond to two to four cases per year where somebody assumed a small case would stay small forever (or just didn’t think about it at all) and caused an outage, a severely bloated AWS bill (unbounded autoscaling plus a quadratic algorithm), or some other completely avoidable hell.
The thing is, compilers are pretty amazing beasts. CPUs are pretty amazing beasts. "10,000" is a very small value of "N", given those two factors.
I've worked with a lot of engineers who considered anything O(n^2) to be a red flag, and half the time the actual performance profiling favored the naive method, simply because the compiler optimized it better.
That means that if you actually care about performance, you've got to spend 30 minutes profiling in most real-world scenarios. Yeah, O(n^2) is obviously a crazy bad idea if you ever expect to scale to ten million records, but the vast majority of software being written is handling tiny 10K files, and a very large chunk of it doesn't care about performance at all because network latency eclipses any possible performance gain.
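If you want to run that check for your own workload, a quick harness is usually all it takes. A minimal sketch in Python with made-up data follows; the crossover point between the naive and the hashed version depends heavily on language, data layout, and n, so treat it as a template rather than a verdict.

```python
import random
import timeit

# Hypothetical workload: find the IDs that appear in both lists.
a = [random.randrange(1_000_000) for _ in range(10_000)]
b = [random.randrange(1_000_000) for _ in range(10_000)]

def naive(a, b):
    # O(n * m): scans the whole of b once per element of a.
    return [x for x in a if x in b]

def with_set(a, b):
    # O(n + m) on average: one pass to build the set, one pass to probe it.
    lookup = set(b)
    return [x for x in a if x in lookup]

# number=1 is enough to see the difference; the quadratic version gets
# expensive quickly as the lists grow.
print("naive:   ", timeit.timeit(lambda: naive(a, b), number=1))
print("with set:", timeit.timeit(lambda: with_set(a, b), number=1))
```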
Unless you are in a very hot path and know with absolute certainty that n will remain very low, I'd say that comparing the two and choosing the O(n^2) version is clear premature optimization.
I say “very small” because, to me, n=10_000 sounds like a number that could easily and quickly grow higher, since you are already past a basic enumeration of a few choices.
And this is how we end up with situations like GTA5 taking several minutes to parse a JSON file in production, because nobody actually tested it with real-world values of n (or at least didn't expect n to grow over the product's lifetime).
The right lesson to learn here is "manage and maintain your software as it's used in the real world, as that will tell you where you need to invest time" not "prematurely optimize every algorithm just in case it becomes a problem".
No, the right lesson here is that quadratic algorithms are a particularly nasty middle ground of seeming to work for the small n of development and initial rollout but falling over for the medium to large n of living in production.
Luckily they’re typically easy to notice, and better approaches are often as easy as sorting your input first or caching items in a hash table.
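A minimal sketch of both approaches just mentioned, on a hypothetical "find the IDs present in both lists" task (illustrative names, not anyone's production code):

```python
from bisect import bisect_left

def shared_ids_hash(a, b):
    # Hash-table approach: build a set once, then probe it.
    # Roughly O(n + m) on average instead of O(n * m).
    lookup = set(b)
    return [x for x in a if x in lookup]

def shared_ids_sorted(a, b):
    # Sort-first approach: binary-search the sorted list instead of
    # scanning it. O((n + m) log m), and no hashing required.
    b_sorted = sorted(b)
    out = []
    for x in a:
        i = bisect_left(b_sorted, x)
        if i < len(b_sorted) and b_sorted[i] == x:
            out.append(x)
    return out
```

Either version replaces the nested scan with a few extra lines and a data structure the standard library already provides.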
An engineer should therefore catch these things when writing them or in code review, and make sure a slightly better option is used in any situation where the size of the input has a reasonable chance of growing.
The cost is nearly nothing, and the benefit is not shipping code that all but guarantees slow response times and bloated scaling costs at best, and humans being woken up at night for outages at worst.