I've always thought of "premature optimisation" as optimising something that's not your "hot path". If there's no clear hot path, everything is the hot path, and small optimisation gains everywhere are the only thing you're going to get. So at this point, it's not premature.
You could also rewrite your code so that there is a clear hot path, but in this case the hot path seems to be React rendering itself, and that's optimised by using memo to avoid rendering entirely.
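For anyone who hasn't used it: memo wraps a component and skips re-rendering it when its props are shallowly equal to the previous ones. A minimal sketch, with a made-up Row component:

    import { memo } from 'react';

    // Without memo, Row re-renders every time its parent renders, even
    // when `label` hasn't changed. memo adds a shallow props comparison
    // and skips the render when nothing changed.
    const Row = memo(function Row({ label }: { label: string }) {
      return <li>{label}</li>;
    });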
I'm not terribly convinced by memoization, though. You're using extra memory, so it's not a free optimisation. We have Redux memoized selectors everywhere, and I can't help but wonder how much of that is effectively a memory leak (i.e. the cached result is never read more than once). Granted, components are a bit different.
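To make that concrete, here's the kind of selector I mean, sketched with a made-up state shape using reselect's createSelector:

    import { createSelector } from 'reselect';

    type Todo = { text: string; done: boolean };
    type State = { todos: Todo[]; showDone: boolean };

    // createSelector caches the last inputs and the last result. If the
    // inputs never repeat, that cached result is retained memory that
    // never pays for itself.
    const selectVisibleTodos = createSelector(
      (state: State) => state.todos,
      (state: State) => state.showDone,
      (todos, showDone) => todos.filter((t) => t.done === showDone)
    );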
I always cringe when I see a lint rule forcing you to use the spread operator inside an array reduce(). It's such a stupid, self-inflicted way to turn an O(N) operation into an O(N^2) one while adding GC memory pressure, all to serve some misguided dogma of immutability. I feel there's a need for a corollary to the "premature optimization is the root of all evil" rule.
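Concretely (a sketch with a made-up items array):

    type Item = { id: string };
    declare const items: Item[];

    // O(N^2): every iteration copies the entire accumulator built so far,
    // and every discarded copy becomes garbage for the GC to collect.
    const byId = items.reduce(
      (acc, item) => ({ ...acc, [item.id]: item }),
      {} as Record<string, Item>
    );

    // O(N): same result, one object, no per-iteration copies. The mutation
    // is local to the reduce, so callers still see it as a pure function.
    const byId2 = items.reduce((acc, item) => {
      acc[item.id] = item;
      return acc;
    }, {} as Record<string, Item>);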
> I always cringe when I see a lint rule forcing you to use the spread operator inside an array reduce(). It's such a stupid, self-inflicted way to turn an O(N) operation into an O(N^2) one while adding GC memory pressure, all to serve some misguided dogma of immutability. I feel there's a need for a corollary to the "premature optimization is the root of all evil" rule.
I think a rule of "don't try to use X as if it were Y" would be reasonable. I love immutability, but its performance cost in JS is really high. Many people are fine with using TypeScript to enforce types at compile time and not at runtime. Maybe many people would also be fine with immutability enforced at compile time (Elm, ReScript, OCaml, ...) and not at runtime?
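TypeScript can already express some of this at zero runtime cost. A minimal sketch (User and rename are made up):

    // Immutability enforced by the compiler only; nothing exists at runtime.
    type User = Readonly<{ name: string; tags: readonly string[] }>;

    function rename(u: User, name: string): User {
      // u.name = name;     // compile error: 'name' is read-only
      // u.tags.push('x');  // compile error: no 'push' on readonly string[]
      return { ...u, name }; // updates have to go through copies
    }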
How could you not have a hot path? You're saying that you've measured actual usage and discovered that each thing happens to be called exactly the same number of times? That strikes me as extraordinarily improbable.
That's not exactly it. It's more: if nothing takes more than 1% of your resources, then no single optimisation can get you more than a 1% reduction. That seems to be how most web apps are: you parse a little bit of HTTP, a little bit of JSON, you validate a few things, you call the database, which does a few things too, you have a bit of business logic, you call the database again, then there's a bit of glue code here and there, and finally you respond to the user with a little bit of HTTP and maybe some HTML, maybe some JSON.
If that's how your app works and nothing can be optimised significantly, that's usually where you can make big gains by changing one big thing. One of those big things might be putting a cache in front of the app, because a cache hit will be far faster than recomputing the response to the same request. Another could be changing language, say from Python to Go: since Go is (most of the time) a bit faster at everything, you end up faster everywhere. Or even from Python to PyPy, a faster implementation of the same language. Another could be redesigning your program so that you have one single obvious hot path, and then optimising that.
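The cache-in-front idea in its simplest form (handleRequest is a hypothetical stand-in for the whole pipeline; a real setup would use a reverse proxy or Redis, plus TTLs and invalidation):

    // Stand-in for the full parse/validate/query/render pipeline.
    declare function handleRequest(url: string): Promise<string>;

    const cache = new Map<string, string>();

    async function cachedHandler(url: string): Promise<string> {
      const hit = cache.get(url);
      if (hit !== undefined) return hit; // cache hit: skip the pipeline entirely
      const response = await handleRequest(url);
      cache.set(url, response);
      return response;
    }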
That seems to be the case here: no single component takes all of the resources, but by using memo everywhere, each of them uses fewer resources, which adds up to a meaningful reduction overall.