When you’re designing a performance-sensitive
computer system...
Neil then suggests two tools for achieving such designs: intuition-based back-of-envelope calculations, and microbenchmarking to identify bottlenecks.
Good intuition is the result of completing Norvig's ten-year course. Identifying performance bottlenecks requires running code.
Neither is relevant during the "Template administration." Back-of-envelope calculations are about determining architecture, not the specific lines of text to be compiled. They look at efficient gross resource consumption, not tweaking handfuls of bytes. Tweaking, when it must occur - per your experience, more than an order of magnitude less frequently than in non-performance-sensitive applications - is well after the "Template administration".
My criticism is literary. The article throws in byte-sized bogeymen. These are the sort of things Google pays gurus to deal with, not the sort of work for which they hire C++ programmers who need an idiot's guide to templates.
To a first approximation, memory isn't precious any more, even when programming in C++. Writing as if it were develops bad intuitions of the sort that lead to concern over templates during the design phase.
I suspect that your first line of defense against running out of memory was not looking at template construction, but compiler optimizations. Managing memory manually in C++ is hard enough without creating a mindset that encourages premature optimization.
I do not make the decision to write performance-critical code in C++ until I have an analysis of the problem, including a prototype in a slower language. Therefore I do not start writing C++ templates until I already have a specific problem and a pretty detailed picture of how I want my data laid out in memory. The details of how my code gets laid out in memory I gladly leave to the compiler. But it is important for me to know that I can use templates and won't have to worry about that either.
I suspect that your first line of defense against running out of memory was not looking at template construction, but compiler optimizations. Managing memory manually in C++ is hard enough without creating a mindset that encourages premature optimization.
Not true!
In the latest project where this came up, one of the first templates I wrote in C++ lets me replace things that can go into a std::map with 4-byte objects that are simply indexes into a deduplicated vector. (This is on Linux, so I don't have to worry about dword alignment; thus 4-byte objects take only 4 bytes.) Given that I have a very large number of these, often need to compare two to know whether they are the same or different, and only seldom need to get at the actual value, this gives huge memory and performance benefits.
Of course doing this kind of thing for a general-purpose application would be horribly premature optimization. Which is why I only considered going to C++ after analyzing where the issues were in my first two Perl versions.
And I am an example showing that it isn't just C++ gurus who find themselves needing to write performance critical code in C++. In fact my desire to avoid premature optimization means that when I do need to write performance critical code in C++, I do not know the language very well because I use it so seldom!
The dissenting anecdote shows that the scale at which memory efficiency matters is significantly greater than that of the example where the issue is raised. It matters for "very large number[s]", not when a template is used to overload a function for numeric types - it matters when memory constraints appear on the back of the envelope... or in the profile.
What bothered me about that section of the article is related to your comment "I won't have to worry about it either."
Templates are syntactic sugar. Cancer of the semicolon I can understand. But why would one be fearful that a standard feature of C++ is grossly memory inefficient?
The answer, I believe, goes back to the time of expensive kilobytes - when allocating an array of 500 versus 200 elements was usually a big deal (whereas today it rarely is). CPU caches for performance-driven applications are often larger than the total memory of systems from the time when C++ was designed. An idiot's guide should focus on draining the swamp, not on raising worries about the incubation of alligator eggs.
Even if that is the template for articles about the language.
The dissenting anecdote shows that the scale at which memory efficiency matters is significantly greater than that of the example where the issue is raised. It matters for "very large number[s]", not when a template is used to overload a function for numeric types - it matters when memory constraints appear on the back of the envelope... or in the profile.
Wrong.
It is true that computers are fast enough that lack of performance will not be an issue unless the volume of work to be done is very, very large. However, once you look at performance, you find that memory access patterns in the small matter a great deal.
For instance, a recent Intel Sandy Bridge CPU has, per core, 64 KB of L1 cache (split into 32 KB for instructions and 32 KB for data), a unified 256 KB L2 cache, and a shared L3 cache of 1 to 8 MB. If hyperthreading is turned on, these caches may be shared by two concurrent threads. So if either the data accessed in, or the code for, a particular loop exceeds 32 KB, you will experience a noticeable slowdown, and if possible you'd like to keep it under 16 KB. It is therefore better to access large data structures sequentially and to avoid random access.
Note that that is KB, not MB. Memory efficiency matters on very small data structures. You won't notice until you have a lot of data, but when you have to fix it, you have to think at multiple scales.
Templates are syntactic sugar. Cancer of the semicolon I can understand. But why would one be fearful that a standard feature of C++ is grossly memory inefficient?
There have historically been a lot of complaints floating around about how nice templated code blew up into monstrosities when compiled. Given that, rumor control does not seem unwarranted.
I've seen dramatic speedups from reducing an array from ~10 MB to ~1 MB. The problem was that the algorithm made multiple passes over the array, and on each pass it would pull in blocks from main memory, only to evict them before they could be reused.
It depends a lot on the workload, but you'll see dramatic performance differences well before you fill up main memory.