A current-generation Xeon E7 has a memory bandwidth of up to 102 GB/s. The article says 90 GB/s, which is probably a realistic estimate of achievable throughput. But the data set is 500 GB, so the problem is not loading it from disk (that can be done beforehand) but streaming it from RAM to the CPU fast enough.
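As a back-of-envelope check (using the figures above; the 90 GB/s is the article's estimate, not a measured number), a single pass over the whole data set already takes several seconds:

```python
# Back-of-envelope: time to stream the full dataset from RAM through the CPU,
# assuming ~90 GB/s of achievable memory bandwidth (the article's estimate).
data_gb = 500
bandwidth_gb_s = 90
scan_seconds = data_gb / bandwidth_gb_s
print(f"full scan: ~{scan_seconds:.1f} s per pass")  # ~5.6 s
```

So even with everything in RAM, any query that has to touch all 500 GB is bounded at roughly 5-6 seconds per pass, no matter how fast the disks are.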
Inaccurate; the whole point of columnar stores is that if I'm only interested in one column, I spend only the memory bandwidth needed for that particular column and nothing else. Typical queries against a columnar store would therefore never stream the whole 500 GB for processing.
Right, a columnar store would touch only a fraction (1.1bn rows × three numerical columns) of those 500 GB. But the question was about Redis :)
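To make that fraction concrete, here's a rough estimate, assuming uncompressed 8-byte values per numerical column (an assumption; real columnar stores usually compress well below that):

```python
# Rough estimate of the data a columnar query actually touches,
# assuming 8 bytes per value (an assumption; compression would shrink this).
rows = 1_100_000_000   # 1.1bn rows
cols = 3               # three numerical columns
bytes_per_value = 8
touched_gb = rows * cols * bytes_per_value / 1e9
print(f"~{touched_gb:.1f} GB touched, vs. 500 GB total")  # ~26.4 GB
print(f"~{touched_gb / 90:.2f} s at 90 GB/s")             # ~0.29 s
```

That's on the order of 26 GB instead of 500 GB, i.e. well under half a second of memory bandwidth per scan instead of several seconds.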
The author has an overview of the benchmarks across various systems at http://tech.marksblogg.com/benchmarks.html – but one can't compare the rows directly, as many factors vary (especially the hardware).