This is a great master class, if you will, on iterative optimization of core loops. They progressively improve their method until they approach the performance of standard, optimized BLAS implementations (about as fast as BLIS, slower than Intel's MKL, which is about as fast as OpenBLAS [0]). Timing Eigen without linking against BLAS is a little misleading, since Eigen is meant to be linked against a system BLAS.
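If it helps, here is a minimal sketch of what "linking Eigen against a system BLAS" can look like, assuming Eigen 3.3+ with OpenBLAS as the backend; the file name and build line are only illustrative.

    // Sketch: routing Eigen's dense products through an external BLAS.
    // Assumes Eigen 3.3+ and an installed OpenBLAS; the macro must be
    // defined before any Eigen header is included.
    #define EIGEN_USE_BLAS
    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        const int n = 1024;
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
        Eigen::MatrixXd B = Eigen::MatrixXd::Random(n, n);
        Eigen::MatrixXd C = A * B;  // large dense products get forwarded to dgemm
        std::cout << C(0, 0) << "\n";
        return 0;
    }
    // Build (assumption): g++ -O3 -DNDEBUG bench.cpp -lopenblas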
You wouldn't want to use this code, but it shows you the sorts of things to start paying attention to in these performance-critical sections. I was most surprised that reordering operations to spread the same instructions apart made a significant difference.
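To make that concrete, here is a rough sketch (not the article's code) of the multiple-accumulator trick that spreading instructions apart boils down to, assuming AVX2/FMA doubles; compile with -mavx2 -mfma.

    // Several independent accumulators break the FMA dependency chain,
    // so the CPU can keep more multiply-adds in flight at once.
    #include <immintrin.h>

    double dot(const double* a, const double* b, int n) {
        __m256d acc0 = _mm256_setzero_pd();
        __m256d acc1 = _mm256_setzero_pd();
        __m256d acc2 = _mm256_setzero_pd();
        __m256d acc3 = _mm256_setzero_pd();
        for (int i = 0; i + 16 <= n; i += 16) {
            // Four independent FMAs per iteration: each feeds a different
            // accumulator, so they don't wait on each other's latency.
            acc0 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i),      _mm256_loadu_pd(b + i),      acc0);
            acc1 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 4),  _mm256_loadu_pd(b + i + 4),  acc1);
            acc2 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 8),  _mm256_loadu_pd(b + i + 8),  acc2);
            acc3 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 12), _mm256_loadu_pd(b + i + 12), acc3);
        }
        __m256d acc = _mm256_add_pd(_mm256_add_pd(acc0, acc1), _mm256_add_pd(acc2, acc3));
        double tmp[4];
        _mm256_storeu_pd(tmp, acc);
        return tmp[0] + tmp[1] + tmp[2] + tmp[3];  // tail elements (n % 16) omitted for brevity
    }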
(As an aside, your best bet for practical use is a metaprogramming library [Blaze seems to be the best] wrapping core operations in a fast BLAS implementation. I personally use Blaze on top of OpenBLAS.)
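For what that setup looks like in practice, a minimal sketch, assuming Blaze is configured with BLAZE_BLAS_MODE=1 and linked against OpenBLAS; build line is an assumption.

    // Blaze dispatches large dense products to the BLAS, while its
    // expression templates handle the element-wise and scaling work.
    #include <blaze/Math.h>

    int main() {
        const size_t n = 512;
        blaze::DynamicMatrix<double> A(n, n, 1.0), B(n, n, 2.0);
        blaze::DynamicVector<double> x(n, 3.0);

        // One expression: the matrix-matrix product goes to dgemm,
        // the scaling and matrix-vector product are evaluated by Blaze.
        blaze::DynamicVector<double> y = 0.5 * (A * B) * x;
        return 0;
    }
    // Build (assumption): g++ -O3 -std=c++14 -DBLAZE_BLAS_MODE=1 main.cpp -lopenblas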
Note that OpenBLAS doesn't have AVX512 kernels, and its (as yet unreleased) AVX2 fallback is a factor of three slower than MKL dgemm on Knights Landing. It looks as if the next release of BLIS will remedy the lack of AVX512 support in free BLAS implementations with dynamic micro-architecture dispatch, including fairly competitive AVX512 kernels (>80% of MKL dgemm on KNL).
I don't know about the competition on non-x86 architectures, but BLIS currently has more limited support than OpenBLAS.
Good to know! I wasn't aware. It doesn't matter too much for me, since the machines I'm using only support AVX2, but I do write my code to work generically on all vectorizations up to AVX512, and I'm likely to use MKL on Knights Landing+.
Very timely, I just spent the last two days getting Eigen, BLAS, SuiteSparse, and Ceres playing nicely together on Windows. I recognized the GEMM call from the CMake module that checks for BLAS. Amazing how much work and optimization goes into these libraries that underpin so many of today's leading ML and CV programs.
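For anyone curious, the symbol that check probes for is the Fortran-ABI GEMM entry point (sgemm_/dgemm_). A hedged sketch of calling it directly, assuming the common trailing-underscore name mangling:

    #include <vector>

    // Fortran-ABI declaration: everything is passed by pointer.
    extern "C" void dgemm_(const char* transa, const char* transb,
                           const int* m, const int* n, const int* k,
                           const double* alpha, const double* a, const int* lda,
                           const double* b, const int* ldb,
                           const double* beta, double* c, const int* ldc);

    int main() {
        const int n = 4;
        std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);
        const double alpha = 1.0, beta = 0.0;
        // Column-major C = alpha*A*B + beta*C, the call the CMake check links against.
        dgemm_("N", "N", &n, &n, &n, &alpha, A.data(), &n, B.data(), &n, &beta, C.data(), &n);
        return 0;
    }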
All BLAS libraries currently care only about float32 and float64. I wanted very fast routines for integer matrix multiplication, so I use this as a fallback for integers while relying on OpenBLAS/MKL/CLBlast (OpenCL)/cuBLAS (CUDA) for floats.
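Something like the following is the shape of such an integer fallback: a plain int32 GEMM sketch with i-k-j loop order so the inner loop streams through contiguous rows of B and C. Not the actual library code, just an illustration.

    #include <cstdint>
    #include <vector>

    // C (m x n) += A (m x k) * B (k x n), all row-major.
    void gemm_i32(const std::vector<int32_t>& A, const std::vector<int32_t>& B,
                  std::vector<int32_t>& C, int m, int n, int k) {
        for (int i = 0; i < m; ++i) {
            for (int p = 0; p < k; ++p) {
                const int32_t a = A[i * k + p];   // reused across the whole row of B
                for (int j = 0; j < n; ++j) {
                    C[i * n + j] += a * B[p * n + j];
                }
            }
        }
    }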
I don't know which operations are implemented for those types, but the "dnn" support in the libxsmm ("small matrix multiplication") library has I32, I16, and I8 <https://github.com/hfp/libxsmm>.
I did stumble upon libxsmm while looking for a BLAS library, but I didn't see that they had integers.
Do you know what libxsmm means by "small"? There is nothing about it in their README/wiki. It does say, however, that libxsmm doesn't support 32-bit OSes, and my library is used on IoT devices.
There is indeed no 32-bit support. Regarding integers, LIBXSMM supports lower-precision GEMM; however, we haven't fully represented this in our usual sample code (samples/xgemm, samples/bgemm).
Tiling schemes for big matrices aside, the code generation for low-precision GEMM kernels is here:
I appreciate the work done at the hardware level for streaming computations over structured data. But from a high-level view, it's more critical to be able to compute matrix decompositions efficiently: they help solve systems of linear equations and, obviously, discretised PDEs.
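For example, once a factorisation is available, the linear systems coming out of a discretised PDE become cheap to solve. A small sketch using Eigen's partial-pivoting LU (any decomposition plays the same role):

    #include <Eigen/Dense>

    int main() {
        const int n = 100;
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
        Eigen::VectorXd b = Eigen::VectorXd::Random(n);

        // Factor once (O(n^3)), then reuse the factorisation for many
        // right-hand sides (O(n^2) each), e.g. one per time step.
        Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
        Eigen::VectorXd x1 = lu.solve(b);
        Eigen::VectorXd x2 = lu.solve(2.0 * b);
        return 0;
    }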
[0] https://news.ycombinator.com/item?id=10114830