How to Optimize Matrix Multiplication (Matmul) Code to Run Fast on a Single Processor Core

Improving the performance of Matrix Multiplication

There are many, many things you can do to improve the efficiency of matrix multiplication.

To examine how to improve the basic algorithm, let's first take a look at our current options. The naive implementation has 3 nested loops and a time complexity of O(n^3). There is also Strassen's method, which lowers the asymptotic complexity to roughly O(n^2.81); in practice, though, it rarely pays off at typical sizes because of its larger constant factors and less cache-friendly access pattern.

That is the theory. Now consider how matrices are stored in memory. Row-major is the standard in C, but column-major (used by Fortran and many BLAS libraries) is common too. Depending on the storage scheme, transposing one of the operands can improve speed because the inner loop then walks through contiguous memory and suffers fewer cache misses. Matrix multiplication is, at heart, just a collection of vector dot products and additions. The same row (or column) is reused against many vectors, so it pays to keep it in cache for fast repeated access.
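To see the effect in code, here is a minimal sketch of my own (not from the original discussion) that simply reorders the classic i-j-k loops into i-k-j, so the innermost loop walks B and C contiguously in a row-major layout:

    // Assumes C has been zero-initialized by the caller.
    // The naive i-j-k order strides through B column by column, which is
    // cache-hostile for row-major storage; i-k-j walks B and C row by row.
    void matmul_ikj(int n, const float *A, const float *B, float *C) {
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++) {
                float a = A[i * n + k];          // reused across the inner loop
                for (int j = 0; j < n; j++) {
                    C[i * n + j] += a * B[k * n + j];
                }
            }
        }
    }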

Now, with multiple cores, SIMD units and the cache hierarchy in the picture, the game changes. Looking a little closer, a dot product is nothing but a series of multiplications followed by a summation. The multiplications are independent of each other, so they can be done in parallel, and the data they need can be loaded in blocks rather than one value at a time.

Now let's make things a little more complicated. For matrix multiplication there is a distinction between single-precision and double-precision floating point: typically the former is 32 bits and the latter 64 (the exact sizes depend on the platform). A CPU has only a fixed number of registers of fixed width, so the wider your numbers, the fewer of them you can process at once. The moral of the story: stick to single precision unless you really need double.
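As a quick sanity check of that claim (a tiny sketch, assuming an SSE-capable x86 CPU), a 128-bit vector register holds four singles but only two doubles:

    #include <stdio.h>
    #include <emmintrin.h>   // __m128 (4 x float) and __m128d (2 x double)

    int main(void) {
        // One 128-bit SSE register: 4 single-precision or 2 double-precision lanes.
        printf("floats per 128-bit register : %zu\n", sizeof(__m128)  / sizeof(float));
        printf("doubles per 128-bit register: %zu\n", sizeof(__m128d) / sizeof(double));
        return 0;
    }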

Now that we've gone through the basics of how matrix multiplication can be tuned, worry not: you need to do almost none of this yourself, because highly optimized subroutines already exist. As mentioned in the comments, there are GotoBLAS, OpenBLAS, Intel's MKL, and Apple's Accelerate framework. MKL and Accelerate are proprietary, but OpenBLAS is a very competitive open-source alternative.

Here's a nice little example that multiplies two 8192 x 8192 single-precision matrices with Accelerate's BLAS on my Mac and prints the elapsed time in seconds:

#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <Accelerate/Accelerate.h>

int SIZE = 8192;

typedef float point_t;

// Transpose helper using vDSP (not called in this example, but handy when
// a routine expects the other storage order).
point_t* transpose(point_t* A) {
    point_t* At = (point_t*) calloc(SIZE * SIZE, sizeof(point_t));
    vDSP_mtrans(A, 1, At, 1, SIZE, SIZE);
    return At;
}

// C = A * B using Accelerate's single-precision GEMM. The rows of A are
// split into four blocks; each cblas_sgemm call computes one quarter of
// the rows of C.
point_t* dot(point_t* A, point_t* B) {
    point_t* C = (point_t*) calloc(SIZE * SIZE, sizeof(point_t));
    int step = (SIZE * SIZE / 4);   // element offset of one row block

    cblas_sgemm(CblasRowMajor,
                CblasNoTrans, CblasNoTrans, SIZE/4, SIZE, SIZE,
                1.0, &A[0], SIZE, B, SIZE, 0.0, &C[0], SIZE);

    cblas_sgemm(CblasRowMajor,
                CblasNoTrans, CblasNoTrans, SIZE/4, SIZE, SIZE,
                1.0, &A[step], SIZE, B, SIZE, 0.0, &C[step], SIZE);

    cblas_sgemm(CblasRowMajor,
                CblasNoTrans, CblasNoTrans, SIZE/4, SIZE, SIZE,
                1.0, &A[step * 2], SIZE, B, SIZE, 0.0, &C[step * 2], SIZE);

    cblas_sgemm(CblasRowMajor,
                CblasNoTrans, CblasNoTrans, SIZE/4, SIZE, SIZE,
                1.0, &A[step * 3], SIZE, B, SIZE, 0.0, &C[step * 3], SIZE);

    return C;
}

void print(point_t* A) {
    int i, j;
    for(i = 0; i < SIZE; i++) {
        for(j = 0; j < SIZE; j++) {
            printf("%f ", A[i * SIZE + j]);
        }
        printf("\n");
    }
}

int main() {
    // SIZE starts at 8192, so this loop runs once; lower the initial value
    // of SIZE to sweep several matrix sizes.
    for(; SIZE <= 8192; SIZE *= 2) {
        point_t* A = (point_t*) calloc(SIZE * SIZE, sizeof(point_t));
        point_t* B = (point_t*) calloc(SIZE * SIZE, sizeof(point_t));

        srand(getpid());

        int i;
        for(i = 0; i < SIZE * SIZE; i++) {
            A[i] = (point_t)rand() / (point_t)RAND_MAX;
            B[i] = (point_t)rand() / (point_t)RAND_MAX;
        }

        struct timeval t1, t2;
        double elapsed_time;

        gettimeofday(&t1, NULL);
        point_t* C = dot(A, B);
        gettimeofday(&t2, NULL);

        elapsed_time  = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
        elapsed_time += (t2.tv_usec - t1.tv_usec) / 1000.0;    // us to ms

        printf("Time taken for %d x %d matrix multiplication: %lf seconds\n",
               SIZE, SIZE, elapsed_time / 1000.0);

        free(A);
        free(B);
        free(C);
    }
    return 0;
}

At this point I should also mention SSE (Streaming SIMD Extensions), which you probably shouldn't hand-code unless you've worked with assembly or intrinsics before. The idea is to vectorise your C code so that it operates on short vectors of values instead of single scalars: each instruction processes a block of data (e.g. four floats) at once. When you write intrinsics yourself, the compiler mostly emits the instructions you asked for rather than applying its own auto-vectorisation. Done right, this can deliver a large constant-factor speedup (it does not change the O(n^3) asymptotic complexity, but a 4-8x gain in throughput is realistic). It is easy to misuse SSE, though, and many people unfortunately do, making the end result slower than the straightforward code.
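For a flavour of what hand-vectorised code looks like, here is a minimal sketch of my own (not part of the answer above): it multiplies two 4 x 4 row-major single-precision matrices with SSE intrinsics, broadcasting one element of A at a time and accumulating a full row of C per step:

    #include <xmmintrin.h>   // SSE intrinsics

    // C = A * B for 4x4 row-major float matrices, all 16-byte aligned.
    // Row i of C is the sum over k of A[i][k] * (row k of B).
    static void matmul4x4_sse(const float *A, const float *B, float *C) {
        __m128 b0 = _mm_load_ps(&B[0]);
        __m128 b1 = _mm_load_ps(&B[4]);
        __m128 b2 = _mm_load_ps(&B[8]);
        __m128 b3 = _mm_load_ps(&B[12]);
        for (int i = 0; i < 4; i++) {
            __m128 r = _mm_mul_ps(_mm_set1_ps(A[i*4 + 0]), b0);
            r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(A[i*4 + 1]), b1));
            r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(A[i*4 + 2]), b2));
            r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(A[i*4 + 3]), b3));
            _mm_store_ps(&C[i*4], r);
        }
    }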

I hope this motivates you to dig deeper. The world of matrix multiplication is a large and fascinating one. Below, I attach links for further reading.

  1. OpenBLAS
  2. More about SSE
  3. Intel Intrinsics

How to optimize matrix multiplication in C++ in terms of time complexity?

If you are curious whether faster algorithms exist in theory, then yes. For example, the Strassen algorithm (see https://en.wikipedia.org/wiki/Strassen_algorithm). And it's not even the fastest we know: as far as I know, the best for now is the Coppersmith–Winograd algorithm (see https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm), which is something like O(n^{2.37}) (Strassen's time complexity is something like O(n^{2.8})).
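For reference, Strassen's trick is to multiply 2 x 2 block matrices with 7 block products instead of 8 (these are the standard formulas); applied recursively this gives the O(n^{log2 7}) ≈ O(n^{2.81}) bound:

    M1 = (A11 + A22) * (B11 + B22)
    M2 = (A21 + A22) * B11
    M3 = A11 * (B12 - B22)
    M4 = A22 * (B21 - B11)
    M5 = (A11 + A12) * B22
    M6 = (A21 - A11) * (B11 + B12)
    M7 = (A12 - A22) * (B21 + B22)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6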

But in practice they are much harder to implement than the one you wrote, and they also hide a pretty large constant inside the O(), so the O(n^3) algorithm you wrote is actually better for small values of n and much easier to implement.

Also there is Strassen's hypothesis, which claims that for every eps > 0 there is an algorithm that multiplies two matrices with time complexity O(n^{2 + eps}). But, as you might have noticed, it is just a hypothesis for now.

Speeding up matrix multiplication using SIMD and OpenMP

Parallelism can only give you so much. Furthermore, the more the sequential code is already optimized, the smaller the relative gains from parallelism are likely to be. Nevertheless, one thing you can improve (I have done it in the past and it helped a lot) is to divide the matrix multiplication into smaller blocks, so that the whole multiplication becomes a sequence of multiplications of smaller sub-matrices (tiles).

By dividing the multiplication into smaller blocks, in which you multiply smaller sub-matrices, you improve the use of the cache, both its temporal locality and its spatial locality. You should choose the block sizes according to the sizes of the cache levels (e.g. L1, L2, and L3) of the architecture you are using. You can look at this in more detail in slides about cache blocking and matrix multiplication; What Every Programmer Should Know About Memory also has a vectorized cache-blocked matmul in an appendix.

For example, if you have an L3 cache, which is shared among cores, you can load multiple columns of matrix B into L3 and reuse those values while multiplying smaller tiles that fit in L1 and L2. You can go even further and subdivide those tiles again so that you can take advantage of the registers.
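Here is a minimal sketch of that idea (my illustration, not the answerer's code; the tile size T is a made-up parameter that you would tune so that a few T x T tiles fit comfortably in cache):

    // Blocked (tiled) matrix multiplication: C += A * B, all n x n, row-major.
    #define T 64   // hypothetical tile size; tune for your L1/L2 caches

    void matmul_blocked(int n, const float *A, const float *B, float *C) {
        for (int ii = 0; ii < n; ii += T)
            for (int kk = 0; kk < n; kk += T)
                for (int jj = 0; jj < n; jj += T)
                    // multiply one tile of A by one tile of B into a tile of C
                    for (int i = ii; i < ii + T && i < n; i++)
                        for (int k = kk; k < kk + T && k < n; k++) {
                            float a = A[i * n + k];
                            for (int j = jj; j < jj + T && j < n; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }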

After having optimized the memory usage of the matrix multiplication, you can try to get that extra speedup from multithreading. If you have a cluster of multi-core machines you can try to parallelize using MPI + OpenMP, although at that point you will encounter another bottleneck: communication among processes.
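On a single multi-core machine, the simplest starting point is an OpenMP pragma on the outer tile loop of the blocked sketch above (again only an illustration; the iterations over ii touch disjoint rows of C, so they need no synchronisation). Compile with -fopenmp:

    #include <omp.h>

    #ifndef T
    #define T 64   // tile size, as in the previous sketch
    #endif

    // Same blocked kernel as above, with the row-block loop parallelised.
    void matmul_blocked_omp(int n, const float *A, const float *B, float *C) {
        #pragma omp parallel for schedule(static)
        for (int ii = 0; ii < n; ii += T)
            for (int kk = 0; kk < n; kk += T)
                for (int jj = 0; jj < n; jj += T)
                    for (int i = ii; i < ii + T && i < n; i++)
                        for (int k = kk; k < kk + T && k < n; k++) {
                            float a = A[i * n + k];
                            for (int j = jj; j < jj + T && j < n; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }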

This all depends on the architecture your code runs on: if you have a NUMA architecture, then you also have to factor in things like local versus non-local memory. You can also explore the GPU route: Matrix-Matrix Multiplication on the GPU with Nvidia CUDA.

Have a look at BLAS to get a good insight into efficient code.

Efficient 4x4 matrix multiplication (C vs assembly)

There is a way to accelerate the code and outplay the compiler. It does not involve any sophisticated pipeline analysis or deep micro-optimisation of the code (which doesn't mean that it couldn't further benefit from those). The optimisation uses three simple tricks:

  1. The function is now 32-byte aligned (which significantly boosted performance),

  2. The main loop runs in reverse, which reduces the end-of-loop comparison to a test against zero (based on EFLAGS),

  3. Instruction-level address arithmetic proved to be faster than the "external" pointer calculation (even though it requires twice as many additions, in 3 out of 4 cases). It shortened the loop body by four instructions and reduced data dependencies within its execution path. See the related question.

Additionally, the code uses a relative jump syntax, which suppresses the symbol redefinition error that occurs when GCC tries to inline it (after being placed within an asm statement and compiled with -O3).

    .text
    .align 32                         # 1. function entry alignment
    .globl matrixMultiplyASM          #    (for a faster call)
    .type matrixMultiplyASM, @function
matrixMultiplyASM:
    movaps    (%rdi), %xmm0
    movaps  16(%rdi), %xmm1
    movaps  32(%rdi), %xmm2
    movaps  48(%rdi), %xmm3
    movq    $48, %rcx                 # 2. loop reversal
1:                                    #    (for simpler exit condition)
    movss     (%rsi, %rcx), %xmm4     # 3. extended address operands
    shufps  $0, %xmm4, %xmm4          #    (faster than pointer calculation)
    mulps   %xmm0, %xmm4
    movaps  %xmm4, %xmm5
    movss    4(%rsi, %rcx), %xmm4
    shufps  $0, %xmm4, %xmm4
    mulps   %xmm1, %xmm4
    addps   %xmm4, %xmm5
    movss    8(%rsi, %rcx), %xmm4
    shufps  $0, %xmm4, %xmm4
    mulps   %xmm2, %xmm4
    addps   %xmm4, %xmm5
    movss   12(%rsi, %rcx), %xmm4
    shufps  $0, %xmm4, %xmm4
    mulps   %xmm3, %xmm4
    addps   %xmm4, %xmm5
    movaps  %xmm5, (%rdx, %rcx)
    subq    $16, %rcx                 # one 'sub' (vs 'add' & 'cmp')
    jge     1b                        # SF=OF, idiom: jump if positive
    ret

This is the fastest x86-64 implementation I have seen so far. I will appreciate, vote up and accept any answer providing a faster piece of assembly for that purpose!

How does BLAS get such extreme performance?

A good starting point is the great book The Science of Programming Matrix Computations by Robert A. van de Geijn and Enrique S. Quintana-Ortí. They provide a free download version.

BLAS is divided into three levels:

  • Level 1 defines a set of linear algebra functions that operate on vectors only. These functions benefit from vectorization (e.g. from using SSE).

  • Level 2 functions are matrix-vector operations, e.g. a matrix-vector product. These functions could be implemented in terms of Level 1 functions. However, you can boost their performance if you provide a dedicated implementation that makes use of a multiprocessor architecture with shared memory.

  • Level 3 functions are operations like the matrix-matrix product. Again, you could implement them in terms of Level 2 functions. But Level 3 functions perform O(N^3) operations on O(N^2) data. So if your platform has a cache hierarchy, you can boost performance by providing a dedicated implementation that is cache optimized/cache friendly. This is nicely described in the book. The main boost of Level 3 functions comes from cache optimization; it significantly exceeds the second boost from parallelism and other hardware optimizations.
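To make the three levels concrete, here is a small sketch of my own against the standard CBLAS interface (the routines cblas_sdot, cblas_sgemv and cblas_sgemm are the single-precision Level 1/2/3 entry points; the header name may differ between implementations):

    #include <cblas.h>   /* header name varies (Accelerate, MKL, OpenBLAS) */

    void blas_levels_demo(int n, const float *A, const float *x,
                          float *y, const float *B, float *C) {
        /* Level 1: vector-vector, O(n) work on O(n) data */
        float d = cblas_sdot(n, x, 1, y, 1);            /* d = x . y */

        /* Level 2: matrix-vector, O(n^2) work on O(n^2) data */
        cblas_sgemv(CblasRowMajor, CblasNoTrans, n, n,
                    1.0f, A, n, x, 1, 0.0f, y, 1);      /* y = A * x */

        /* Level 3: matrix-matrix, O(n^3) work on O(n^2) data --
           this ratio is what makes cache blocking pay off */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, A, n, B, n, 0.0f, C, n);  /* C = A * B */

        (void)d;   /* silence unused-variable warning in this sketch */
    }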

By the way, most (or even all) high-performance BLAS implementations are NOT implemented in Fortran. ATLAS is implemented in C. GotoBLAS/OpenBLAS is implemented in C, with its performance-critical parts in assembly. Only the reference implementation of BLAS is written in Fortran. However, all these BLAS implementations provide a Fortran interface so that they can be linked against LAPACK (LAPACK gains all its performance from BLAS).

Optimized compilers play a minor role in this respect (and for GotoBLAS/OpenBLAS the compiler does not matter at all).

IMHO no BLAS implementation uses algorithms like the Coppersmith–Winograd algorithm or the Strassen algorithm. The likely reasons are:

  • Maybe it's not possible to provide a cache-optimized implementation of these algorithms (i.e. you would lose more than you would win)
  • These algorithms are numerically not stable. As BLAS is the computational kernel of LAPACK this is a no-go.
  • Although these algorithms have a nice time complexity on paper, the Big O notation hides a large constant, so it only starts to become viable for extremely large matrices.

Edit/Update:

The new and groundbreaking papers on this topic are the BLIS papers. They are exceptionally well written. For my lecture "Software Basics for High Performance Computing" I implemented the matrix-matrix product following their papers. Actually I implemented several variants of the matrix-matrix product. The simplest variant is written entirely in plain C and has fewer than 450 lines of code. All the other variants merely optimize the following loops:

    /* Micro-kernel: AB is an MR x NR block (stored column-major) that
       accumulates the product of packed panels of A and B over kc
       rank-1 updates; A advances by MR and B by NR per step. */
    for (l=0; l<MR*NR; ++l) {
        AB[l] = 0;
    }
    for (l=0; l<kc; ++l) {
        for (j=0; j<NR; ++j) {
            for (i=0; i<MR; ++i) {
                AB[i+j*MR] += A[i]*B[j];
            }
        }
        A += MR;
        B += NR;
    }

The overall performance of the matrix-matrix product depends only on these loops; about 99.9% of the time is spent here. In the other variants I used intrinsics and assembler code to improve the performance. You can see the tutorial going through all the variants here:

ulmBLAS: Tutorial on GEMM (Matrix-Matrix Product)
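To give an idea of what those intrinsics variants look like, here is a hedged SSE sketch of the same rank-1 update for MR = NR = 4 (my illustration, not the ulmBLAS code); each __m128 holds one column of the 4 x 4 block AB:

    #include <xmmintrin.h>

    void microkernel_4x4_sse(long kc, const float *A, const float *B, float *AB) {
        __m128 ab0 = _mm_setzero_ps();   /* column 0 of AB */
        __m128 ab1 = _mm_setzero_ps();   /* column 1 */
        __m128 ab2 = _mm_setzero_ps();   /* column 2 */
        __m128 ab3 = _mm_setzero_ps();   /* column 3 */
        for (long l = 0; l < kc; ++l) {
            __m128 a = _mm_loadu_ps(A);  /* A[0..3] of the packed panel */
            ab0 = _mm_add_ps(ab0, _mm_mul_ps(a, _mm_set1_ps(B[0])));
            ab1 = _mm_add_ps(ab1, _mm_mul_ps(a, _mm_set1_ps(B[1])));
            ab2 = _mm_add_ps(ab2, _mm_mul_ps(a, _mm_set1_ps(B[2])));
            ab3 = _mm_add_ps(ab3, _mm_mul_ps(a, _mm_set1_ps(B[3])));
            A += 4;  B += 4;             /* advance by MR and NR */
        }
        _mm_storeu_ps(&AB[0],  ab0);
        _mm_storeu_ps(&AB[4],  ab1);
        _mm_storeu_ps(&AB[8],  ab2);
        _mm_storeu_ps(&AB[12], ab3);
    }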

Together with the BLIS papers it becomes fairly easy to understand how libraries like Intel MKL can achieve such performance, and why it does not matter whether you use row-major or column-major storage!

The final benchmarks are here (we called our project ulmBLAS):

Benchmarks for ulmBLAS, BLIS, MKL, openBLAS and Eigen

Another Edit/Update:

I also wrote some tutorial on how BLAS gets used for numerical linear algebra problems like solving a system of linear equations:

High Performance LU Factorization

(This LU factorization is for example used by Matlab for solving a system of linear equations.)

I hope to find time to extend the tutorial to describe and demonstrate how to realise a highly scalable parallel implementation of the LU factorization like in PLASMA.

Ok, here you go: Coding a Cache Optimized Parallel LU Factorization

P.S.: I also did some experiments on improving the performance of uBLAS. It is actually pretty simple to boost (yes, pun intended :) ) the performance of uBLAS:

Experiments on uBLAS.

Here is a similar project with BLAZE:

Experiments on BLAZE.


