Loop Unrolling to Achieve Maximum Throughput with Ivy Bridge and Haswell


For Sandy/Ivy Bridge you need to unroll by 3:

  • Only FP Add has dependency on the previous iteration of the loop
  • FP Add can issue every cycle
  • FP Add takes three cycles to complete
  • Thus unrolling by 3/1 = 3 completely hides the latency (see the sketch after this list)
  • FP Mul and FP Load do not have a dependency on the previous iteration and you can rely on the OoO core to issue them in the near-optimal order. These instructions could affect the unroll factor only if they lowered the throughput of FP Add (not the case here, FP Load + FP Add + FP Mul can issue every cycle).
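
A minimal C-intrinsics sketch of that unroll (hypothetical, not from the original question): three separate accumulators give three independent FP-add chains for a reduction with one load per mul+add, such as sum += k*a[i]. It assumes AVX, 32-byte-aligned data, and n a multiple of 24 floats.

#include <immintrin.h>
#include <stddef.h>

/* Hypothetical sketch: unroll by 3 with separate accumulators so the 3-cycle
 * FP-add latency on Sandy/Ivy Bridge is hidden (one add per chain every 3 cycles). */
float scaled_sum_unroll3(const float *a, float k, size_t n)
{
    __m256 kv   = _mm256_set1_ps(k);
    __m256 acc0 = _mm256_setzero_ps();
    __m256 acc1 = _mm256_setzero_ps();
    __m256 acc2 = _mm256_setzero_ps();

    for (size_t i = 0; i < n; i += 3 * 8) {        /* assumes n % 24 == 0 */
        acc0 = _mm256_add_ps(acc0, _mm256_mul_ps(kv, _mm256_load_ps(a + i)));
        acc1 = _mm256_add_ps(acc1, _mm256_mul_ps(kv, _mm256_load_ps(a + i + 8)));
        acc2 = _mm256_add_ps(acc2, _mm256_mul_ps(kv, _mm256_load_ps(a + i + 16)));
    }

    /* reduce: combine the three accumulators, then sum within the vector */
    __m256 sum = _mm256_add_ps(_mm256_add_ps(acc0, acc1), acc2);
    __m128 lo  = _mm_add_ps(_mm256_castps256_ps128(sum),
                            _mm256_extractf128_ps(sum, 1));
    lo = _mm_add_ps(lo, _mm_movehl_ps(lo, lo));
    lo = _mm_add_ss(lo, _mm_movehdup_ps(lo));
    return _mm_cvtss_f32(lo);
}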

For Haswell you need to unroll by 10:

  • Only FMA has dependency on the previous iteration of the loop
  • Two independent FMAs can issue every cycle (i.e. a reciprocal throughput of 0.5 cycles)
  • FMA has latency of 5
  • Thus unrolling by 5/0.5 = 10 completely hides FMA latency (see the sketch after this list)
  • The two FP Load microoperations do not have a dependency on the previous iteration, and can co-issue with 2x FMA, so they don't affect the unroll factor.
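
Mirroring the sketch above, a hypothetical C-intrinsics version of the unroll-by-10 for Haswell, again for a one-load-per-FMA reduction such as sum += k*a[i] (assumes AVX2/FMA3, 32-byte-aligned data, and n a multiple of 80 floats). Compilers fully unroll the fixed-count inner loop and keep acc[] in registers.

#include <immintrin.h>
#include <stddef.h>

/* Hypothetical sketch: 10 independent FMA chains cover Haswell's 5-cycle FMA
 * latency at 2 FMAs per cycle (5 / 0.5 = 10), and the 2 load ports keep up
 * because each FMA needs only one load here. */
float scaled_sum_unroll10(const float *a, float k, size_t n)
{
    __m256 kv = _mm256_set1_ps(k);
    __m256 acc[10];
    for (int j = 0; j < 10; j++)
        acc[j] = _mm256_setzero_ps();

    for (size_t i = 0; i < n; i += 10 * 8)          /* assumes n % 80 == 0 */
        for (int j = 0; j < 10; j++)                /* fixed trip count: fully unrolled */
            acc[j] = _mm256_fmadd_ps(kv, _mm256_load_ps(a + i + 8 * j), acc[j]);

    __m256 sum = acc[0];                            /* reduce the 10 chains to one vector */
    for (int j = 1; j < 10; j++)
        sum = _mm256_add_ps(sum, acc[j]);
    __m128 lo = _mm_add_ps(_mm256_castps256_ps128(sum),
                           _mm256_extractf128_ps(sum, 1));
    lo = _mm_add_ps(lo, _mm_movehl_ps(lo, lo));
    lo = _mm_add_ss(lo, _mm_movehdup_ps(lo));
    return _mm_cvtss_f32(lo);
}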

Loop unrolling not giving expected speedup for floating-point dot product

Your unroll doesn't help with the FP latency bottleneck:

sum + x + y + z without -ffast-math evaluates left to right, ((sum + x) + y) + z, which is the same order of operations as sum += x; sum += y; ..., so you haven't done anything about the single dependency chain running through all the + operations. Loop overhead (or front-end throughput) is not the bottleneck; it's the 3-cycle latency of addss on Haswell, so this unroll makes basically no difference.

What would work is sum += u[i]*v[i] + u[i+1]*v[i+1] + ... as a way to unroll without multiple accumulators: the sum within each group of elements is computed independently of the running total, so only one add per group sits on the loop-carried chain.

It costs slightly more math operations that way (each group starts with a plain mul and ends with a separate add into sum), but the middle ones can still contract into FMAs if you compile with -march=haswell. See comments on AVX performance slower for bitwise xor op and popcount for an example of GCC turning a naive unroll like sum += u0*v0; sum += u1*v1; into sum += u0*v0 + u1*v1;. In that case the problem was slightly different, a sum of squared differences like sum += (u0-v0)**2 + (u1-v1)**2;, but it boils down to the same latency problem of ultimately doing some multiplies and adds.
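
A hypothetical C sketch of that grouping (not the asker's code), where only one add per group of four elements lands on the sum dependency chain:

#include <stddef.h>

/* Unroll by grouping, without extra accumulators.  Without -ffast-math this is
 * still evaluated exactly as written: sum = sum + (group), so only one add per
 * four elements sits on the loop-carried chain.  Assumes n % 4 == 0. */
float dot_grouped(const float *u, const float *v, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i += 4)
        sum += u[i]*v[i] + u[i+1]*v[i+1] + u[i+2]*v[i+2] + u[i+3]*v[i+3];
    return sum;
}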

The other way to solve the problem is with multiple accumulators, allowing all the operations to be FMAs. But Haswell has 5-cycle-latency FMA and 3-cycle-latency addss, so doing the sum += ... addition on its own, not as part of an FMA, actually helps with the latency bottleneck on Haswell (unlike Skylake, where add/sub/mul and FMA are all 4-cycle latency). The following all show unrolling with multiple accumulators, instead of adding groups together as a first step towards pairwise summation as described above:

  • Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators)
  • When, if ever, is loop unrolling still useful?
  • Loop unrolling to achieve maximum throughput with Ivy Bridge and Haswell

FP math instruction throughput isn't the bottleneck for a big dot product on modern CPUs, only latency. Or load throughput if you unroll enough.

Explain why any (scalar) version of an inner product procedure running on an Intel Core i7 Haswell processor cannot achieve a CPE less than 1.00.

Each element takes 2 loads, and with only 2 load ports, that's a hard throughput bottleneck. (https://agner.org/optimize/ / https://www.realworldtech.com/haswell-cpu/5/)

I'm assuming you're counting an "element" as an i value, a pair of floats, one each from udata[i] and vdata[i]. The FP FMA throughput bottleneck is also 2/clock on Haswell (whether they're scalar, 128-bit, or 256-bit vectors), but dot product takes 2 loads per FMA. In theory, even Sandybridge or maybe even K8 could achieve 1 element per clock, with separate mul and add instructions, since they both support 2 loads per clock, and have a wide enough pipeline to get load / mulss / addss through the pipeline with some room to spare.
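
For reference, a hypothetical reconstruction of the scalar loop being analyzed (only the udata/vdata names are taken from the question), with the per-element uop budget in comments:

/* Per element (one i value) of the scalar inner product below:
 *   2 loads (udata[i], vdata[i])  -> only 2 load ports (p2/p3) -> >= 1 cycle/element
 *   1 mul + 1 add (or 1 FMA)      -> well under the 2-per-clock FP limit
 * so load-port throughput alone bounds CPE at 1.00, even with perfect scheduling. */
#include <stddef.h>

float inner_scalar(const float *udata, const float *vdata, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += udata[i] * vdata[i];
    return sum;
}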

How to reach AVX computation throughput for a simple loop

Improving performance for codes like yours is a "well explored" and still popular area. Take a look at dot-product (perfect link provided by Z Boson already) or at some (D)AXPY optimization discussions (https://scicomp.stackexchange.com/questions/1932/are-daxpy-dcopy-dscal-overkills).

In general, key topics to explore and consider applying are:

  • AVX2's advantage over AVX, due to FMA and a better load/store-port microarchitecture on Haswell
  • Prefetching. "Streaming stores" / "non-temporal stores" on some platforms (see the sketch below).
  • Threading parallelism to reach maximum sustained bandwidth
  • Unrolling (already done by you; the Intel Compiler can also do it with #pragma unroll(X)). Not a big difference for "streaming" codes.
  • Finally, deciding which set of hardware platforms you want to optimize your code for

The last bullet is particularly important, because for "streaming" and overall memory-bound codes it's important to know more about the target memory subsystem; for example, on existing and especially future high-end HPC servers (2nd-gen Xeon Phi, codenamed Knights Landing, as an example) you may have a very different "roofline model" balance between bandwidth and compute, and even different techniques, than when optimizing for an average desktop machine.
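
As an illustration of the "streaming stores" bullet above, here is a minimal sketch (hypothetical, not the asker's code) of non-temporal stores in a simple memory-bound kernel. It assumes AVX, 32-byte-aligned y, and n a multiple of 8.

#include <immintrin.h>
#include <stddef.h>

/* Hypothetical sketch of "streaming" (non-temporal) stores for y[i] = a * x[i]:
 * the NT store bypasses the cache, which can help when y[] won't be re-read soon. */
void scale_stream(float *restrict y, const float *restrict x, float a, size_t n)
{
    __m256 av = _mm256_set1_ps(a);
    for (size_t i = 0; i < n; i += 8)
        _mm256_stream_ps(y + i, _mm256_mul_ps(av, _mm256_load_ps(x + i)));
    _mm_sfence();   /* order the NT stores before later stores/loads */
}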

When the compiler reorders AVX instructions on Sandy, does it affect performance?

With x86 CPUs many people expect to get the maximum FLOPS from the dot product

for(int i=0; i<n; i++) sum += a[i]*b[i];

but this turns out not to be the case.

What can give the maximum FLOPS is this

for(int i=0; i<n; i++) sum += k*a[i];

where k is a constant. Why is the CPU not optimized for the dot product? I can only speculate. One of the things CPUs are optimized for is BLAS, since BLAS is a building block of many other routines.

The Level-1 and Level-2 BLAS routines become memory-bandwidth bound as n increases. It's only the Level-3 routines (e.g. matrix multiplication) which are capable of being compute bound. This is because the Level-3 computations scale as n^3 while the reads scale as n^2 (for n×n matrices, a matrix multiply does about 2n^3 flops but only touches about 3n^2 values). So the CPU is optimized for the Level-3 routines. The Level-3 routines don't need to optimize for a single dot product. They only need to read from one matrix per iteration (sum += k*a[i]).

From this we can conclude that the number of bits needed to be read each cycle to get the maximum FLOPS for the Level-3 routines is

read_size = SIMD_WIDTH * num_MAC

where num_MAC is the number of multiply–accumulate operations that can be done each cycle.

                 SIMD_WIDTH (bits)   num_MAC   read_size (bits)   ports used
Nehalem                 128             1             128         128-bit on port 2
Sandy Bridge            256             1             256         128-bit on ports 2 and 3
Haswell                 256             2             512         256-bit on ports 2 and 3
Skylake                 512             2            1024         ?

For Nehalem through Haswell this agrees with what the hardware is capable of. I don't actually know that Skylake will be able to read 1024 bits per clock cycle, but if it can't, AVX512 won't be very interesting, so I'm confident in my guess. A nice plot for Nehalem, Sandy Bridge, and Haswell, per port, can be found at http://www.anandtech.com/show/6355/intels-haswell-architecture/8

So far I have ignored latency and dependency chains. To really get the maximum FLOPS you need to unroll the loop at least three times on Sandy Bridge (I use four because I find it inconvenient to work with multiples of three)

The best way to answer your question about performance is to find the theoretical best performance you expect for your operation and then compare how close your code gets to this. I call this the efficiency. Doing this you will find that, despite the reordering of the instructions you see in the assembly, the performance is still good. But there are many other subtle issues you may need to consider. Here are three issues I encountered:

  • L1 memory bandwidth: 50% drop in efficiency using addresses which differ by 4096
  • Obtaining peak bandwidth on Haswell in the L1 cache: only getting 62%
  • Difference in performance between MSVC and GCC for highly optimized matrix multiplication

I also suggest you consider using IACA (the Intel Architecture Code Analyzer) to study the performance.

Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators)

Related:

  • AVX2: Computing dot product of 512 float arrays has a good manually-vectorized dot-product loop using multiple accumulators with FMA intrinsics. The rest of the answer explains why that's a good thing, with cpu-architecture / asm details.
  • Dot Product of Vectors with SIMD shows that with the right compiler options, some compilers will auto-vectorize that way.
  • Loop unrolling to achieve maximum throughput with Ivy Bridge and Haswell another version of this Q&A with more focus on unrolling to hide latency (and bottleneck on throughput), less background on what that even means. And with examples using C intrinsics.
  • Latency bounds and throughput bounds for processors for operations that must occur in sequence - a textbook exercise on dependency chains, with two interlocking chains, one reading from earlier in the other.

Look at your loop again: movss xmm1, src has no dependency on the old value of xmm1, because its destination is write-only. Each iteration's mulss is independent. Out-of-order execution can and does exploit that instruction-level parallelism, so you definitely don't bottleneck on mulss latency.

Optional reading: In computer architecture terms, register renaming avoids the WAR (write-after-read) anti-dependency data hazard of reusing the same architectural register. (Some pipelining + dependency-tracking schemes before register renaming didn't solve all the problems, so the field of computer architecture makes a big deal out of the different kinds of data hazards.)

Register renaming with Tomasulo's algorithm makes everything go away except the actual true dependencies (read after write), so any instruction where the destination is not also a source register has no interaction with the dependency chain involving the old value of that register. (Except for false dependencies, like popcnt on Intel CPUs, and writing only part of a register without clearing the rest (like mov al, 5 or sqrtss xmm2, xmm1). Related: Why do x86-64 instructions on 32-bit registers zero the upper part of the full 64-bit register?).


Back to your code:

.L13:
movss xmm1, DWORD PTR [rdi+rax*4]    ; load: write-only destination, no dependency on the old xmm1
mulss xmm1, DWORD PTR [rsi+rax*4]    ; independent product each iteration (micro-fused load)
add rax, 1                           ; loop counter: 1c latency, not the critical path
cmp cx, ax                           ; 16-bit compare because i is a short
addss xmm0, xmm1                     ; sum += product: the 3c loop-carried dependency
jg .L13

The loop-carried dependencies (from one iteration to the next) are each:

  • xmm0, read and written by addss xmm0, xmm1, which has 3 cycle latency on Haswell.
  • rax, read and written by add rax, 1. 1c latency, so it's not the critical-path.

It looks like you measured the execution time / cycle-count correctly, because the loop bottlenecks on the 3c addss latency.

This is expected: the serial dependency in a dot product is the addition into a single sum (aka the reduction), not the multiplies between vector elements. (Unrolling with multiple sum accumulator variables / registers can hide that latency.)

That is by far the dominant bottleneck for this loop, despite various minor inefficiencies:


short i produced the silly cmp cx, ax, which takes an extra operand-size prefix. Luckily, gcc managed to avoid actually doing add ax, 1, because signed-overflow is Undefined Behaviour in C. So the optimizer can assume it doesn't happen. (update: integer promotion rules make it different for short, so UB doesn't come into it, but gcc can still legally optimize. Pretty wacky stuff.)

If you'd compiled with -mtune=intel, or better, -march=haswell, gcc would have put the cmp and jg next to each other where they could macro-fuse.

I'm not sure why you have a * in your table on the cmp and add instructions. (update: I was purely guessing that you were using a notation like IACA does, but apparently you weren't). Neither of them fuse. The only fusion happening is micro-fusion of mulss xmm1, [rsi+rax*4].

And since it's a 2-operand ALU instruction with a read-modify-write destination register, it stays micro-fused even in the ROB on Haswell. (Sandybridge would un-laminate it at issue time.) Note that vmulss xmm1, xmm1, [rsi+rax*4] would un-laminate on Haswell, too.

None of this really matters, since you just totally bottleneck on FP-add latency, much slower than any uop-throughput limits. Without -ffast-math, there's nothing compilers can do. With -ffast-math, clang will usually unroll with multiple accumulators, and it will auto-vectorize so they will be vector accumulators. So you can probably saturate Haswell's throughput limit of 1 vector or scalar FP add per clock, if you hit in L1D cache.

With FMA being 5c latency and 0.5c throughput on Haswell, you would need 10 accumulators to keep 10 FMAs in flight and max out FMA throughput by keeping p0/p1 saturated with FMAs. (Skylake reduced FMA latency to 4 cycles, and runs multiply, add, and FMA on the FMA units. So it actually has higher add latency than Haswell.)

(You're bottlenecked on loads, because you need two loads for every FMA. In other cases, you can actually gain add throughput by replacing a vaddps instruction with an FMA with a multiplier of 1.0. This means more latency to hide, so it's best in a more complex algorithm where you have an add that's not on the critical path in the first place.)


Re: uops per port:

there are 1.19 uops per loop on port 5, much more than the expected 0.5; is it that the uop dispatcher is trying to make the uop counts on every port the same?

Yes, something like that.

The uops are not assigned randomly, or somehow evenly distributed across every port they could run on. You assumed that the add and cmp uops would distribute evenly across p0156, but that's not the case.

The issue stage assigns uops to ports based on how many uops are already waiting for that port. Since addss can only run on p1 (and it's the loop bottleneck), there are usually a lot of p1 uops issued but not executed. So few other uops will ever be scheduled to port1. (This includes mulss: most of the mulss uops will end up scheduled to port 0.)

Taken-branches can only run on port 6. Port 5 doesn't have any uops in this loop that can only run there, so it ends up attracting a lot of the many-port uops.

The scheduler (which picks unfused-domain uops out of the Reservation Station) isn't smart enough to run critical-path-first, so this assignment algorithm reduces resource-conflict latency (other uops stealing port 1 on cycles when an addss could have run). It's also useful in cases where you bottleneck on the throughput of a given port.

Scheduling of already-assigned uops is normally oldest-ready first, as I understand it. This simple algorithm is hardly surprising, since it has to pick a uop with its inputs ready for each port from a 60-entry RS every clock cycle, without melting your CPU. The out-of-order machinery that finds and exploits the ILP is one of the significant power costs in a modern CPU, comparable to the execution units that do the actual work.

Related / more details: How are x86 uops scheduled, exactly?



More performance analysis stuff:

Other than cache misses / branch mispredicts, the three main possible bottlenecks for CPU-bound loops are:

  • dependency chains (like in this case)
  • front-end throughput (max of 4 fused-domain uops issued per clock on Haswell)
  • execution port bottlenecks, like if lots of uops need p0/p1, or p2/p3, like in your unrolled loop. Count unfused-domain uops for specific ports. Generally you can assume a best-case distribution, with uops that can run on other ports not stealing the busy ports very often, but it does happen sometimes.

A loop body or short block of code can be approximately characterized by 3 things: fused-domain uop count, unfused-domain uop count for each execution port it can run on, and total critical-path latency assuming best-case scheduling for its critical path. (Or latencies from each input A/B/C to the output...)

For an example of doing all three to compare a few short sequences, see my answer on What is the efficient way to count set bits at a position or lower?

For short loops, modern CPUs have enough out-of-order execution resources (physical register file size so renaming doesn't run out of registers, ROB size) to have enough iterations of a loop in-flight to find all the parallelism. But as dependency chains within loops get longer, eventually they run out. See Measuring Reorder Buffer Capacity for some details on what happens when a CPU runs out of registers to rename onto.

See also lots of performance and reference links in the x86 tag wiki.



Tuning your FMA loop:

Yes, dot-product on Haswell will bottleneck on L1D throughput at only half the throughput of the FMA units, since it takes two loads per multiply+add.

If you were doing B[i] = x * A[i] + y; or sum(A[i]^2), you could saturate FMA throughput.

It looks like you're still trying to avoid register reuse even in write-only cases like the destination of a vmovaps load, so you ran out of registers after unrolling by 8. That's fine, but could matter for other cases.

Also, using ymm8-15 can slightly increase code-size if it means a 3-byte VEX prefix is needed instead of 2-byte. Fun fact: vpxor ymm7,ymm7,ymm8 needs a 3-byte VEX while vpxor ymm8,ymm8,ymm7 only needs a 2-byte VEX prefix. For commutative ops, sort source regs from high to low.

Our load bottleneck means the best-case FMA throughput is half the max, so we need at least 5 vector accumulators to hide their latency. 8 is good, so there's plenty of slack in the dependency chains to let them catch up after any delays from unexpected latency or competition for p0/p1. 7 or maybe even 6 would be fine, too: your unroll factor doesn't have to be a power of 2.

Unrolling by exactly 5 would mean that you're also right at the bottleneck for dependency chains. Any time an FMA doesn't run in the exact cycle its input is ready means a lost cycle in that dependency chain. This can happen if a load is slow (e.g. it misses in L1 cache and has to wait for L2), or if loads complete out of order and an FMA from another dependency chain steals the port this FMA was scheduled for. (Remember that scheduling happens at issue time, so the uops sitting in the scheduler are either port0 FMA or port1 FMA, not an FMA that can take whichever port is idle).

If you leave some slack in the dependency chains, out-of-order execution can "catch up" on the FMAs, because they won't be bottlenecked on throughput or latency, just waiting for load results. @Forward found (in an update to the question) that unrolling by 5 reduced performance from 93% of L1D throughput to 89.5% for this loop.

My guess is that unroll by 6 (one more than the minimum to hide the latency) would be ok here, and get about the same performance as unroll by 8. If we were closer to maxing out FMA throughput (rather than just bottlenecked on load throughput), one more than the minimum might not be enough.

update: @Forward's experimental test shows my guess was wrong. There isn't a big difference between unroll5 and unroll6. Also, unroll15 is twice as close as unroll8 to the theoretical max throughput of 2x 256b loads per clock. Measuring with just independent loads in the loop, or with independent loads and register-only FMA, would tell us how much of that is due to interaction with the FMA dependency chain. Even the best case won't get perfect 100% throughput, if only because of measurement errors and disruption due to timer interrupts. (Linux perf measures only user-space cycles unless you run it as root, but time still includes time spent in interrupt handlers. This is why your CPU frequency might be reported as 3.87GHz when run as non-root, but 3.900GHz when run as root and measuring cycles instead of cycles:u.)


We aren't bottlenecked on front-end throughput, but we can reduce the fused-domain uop count by avoiding indexed addressing modes for non-mov instructions. Fewer is better and makes this more hyperthreading-friendly when sharing a core with something other than this.

The simple way is just to do two pointer-increments inside the loop. The complicated way is a neat trick of indexing one array relative to the other:

;; input pointers for x[] and y[] in rdi and rsi
;; size_t n in rdx

;;; zero ymm1..8, or load+vmulps into them

add rdx, rsi ; end_y
; lea rdx, [rdx+rsi-252] to break out of the unrolled loop before going off the end, with odd n

sub rdi, rsi ; index x[] relative to y[], saving one pointer increment

.unroll8:
vmovaps ymm0, [rdi+rsi] ; *px, actually py[xy_offset]
vfmadd231ps ymm1, ymm0, [rsi] ; *py

vmovaps ymm0, [rdi+rsi+32] ; write-only reuse of ymm0
vfmadd231ps ymm2, ymm0, [rsi+32]

vmovaps ymm0, [rdi+rsi+64]
vfmadd231ps ymm3, ymm0, [rsi+64]

vmovaps ymm0, [rdi+rsi+96]
vfmadd231ps ymm4, ymm0, [rsi+96]

add rsi, 256 ; pointer-increment here
; so the following instructions can still use disp8 in their addressing modes: [-128 .. +127] instead of disp32
; smaller code-size helps in the big picture, but not for a micro-benchmark

vmovaps ymm0, [rdi+rsi+128-256] ; be pedantic in the source about compensating for the pointer-increment
vfmadd231ps ymm5, ymm0, [rsi+128-256]
vmovaps ymm0, [rdi+rsi+160-256]
vfmadd231ps ymm6, ymm0, [rsi+160-256]
vmovaps ymm0, [rdi+rsi-64] ; or not
vfmadd231ps ymm7, ymm0, [rsi-64]
vmovaps ymm0, [rdi+rsi-32]
vfmadd231ps ymm8, ymm0, [rsi-32]

cmp rsi, rdx
jb .unroll8 ; } while(py < endy);

Using a non-indexed addressing mode as the memory operand for vfmaddps lets it stay micro-fused in the out-of-order core, instead of being un-laminated at issue. (See Micro fusion and addressing modes.)

So my loop is 18 fused-domain uops for 8 vectors. Yours takes 3 fused-domain uops for each vmovaps + vfmaddps pair, instead of 2, because of un-lamination of indexed addressing modes. Both of them still of course have 2 unfused-domain load uops (port2/3) per pair, so that's still the bottleneck.

Fewer fused-domain uops lets out-of-order execution see more iterations ahead, potentially helping it absorb cache misses better. It's a minor thing when we're bottlenecked on an execution unit (load uops in this case) even with no cache misses, though. But with hyperthreading, you only get every other cycle of front-end issue bandwidth unless the other thread is stalled. If it's not competing too much for load and p0/1, fewer fused-domain uops will let this loop run faster while sharing a core. (e.g. maybe the other hyper-thread is running a lot of port5 / port6 and store uops?)

Since un-lamination happens after the uop-cache, your version doesn't take extra space in the uop cache. A disp32 with each uop is ok, and doesn't take extra space. But bulkier code-size means the uop-cache is less likely to pack as efficiently, since you'll hit 32B boundaries before uop cache lines are full more often. (Actually, smaller code doesn't guarantee better either. Smaller instructions could lead to filling a uop cache line and needing one entry in another line before crossing a 32B boundary.) This small loop can run from the loopback buffer (LSD), so fortunately the uop-cache isn't a factor.


Then after the loop: efficient cleanup is the hard part of efficient vectorization for small arrays that might not be a multiple of the unroll factor, or especially of the vector width.

    ...
jb

;; If `n` might not be a multiple of 4x 8 floats, put cleanup code here
;; to do the last few ymm or xmm vectors, then scalar or an unaligned last vector + mask.

; reduce down to a single vector, with a tree of dependencies
vaddps ymm1, ymm2, ymm1
vaddps ymm3, ymm4, ymm3
vaddps ymm5, ymm6, ymm5
vaddps ymm7, ymm8, ymm7

vaddps ymm0, ymm3, ymm1
vaddps ymm1, ymm7, ymm5

vaddps ymm0, ymm1, ymm0

; horizontal within that vector, low_half += high_half until we're down to 1
vextractf128 xmm1, ymm0, 1
vaddps xmm0, xmm0, xmm1
vmovhlps xmm1, xmm0, xmm0
vaddps xmm0, xmm0, xmm1
vmovshdup xmm1, xmm0
vaddss xmm0, xmm0, xmm1
; this is faster than 2x vhaddps

vzeroupper ; important if returning to non-AVX-aware code after using ymm regs.
ret ; with the scalar result in xmm0

For more about the horizontal sum at the end, see Fastest way to do horizontal SSE vector sum (or other reduction). The two 128b shuffles I used don't even need an immediate control byte, so they save 2 bytes of code size vs. the more obvious shufps. (And 4 bytes of code size vs. vpermilps, because that opcode always needs a 3-byte VEX prefix as well as an immediate.) AVX 3-operand stuff is very nice compared to SSE, especially when writing in C with intrinsics, where you can't as easily pick a cold register to movhlps into.


