Intel AVX: 256-bit Version of Dot Product for Double-Precision Floating-Point Variables

Intel AVX: Why is there no 256-bit version of dot product for double-precision floating-point variables?

The underlying reason for this and various other AVX limitations is that, architecturally, AVX is little more than two SSE execution units side by side. You will notice that virtually no AVX instructions operate horizontally across the boundary between the two 128-bit halves of a vector (which is particularly annoying in the case of vpalignr). In general you effectively just get two 128-bit SSE operations in parallel, which is useful for the majority of instructions that operate element-wise, but not as useful as a proper 256-bit SIMD implementation.

Intel AVX: 256-bit version of dot product for double-precision floating-point variables

I would use a 4*double multiplication, then a hadd (which unfortunately adds only 2*2 adjacent doubles, one pair in the upper and one in the lower half), extract the upper half (a shuffle would work equally well, maybe faster) and add it to the lower half.

The result is in the low 64 bits of dotproduct.

__m256d xy = _mm256_mul_pd( x, y );
__m256d temp = _mm256_hadd_pd( xy, xy );
__m128d hi128 = _mm256_extractf128_pd( temp, 1 );
__m128d dotproduct = _mm_add_pd( _mm256_castpd256_pd128( temp ), hi128 );
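If you need the result as a plain double rather than in the low lane of an __m128d, a minimal follow-up (not part of the original answer) is:

double result = _mm_cvtsd_f64( dotproduct );   // read the low 64-bit lane as a scalar double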

Edit:

Following an idea from Norbert P., I extended this version to do 4 dot products at once.

__m256d xy0 = _mm256_mul_pd( x[0], y[0] );
__m256d xy1 = _mm256_mul_pd( x[1], y[1] );
__m256d xy2 = _mm256_mul_pd( x[2], y[2] );
__m256d xy3 = _mm256_mul_pd( x[3], y[3] );

// low to high: xy00+xy01 xy10+xy11 xy02+xy03 xy12+xy13
__m256d temp01 = _mm256_hadd_pd( xy0, xy1 );

// low to high: xy20+xy21 xy30+xy31 xy22+xy23 xy32+xy33
__m256d temp23 = _mm256_hadd_pd( xy2, xy3 );

// low to high: xy02+xy03 xy12+xy13 xy20+xy21 xy30+xy31
__m256d swapped = _mm256_permute2f128_pd( temp01, temp23, 0x21 );

// low to high: xy00+xy01 xy10+xy11 xy22+xy23 xy32+xy33
__m256d blended = _mm256_blend_pd(temp01, temp23, 0b1100);

__m256d dotproduct = _mm256_add_pd( swapped, blended );

How can I optimize my AVX implementation of dot product?

There are two big inefficiencies in your loop that are immediately apparent:

(1) these two chunks of scalar code:

__declspec(align(32)) double ar[4] = { xb[i].x, xb[i + 1].x, xb[i + 2].x, xb[i + 3].x };
...
__m256d y = _mm256_load_pd(ar);

and

__declspec(align(32)) double arr[4] = { xb[i].x, xb[i + 1].x, xb[i + 2].x, xb[i + 3].x };
...
__m256d w = _mm256_load_pd(arr);

should be implemented using SIMD loads and shuffles (or at the very least use _mm256_set_pd and give the compiler a chance to do a half-reasonable job of generating code for a gathered load).
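For example, a minimal sketch of the _mm256_set_pd variant (assuming, as in the question, that xb is an array of structs with a double member x):

__m256d y = _mm256_set_pd( xb[i + 3].x, xb[i + 2].x, xb[i + 1].x, xb[i].x );   // arguments run from the highest lane down to the lowest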

(2) the horizontal summation at the end of the loop:

for (int i = 0; i < n; i++)
{
    ...
    __m256d xy = _mm256_mul_pd(x, y);
    __m256d zw = _mm256_mul_pd(z, w);
    __m256d temp = _mm256_hadd_pd(xy, zw);
    __m128d hi128 = _mm256_extractf128_pd(temp, 1);
    __m128d low128 = _mm256_extractf128_pd(temp, 0);
    //__m128d dotproduct = _mm_add_pd((__m128d)temp, hi128);
    __m128d dotproduct = _mm_add_pd(low128, hi128);

    sum += dotproduct.m128d_f64[0] + dotproduct.m128d_f64[1];
    i += 3;
}

should be moved out of the loop:

__m256d xy = _mm256_setzero_pd();
__m256d zw = _mm256_setzero_pd();
...
for (int i = 0; i < n; i++)
{
    ...
    xy = _mm256_add_pd(xy, _mm256_mul_pd(x, y));
    zw = _mm256_add_pd(zw, _mm256_mul_pd(z, w));
    i += 3;
}
__m256d temp = _mm256_hadd_pd(xy, zw);
__m128d hi128 = _mm256_extractf128_pd(temp, 1);
__m128d low128 = _mm256_extractf128_pd(temp, 0);
//__m128d dotproduct = _mm_add_pd((__m128d)temp, hi128);
__m128d dotproduct = _mm_add_pd(low128, hi128);

sum += dotproduct.m128d_f64[0] + dotproduct.m128d_f64[1];
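If the target also supports FMA (any AVX2-capable CPU does), the separate multiply and add inside the loop can be fused. A sketch of the loop body under that assumption, not part of the original answer:

xy = _mm256_fmadd_pd( x, y, xy );   // xy += x * y in a single instruction
zw = _mm256_fmadd_pd( z, w, zw );   // zw += z * w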

Why don't wider versions of VDPPD / VDPPS exist, like 512-bit?

dpps is usually only helpful in the first place if you're using SIMD "wrong" (inefficiently), for horizontal operations within vectors rather than vertical across multiple vectors. (Sometimes you do that anyway if you can get some speedup for something without re-engineering some data structure that a lot of existing code uses, or if other use-cases for the data are more important.)

Slides + text: SIMD at Insomniac Games (GDC 2015) have some good stuff about designing your data structures to be SIMD friendly (SoA vs. AoS), and how that can give much better speedups than abusing a SIMD vector to hold an x,y,z geometry vector.



I have only been able to find 128-bit versions of the Vector Dot Product AVX/SIMD instructions

AVX1 includes vdpps ymm, ymm, ymm/m256, imm8. It does two independent 128-bit DPPS operations in the two lanes of a YMM, like for most other horizontal / shuffle instructions that got widened in less-than-useful ways in AVX and especially AVX2. Perhaps that's what you meant?

It seems like a reasonably important instruction family

Not at all. It's only useful for dot products of 2 to 4 elements, and it decodes to multiple uops anyway. It can be worth using on Intel if it does exactly what you need, but AMD just microcodes it (see https://uops.info/ and https://agner.org/optimize/). Or at least that was true before Ice Lake: Ice Lake also mostly "gave up" on it, decoding vdpps xmm as 6 uops (14c latency), up from 4 uops (13c latency) in Skylake. Still not as bad as Zen's 8 uops (15c latency).

Geometry vectors of 3 elements in a 4-element SIMD vector get (ab)used some; that's what dpps is for. (And possibly sometimes stuff like 3x3 or 4x4 matrices, maybe even 2x2.) Two separate in-lane DPPS operations for vdpps ymm are probably more useful for getting some SIMD benefit for Array-of-Structs (AoS) data formats than a single 8-wide operation.

dpps isn't useful for dot products of larger arrays. For arrays, FMA into multiple accumulators which you only combine horizontally at the end, like in Improving performance of floating-point dot-product of an array with SIMD. See also this Q&A for some perf-tuning experiments.


A lane-crossing vdpps wouldn't be worth building any more dedicated hardware in execution units for, vs. just letting software handle arrays of 5 or more elements or micro-coding it. AVX-512 masking makes the immediate control operand of dpps less valuable. (The immediate lets you ignore some input elements, and zero vs. broadcast the result to your choice of elements, with two 4-bit bitmasks.)

That raises another point: vdpps already uses all the bits in the immediate (the YMM version uses the same immediate control for both halves), so there wouldn't be room to scale it up to an 8-wide operation without changing how the immediate works. (E.g. maybe always broadcast the result, dropping the output 0-masking, so you'd have one input control-mask bit for each of the 8 floats in a YMM? But that would have required custom hardware separate from what's needed for the xmm version.)

Note that without any special instructions / uops, you can do the same thing as vdpps (ignoring the functionality of the immediate) with vmulps and then a horizontal sum. (vshufps / vaddps / vshufps / vaddps to leave the result broadcasted to each element.) Intel's hardware support brings this down from 5 to 4 uops (including handling the masking) on Skylake, 3p01 (the FP math execution units) plus 1 p5 (shuffle unit).

(Unfortunately haddps can't take advantage of any of that horizontal-FP hardware :/)
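For reference, here is a rough intrinsics sketch of that vmulps + shuffle/add sequence for a 4-element dot product, ignoring the masking functionality of the immediate (the helper name is mine, not from the answer):

#include <immintrin.h>

// Full 4-wide dot product with the sum broadcast to every lane,
// roughly what dpps with immediate 0xFF computes.
__m128 dot4_broadcast( __m128 a, __m128 b )
{
    __m128 m  = _mm_mul_ps( a, b );                                            // element-wise products
    __m128 s1 = _mm_add_ps( m, _mm_shuffle_ps( m, m, _MM_SHUFFLE(2,3,0,1) ) ); // sums of adjacent pairs
    return _mm_add_ps( s1, _mm_shuffle_ps( s1, s1, _MM_SHUFFLE(1,0,3,2) ) );   // grand total in every lane
}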

Ice Lake must have dropped some of this dedicated hardware since it's so rarely used; it's 6 uops there. (The 2 extra uops are for p0/p6 and p1/p5, so maybe a shuffle and maybe something turning the immediate into a mask? port 6 doesn't have any vector ALU execution units even in Ice Lake, AFAIK.)

Replicating the special-purpose hardware that dpps uses at an even wider vector width for AVX-512 would probably not have been worth the cost in transistors. 512 bits is very wide for a CPU; the FMA unit(s) already take significant area, and AVX-512 introduces a bunch of new instructions, including more lane-crossing shuffles, that are useful more of the time than a wider vdpps zmm would be, on average, across most code.


AVX2: Computing dot product of 512 float arrays

_mm256_dp_ps is only useful for dot-products of 2 to 4 elements; for longer vectors use vertical SIMD in a loop and reduce to scalar at the end. Using _mm256_dp_ps and _mm256_add_ps in a loop would be much slower.


GCC and clang require you to enable (with command line options) ISA extensions that you use intrinsics for, unlike MSVC and ICC.


The code below is probably close to the theoretical performance limit of your CPU. Untested.

Compile it with clang or gcc -O3 -march=native. (Requires at least -mavx -mfma, but the -mtune options implied by -march are good too, as are -mpopcnt and the other things -march=native enables. Tune options are critical for this to compile efficiently for most CPUs with FMA, specifically -mno-avx256-split-unaligned-load: see Why doesn't gcc resolve _mm256_loadu_pd as single vmovupd?)

Or compile it with MSVC -O2 -arch:AVX2

#include <immintrin.h>
#include <vector>
#include <assert.h>

// CPUs support RAM access like this: "ymmword ptr [rax+64]"
// Using templates with an offset int argument to make it easier for the compiler to emit good code.

// Multiply 8 floats by another 8 floats.
template<int offsetRegs>
inline __m256 mul8( const float* p1, const float* p2 )
{
    constexpr int lanes = offsetRegs * 8;
    const __m256 a = _mm256_loadu_ps( p1 + lanes );
    const __m256 b = _mm256_loadu_ps( p2 + lanes );
    return _mm256_mul_ps( a, b );
}

// Returns acc + ( p1 * p2 ), for 8-wide float lanes.
template<int offsetRegs>
inline __m256 fma8( __m256 acc, const float* p1, const float* p2 )
{
    constexpr int lanes = offsetRegs * 8;
    const __m256 a = _mm256_loadu_ps( p1 + lanes );
    const __m256 b = _mm256_loadu_ps( p2 + lanes );
    return _mm256_fmadd_ps( a, b, acc );
}

// Compute dot product of float vectors, using 8-wide FMA instructions.
float dotProductFma( const std::vector<float>& a, const std::vector<float>& b )
{
    assert( a.size() == b.size() );
    assert( 0 == ( a.size() % 32 ) );
    if( a.empty() )
        return 0.0f;

    const float* p1 = a.data();
    const float* const p1End = p1 + a.size();
    const float* p2 = b.data();

    // Process the initial 32 values. Nothing to add yet, just multiplying.
    __m256 dot0 = mul8<0>( p1, p2 );
    __m256 dot1 = mul8<1>( p1, p2 );
    __m256 dot2 = mul8<2>( p1, p2 );
    __m256 dot3 = mul8<3>( p1, p2 );
    p1 += 8 * 4;
    p2 += 8 * 4;

    // Process the rest of the data.
    // The code uses FMA instructions to multiply + accumulate, consuming 32 values per loop iteration.
    // Unrolling manually for 2 reasons:
    // 1. To reduce data dependencies. With a single register, every loop iteration would depend on the previous result.
    // 2. Unrolled code checks the exit condition 4x less often, so more CPU cycles are spent computing useful stuff.
    while( p1 < p1End )
    {
        dot0 = fma8<0>( dot0, p1, p2 );
        dot1 = fma8<1>( dot1, p1, p2 );
        dot2 = fma8<2>( dot2, p1, p2 );
        dot3 = fma8<3>( dot3, p1, p2 );
        p1 += 8 * 4;
        p2 += 8 * 4;
    }

    // Add 32 values into 8
    const __m256 dot01 = _mm256_add_ps( dot0, dot1 );
    const __m256 dot23 = _mm256_add_ps( dot2, dot3 );
    const __m256 dot0123 = _mm256_add_ps( dot01, dot23 );
    // Add 8 values into 4
    const __m128 r4 = _mm_add_ps( _mm256_castps256_ps128( dot0123 ), _mm256_extractf128_ps( dot0123, 1 ) );
    // Add 4 values into 2
    const __m128 r2 = _mm_add_ps( r4, _mm_movehl_ps( r4, r4 ) );
    // Add the 2 lower values into the final result
    const __m128 r1 = _mm_add_ss( r2, _mm_movehdup_ps( r2 ) );
    // Return the lowest lane of the result vector.
    // The intrinsic below compiles into a no-op; modern compilers return floats in the lowest lane of the xmm0 register.
    return _mm_cvtss_f32( r1 );
}

Possible further improvements:

  1. Unroll by 8 vectors instead of 4. I’ve checked gcc 9.2 asm output: the compiler only used 8 of the 16 available vector registers.

  2. Make sure both input vectors are aligned, e.g. use a custom allocator which calls _aligned_malloc / _aligned_free on MSVC, or aligned_alloc / free on gcc & clang. Then replace _mm256_loadu_ps with _mm256_load_ps (see the sketch after this list).
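A minimal sketch of such an aligned allocator for std::vector, assuming C++17 and std::aligned_alloc (on MSVC you would substitute _aligned_malloc / _aligned_free); the type names are illustrative, not from the original answer:

#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

template<class T, std::size_t Alignment = 32>
struct AlignedAllocator
{
    using value_type = T;
    template<class U> struct rebind { using other = AlignedAllocator<U, Alignment>; };

    AlignedAllocator() = default;
    template<class U> AlignedAllocator( const AlignedAllocator<U, Alignment>& ) noexcept {}

    T* allocate( std::size_t n )
    {
        // std::aligned_alloc requires the size to be a multiple of the alignment.
        const std::size_t bytes = ( n * sizeof( T ) + Alignment - 1 ) / Alignment * Alignment;
        if( void* p = std::aligned_alloc( Alignment, bytes ) )
            return static_cast<T*>( p );
        throw std::bad_alloc();
    }
    void deallocate( T* p, std::size_t ) noexcept { std::free( p ); }
};

template<class T, class U, std::size_t A>
bool operator==( const AlignedAllocator<T, A>&, const AlignedAllocator<U, A>& ) noexcept { return true; }
template<class T, class U, std::size_t A>
bool operator!=( const AlignedAllocator<T, A>&, const AlignedAllocator<U, A>& ) noexcept { return false; }

// 32-byte-aligned float storage; _mm256_load_ps is then safe on .data().
using AlignedFloatVector = std::vector<float, AlignedAllocator<float>>;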


To auto-vectorize a simple scalar dot product, you'd also need OpenMP SIMD or -ffast-math (implied by -Ofast) to let the compiler treat FP math as associative even though it's not (because of rounding). But GCC won't use multiple accumulators when auto-vectorizing, even if it does unroll, so you'd bottleneck on FMA latency, not load throughput.

(2 loads per FMA means the throughput bottleneck for this code is vector loads, not actual FMA operations.)
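A minimal sketch of such an auto-vectorizable scalar loop, for reference (compile with e.g. gcc/clang -O3 -march=native plus -ffast-math, or -fopenmp-simd to honour the pragma; the function name is mine):

#include <cstddef>

float dotProductScalar( const float* a, const float* b, std::size_t n )
{
    float sum = 0.0f;
    // The reduction clause tells the compiler the additions may be reordered,
    // which is what permits vectorization despite FP non-associativity.
    #pragma omp simd reduction(+:sum)
    for( std::size_t i = 0; i < n; i++ )
        sum += a[i] * b[i];
    return sum;
}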

AVX 256-bit equivalent for _mm_load1_ps

_mm256_broadcast_ss is what you are looking for.
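A minimal usage sketch (the wrapper name is mine, not from the answer):

#include <immintrin.h>

// Broadcast one float from memory into all 8 lanes of a YMM register,
// the 256-bit counterpart of _mm_load1_ps.
__m256 load1_ps256( const float* p )
{
    return _mm256_broadcast_ss( p );
}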

How to efficiently perform double/int64 conversions with SSE/AVX?

There's no single instruction until AVX512, which added conversion to/from 64-bit integers, signed or unsigned. (Also support for conversion to/from 32-bit unsigned). See intrinsics like _mm512_cvtpd_epi64 and the narrower AVX512VL versions, like _mm256_cvtpd_epi64.
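A minimal sketch of the AVX-512 path (requires AVX-512DQ and AVX-512VL for the 256-bit forms, e.g. gcc/clang -mavx512dq -mavx512vl; the wrapper names are mine):

#include <immintrin.h>

__m256i pd_to_epi64( __m256d v ) { return _mm256_cvtpd_epi64( v ); }   // 4x double -> 4x int64_t
__m256d epi64_to_pd( __m256i v ) { return _mm256_cvtepi64_pd( v ); }   // 4x int64_t -> 4x double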

If you only have AVX2 or less, you'll need tricks like below for packed-conversion. (For scalar, x86-64 has scalar int64_t <-> double or float from SSE2, but scalar uint64_t <-> FP requires tricks until AVX512 adds unsigned conversions. Scalar 32-bit unsigned can be done by zero-extending to 64-bit signed.)


If you're willing to cut corners, double <-> int64 conversions can be done in only two instructions:

  • If you don't care about infinity or NaN.
  • For double <-> int64_t, you only care about values in the range [-2^51, 2^51].
  • For double <-> uint64_t, you only care about values in the range [0, 2^52).

double -> uint64_t

//  Only works for inputs in the range: [0, 2^52)
__m128i double_to_uint64(__m128d x){
    x = _mm_add_pd(x, _mm_set1_pd(0x0010000000000000));
    return _mm_xor_si128(
        _mm_castpd_si128(x),
        _mm_castpd_si128(_mm_set1_pd(0x0010000000000000))
    );
}

double -> int64_t

//  Only works for inputs in the range: [-2^51, 2^51]
__m128i double_to_int64(__m128d x){
    x = _mm_add_pd(x, _mm_set1_pd(0x0018000000000000));
    return _mm_sub_epi64(
        _mm_castpd_si128(x),
        _mm_castpd_si128(_mm_set1_pd(0x0018000000000000))
    );
}

uint64_t -> double

//  Only works for inputs in the range: [0, 2^52)
__m128d uint64_to_double(__m128i x){
    x = _mm_or_si128(x, _mm_castpd_si128(_mm_set1_pd(0x0010000000000000)));
    return _mm_sub_pd(_mm_castsi128_pd(x), _mm_set1_pd(0x0010000000000000));
}

int64_t -> double

//  Only works for inputs in the range: [-2^51, 2^51]
__m128d int64_to_double(__m128i x){
    x = _mm_add_epi64(x, _mm_castpd_si128(_mm_set1_pd(0x0018000000000000)));
    return _mm_sub_pd(_mm_castsi128_pd(x), _mm_set1_pd(0x0018000000000000));
}

Rounding Behavior:

  • For the double -> uint64_t conversion, rounding works correctly following the current rounding mode. (which is usually round-to-even)
  • For the double -> int64_t conversion, rounding will follow the current rounding mode for all modes except truncation. If the current rounding mode is truncation (round towards zero), it will actually round towards negative infinity.

How does it work?

Despite this trick being only 2 instructions, it's not entirely self-explanatory.

The key is to recognize that for double-precision floating-point, values in the range [2^52, 2^53) have the "binary place" just below the lowest bit of the mantissa. In other words, if you zero out the exponent and sign bits, the mantissa becomes precisely the integer representation.

To convert x from double -> uint64_t, you add the magic number M which is the floating-point value of 2^52. This puts x into the "normalized" range of [2^52, 2^53) and conveniently rounds away the fractional part bits.

Now all that's left is to remove the upper 12 bits. This is easily done by masking them out. The fastest way is to recognize that those upper 12 bits are identical to those of M, so rather than introducing an additional mask constant, we can simply subtract or XOR by M. XOR has more throughput.
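A scalar illustration of the same trick (my own sketch, assuming IEEE-754 doubles and type punning via memcpy; the input must be in [0, 2^52)):

#include <stdint.h>
#include <string.h>

uint64_t double_to_uint64_scalar( double x )
{
    double shifted = x + 0x1p52;              // push x into [2^52, 2^53); the mantissa now holds the integer
    uint64_t bits;
    memcpy( &bits, &shifted, sizeof bits );   // reinterpret the double's bit pattern
    return bits ^ 0x4330000000000000ull;      // strip the sign/exponent bits, which equal those of 2^52
}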

Converting from uint64_t -> double is simply the reverse of this process. You add back the exponent bits of M. Then un-normalize the number by subtracting M in floating-point.

The signed integer conversions are slightly trickier since you need to deal with the 2's complement sign-extension. I'll leave those as an exercise for the reader.

Related: A fast method to round a double to a 32-bit int explained


Full Range int64 -> double:

After many years, I finally had a need for this.

  • 5 instructions for uint64_t -> double
  • 6 instructions for int64_t -> double

uint64_t -> double

__m128d uint64_to_double_full(__m128i x){
    __m128i xH = _mm_srli_epi64(x, 32);
    xH = _mm_or_si128(xH, _mm_castpd_si128(_mm_set1_pd(19342813113834066795298816.)));          //  2^84
    __m128i xL = _mm_blend_epi16(x, _mm_castpd_si128(_mm_set1_pd(0x0010000000000000)), 0xcc);   //  2^52
    __m128d f = _mm_sub_pd(_mm_castsi128_pd(xH), _mm_set1_pd(19342813118337666422669312.));     //  2^84 + 2^52
    return _mm_add_pd(f, _mm_castsi128_pd(xL));
}

int64_t -> double

__m128d int64_to_double_full(__m128i x){
    __m128i xH = _mm_srai_epi32(x, 16);
    xH = _mm_blend_epi16(xH, _mm_setzero_si128(), 0x33);
    xH = _mm_add_epi64(xH, _mm_castpd_si128(_mm_set1_pd(442721857769029238784.)));              //  3*2^67
    __m128i xL = _mm_blend_epi16(x, _mm_castpd_si128(_mm_set1_pd(0x0010000000000000)), 0x88);   //  2^52
    __m128d f = _mm_sub_pd(_mm_castsi128_pd(xH), _mm_set1_pd(442726361368656609280.));          //  3*2^67 + 2^52
    return _mm_add_pd(f, _mm_castsi128_pd(xL));
}

These work for the entire 64-bit range and are correctly rounded to the current rounding behavior.

These are similar to wim's answer, but with more abusive optimizations. As such, deciphering them will also be left as an exercise for the reader.


