How to "Flatten" or "Index" 3D-Array in 1D Array

How to flatten or index 3D-array in 1D array?

The algorithm is mostly the same. If you have a 3D array Original[HEIGHT, WIDTH, DEPTH], where x indexes WIDTH, y indexes HEIGHT, and z indexes DEPTH, then you can turn it into Flat[HEIGHT * WIDTH * DEPTH] by

Flat[x + WIDTH * (y + HEIGHT * z)] = Original[x, y, z]

As an aside, you should prefer arrays of arrays over multi-dimensional arrays in .NET. The performance differences are significant
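As a minimal sketch of the mapping (in Python, with made-up dimension sizes), the formula and its inverse round-trip like this:

```python
# Flatten a HEIGHT x WIDTH x DEPTH array, assuming x indexes WIDTH,
# y indexes HEIGHT, and z indexes DEPTH (as in the formula above).
HEIGHT, WIDTH, DEPTH = 3, 4, 2

def to_flat(x, y, z):
    # x varies fastest, z slowest
    return x + WIDTH * (y + HEIGHT * z)

def from_flat(i):
    x = i % WIDTH
    y = (i // WIDTH) % HEIGHT
    z = i // (WIDTH * HEIGHT)
    return x, y, z

# every (x, y, z) triple maps to a unique flat index and back
assert all(from_flat(to_flat(x, y, z)) == (x, y, z)
           for z in range(DEPTH) for y in range(HEIGHT) for x in range(WIDTH))
```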

3D array (1D flat) indexing

This depends on how you want to order your 3D data in the 1D array.
If you want the indexes in Z, Y, X order, then your 2x2x2 3D data will be stored like this:

index 0: [z=0,y=0,x=0]
index 1: [z=0,y=0,x=1]
index 2: [z=0,y=1,x=0]
index 3: [z=0,y=1,x=1]
index 4: [z=1,y=0,x=0]
index 5: [z=1,y=0,x=1]
index 6: [z=1,y=1,x=0]
index 7: [z=1,y=1,x=1]

The DEPTH dimension corresponds to z, HEIGHT to y, and WIDTH to x.

The index calculation will be: index = HEIGHT*WIDTH*z + WIDTH*y + x.

The x is not multiplied by anything because the next x index is right after the previous one.

If you want to skip one Y row, you have to add a whole row's WIDTH (here 2). For example, if you are at index 1 (z=0, y=0, x=1) and add WIDTH=2 to the index, you get index 3: only the y coordinate has increased by 1.

To move from z=0 to z=1, you have to skip HEIGHT*WIDTH indexes (in this example 2*2 = 4; see the index listing above).
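A quick Python check of the 2x2x2 listing above (a sketch, reusing the same HEIGHT/WIDTH/DEPTH names):

```python
HEIGHT, WIDTH, DEPTH = 2, 2, 2

flat_order = []
for z in range(DEPTH):
    for y in range(HEIGHT):
        for x in range(WIDTH):
            index = HEIGHT * WIDTH * z + WIDTH * y + x
            flat_order.append(index)

# iterating in Z, Y, X order produces the flat indexes 0..7 in sequence,
# exactly as in the listing above
assert flat_order == list(range(8))
```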

Performance

To gain speed, it's best to process your 3D data with the z, y, x coordinates incrementing in sequence, so you don't have to recalculate the index so often. For example:

int z = 1, y = 1, x;
int index = HEIGHT * WIDTH * z + WIDTH * y;  // precomputed once for fixed z and y

for (x = 0; x < WIDTH; x++)
{
    Object obj = oneDArray[index + x];
    // process obj
}

In the ideal case, all processing of the data is independent, and you don't even have to calculate the index: just increment one index through the whole oneDArray. What can be precomputed depends on your usage.

numpy 3d array -- flatten -- 1d array -- select one element in 1d -- how to know the index of the element in 3d?

You can use np.unravel_index (doc) for that:

indices = np.array(np.unravel_index(samples, D.shape)).T
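The names samples and D come from the question; a small sketch with assumed values shows the idea:

```python
import numpy as np

D = np.arange(24).reshape(2, 3, 4)   # assumed 3D array
flat = D.flatten()
samples = np.array([5, 17])          # assumed positions picked in the 1D view

# each row of `indices` is the (i, j, k) position of one sample in D
indices = np.array(np.unravel_index(samples, D.shape)).T

# the element at each recovered 3D position matches the 1D selection
for pos, sample in zip(indices, samples):
    assert D[tuple(pos)] == flat[sample]
```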

How to flatten a 3D array where the 3d dimension is not fixed size into 1D array?

You can map an index i, j, k to a linear position p in O(1), and back in O(log N), where N is the size of the 2D array (the number of vectors), not the total number of strings.

First, let's treat your 2D array as a 1D, since that just makes things much easier. Index i is the index of a vector in the array. Index k is the position of a string in the vector. N is the size of the array.

You can create an array of integers (e.g. size_t) that holds the zero-based cumulative sum of all the vector lengths:

lengths = array[N]
lengths[0] = 0
for(i = 1 to N)
    lengths[i] = lengths[i - 1] + size(array[i - 1])

If you want, you can compute the total number of strings as total = lengths[N - 1] + size(array[N - 1]).

Now, for a given string at index i, k, the position in the expanded array is just

p = lengths[i] + k

Given a position p, you map it to i, k using a bisection algorithm (binary search that returns the index of the left bound when an exact match isn't found):

i = bisect(lengths, p)
k = p - lengths[i]

Bisection is a simplified binary search, so O(log N).
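In Python, both directions might look like this (a sketch using the standard bisect module; the example data is made up). Note that bisect_right on the cumulative sums, minus one, handles empty vectors correctly by landing on the last vector that starts at that position:

```python
from bisect import bisect_right

# assumed array of vectors with varying lengths (one vector is empty)
array = [["a", "b", "c"], ["d"], [], ["e", "f"]]
N = len(array)

# zero-based cumulative sum of the vector lengths
lengths = [0] * N
for i in range(1, N):
    lengths[i] = lengths[i - 1] + len(array[i - 1])

def to_linear(i, k):
    return lengths[i] + k

def from_linear(p):
    i = bisect_right(lengths, p) - 1   # index of the left bound
    return i, p - lengths[i]

# round-trip every string
for i in range(N):
    for k in range(len(array[i])):
        assert from_linear(to_linear(i, k)) == (i, k)
```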

All this works very nicely until you start expanding your vectors. At that point, insertion and deletion become O(N) operations, since you need to increment or decrement all the cumulative sums past the insertion point. To insert:

array[i][k].push(a_string)
for(z = i + 1 to N)
    lengths[z]++

And to delete:

array[i][k].pop()
for(z = i + 1 to N)
    lengths[z]--

By the way, if you still want to use indices x, y for the array, you can convert between the linear index i of lengths and back using

i = x + C * y
x = i % C
y = i / C

Here, C is the number of columns in your array. You can easily generalize this to any number of dimensions.
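For example, with an assumed C = 4 columns, the round-trip is:

```python
C = 4  # assumed number of columns

def to_linear_index(x, y):
    return x + C * y

def from_linear_index(i):
    # integer division and remainder recover the column and row
    return i % C, i // C

assert from_linear_index(to_linear_index(3, 2)) == (3, 2)
```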

correct method of casting 3D array into 1D array

That's because the array you created uses row-major order, not column-major as you assumed.

The correct way to compare is

depth = 2
row = 3
col = 4
a = np.arange(24)
b = a.reshape((depth, row, col))
b.strides

i,j,k = np.indices(b.shape)

assert(np.all(a[(i*row+j)*col+k] == b[i,j,k]))

If you want more information on how the matrix is arranged, you can check
b.strides = (48, 16, 4). This gives the coefficient (in bytes) of each index, i.e. i*48 + j*16 + k*4 is the byte offset of the element b[i,j,k].

This column-major order is called Fortran order, and you can get numpy to reshape assuming it by passing order='F':

bf = a.reshape((depth, row, col), order='F')
assert(np.all(a[i + depth * (j + row * k)] == bf))

Then bf will be

array([[[ 0,  6, 12, 18],
        [ 2,  8, 14, 20],
        [ 4, 10, 16, 22]],

       [[ 1,  7, 13, 19],
        [ 3,  9, 15, 21],
        [ 5, 11, 17, 23]]])

numpy 3D array reshape/flattern to 1D array based on row order

You can stack then flatten the array:

>>> np.stack(x, axis=1).flatten()
array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2])
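The variable x comes from the question; a sketch with an assumed input that reproduces the output shown above:

```python
import numpy as np

# assumed input: two 3x4 blocks whose values equal their row index
x = np.array([[[0, 0, 0, 0],
               [1, 1, 1, 1],
               [2, 2, 2, 2]],
              [[0, 0, 0, 0],
               [1, 1, 1, 1],
               [2, 2, 2, 2]]])

# stacking along axis=1 interleaves the rows of the two blocks, so all
# the 0-rows come first, then the 1-rows, then the 2-rows
flat = np.stack(x, axis=1).flatten()
# flat == [0]*8 + [1]*8 + [2]*8
```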

C# flattening/expanding a 3D matrix into a jagged array

I am assuming that you want to know the mistake in your code more than the quickest way to get the answer. Your index calculation is wrong. You are calculating it like this:

int index = i + j * rows0 + k * rows1;

But you actually need to multiply the last term not just by rows1 but by rows0 too:

int index = i + j * rows0 + k * rows1 * rows0;

Also, it makes sense to swap the order of the dimensions iterated in the for loops to get the results in order. The final code for that would be:

public static T[] Flatten<T>(T[,,] arr)
{
    int rows0 = arr.GetLength(0);
    int rows1 = arr.GetLength(1);
    int rows2 = arr.GetLength(2);
    T[] arrFlattened = new T[rows0 * rows1 * rows2];

    for (int k = 0; k < rows0; k++)
    {
        for (int j = 0; j < rows1; j++)
        {
            for (int i = 0; i < rows2; i++)
            {
                int index = i + j * rows2 + k * rows1 * rows2;
                arrFlattened[index] = arr[k, j, i];
            }
        }
    }
    return arrFlattened;
}

public static T[,,] Expand<T>(T[] arr, int rows0, int rows1)
{
    int length = arr.Length;
    int rows2 = length / rows0 / rows1;

    T[,,] arrExpanded = new T[rows0, rows1, rows2];
    for (int k = 0; k < rows0; k++)
    {
        for (int j = 0; j < rows1; j++)
        {
            for (int i = 0; i < rows2; i++)
            {
                arrExpanded[k, j, i] = arr[i + j * rows2 + k * rows1 * rows2];
            }
        }
    }
    return arrExpanded;
}

