Rotate a YUV Byte Array on Android

The following method rotates a YUV420 (NV21) byte array by 90 degrees clockwise.

private byte[] rotateYUV420Degree90(byte[] data, int imageWidth, int imageHeight)
{
    byte[] yuv = new byte[imageWidth * imageHeight * 3 / 2];
    // Rotate the Y luma
    int i = 0;
    for (int x = 0; x < imageWidth; x++)
    {
        for (int y = imageHeight - 1; y >= 0; y--)
        {
            yuv[i] = data[y * imageWidth + x];
            i++;
        }
    }
    // Rotate the U and V color components
    i = imageWidth * imageHeight * 3 / 2 - 1;
    for (int x = imageWidth - 1; x > 0; x = x - 2)
    {
        for (int y = 0; y < imageHeight / 2; y++)
        {
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
            i--;
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x - 1)];
            i--;
        }
    }
    return yuv;
}

(Note that this may only work if the width and height are multiples of 4.)
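As a quick sanity check, the method can be exercised on a tiny 4x4 NV21 frame (the class name here is illustrative). Filling the buffer with its own indices makes the remapping easy to follow:

```java
public class RotateYuvTest {
    // Same method as above, made static so it can run standalone.
    static byte[] rotateYUV420Degree90(byte[] data, int imageWidth, int imageHeight) {
        byte[] yuv = new byte[imageWidth * imageHeight * 3 / 2];
        int i = 0;
        for (int x = 0; x < imageWidth; x++)
            for (int y = imageHeight - 1; y >= 0; y--)
                yuv[i++] = data[y * imageWidth + x];
        i = imageWidth * imageHeight * 3 / 2 - 1;
        for (int x = imageWidth - 1; x > 0; x -= 2)
            for (int y = 0; y < imageHeight / 2; y++) {
                yuv[i--] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
                yuv[i--] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x - 1)];
            }
        return yuv;
    }

    public static void main(String[] args) {
        // 4x4 frame: 16 Y bytes followed by 8 interleaved chroma bytes.
        byte[] frame = new byte[24];
        for (int i = 0; i < frame.length; i++) frame[i] = (byte) i;
        byte[] r = rotateYUV420Degree90(frame, 4, 4);
        // Y: the first output pixel is the source's bottom-left pixel (index 12)
        System.out.println(r[0]);                 // 12
        // Chroma: the first output pair is the source's bottom-left chroma pair
        System.out.println(r[16] + " " + r[17]);  // 20 21
    }
}
```

The first rotated row is the source's leftmost column read bottom-to-top, which is exactly a clockwise quarter turn.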

Rotate YUV420Sp image by 90 degrees counter clockwise

OK, here's my native code that evolved after much banging of the head.

My difficulty was that I didn't understand planar image formats until I saw diagrams of the YUV420SP (NV21) image format.

Here are the 2 functions I eventually wrote:

// rotate luma image plane 90 degrees
//
//          (dst direction)
//          ------>
//  dst  -> +-------------+
//          | ^           |
//          | ^ (base dir)|
//          | ^           |
//  base -> +-------------+ <- endp
//
//////////////////////////////////////////////////////////
void rotateLumaPlane90(const unsigned char *src, unsigned char *dst,
                       size_t size, size_t width, size_t height)
{
    const unsigned char *endp;
    const unsigned char *base;
    size_t j;

    endp = src + size;
    for (base = endp - width; base < endp; base++) {
        src = base;
        for (j = 0; j < height; j++, src -= width) {
            *dst++ = *src;
        }
    }
}

//
// nv12 chroma plane is interleaved chroma values that map
// from one pair of chroma to 4 pixels:
//
// Y1 Y2 Y3 Y4
// Y5 Y6 Y7 Y8      U1,V1 -> chroma values for block Y1 Y2
// Y9 Ya Yb Yc                                       Y5 Y6
// Yd Ye Yf Yg
// -----------      U2,V2 -> chroma values for block Y3 Y4
// U1 V1 U2 V2                                       Y7 Y8
// U3 V3 U4 V4
//
//////////////////////////////////////////////////////////
void rotateChromaPlane90(const unsigned char *src, unsigned char *dst,
                         size_t size, size_t width, size_t height)
{
    // src starts at the last chroma pair of the first row (upper right),
    // walks down the rows, then moves left one pair and down again...
    //
    // dst starts at the end and works back to 0

    int row = 0;
    int col = (int) width - 2;   // chroma pairs start on even offsets
    int src_offset = col;
    int dst_offset = (int) size - 2;

    while (src_offset >= 0)
    {
        dst[dst_offset] = src[src_offset];
        dst[dst_offset + 1] = src[src_offset + 1];
        dst_offset -= 2;

        src_offset += (int) width;
        row++;

        if (row >= (int) height) {
            col -= 2;
            src_offset = col;
            row = 0;
        }
    }
}

And here is a sample of me calling these funcs from android native:

// first rotate the Y plane
rotateLumaPlane90((unsigned char *) encode_buffer, rotate_buffer,
                  yPlaneSize, gInputWidth, gInputHeight);

// now rotate the U and V planes
rotateChromaPlane90((unsigned char *) encode_buffer + yPlaneSize,
                    rotate_buffer + yPlaneSize,
                    yPlaneSize / 2, gInputWidth, gInputHeight / 2);

Notice that the last parameter to rotateChromaPlane90 is the height of the original image divided by 2. I should probably change the chroma rotate function to do that division itself, to make it less error-prone.
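To show that bookkeeping concretely, here is a hedged Java sketch (class and method names are illustrative) that ports the two functions and adds a single wrapper which derives every plane size from width and height, so a caller can't pass the wrong chroma height. The chroma columns start on even pair offsets:

```java
public class Nv21Rotate {
    // Port of rotateLumaPlane90: read source columns bottom-up,
    // write them out as destination rows.
    static void rotateLumaPlane90(byte[] src, byte[] dst, int size, int width, int height) {
        int d = 0;
        for (int base = size - width; base < size; base++) {
            int s = base;
            for (int j = 0; j < height; j++, s -= width) {
                dst[d++] = src[s];
            }
        }
    }

    // Port of rotateChromaPlane90; srcOff/dstOff locate the chroma
    // plane inside the full NV21 buffer.
    static void rotateChromaPlane90(byte[] src, int srcOff, byte[] dst, int dstOff,
                                    int size, int width, int height) {
        int row = 0;
        int col = width - 2;   // chroma pairs start on even offsets
        int s = col;
        int d = size - 2;
        while (s >= 0) {
            dst[dstOff + d] = src[srcOff + s];
            dst[dstOff + d + 1] = src[srcOff + s + 1];
            d -= 2;
            s += width;
            row++;
            if (row >= height) {
                col -= 2;
                s = col;
                row = 0;
            }
        }
    }

    // Wrapper: all sizes are derived here, so the height/2 pitfall
    // can't happen at the call site.
    static byte[] rotateNV21Degree90(byte[] src, int width, int height) {
        int yPlaneSize = width * height;
        byte[] dst = new byte[yPlaneSize * 3 / 2];
        rotateLumaPlane90(src, dst, yPlaneSize, width, height);
        rotateChromaPlane90(src, yPlaneSize, dst, yPlaneSize,
                            yPlaneSize / 2, width, height / 2);
        return dst;
    }
}
```

On a small test frame this produces the same output as the rotateYUV420Degree90 method in the first answer above.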

When I flipped to the back-facing camera I found I needed to rotate 90 degrees in the opposite direction (i.e. 270 degrees), so I also have a 270-degree variation:

// rotate luma image plane 270 degrees
//
//          +-------------+
//          | ^           |
//          | ^ (base dir)|
//          | ^           |
//  base -> +-------------+ <- endp
//                        ^
//          <----------   |
//           (dst dir)   dst
//
//////////////////////////////////////////////////////////
void rotateLumaPlane270(const unsigned char *src, unsigned char *dst,
                        size_t size, size_t width, size_t height)
{
    const unsigned char *endp;
    const unsigned char *base;
    size_t j;

    endp = src + size;
    dst = dst + size - 1;
    for (base = endp - width; base < endp; base++) {
        src = base;
        for (j = 0; j < height; j++, src -= width) {
            *dst-- = *src;
        }
    }
}

//
// nv21 chroma plane is interleaved chroma values that map
// from one pair of chroma to 4 pixels:
//
// Y1 Y2 Y3 Y4
// Y5 Y6 Y7 Y8      U1,V1 -> chroma values for block Y1 Y2
// Y9 Ya Yb Yc                                       Y5 Y6
// Yd Ye Yf Yg
// -----------      U2,V2 -> chroma values for block Y3 Y4
// U1 V1 U2 V2                                       Y7 Y8
// U3 V3 U4 V4
//
//////////////////////////////////////////////////////////
void rotateChromaPlane270(const unsigned char *src, unsigned char *dst,
                          size_t size, size_t width, size_t height)
{
    // src starts at the last chroma pair of the first row (upper right),
    // walks down the rows, then moves left one pair and down again...
    //
    // dst starts at 0 and runs to the end

    int row = 0;
    int col = (int) width - 2;   // chroma pairs start on even offsets
    int src_offset = col;
    int dst_offset = 0;

    while (src_offset >= 0)
    {
        dst[dst_offset++] = src[src_offset];
        dst[dst_offset++] = src[src_offset + 1];

        src_offset += (int) width;
        row++;

        if (row >= (int) height) {
            col -= 2;
            src_offset = col;
            row = 0;
        }
    }
}

How to rotate a YuvImage in Xamarin Android

Android.Graphics.Bitmap bmp = Android.Graphics.BitmapFactory.DecodeByteArray(imageData, 0, imageData.Length);
Android.Graphics.Matrix matrix = new Android.Graphics.Matrix();
matrix.PostRotate(90);
bmp = Android.Graphics.Bitmap.CreateBitmap(bmp, 0, 0, bmp.Width, bmp.Height, matrix, true);

MemoryStream ms = new MemoryStream();
bmp.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);
File.WriteAllBytes(filePath, ms.ToArray());

This code worked for me. (Note that BitmapFactory.DecodeByteArray expects compressed data such as JPEG, so a raw YuvImage must first be converted, e.g. with YuvImage.CompressToJpeg, before it can be decoded this way.)

The rotation algorithm for the YUV_420_888 format in react-native-camera

The code that you have posted rotates a 1 byte per pixel monochrome (grey-scale) image 90 degrees clockwise and returns it in a new byte array. It doesn't process any chroma information.

The YUV_420_888 image format stores an image in YUV format, where Y is the luma (grey-scale component) which is stored first in memory, and U and V are the chroma components which are stored after the luma. To save space, U and V are stored at half the horizontal and vertical resolution of the luma component.

Because the luma component is stored first, if you just ignore the chroma channels that come after it, you can treat it as a monochrome image, which is what the code is doing.

To do the actual rotation, the code is iterating over all the pixels in y and x. For each pixel, it calculates the new pixel location in the rotated image and copies it there.

Here is a diagram of what's happening:

(image omitted: the top row of the source image becomes the rightmost column of the rotated image)

YUV_420_888 stores the pixels one row at a time, top-to-bottom, left-to-right. So the math to compute a pixel location is like this:

old_pixel_location = (y * width) + x

As you can see in the image, the old image width becomes the new image height and vice versa. The pixel position in the rotated image has a new_y value equal to the x value, and a new_x value which is y pixels to the left of the right side of the image.

new_width = height
new_height = width
new_x = (new_width - 1) - y
new_y = x

The new pixel position is:

new_pixel_location = (new_y * new_width) + new_x

// substituting gives:
new_x = (height - 1) - y
new_pixel_location = (x * height) + ((height - 1) - y)

// removing brackets and re-ordering:
old_pixel_location = x + y * width
new_pixel_location = x * height + height - y - 1
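Putting the index math together, here is a minimal sketch (the class and method names are illustrative) of the luma-plane rotation described above:

```java
public class RotateLuma {
    // Rotate a width x height, one-byte-per-pixel plane 90 degrees clockwise.
    static byte[] rotate90(byte[] plane, int width, int height) {
        byte[] out = new byte[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // old_pixel_location = x + y * width
                // new_pixel_location = x * height + height - y - 1
                out[x * height + height - y - 1] = plane[y * width + x];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A 3x2 plane:  0 1 2     rotated:  3 0
        //               3 4 5               4 1
        //                                   5 2
        byte[] r = rotate90(new byte[] {0, 1, 2, 3, 4, 5}, 3, 2);
        System.out.println(java.util.Arrays.toString(r));  // [3, 0, 4, 1, 5, 2]
    }
}
```

The chroma planes would need the same remapping applied per U/V pair at half resolution, which this sketch deliberately leaves out, just as the code under discussion does.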

