How to convert RGB -> YUV -> RGB (both ways)

Yes, invertible transformations exist.

equasys GmbH posted invertible transformations from RGB to YUV, YCbCr, and YPbPr, along with explanations of which situation each is appropriate for, what the clamping is really about, and links to references. (Like a good SO answer.)

For my own application (jpg images, not analog voltages) YCbCr was appropriate, so I wrote code for those two transformations. Indeed, the there-and-back-again values differed by less than 1 part in 256 for many images, and the before-and-after images were visually indistinguishable.

PIL's colour space conversion YCbCr -> RGB gets credit for mentioning equasys's web page.

Other answers, which are unlikely to improve on equasys's precision and concision:

  • https://code.google.com/p/imagestack/ includes rgb_to_x and x_to_rgb
    functions, but I didn't try to compile and test them.

  • Cory Nelson's answer links to code with similar functions, but it says that
    inversion's not possible in general, contradicting equasys.

  • The source code of FFmpeg, OpenCV, VLFeat, or ImageMagick.

2019 Edit: Here's the C++ code from GitHub mentioned in my comment.

void YUVfromRGB(double& Y, double& U, double& V, const double R, const double G, const double B)
{
    // BT.601 "studio swing": Y lands in [16, 235], U and V in [16, 240].
    Y =  0.257 * R + 0.504 * G + 0.098 * B +  16;
    U = -0.148 * R - 0.291 * G + 0.439 * B + 128;
    V =  0.439 * R - 0.368 * G - 0.071 * B + 128;
}

void RGBfromYUV(double& R, double& G, double& B, double Y, double U, double V)
{
    Y -= 16;
    U -= 128;
    V -= 128;
    R = 1.164 * Y             + 1.596 * V;
    G = 1.164 * Y - 0.392 * U - 0.813 * V;
    B = 1.164 * Y + 2.017 * U;
}
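A quick way to validate the pair is to round-trip a grid of RGB values and measure the worst-case error. This is just a sanity-check sketch (it assumes the two functions above are in scope), but it reproduces the "differs by less than 1 part in 256" behavior described earlier:

#include <algorithm>
#include <cmath>
#include <cstdio>

void YUVfromRGB(double& Y, double& U, double& V, double R, double G, double B);
void RGBfromYUV(double& R, double& G, double& B, double Y, double U, double V);

int main()
{
    double maxErr = 0.0;
    // Sample the RGB cube on a coarse grid and track the worst round-trip error.
    for (int r = 0; r < 256; r += 5)
        for (int g = 0; g < 256; g += 5)
            for (int b = 0; b < 256; b += 5) {
                double Y, U, V, R, G, B;
                YUVfromRGB(Y, U, V, r, g, b);
                RGBfromYUV(R, G, B, Y, U, V);
                maxErr = std::max({maxErr, std::fabs(R - r),
                                   std::fabs(G - g), std::fabs(B - b)});
            }
    std::printf("max round-trip error: %.4f\n", maxErr); // stays well below 1.0
    return 0;
}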

How to deal with RGB to YUV conversion

You can convert RGB<->YUV in OpenCV with cvtColor, using the code CV_YCrCb2RGB for YUV->RGB and CV_RGB2YCrCb for RGB->YUV.

void cvCvtColor(const CvArr* src, CvArr* dst, int code)

Converts an image from one color space to another.
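With the current C++ API the same round trip is two calls to cv::cvtColor; the CV_* names above come from the legacy C API. A minimal sketch (the filename is just a placeholder, and cv::imread loads in BGR order, hence the BGR variants):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.jpg");           // OpenCV loads images as BGR
    cv::Mat ycrcb, back;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);   // BGR -> YCrCb ("YUV")
    cv::cvtColor(ycrcb, back, cv::COLOR_YCrCb2BGR);  // YCrCb -> BGR
    cv::imwrite("roundtrip.jpg", back);
    return 0;
}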

Converting YUV-RGB(Image processing)-YUV during onPreviewFrame in android?

Why not specify that the camera preview should provide RGB images?

i.e. Camera.Parameters.setPreviewFormat(ImageFormat.RGB_565);

Converting YUV into BGR or RGB in OpenCV

It looks to me like you're decoding a YUV422 stream as YUV444. Try this modification to the code you provided:

for (int i = 0, j = 0; i < 1280 * 720 * 3; i += 6, j += 4)
{
    m_RGB->imageData[i]   = pData[j]   + pData[j+3]*((1 - 0.299)/0.615);
    m_RGB->imageData[i+1] = pData[j]   - pData[j+1]*((0.114*(1 - 0.114))/(0.436*0.587)) - pData[j+3]*((0.299*(1 - 0.299))/(0.615*0.587));
    m_RGB->imageData[i+2] = pData[j]   + pData[j+1]*((1 - 0.114)/0.436);
    m_RGB->imageData[i+3] = pData[j+2] + pData[j+3]*((1 - 0.299)/0.615);
    m_RGB->imageData[i+4] = pData[j+2] - pData[j+1]*((0.114*(1 - 0.114))/(0.436*0.587)) - pData[j+3]*((0.299*(1 - 0.299))/(0.615*0.587));
    m_RGB->imageData[i+5] = pData[j+2] + pData[j+1]*((1 - 0.114)/0.436);
}

I'm not sure you've got your constants correct, but at worst your colors will be off - the image should be recognizable.
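Alternatively, if the stream is one of the standard packed 4:2:2 layouts, OpenCV can unpack and convert in a single call, which sidesteps the hand-written constants entirely. A sketch assuming UYVY byte order (use COLOR_YUV2BGR_YUY2 or COLOR_YUV2BGR_YVYU instead if your packing differs):

#include <opencv2/opencv.hpp>

// pData points to width*height*2 bytes of packed 4:2:2 video.
cv::Mat DecodePacked422(unsigned char* pData, int width, int height)
{
    cv::Mat yuv(height, width, CV_8UC2, pData);      // wraps the buffer, no copy
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_UYVY);  // unpack 4:2:2 and convert
    return bgr;
}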

Python OpenCV converting planar YUV 4:2:0 image to RGB -- YUV array format

If the YUV standard matches the OpenCV COLOR_YUV2BGR_I420 conversion formula, you may read the frame as one chunk, reshape it to height*1.5 rows, and apply the conversion.

The following code sample:

  • Builds an input in YUV420 format and writes it to a memory stream (instead
    of a FIFO).
  • Reads the frame from the stream and converts it to BGR using
    COLOR_YUV2BGR_I420.

    The colors come out incorrect...
  • Repeats the process by reading Y, U and V separately, resizing U and V, and
    using the COLOR_YCrCb2BGR conversion.

    Note: OpenCV works in BGR color order (not RGB).

Here is the code:

import cv2
import numpy as np
import io

# Building the input:
###############################################################################
img = cv2.imread('GrandKingdom.jpg')

#yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
#y, u, v = cv2.split(yuv)

# Convert BGR to YCrCb ("full range" YCrCb, a.k.a. YCC / JPEG YCbCr,
# where Y, Cr and Cb each span [0, 255]; this is the default JPEG color space).
yvu = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, v, u = cv2.split(yvu)

# Downsample U and V (apply 420 format).
u = cv2.resize(u, (u.shape[1]//2, u.shape[0]//2))
v = cv2.resize(v, (v.shape[1]//2, v.shape[0]//2))

# Open In-memory bytes streams (instead of using fifo)
f = io.BytesIO()

# Write Y, U and V to the "streams".
f.write(y.tobytes())
f.write(u.tobytes())
f.write(v.tobytes())

f.seek(0)
###############################################################################

# Read YUV420 (I420 planar format) and convert to BGR
###############################################################################
data = f.read(y.size*3//2) # Read one frame (number of bytes is width*height*1.5).

# Reshape data to numpy array with height*1.5 rows
yuv_data = np.frombuffer(data, np.uint8).reshape(y.shape[0]*3//2, y.shape[1])

# Convert YUV to BGR
bgr = cv2.cvtColor(yuv_data, cv2.COLOR_YUV2BGR_I420)

# How should the u and v channel information be placed in all_yuv_data?
# -------------------------------------------------------------------------------
# Example: place the channels one after the other (for a single frame).
f.seek(0)
y0 = f.read(y.size)
u0 = f.read(y.size//4)
v0 = f.read(y.size//4)
yuv_data = y0 + u0 + v0
yuv_data = np.frombuffer(yuv_data, np.uint8).reshape(y.shape[0]*3//2, y.shape[1])
bgr = cv2.cvtColor(yuv_data, cv2.COLOR_YUV2BGR_I420)
###############################################################################

# Display result:
cv2.imshow("bgr incorrect colors", bgr)

###############################################################################
f.seek(0)
y = np.frombuffer(f.read(y.size), dtype=np.uint8).reshape((y.shape[0], y.shape[1]))          # Y plane: height x width
u = np.frombuffer(f.read(y.size//4), dtype=np.uint8).reshape((y.shape[0]//2, y.shape[1]//2))  # U plane: (height/2) x (width/2)
v = np.frombuffer(f.read(y.size//4), dtype=np.uint8).reshape((y.shape[0]//2, y.shape[1]//2))  # V plane: (height/2) x (width/2)

# Resize u and v color channels to be the same size as y
u = cv2.resize(u, (y.shape[1], y.shape[0]))
v = cv2.resize(v, (y.shape[1], y.shape[0]))
yvu = cv2.merge((y, v, u)) # Stack planes to 3D matrix (use Y,V,U ordering)

bgr = cv2.cvtColor(yvu, cv2.COLOR_YCrCb2BGR)
###############################################################################

# Display result:
cv2.imshow("bgr", bgr)
cv2.waitKey(0)
cv2.destroyAllWindows()

Result:

(result image)

Converting from YUV colour space to RGB using OpenCV

There is not enough detail in your question to give a definitive answer, but below is my best guess. I'll assume you want RGBA output (not RGB, BGR, or BGRA) and that your YUV is yuv420sp (as this is what comes out of an Android camera, and it is consistent with your Mat sizes).

void ConvertYUVtoRGBA(const unsigned char *src, unsigned char *dest, int width, int height)
{
    //cv::Mat myuv(height + height/2, width, CV_8UC1, &src);
    // Pass the buffer pointer, not its address (const_cast because cv::Mat takes void*):
    cv::Mat myuv(height + height/2, width, CV_8UC1, const_cast<unsigned char*>(src));
    //cv::Mat mrgb(height, width, CV_8UC4, &dest);
    cv::Mat mrgb(height, width, CV_8UC4, dest);

    //cv::cvtColor(myuv, mrgb, CV_YCrCb2RGB);
    cv::cvtColor(myuv, mrgb, CV_YUV2RGBA_NV21); // are you sure you don't want BGRA?
}

Do I need to convert the Mat into char again?

No. The Mat mrgb is a wrapper around dest and, the way you have arranged it, the RGBA data will be written directly into the dest buffer.

How to enhance this YUV420P to RGB conversion in C/C++?

Is it just me, or shouldn't you be reading from the yuv array and writing to the rgbData array? You have them reversed in your implementation.

There's no need to invoke ceil on an integer expression such as i/4. And when you implement an image processing routine, invoking a function call on every pixel is just going to kill performance (been there, done that). Maybe the compiler can optimize it out, but why take that chance?

So change this:

    Cr = rgbData[CrBase + ceil(i/4)] - 128;
    Cb = rgbData[CbBase + ceil(i/4)] - 128;

To this:

    Cr = rgbData[CrBase + i/4] - 128;
    Cb = rgbData[CbBase + i/4] - 128;

The only other thing to be wary of is that you may want to clamp R, G, and B to the 8-bit range [0, 255] before assigning them to the output array; those equations can produce results below 0 and above 255.

Another micro-optimization is to declare all your variables within the for-loop block, so the compiler has more hints that they are temporaries, and to declare your other constants as const. May I suggest:

JNIEXPORT void JNICALL Java_com_example_mediacodecdecoderexample_YuvToRgb_YUVtoRBGA2(JNIEnv * env, jobject obj, jbyteArray yuv420sp, jint width, jint height, jbyteArray rgbOut)
{
    // ITU-R BT.601 conversion
    //
    //  R = 1.164*(Y-16)+1.596*(Cr-128)
    //  G = 1.164*(Y-16)-0.392*(Cb-128)-0.813*(Cr-128)
    //  B = 1.164*(Y-16)+2.017*(Cb-128)
    //
    const int size = width * height;

    // After width*height luminance values we have the Cr values.
    const size_t CrBase = size;
    // After width*height luminance values + width*height/4 we have the Cb values.
    const size_t CbBase = size + width*height/4;

    jbyte* rgbData = (jbyte*) (*env)->GetPrimitiveArrayCritical(env, rgbOut, 0);
    jbyte* yuv = (jbyte*) (*env)->GetPrimitiveArrayCritical(env, yuv420sp, 0);

    for (int i = 0; i < size; i++) {
        // Mask with 0xFF: jbyte is signed, but the samples are unsigned 8-bit values.
        int Y = (yuv[i] & 0xFF) - 16;
        int Cr = (yuv[CrBase + i/4] & 0xFF) - 128;
        int Cb = (yuv[CbBase + i/4] & 0xFF) - 128;

        int R = 1.164*Y + 1.596*Cr;
        int G = 1.164*Y - 0.392*Cb - 0.813*Cr;
        int B = 1.164*Y + 2.017*Cb;

        rgbData[i*3]   = (R > 255) ? 255 : ((R < 0) ? 0 : R);
        rgbData[i*3+1] = (G > 255) ? 255 : ((G < 0) ? 0 : G);
        rgbData[i*3+2] = (B > 255) ? 255 : ((B < 0) ? 0 : B);
    }

    (*env)->ReleasePrimitiveArrayCritical(env, rgbOut, rgbData, 0);
    (*env)->ReleasePrimitiveArrayCritical(env, yuv420sp, yuv, 0);
}

Then the only thing left to do is to compile with max optimizations on. The compiler will take care of the rest.

After that, investigate SIMD optimizations, which some compilers offer as a compiler switch (or enable via pragma).
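For example, with GCC or Clang one low-effort starting point is OpenMP's SIMD pragma (compile with -fopenmp-simd; no OpenMP runtime is needed). This is only a sketch of where the annotation goes, shown on a simplified per-pixel loop:

#include <cstdint>

// Hint that iterations are independent so the compiler can vectorize the loop.
void scale_plane(const std::uint8_t* in, std::uint8_t* out, int size)
{
    #pragma omp simd
    for (int i = 0; i < size; i++)
        out[i] = (std::uint8_t)((in[i] * 220) >> 8);  // stand-in per-pixel op
}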

Where did this YUV420P to RGB shader conversion come from?

Why take [...] 0.5 and 0.5 in the first part?

U and V are stored in the green and blue color channels of the texture. The values in the color channels are stored in the range [0.0, 1.0]. For the computations the values have to be mapped to the range [-0.5, 0.5]:

yuv.g = texture(tex_u, TexCoord).r - 0.5;
yuv.b = texture(tex_v, TexCoord).r - 0.5;

Subtracting 0.0625 from the red color channel is just an optimization: that way, it does not have to be subtracted separately in each expression later. (0.0625 = 16/256 is the luma offset of the limited-range encoding, the Y - 16 term in the formulas above.)

The algorithm is the same as in How to convert RGB -> YUV -> RGB (both ways) or various books.
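For reference, here is the same per-texel arithmetic written out on the CPU, using the BT.601 coefficients quoted earlier in this document. This is a sketch, assuming y, u and v were sampled from the textures and are therefore already normalized to [0.0, 1.0]:

#include <algorithm>

// Limited-range BT.601 YUV -> RGB, mirroring the shader: the offsets
// (16/256 = 0.0625 for luma, 0.5 for chroma) are removed once, up front.
void YuvTexelToRgb(float y, float u, float v, float& r, float& g, float& b)
{
    y -= 0.0625f;                       // remove the luma offset
    u -= 0.5f;                          // map chroma to [-0.5, 0.5]
    v -= 0.5f;
    r = std::clamp(1.164f * y + 1.596f * v, 0.0f, 1.0f);
    g = std::clamp(1.164f * y - 0.392f * u - 0.813f * v, 0.0f, 1.0f);
    b = std::clamp(1.164f * y + 2.017f * u, 0.0f, 1.0f);
}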
