Convert UIImage from BGR to RGB

Here's a very simple CIKernel to swap things:

kernel vec4 swapRedAndGreenAmount(__sample s) {
    return s.bgra;
}

Here's the Swift code to use it:

let uiInput = UIImage(named: "myImage")
let ciInput = CIImage(image: uiInput!)
let ctx = CIContext(options: nil)
let swapKernel = CIColorKernel(source:
    "kernel vec4 swapRedAndGreenAmount(__sample s) {" +
    "    return s.bgra;" +
    "}"
)
let ciOutput = swapKernel?.apply(extent: ciInput!.extent, arguments: [ciInput!])
let cgImage = ctx.createCGImage(ciOutput!, from: ciInput!.extent)
let uiOutput = UIImage(cgImage: cgImage!)

Be aware of a few things:

  • This will work on devices running iOS 9 or later.
  • This uses CoreImage and the GPU, so testing on a simulator may take seconds to render. On a device it will take microseconds.
  • I tend to use a CIContext to create a CGImage before ending up with a UIImage. You may be able to remove this step and go straight from a CIImage to a UIImage (see the sketch after this list).
  • Excuse the wrapping/unwrapping; it's converted from old code. You can probably do a better job.
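
Here's roughly what that shortcut looks like - a sketch of my own, not part of the original answer, assuming the ciOutput produced above. Be aware that a UIImage backed directly by a CIImage is rendered lazily and doesn't behave like a CGImage-backed one in every API:

// A minimal sketch: wrap the CIImage directly, skipping the CIContext/CGImage step
let directOutput = UIImage(ciImage: ciOutput!)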

Explanation:

Using CoreImage "kernel" code - which until iOS 11 could only be a subset of GLSL - I wrote a simple CIColorKernel that takes each pixel's RGBA value and returns it with the red and blue channels swapped (BGRA).

A CIColorKernel is optimized to work on a single pixel at a time, with no access to the pixels surrounding it. A CIWarpKernel, by contrast, is optimized to "warp" an image: for each output pixel it computes where in the source image to sample from. Both are (more or less) optimized subclasses of CIKernel, which - until iOS 11 and Metal Performance Shaders - was about the closest you could get to using OpenGL inside CoreImage.
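
For contrast, here's what a warp kernel looks like in the same GLSL-string style - a small sketch of my own (not part of the solution above) that mirrors an image horizontally by returning, for each destination pixel, the source coordinate to sample from:

// A sketch: a CIWarpKernel that mirrors the image horizontally
let mirrorKernel = CIWarpKernel(source:
    "kernel vec2 mirrorX(float width) {" +
    "    return vec2(width - destCoord().x, destCoord().y);" +
    "}"
)
// Applied via mirrorKernel?.apply(extent:roiCallback:image:arguments:),
// passing the image width as the argument.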

Final edit:

What this solution does is swap each pixel's channels one-by-one using CoreImage. It's fast because it uses the GPU, deceptively fast (because the simulator gives you nothing close to real-time device performance), and simple (because it swaps RGB to BGR).

The actual code to do this is straightforward. Hopefully it works as a start for those who want to do much larger "under the hood" things using CoreImage.

EDIT (25 February 2021):

As of WWDC 2019, Apple deprecated OpenGL - specifically GLKit - in favor of MetalKit. For a color kernel like this, converting the code is rather trivial. Warp kernels are slightly trickier, though.

When Apple will actually "kill" OpenGL is hard to say. We all know that someday UIKit will also be deprecated, but (showing my age now) it may not be in my lifetime. YMMV.
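
For the curious, here's roughly what the Metal route looks like - a hedged sketch of my own, where "MyKernels" and "swapRedAndBlue" are illustrative names and the .metal file must be compiled with the Core Image flags (-fcikernel):

// In a .metal file, compiled with the -fcikernel flags (names illustrative):
//
//   #include <CoreImage/CoreImage.h>
//   extern "C" float4 swapRedAndBlue(coreimage::sample_t s) {
//       return s.bgra;
//   }

// Swift side: load the compiled kernel from the metallib.
let url = Bundle.main.url(forResource: "MyKernels", withExtension: "metallib")!
let data = try Data(contentsOf: url)
let metalSwapKernel = try CIColorKernel(functionName: "swapRedAndBlue",
                                        fromMetalLibraryData: data)
// Apply exactly as before: metalSwapKernel.apply(extent:arguments:)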

PIL rotate image colors (BGR - RGB)

Assuming no alpha band, isn't it as simple as this?

from PIL import Image  # needed for Image.merge; im is your already-opened image

b, g, r = im.split()
im = Image.merge("RGB", (r, g, b))

Edit:

Hmm... It seems PIL has a few bugs in this regard... im.split() doesn't seem to work with recent versions of PIL (1.1.7). It may (?) still work with 1.1.6, though...

Edit a RGB colorspace image with HSL conversion failed

1) Take a look at the OpenCV documentation for cvtColor, specifically the RGB->HLS part. When the source image is 8-bit, values run from 0 to 255, but if you use a float image they may have different ranges.

8-bit images: V ← 255·V, S ← 255·S, H ← H/2 (to fit into 0 to 255)

V should be L; there is a typo in the documentation.

You can convert the RGB/BGR image to a floating-point image, and then you will get the full range of values, i.e. S and L from 0 to 1 and H from 0 to 360.

But you have to be careful converting it back.

2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i is for 32-bit integer images (CV_32S). Which one to use depends on your image type. Since you said your values go from 0 to 255, the image should be unsigned 8-bit, so you should use Vec3b. If you use the other one, each access reads 32 bits per channel, and that element size is used to compute positions in the pixel array, so you may get out-of-bounds accesses, segmentation faults, or random-looking problems.

If you have a question, feel free to comment

How to check whether my image is RGB format or BGR format in Python? How do I convert them and vice versa?

When you use opencv (imread, VideoCapture), the images are loaded in the BGR color space.

Reference, from the OpenCV documentation:

Note: In the case of color images, the decoded images will have the channels stored in B G R order.

Link: https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#imread

To convert you can use

rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

and, for the reverse direction,

bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)

how to convert from cvMat to UIImage in objective-c?

Note: most implementations don't correctly handle an alpha channel or convert from OpenCV's BGR pixel format to iOS's RGB.

This will correctly convert from cv::Mat to UIImage:

+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data
                                  length:cvMat.step.p[0] * cvMat.rows];

    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        // OpenCV stores color as BGR(A); little-endian byte order lets
        // CoreGraphics read the bytes in the right order.
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,                 // width
        cvMat.rows,                 // height
        8,                          // bits per component
        8 * cvMat.elemSize(),       // bits per pixel
        cvMat.step[0],              // bytesPerRow
        colorSpace,                 // colorspace
        bitmapInfo,                 // bitmap info
        provider,                   // CGDataProviderRef
        NULL,                       // decode
        false,                      // should interpolate
        kCGRenderingIntentDefault   // intent
    );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}

And to convert from UIImage to cv::Mat:

+ (cv::Mat)cvMatWithImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;

    // check whether the UIImage is greyscale already
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    bitmapInfo);    // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

Convert Image object to rgb pixel array and back in Flutter

https://pub.dartlang.org/packages/image provides image conversion and manipulation utility functions.

How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?

FYI, I combined Keremk's answer with my original outline, cleaned up the typos, generalized it to return an array of colors, and got the whole thing to compile. Here is the result:

+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)x andY:(int)y count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    for (int i = 0; i < count; ++i)
    {
        // The bitmap is premultiplied, so un-premultiply each component
        // (and guard against fully transparent pixels).
        CGFloat alpha = ((CGFloat) rawData[byteIndex + 3]) / 255.0f;
        CGFloat red   = alpha > 0 ? ((CGFloat) rawData[byteIndex])     / 255.0f / alpha : 0;
        CGFloat green = alpha > 0 ? ((CGFloat) rawData[byteIndex + 1]) / 255.0f / alpha : 0;
        CGFloat blue  = alpha > 0 ? ((CGFloat) rawData[byteIndex + 2]) / 255.0f / alpha : 0;
        byteIndex += bytesPerPixel;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);

    return result;
}

How do I get the color of a pixel in a UIImage with Swift?

A bit of searching led me here, since I was facing a similar problem.
Your code works fine; the problem might come from your image.

Code:

// At the top of your Swift file
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {

        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}

What happens is that this method picks the pixel colour from the image's CGImage, so make sure you are picking from the right image. E.g. if your UIImage is 200x200 but the original image file from Images.xcassets (or wherever it came from) is 400x400, and you pick point (100, 100), you are actually picking a point in the upper-left quarter of the image instead of the middle.

Two Solutions:

1. Use an image from Images.xcassets, and only put one @1x image in the 1x field. Leave the @2x and @3x fields blank. Make sure you know the image size, and pick a point within its range.

//Make sure only the 1x image is set
let image : UIImage = UIImage(named:"imageName")!
//Make sure the point is within the image
let color : UIColor = image.getPixelColor(CGPointMake(xValue, yValue))

2. Scale your CGPoint up/down in proportion to match the UIImage. E.g. with let point = CGPointMake(100, 100) in the example above:

let xCoordinate : Float = Float(point.x) * (400.0/200.0)
let yCoordinate : Float = Float(point.y) * (400.0/200.0)
let newCoordinate : CGPoint = CGPointMake(CGFloat(xCoordinate), CGFloat(yCoordinate))
let image : UIImage = largeImage
let color : UIColor = image.getPixelColor(newCoordinate)
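
A more general version of solution 2 - my own sketch in current Swift syntax, not part of the original answer - derives the scale factor from the underlying CGImage instead of hard-coding 400.0/200.0 (it assumes cgImage is non-nil):

extension UIImage {
    // Map a point in the UIImage's point space to the CGImage's pixel space
    func pixelPoint(for point: CGPoint) -> CGPoint {
        let sx = CGFloat(cgImage!.width) / size.width
        let sy = CGFloat(cgImage!.height) / size.height
        return CGPoint(x: point.x * sx, y: point.y * sy)
    }
}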

I've only tested the first method, and I am using it to get a colour off a colour palette. Both should work.
Happy coding :)


