iOS -- Detect the Color of a Pixel

iOS -- detect the color of a pixel?

This may not be the most direct route, but you could:

  1. Use UIGraphicsBeginImageContextWithOptions to grab the screen (see the Apple Q&A QA1703 - "Screen Capture in UIKit Applications").

  2. Then use CGImageCreateWithImageInRect to grab the portion of the resultant image you require.

  3. Finally, analyse the resultant image. It gets complicated at this point, but thankfully there's an existing question that should show you the way: How to get the RGB values for a pixel on an image on the iphone. (A combined sketch of all three steps follows below.)
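Putting those three steps together, a rough Objective-C sketch could look like this (the method name, targetView, and pointOfInterest are placeholders of mine, not from the answer; error handling is omitted):

- (UIColor *)colorAtPoint:(CGPoint)pointOfInterest inView:(UIView *)targetView
{
    // 1. Render the view into an image context (QA1703-style screen capture).
    //    Requires QuartzCore for -renderInContext:.
    UIGraphicsBeginImageContextWithOptions(targetView.bounds.size, NO, 0.0);
    [targetView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *capture = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Crop out the single pixel of interest (the scale factor accounts for Retina screens).
    CGFloat scale = capture.scale;
    CGRect pixelRect = CGRectMake(pointOfInterest.x * scale, pointOfInterest.y * scale, 1, 1);
    CGImageRef pixelImage = CGImageCreateWithImageInRect(capture.CGImage, pixelRect);

    // 3. Draw that 1x1 image into a known RGBA buffer and read the components back.
    unsigned char rgba[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), pixelImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(pixelImage);

    return [UIColor colorWithRed:rgba[0] / 255.0
                           green:rgba[1] / 255.0
                            blue:rgba[2] / 255.0
                           alpha:rgba[3] / 255.0];
}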

Alternatively, there's the following blog article that has accompanying code: What Color is My Pixel? Image based color picker on iPhone

Get color from pixel in Objective-C

Try the code below:

// UIView+ColorOfPoint.h
@interface UIView (ColorOfPoint)
- (UIColor *) colorOfPoint:(CGPoint)point;
@end

// UIView+ColorOfPoint.m
#import "UIView+ColorOfPoint.h"
#import <QuartzCore/QuartzCore.h>

@implementation UIView (ColorOfPoint)

- (UIColor *) colorOfPoint:(CGPoint)point
{
    // Render a single pixel of the view's layer into a tiny RGBA buffer.
    unsigned char pixel[4] = {0};

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);

    // Shift the context so the requested point lands on the single pixel we render.
    CGContextTranslateCTM(context, -point.x, -point.y);

    [self.layer renderInContext:context];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    //NSLog(@"pixel: %d %d %d %d", pixel[0], pixel[1], pixel[2], pixel[3]);

    UIColor *color = [UIColor colorWithRed:pixel[0] / 255.0
                                     green:pixel[1] / 255.0
                                      blue:pixel[2] / 255.0
                                     alpha:pixel[3] / 255.0];
    return color;
}

@end

You can find the files here: https://github.com/ivanzoid/ikit/tree/master/UIView%2BColorOfPoint
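For example, with the category added you could sample the color under a tap like this (a small sketch; the tap handler and the gesture-recognizer wiring are assumed, not part of the category itself):

- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint tapPoint = [recognizer locationInView:self.view];
    UIColor *tappedColor = [self.view colorOfPoint:tapPoint];
    NSLog(@"Color under the tap: %@", tappedColor);
}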

Also, check the answers here for more info: How to get the color of a pixel in an UIView?

How to get the color of a pixel of an NSImage?

The coordinate system of an image and the coordinate system of your view are not the same; a conversion is needed between them.

It is hard to say how

let posX = self.frame.origin.x + (self.frame.width / 2)
let posY = self.frame.origin.y + (self.frame.height / 2)

relate to your image as you did not specify any additional information.

If you have an image view and you would like to extract a pixel at a certain position (x, y) then you need to take into consideration the scaling and content mode.

The image itself is usually laid out in the byte buffer so that the top-left pixel comes first, followed by the pixel to its right. The coordinate system of NSView, however, starts at the bottom left.

To begin with, it makes most sense to get the relative position: a point whose coordinates lie within [0, 1]. For your view it should be:

func getRelativePositionInView(_ view: NSView, absolutePosition: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat) {
    return ((absolutePosition.x - view.frame.origin.x) / view.frame.width,
            (absolutePosition.y - view.frame.origin.y) / view.frame.height)
}

Now this point needs to be converted to the image coordinate system, where a vertical flip has to be applied along with the scaling.

If content mode is simply "scale" (whole image is shown) then the solution is simple:

func pointOnImage(_ image: NSImage, relativePositionInView: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat)? {
    let convertedCoordinates: (x: CGFloat, y: CGFloat) = (
        relativePositionInView.x * image.size.width,
        (1.0 - relativePositionInView.y) * image.size.height
    )
    guard convertedCoordinates.x >= 0.0 else { return nil }
    guard convertedCoordinates.y >= 0.0 else { return nil }
    guard convertedCoordinates.x < image.size.width else { return nil }
    guard convertedCoordinates.y < image.size.height else { return nil }

    return convertedCoordinates
}

Some other, more common content modes are scale-aspect-fill and scale-aspect-fit. Those need extra computations when converting points, but that does not seem to be part of your issue (for now).
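For reference, a scale-aspect-fit conversion could look roughly like the sketch below (a hypothetical helper of mine, written as a plain C function; it assumes the fitted image is centered in the view, which is what aspect-fit normally does):

#import <Foundation/Foundation.h>

static NSPoint pointOnAspectFitImage(NSSize imageSize, NSSize viewSize,
                                     NSPoint relativePositionInView)
{
    // The image is scaled by the smaller ratio so it fits entirely inside the view.
    CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
    NSSize fitted = NSMakeSize(imageSize.width * scale, imageSize.height * scale);

    // Letterboxing margins around the centered, fitted image.
    CGFloat xOffset = (viewSize.width - fitted.width) / 2.0;
    CGFloat yOffset = (viewSize.height - fitted.height) / 2.0;

    // Relative position ([0, 1]) -> absolute view coordinates.
    CGFloat xInView = relativePositionInView.x * viewSize.width;
    CGFloat yInView = relativePositionInView.y * viewSize.height;

    // Remove the margins, undo the scaling, and flip vertically
    // (view origin is bottom-left, image buffer origin is top-left).
    CGFloat xInImage = (xInView - xOffset) / scale;
    CGFloat yInImage = imageSize.height - (yInView - yOffset) / scale;

    // The result can fall outside the image when the point is in a letterboxed area;
    // callers should bounds-check it just as pointOnImage above does.
    return NSMakePoint(xInImage, yInImage);
}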

So the two methods will most likely fix your issue. But you can also just apply a very short fix:

let posY = whateverViewTheImageIsOn.frame.height - (self.frame.origin.y + (self.frame.height / 2))

Personally I find this very messy, but you be the judge of that.

There are also some other considerations which may or may not apply to your case. When an image is displayed, the colors of its pixels may appear different from what is actually in your buffer; mostly this is due to scaling. For instance, a pure black-and-white image may show gray areas on some pixels. If this is something you would like taken into account when finding a color, it makes more sense to look into creating an image from the NSView itself. That approach could also remove a lot of the mathematical problems for you.
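If you go that route, a minimal Objective-C sketch of the snapshot approach could look like this (the helper name and parameters are mine; the same AppKit calls are available from Swift as well):

#import <Cocoa/Cocoa.h>

static NSColor *colorOfViewAtPoint(NSView *view, NSPoint pointInView)
{
    // Render the view into a bitmap representation.
    NSBitmapImageRep *rep = [view bitmapImageRepForCachingDisplayInRect:view.bounds];
    [view cacheDisplayInRect:view.bounds toBitmapImageRep:rep];

    // -colorAtX:y: uses top-left-origin pixel coordinates, while a (non-flipped) NSView
    // uses a bottom-left origin in points, so flip y and scale for Retina backing stores.
    CGFloat sx = rep.pixelsWide / view.bounds.size.width;
    CGFloat sy = rep.pixelsHigh / view.bounds.size.height;
    NSInteger x = (NSInteger)(pointInView.x * sx);
    NSInteger y = (NSInteger)((view.bounds.size.height - pointInView.y) * sy);
    return [rep colorAtX:x y:y];
}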

How to get the RGB values for a pixel on an image on the iphone

A little more detail...

I originally posted a consolidation and small addition to what had been said on this page; that version can still be found at the bottom of this post (method 2). I have since edited the post to add what I propose is, at least for my requirements (which include modifying pixel data), a better method, as it provides writable data, whereas the earlier method gives only a read-only reference to the data.

Method 1: Writable Pixel Information

  1. I defined constants

    #define RGBA        4
    #define RGBA_8_BIT 8
  2. In my UIImage subclass I declared instance variables:

    size_t bytesPerRow;
    size_t byteCount;
    size_t pixelCount;

    CGContextRef context;
    CGColorSpaceRef colorSpace;

    UInt8 *pixelByteData;
    // A pointer to an array of RGBA bytes in memory
    RGBAPixel *pixelData;
  3. The pixel struct (with alpha in this version)

    // 'byte' is the same typedef for unsigned char that is declared in method 2 below.
    typedef struct RGBAPixel {
        byte red;
        byte green;
        byte blue;
        byte alpha;
    } RGBAPixel;
  4. Bitmap function (returns premultiplied RGBA; divide RGB by A to recover the unpremultiplied RGB; a short usage sketch follows the listing):

    -(RGBAPixel*) bitmap {
        NSLog( @"Returning bitmap representation of UIImage." );
        // 8 bits each of red, green, blue, and alpha.
        [self setBytesPerRow:self.size.width * RGBA];
        [self setByteCount:bytesPerRow * self.size.height];
        [self setPixelCount:self.size.width * self.size.height];

        // Create the RGB color space.
        [self setColorSpace:CGColorSpaceCreateDeviceRGB()];
        if (!colorSpace)
        {
            NSLog(@"Error allocating color space.");
            return nil;
        }

        // Allocate the buffer that will back the bitmap context.
        [self setPixelData:malloc(byteCount)];
        if (!pixelData)
        {
            NSLog(@"Error allocating bitmap memory. Releasing color space.");
            CGColorSpaceRelease(colorSpace);
            return nil;
        }

        // Create the bitmap context.
        // Pre-multiplied RGBA, 8 bits per component.
        // The source image format will be converted to the format specified here by CGBitmapContextCreate.
        [self setContext:CGBitmapContextCreate(
            (void*)pixelData,
            self.size.width,
            self.size.height,
            RGBA_8_BIT,
            bytesPerRow,
            colorSpace,
            kCGImageAlphaPremultipliedLast
        )];

        // Make sure we have our context.
        if (!context)
        {
            NSLog(@"Context not created!");
            free(pixelData);
            return nil;
        }

        // Draw the image into the bitmap context.
        // The memory allocated for the context then contains the raw image pixelData in the specified color space.
        CGRect rect = { { 0, 0 }, { self.size.width, self.size.height } };
        CGContextDrawImage(context, rect, self.CGImage);

        // Now we can get a pointer to the image pixelData associated with the bitmap context.
        pixelData = (RGBAPixel*)CGBitmapContextGetData(context);

        return pixelData;
    }
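Since the returned buffer is writable, pixels can be read or modified in place. A short usage sketch (myImage stands for an instance of the UIImage subclass described above, whose name the post does not give; the index math uses the same width the bitmap context was created with):

RGBAPixel *pixels = [myImage bitmap];
if (pixels != NULL)
{
    NSUInteger width = (NSUInteger)myImage.size.width;

    // Read the pixel at row 10, column 20.
    RGBAPixel p = pixels[10 * width + 20];
    NSLog(@"r=%d g=%d b=%d a=%d", p.red, p.green, p.blue, p.alpha);

    // Overwrite it with opaque red (premultiplied RGBA, so full alpha leaves RGB unchanged).
    pixels[10 * width + 20] = (RGBAPixel){255, 0, 0, 255};
}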

Method 2: Read-Only Pixel Information (the earlier approach)

Step 1. I declared a type for byte:

 typedef unsigned char byte;

Step 2. I declared a struct to correspond to a pixel:

 typedef struct RGBPixel {
     byte red;
     byte green;
     byte blue;
 } RGBPixel;

Step 3. I subclassed UIImageView and declared (with corresponding synthesized properties):

//  Reference to Quartz CGImage for receiver (self)  
CFDataRef bitmapData;

// Buffer holding raw pixel data copied from Quartz CGImage held in receiver (self)
UInt8* pixelByteData;

// A pointer to the first pixel element in an array
RGBPixel* pixelData;

Step 4. Subclass code I put in a method named bitmap (to return the bitmap pixel data):

//Get the bitmap data from the receiver's CGImage (see UIImage docs)  
[self setBitmapData: CGDataProviderCopyData(CGImageGetDataProvider([self CGImage]))];

//Create a buffer to store the bitmap data (uninitialized memory, the same length as the data)
[self setPixelByteData:malloc(CFDataGetLength(bitmapData))];

//Copy image data into allocated buffer
CFDataGetBytes(bitmapData,CFRangeMake(0,CFDataGetLength(bitmapData)),pixelByteData);

//Cast a pointer to the first element of pixelByteData.
//Essentially we are making a second pointer that divides the byte data into different units: instead of 1 byte per unit, each unit is 3 bytes (1 pixel).
pixelData = (RGBPixel*) pixelByteData;

//Now you can access pixels by index: pixelData[ index ]
NSLog(@"Pixel data one red (%i), green (%i), blue (%i).", pixelData[0].red, pixelData[0].green, pixelData[0].blue);

//You can determine the desired index as (row * width) + column.
return pixelData;

Step 5. I made an accessor method:

-(RGBPixel*)pixelDataForRow:(int)row column:(int)column{
    //Return a pointer to the pixel data.
    //The buffer is laid out row by row, so the index is (row * width) + column, not row * column.
    //(This assumes 3 bytes per pixel and no row padding, as in the struct above.)
    int width = (int)CGImageGetWidth([self CGImage]);
    return &pixelData[row * width + column];
}
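A quick usage sketch of the accessor (imageView stands for an instance of the UIImageView subclass described above; call bitmap first so pixelData is populated):

[imageView bitmap]; // fills bitmapData, pixelByteData and pixelData
RGBPixel *p = [imageView pixelDataForRow:10 column:20];
NSLog(@"Pixel at row 10, column 20: red (%i), green (%i), blue (%i).", p->red, p->green, p->blue);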

How to determine the presence of a color in a picture from iPhone Camera

/**
* Structure to keep one pixel in RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA (RGBA, 8 bits per channel) format
*/

struct pixel {
    unsigned char r, g, b, a;
};

/**
* Process the image and return the number of pixels in it that match the given color.
*/

- (NSUInteger) processImage: (UIImage*) image withRed:(NSUInteger)r green:(NSUInteger)g blue:(NSUInteger)b
{
    NSUInteger numberOfPixels = 0;

    // Allocate a buffer big enough to hold all the pixels.
    struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != NULL)
    {
        // Create a new bitmap backed by our buffer.
        CGContextRef context = CGBitmapContextCreate(
            (void*) pixels,
            image.size.width,
            image.size.height,
            8,
            image.size.width * 4,
            CGImageGetColorSpace(image.CGImage),
            kCGImageAlphaPremultipliedLast
        );

        if (context != NULL)
        {
            // Draw the image into the bitmap.
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);

            // Now that we have the image drawn in our own buffer, we can loop over the pixels to
            // process it. This simple case counts all pixels that match the given color exactly.

            // There are probably more efficient and interesting ways to do this. But the important
            // part is that the pixels buffer can be read directly.
            // Use a separate cursor so the original pointer can still be freed below.
            struct pixel* cursor = pixels;
            NSUInteger p = image.size.width * image.size.height;

            while (p > 0) {
                if (cursor->r == r && cursor->g == g && cursor->b == b) {
                    numberOfPixels++;
                }
                cursor++;
                p--;
            }

            CGContextRelease(context);
        }

        free(pixels);
    }

    return numberOfPixels;
}

Use:

NSUInteger numberOfSpecificColorPixels = [self processImage:[UIImage imageNamed:@"testImage.png"] withRed:232 green:212 blue:192];

This will give you the number of pixels of that specific color, which you can then use as your requirements dictate.


