How to Get the Color of a Pixel in a UIImage With Swift

Get Pixel color of UIImage

You can't access the raw pixel data directly, but by getting the CGImage backing this image you can. Here is a link to another question that answers yours and others you might have regarding detailed image manipulation: CGImage
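
As a minimal sketch of that approach (assuming the image actually has a CGImage backing; a CIImage-based UIImage will not):

import UIKit

// A sketch: get at the raw bytes behind a UIImage via its CGImage.
// Returns nil if the image has no CGImage backing.
func rawPixelBytes(of image: UIImage) -> CFData? {
    guard let cgImage = image.cgImage else { return nil }
    // The data provider exposes the underlying pixel buffer as CFData.
    return cgImage.dataProvider?.data
}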

How do I get the color of a pixel in a UIImage with Swift?

A bit of searching led me here, since I was facing a similar problem.
Your code works fine. The problem is probably with your image.

Code:

//On the top of your swift file
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {

        // Raw bytes of the backing CGImage (assumes an RGBA, 8-bit-per-component bitmap)
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        // 4 bytes per pixel: offset = (row * width + column) * 4
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}

What happens is that this method picks the pixel colour from the image's CGImage. So make sure you are picking from the right image. E.g. if your UIImage is 200x200, but the original image file from Images.xcassets (or wherever it came from) is 400x400, and you pick point (100,100), you are actually picking a point in the upper-left quarter of the image, not the middle.

Two Solutions:

1. Use an image from Images.xcassets, and only put one @1x image in the 1x field. Leave @2x and @3x blank. Make sure you know the image size, and pick a point within that range.

//Make sure only the 1x image is set
let image : UIImage = UIImage(named: "imageName")!
//Make sure the point is within the image
let color : UIColor = image.getPixelColor(CGPointMake(xValue, yValue))

2. Scale your CGPoint up/down in proportion to the UIImage. E.g. for let point = CGPoint(x: 100, y: 100) in the example above:

let xCoordinate : Float = Float(point.x) * (400.0/200.0)
let yCoordinate : Float = Float(point.y) * (400.0/200.0)
let newCoordinate : CGPoint = CGPointMake(CGFloat(xCoordinate), CGFloat(yCoordinate))
let image : UIImage = largeImage
let color : UIColor = image.getPixelColor(newCoordinate)

I've only tested the first method, and I am using it to get a colour off a colour palette. Both should work.
Happy coding :)

Getting Pixel Color from an Image using CGPoint in Swift 3

You can write something like this when your data is an UnsafeRawPointer:

let alpha = data.load(fromByteOffset: offset, as: UInt8.self)
let red = data.load(fromByteOffset: offset+1, as: UInt8.self)
let green = data.load(fromByteOffset: offset+2, as: UInt8.self)
let blue = data.load(fromByteOffset: offset+3, as: UInt8.self)

Alternatively, you can get an UnsafeMutablePointer from your uncastedData (assuming it's an UnsafeMutableRawPointer):

    let data = uncastedData.assumingMemoryBound(to: UInt8.self)
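
Putting those pieces together, a minimal Swift 3+ pixel reader might look like the sketch below. This is a sketch under assumptions, not a canonical API: it assumes the backing CGImage is an 8-bit-per-component RGBA bitmap, and it indexes by the CGImage's own width and bytesPerRow rather than the UIImage's point size.

import UIKit

extension UIImage {
    // Sketch: read one pixel, assuming an 8-bit RGBA backing bitmap.
    func pixelColor(at point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage,
              let pixelData = cgImage.dataProvider?.data,
              let bytes = CFDataGetBytePtr(pixelData) else { return nil }

        // Index with the CGImage's geometry; UIImage.size is in points, not pixels.
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let offset = Int(point.y) * cgImage.bytesPerRow + Int(point.x) * bytesPerPixel

        let r = CGFloat(bytes[offset])     / 255.0
        let g = CGFloat(bytes[offset + 1]) / 255.0
        let b = CGFloat(bytes[offset + 2]) / 255.0
        let a = CGFloat(bytes[offset + 3]) / 255.0
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}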

Change color of certain pixels in a UIImage

You have to extract the pixel buffer of the image, at which point you can loop through, changing pixels as you see fit. At the end, create a new image from the buffer.

In Swift 3, this looks like:

func processPixels(in image: UIImage) -> UIImage? {
    guard let inputCGImage = image.cgImage else {
        print("unable to get cgImage")
        return nil
    }
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else {
        print("unable to get context data")
        return nil
    }

    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< height {
        for column in 0 ..< width {
            let offset = row * width + column
            if pixelBuffer[offset] == .black {
                pixelBuffer[offset] = .red
            }
        }
    }

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)

    return outputImage
}

struct RGBA32: Equatable {
    private var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }

    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }

    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }

    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        let red = UInt32(red)
        let green = UInt32(green)
        let blue = UInt32(blue)
        let alpha = UInt32(alpha)
        color = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
    }

    static let red = RGBA32(red: 255, green: 0, blue: 0, alpha: 255)
    static let green = RGBA32(red: 0, green: 255, blue: 0, alpha: 255)
    static let blue = RGBA32(red: 0, green: 0, blue: 255, alpha: 255)
    static let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    static let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
    static let magenta = RGBA32(red: 255, green: 0, blue: 255, alpha: 255)
    static let yellow = RGBA32(red: 255, green: 255, blue: 0, alpha: 255)
    static let cyan = RGBA32(red: 0, green: 255, blue: 255, alpha: 255)

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}
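
For reference, a minimal usage sketch (the asset name and image view here are hypothetical):

// Hypothetical usage: run the filter and show the result
if let input = UIImage(named: "input"),   // "input" is a hypothetical asset name
   let output = processPixels(in: input) {
    imageView.image = output              // imageView is a hypothetical UIImageView
}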

For Swift 2 rendition, see previous revision of this answer.

How to change colour of individual pixel of UIImage/UIImageView

You'll want to break this problem up into multiple steps.

  1. Get the coordinates of the touched point in the image coordinate system
  2. Get the x and y position of the pixel to change
  3. Create a bitmap context and replace the given pixel's components with your new color's components.

First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.

@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end

@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {

    UIViewContentMode contentMode = self.contentMode;

    // failure conditions. If any of these are met – return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }

    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;

    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;

    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {

        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;

        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;

        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;

        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end

There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
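
Since the rest of this page is Swift, here is a rough Swift translation of the viewToImageTransform property (a sketch of the same logic, not from the original project):

import UIKit

extension UIImageView {
    // Sketch: a Swift translation of the Objective-C category above.
    var viewToImageTransform: CGAffineTransform {
        // Failure conditions: no image, a degenerate frame, or an unsupported content mode
        guard let image = image, frame.size.width > 0, frame.size.height > 0,
              contentMode == .scaleToFill || contentMode == .scaleAspectFill || contentMode == .scaleAspectFit
        else { return .identity }

        // the width and height ratios
        let rWidth = image.size.width / frame.size.width
        let rHeight = image.size.height / frame.size.height

        // whether the image will be scaled according to width
        let imageWiderThanView = rWidth > rHeight

        guard contentMode == .scaleAspectFit || contentMode == .scaleAspectFill else {
            return CGAffineTransform(scaleX: rWidth, y: rHeight)
        }

        // the ratio to scale both axes by, plus the centering offsets
        let ratio = ((imageWiderThanView && contentMode == .scaleAspectFit) ||
                     (!imageWiderThanView && contentMode == .scaleAspectFill)) ? rWidth : rHeight
        let xOffset = (image.size.width - frame.size.width * ratio) * 0.5
        let yOffset = (image.size.height - frame.size.height * ratio) * 0.5

        return CGAffineTransform(scaleX: ratio, y: ratio)
            .concatenating(CGAffineTransform(translationX: xOffset, y: yOffset))
    }
}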

Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.

UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];

...

-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {

    if (!imageView.image) {
        return;
    }

    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};

    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}

Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.

/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)

@end

@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {

    // components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);

    // raw image reference
    CGImageRef rawImage = self.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0]; // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a

    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);

    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);

    return returnImage;
}

@end

What this does is first get the components of the color you want to write into one of the pixels, in a 0-255 UInt8 format (a fairly standard bitmap format). Next, it creates a new bitmap context with the given attributes of your input image.

The important bit of this method is:

// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a

This gets the index of a given pixel (based on its x and y coordinates), then uses that index to replace that pixel's component data with the components of your replacement color.
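
For example, in a 400-pixel-wide image, the pixel at (100, 100) sits at index 4*(100 + 100*400) = 160400 in the data array, with its green, blue and alpha bytes at the next three indices.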

Finally, we get out an image from the bitmap context and perform some cleanup.

Finished Result: (sample image omitted; see the full project below)


Full Project: https://github.com/hamishknight/Pixel-Color-Changing


