Pixel Array to UIImage in Swift

Note: This is an iOS solution that creates a UIImage. For a macOS solution using NSImage, see this answer.

Your only problem is that the data types in your PixelData structure need to be UInt8. I created a test image in a Playground with the following:

public struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

var pixels = [PixelData]()

let red = PixelData(a: 255, r: 255, g: 0, b: 0)
let green = PixelData(a: 255, r: 0, g: 255, b: 0)
let blue = PixelData(a: 255, r: 0, g: 0, b: 255)

for _ in 1...300 {
    pixels.append(red)
}
for _ in 1...300 {
    pixels.append(green)
}
for _ in 1...300 {
    pixels.append(blue)
}

let image = imageFromARGB32Bitmap(pixels: pixels, width: 30, height: 30)
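The CGImage code below packs each PixelData straight into the bitmap, which only works because the four UInt8 fields occupy exactly four contiguous bytes. A quick UIKit-free sketch confirms the layout and shows a more compact way to build the same striped array:

```swift
struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

// Four UInt8 fields pack with no padding, so one pixel is exactly 4 bytes.
let bytesPerPixel = MemoryLayout<PixelData>.stride   // 4

// The three append loops above can also be collapsed with Array(repeating:count:).
let red   = PixelData(a: 255, r: 255, g: 0, b: 0)
let green = PixelData(a: 255, r: 0, g: 255, b: 0)
let blue  = PixelData(a: 255, r: 0, g: 0, b: 255)
let pixels = Array(repeating: red, count: 300)
           + Array(repeating: green, count: 300)
           + Array(repeating: blue, count: 300)
```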

Update for Swift 4:

I updated imageFromARGB32Bitmap to work with Swift 4. The function now returns a UIImage? and guard is used to return nil if anything goes wrong.

func imageFromARGB32Bitmap(pixels: [PixelData], width: Int, height: Int) -> UIImage? {
    guard width > 0 && height > 0 else { return nil }
    guard pixels.count == width * height else { return nil }

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)
    let bitsPerComponent = 8
    let bitsPerPixel = 32

    var data = pixels // Copy to a mutable [] to pass to NSData(bytes:length:)
    guard let providerRef = CGDataProvider(data: NSData(bytes: &data,
                                                        length: data.count * MemoryLayout<PixelData>.size))
    else { return nil }

    guard let cgim = CGImage(width: width,
                             height: height,
                             bitsPerComponent: bitsPerComponent,
                             bitsPerPixel: bitsPerPixel,
                             bytesPerRow: width * MemoryLayout<PixelData>.size,
                             space: rgbColorSpace,
                             bitmapInfo: bitmapInfo,
                             provider: providerRef,
                             decode: nil,
                             shouldInterpolate: true,
                             intent: .defaultIntent)
    else { return nil }

    return UIImage(cgImage: cgim)
}

Making it a convenience initializer for UIImage:

This function works well as a convenience initializer for UIImage. Here is the implementation:

extension UIImage {
    convenience init?(pixels: [PixelData], width: Int, height: Int) {
        guard width > 0 && height > 0, pixels.count == width * height else { return nil }
        var data = pixels
        guard let providerRef = CGDataProvider(data: Data(bytes: &data,
                                                          count: data.count * MemoryLayout<PixelData>.size) as CFData)
        else { return nil }
        guard let cgim = CGImage(width: width,
                                 height: height,
                                 bitsPerComponent: 8,
                                 bitsPerPixel: 32,
                                 bytesPerRow: width * MemoryLayout<PixelData>.size,
                                 space: CGColorSpaceCreateDeviceRGB(),
                                 bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
                                 provider: providerRef,
                                 decode: nil,
                                 shouldInterpolate: true,
                                 intent: .defaultIntent)
        else { return nil }
        self.init(cgImage: cgim)
    }
}

Here is an example of its usage:

// Generate a 500x500 image of randomly colored pixels

let height = 500
let width = 500

var pixels: [PixelData] = .init(repeating: .init(a: 0, r: 0, g: 0, b: 0), count: width * height)
for index in pixels.indices {
    pixels[index].a = 255
    pixels[index].r = .random(in: 0...255)
    pixels[index].g = .random(in: 0...255)
    pixels[index].b = .random(in: 0...255)
}
let image = UIImage(pixels: pixels, width: width, height: height)

iOS: How to get pixel data array from CGImage in Swift

Per the link provided in your question, you can get the pixelData by doing

extension UIImage {
    func pixelData() -> [UInt8]? {
        let size = self.size   // Note: points, not pixels; use cgImage?.width/height for pixel-exact data
        let dataSize = size.width * size.height * 4
        var pixelData = [UInt8](repeating: 0, count: Int(dataSize))
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: &pixelData,
                                width: Int(size.width),
                                height: Int(size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: 4 * Int(size.width),
                                space: colorSpace,
                                bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
        guard let cgImage = self.cgImage else { return nil }
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))

        return pixelData
    }
}

However, the parameters to watch here are bitmapInfo and colorSpace. Your image may come out distorted or with shifted colors depending on the values provided. The right choice depends on how you obtained the image and which color space and channel order it uses, so you may need to experiment with these values.

I've never had an issue using CGColorSpaceCreateDeviceRGB() as my colorSpace, but I have had to alter my bitmapInfo many times because images come in with different channel layouts.

The CGBitmapInfo documentation describes the different bitmap layouts. More than likely, though, you only need a variation of CGImageAlphaInfo, which is documented alongside it.

If necessary, you can change the colorSpace as well. CGColorSpaceCreateDeviceRGB() is the usual default, but the CGColorSpace documentation lists the other built-in options.
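To see concretely why the wrong bitmapInfo distorts colors, consider the same opaque red pixel laid out as RGBA versus ARGB (a UIKit-free sketch, assuming 4 bytes per pixel): decoding with the wrong alpha position reads every channel at the wrong offset.

```swift
// One opaque red pixel in two common byte orders.
let rgba: [UInt8] = [255, 0, 0, 255]   // R, G, B, A  (alpha last)
let argb: [UInt8] = [255, 255, 0, 0]   // A, R, G, B  (alpha first)

// A decoder that assumes alpha-last reads green at offset 1.
func greenAssumingAlphaLast(_ p: [UInt8]) -> UInt8 { p[1] }

let correctGreen = greenAssumingAlphaLast(rgba)   // 0: the pixel has no green
let shiftedGreen = greenAssumingAlphaLast(argb)   // 255: red misread as green
```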

Generate Image from Pixel Array (fast)

The most performant way to do this is with a Metal compute function.

Apple has good documentation illustrating GPU programming:

  • Performing Calculations on a GPU

  • Processing a Texture in a Compute Function

Re: Get pixel data as array from UIImage/CGImage in Swift

You probably forgot the CGImageAlphaInfo parameter. For color images, if you assume bytesPerPixel to be 4, you need to specify either an RGBA or ARGB layout when creating the context. The following is an example of an RGBA layout that skips the alpha channel (NoneSkipLast).

// RGBA format
let ctx = CGBitmapContextCreate(&data, pixelsWide, pixelsHigh, 8,
                                bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.NoneSkipLast.rawValue)

According to the documentation, you have these options:

enum CGImageAlphaInfo : UInt32 {
    case None
    case PremultipliedLast
    case PremultipliedFirst
    case Last
    case First
    case NoneSkipLast
    case NoneSkipFirst
    case Only
}

Creating image on a pixel-by-pixel basis from data-array with Swift

Here's a snippet of code that I slapped together which does actually save a TIFF file with my data (it uses AppKit's NSBitmapImageRep, so this is macOS code). So it does work. Absolutely not good code, I assume, but I was just trying to get a working example that I could refine later.

var myImageDataArray : [UInt8] = [167, 241, 217, 42, 130, 200, 216, 254, 67, 77, 152, 85, 140, 226, 179, 71]
let dataProv = CGDataProviderCreateWithData(nil, myImageDataArray, myImageDataArray.count, nil)
let myCG = CGImageCreate(4, 4, 8, 8, 4, CGColorSpaceCreateWithName(kCGColorSpaceGenericGrayGamma2_2), CGBitmapInfo.ByteOrderDefault, dataProv, nil, false, CGColorRenderingIntent.RenderingIntentDefault)
let testTIFF = NSBitmapImageRep(CGImage: myCG!)
let mynewData = testTIFF.representationUsingType(NSBitmapImageFileType.NSTIFFFileType, properties: [NSImageCompressionMethod: 1])
mynewData?.writeToFile("testTIFF.tiff", atomically: false)

UIImage to UIColor array of pixel colors

This is simply a Swift translation of Olie's answer to the same question in ObjC. Make sure you give him an upvote as well.

extension UIImage {
    func colorArray() -> [UIColor] {
        let result = NSMutableArray()

        let img = self.CGImage
        let width = CGImageGetWidth(img)
        let height = CGImageGetHeight(img)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        var rawData = [UInt8](count: width * height * 4, repeatedValue: 0)
        let bytesPerPixel = 4
        let bytesPerRow = bytesPerPixel * width
        let bitsPerComponent = 8

        let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue | CGBitmapInfo.ByteOrder32Big.rawValue
        let context = CGBitmapContextCreate(&rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)

        CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(width), CGFloat(height)), img)

        // Rows outer, columns inner, so the byte index stays correct for non-square images.
        for y in 0..<height {
            for x in 0..<width {
                let byteIndex = (bytesPerRow * y) + x * bytesPerPixel

                let red = CGFloat(rawData[byteIndex]) / 255.0
                let green = CGFloat(rawData[byteIndex + 1]) / 255.0
                let blue = CGFloat(rawData[byteIndex + 2]) / 255.0
                let alpha = CGFloat(rawData[byteIndex + 3]) / 255.0

                let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
                result.addObject(color)
            }
        }

        return (result as NSArray) as! [UIColor]
    }
}

Note that this runs rather slowly. It takes 35 seconds for the simulator to decode a 15MP image, and that's on a quad-core i7.
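Part of that cost is boxing every color through NSMutableArray and the final cast. A sketch of a faster inner loop, shown with a plain struct so it runs without UIKit: use a native Swift array with reserveCapacity and the same byte indexing.

```swift
struct Color: Equatable {
    let red, green, blue, alpha: Double
}

// Stand-in bitmap: every byte 128, as context drawing would have filled rawData.
let width = 4, height = 3
let bytesPerPixel = 4
let bytesPerRow = bytesPerPixel * width
let rawData = [UInt8](repeating: 128, count: width * height * bytesPerPixel)

var colors: [Color] = []
colors.reserveCapacity(width * height)   // one allocation instead of repeated growth

for y in 0..<height {                    // rows outer, columns inner: row-major order
    for x in 0..<width {
        let i = bytesPerRow * y + x * bytesPerPixel
        colors.append(Color(red: Double(rawData[i]) / 255.0,
                            green: Double(rawData[i + 1]) / 255.0,
                            blue: Double(rawData[i + 2]) / 255.0,
                            alpha: Double(rawData[i + 3]) / 255.0))
    }
}
```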

Creating CGImage/UIImage from grayscale matrix

There are two key issues.

  1. The code calculates all the red values for every grayscale pixel, creating a four-byte PixelData for each (with only the red channel populated) and appending it to the pixelsData array. It then repeats this for the green values, and again for the blue values. That produces three times as much data as the image needs, and only the red-channel data ends up being used.

    Instead, we should calculate the RGBA values once, create a PixelData for each, and repeat this pixel by pixel.

  2. The premultipliedFirst means ARGB. But your structure is using RGBA, so you want premultipliedLast.

Thus:

func generateTintedImage(completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let image = self.tintedImage()
        DispatchQueue.main.async {
            completion(image)
        }
    }
}

private func tintedImage() -> UIImage? {
    let tintRed = tintColor.red
    let tintGreen = tintColor.green
    let tintBlue = tintColor.blue
    let tintAlpha = tintColor.alpha

    let data = pixels.map { pixel -> PixelData in
        let red = UInt8((Float(pixel) / 255) * tintRed)
        let green = UInt8((Float(pixel) / 255) * tintGreen)
        let blue = UInt8((Float(pixel) / 255) * tintBlue)
        let alpha = UInt8(tintAlpha)
        return PixelData(r: red, g: green, b: blue, a: alpha)
    }.withUnsafeBytes { Data($0) }

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    let bitsPerComponent = 8
    let bitsPerPixel = 32

    guard
        let providerRef = CGDataProvider(data: data as CFData),
        let cgImage = CGImage(width: width,
                              height: height,
                              bitsPerComponent: bitsPerComponent,
                              bitsPerPixel: bitsPerPixel,
                              bytesPerRow: width * MemoryLayout<PixelData>.stride,
                              space: rgbColorSpace,
                              bitmapInfo: bitmapInfo,
                              provider: providerRef,
                              decode: nil,
                              shouldInterpolate: true,
                              intent: .defaultIntent)
    else {
        return nil
    }

    return UIImage(cgImage: cgImage)
}

I’ve also renamed a few variables, used stride instead of size, replaced dimension with width and height so I could process non-square images, etc.
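The size-to-stride change deserves a note: they coincide for the 4-byte PixelData, but an array spaces its elements by stride, so buffer math using size undercounts whenever a struct has tail padding. A UIKit-free sketch (the Padded struct is a made-up example):

```swift
struct PixelData { var r, g, b, a: UInt8 }
// No padding: either size or stride works for this layout.
let pixelSize = MemoryLayout<PixelData>.size     // 4
let pixelStride = MemoryLayout<PixelData>.stride // 4

// A struct with tail padding shows the difference: a 4-byte UInt32 then one
// UInt8 gives size 5, but 4-byte alignment pads each array element to stride 8.
struct Padded { var weight: UInt32; var flag: UInt8 }
let paddedSize = MemoryLayout<Padded>.size       // 5
let paddedStride = MemoryLayout<Padded>.stride   // 8
```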

I also would advise against using a computed property for anything this computationally intense, so I gave this an asynchronous method, which you might use as follows:

let map = Map(with: image)
map.generateTintedImage { image in
    self.tintedImageView.image = image
}

Anyway, the above yields the following, where the rightmost image is your tinted image:

[Sample image]


Needless to say, to convert your matrix into your pixels array, you can just flatten the array of arrays:

let matrix: [[Pixel]] = [
    [0, 0, 125],
    [10, 50, 255],
    [90, 0, 255]
]
pixels = matrix.flatMap { $0 }
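flatMap concatenates the rows in order, so the flattened array comes out row-major, which matches the bytesPerRow layout CGImage expects. A quick UIKit-free check (treating Pixel as a UInt8 grayscale value, as in the question):

```swift
typealias Pixel = UInt8
let matrix: [[Pixel]] = [
    [0, 0, 125],
    [10, 50, 255],
    [90, 0, 255]
]
let flattened = matrix.flatMap { $0 }
// Row 1, then row 2, then row 3, left to right within each row.
```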

Here is a parallelized rendition which is also slightly more efficient with respect to the memory buffer:

private func tintedImage() -> UIImage? {
    let tintAlpha = tintColor.alpha
    let tintRed = tintColor.red / 255
    let tintGreen = tintColor.green / 255
    let tintBlue = tintColor.blue / 255

    let alpha = UInt8(tintAlpha)

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    let bitsPerComponent = 8
    let bytesPerRow = width * MemoryLayout<PixelData>.stride

    guard
        let context = CGContext(data: nil,
                                width: width,
                                height: height,
                                bitsPerComponent: bitsPerComponent,
                                bytesPerRow: bytesPerRow,
                                space: colorSpace,
                                bitmapInfo: bitmapInfo),
        let data = context.data
    else {
        return nil
    }

    let buffer = data.bindMemory(to: PixelData.self, capacity: width * height)

    DispatchQueue.concurrentPerform(iterations: height) { row in
        let start = width * row
        let end = start + width
        for i in start ..< end {
            let pixel = pixels[i]
            let red = UInt8(Float(pixel) * tintRed)
            let green = UInt8(Float(pixel) * tintGreen)
            let blue = UInt8(Float(pixel) * tintBlue)
            buffer[i] = PixelData(r: red, g: green, b: blue, a: alpha)
        }
    }

    return context.makeImage()
        .flatMap { UIImage(cgImage: $0) }
}

