Bitwise operations with CGBitmapInfo and CGImageAlphaInfo

You have the right equivalent Swift code:

bitmapInfo &= ~CGBitmapInfo.AlphaInfoMask
bitmapInfo |= CGBitmapInfo(CGImageAlphaInfo.NoneSkipFirst.rawValue)

It's a little strange because CGImageAlphaInfo isn't actually a bitmask -- it's just a UInt32 enum (or a CF_ENUM/NS_ENUM with type uint32_t, in C parlance), with values from 0 through 7.

What's actually happening is that your first line clears the lowest five bits of bitmapInfo, which is a bitmask (aka RawOptionSetType in Swift), since CGBitmapInfo.AlphaInfoMask is 31, or 0b11111. Then your second line sticks the raw value of the CGImageAlphaInfo enum into those cleared bits.

I haven't seen enums and bitmasks combined like this anywhere else, which may explain why there isn't really any documentation for it. Since CGImageAlphaInfo is an enum, its values are mutually exclusive, so something like this wouldn't make any sense:

bitmapInfo &= ~CGBitmapInfo.AlphaInfoMask
bitmapInfo |= CGBitmapInfo(CGImageAlphaInfo.NoneSkipFirst.rawValue)
bitmapInfo |= CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue)
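
If you ever need to go the other way, you can read the alpha info back out by masking off those same five bits -- a minimal sketch, matching the Swift-era syntax above:

let alphaBits = bitmapInfo & CGBitmapInfo.AlphaInfoMask          // keep only the low five bits
let alphaInfo = CGImageAlphaInfo(rawValue: alphaBits.rawValue)   // optional, since not every UInt32 is a valid case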

Combining CGBitmapInfo and CGImageAlphaInfo in Swift

You can make it a little simpler:

let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
    .union(.ByteOrder32Little)

You unfortunately can't get away from converting the CGImageAlphaInfo into a CGBitmapInfo. That's just a weakness in the current API. But once you have it, you can use .union to combine it with other values. And once the enum type is known, you don't have to keep repeating it.

It's weird to me that there's no operator available here. I've opened a radar for that, and included an | implementation. http://www.openradar.me/23516367

@warn_unused_result
public func |<T: SetAlgebraType>(lhs: T, rhs: T) -> T {
    return lhs.union(rhs)
}

public func |=<T: SetAlgebraType>(inout lhs: T, rhs: T) {
    lhs.unionInPlace(rhs)
}

let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue) | .ByteOrder32Little
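
And the mutating variant reads naturally too (a sketch using the |= defined above):

var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
bitmapInfo |= .ByteOrder32Little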

CGBitmapInfo alpha mask after Swift 2.0

error: cannot convert value of type 'CGBitmapInfo' to expected argument type 'UInt32'

Swift 2.0 actually expects a UInt32 here rather than a CGBitmapInfo value, so you should pass the rawValue of your CGBitmapInfo:

CGBitmapContextCreate(
    nil,
    Int(ceil(pixelSize.width)),
    Int(ceil(pixelSize.height)),
    CGImageGetBitsPerComponent(originalImageRef),
    0,
    colorSpace,
    bitmapInfo.rawValue)

https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CGBitmapContext/#//apple_ref/c/func/CGBitmapContextCreate
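
For completeness, a sketch of how such a bitmapInfo might be built before that call (the alpha choice here is just an example, not taken from the question):

let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)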

Using C-style unsigned char array and bitwise operators in Swift

Have a look at Interacting with C APIs.

Mostly this section:

C Mutable Pointers

When a function is declared as taking a CMutablePointer
argument, it can accept any of the following:

  • nil, which is passed as a null pointer
  • A CMutablePointer value
  • An in-out expression whose operand is a stored lvalue of type Type,
    which is passed as the address of the lvalue
  • An in-out Type[] value,
    which is passed as a pointer to the start of the array, and
    lifetime-extended for the duration of the call

If you have declared a function like this one:

SWIFT

func takesAMutablePointer(x: CMutablePointer<Float>) { /*...*/ }

You can call it in any of the following ways:

SWIFT

var x: Float = 0.0 
var p: CMutablePointer<Float> = nil
var a: Float[] = [1.0, 2.0, 3.0]
takesAMutablePointer(nil)
takesAMutablePointer(p)
takesAMutablePointer(&x)
takesAMutablePointer(&a)

So your code becomes:

// Allocate the buffer up front; an empty array would crash when subscripting index 16.
var advertisementBytes = CUnsignedChar[](count: 21, repeatedValue: 0)
self.proximityUUID.getUUIDBytes(&advertisementBytes)
advertisementBytes[16] = CUnsignedChar(self.major >> 8)
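
Presumably you'd then fill in the remaining bytes the same way -- a sketch, assuming the usual iBeacon layout of 16 UUID bytes followed by big-endian major and minor:

advertisementBytes[17] = CUnsignedChar(self.major & 0xFF)
advertisementBytes[18] = CUnsignedChar(self.minor >> 8)
advertisementBytes[19] = CUnsignedChar(self.minor & 0xFF)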

Swift bitwise shift operations giving compile-time error

Swift's type safety means you can't do bitwise operations between values of different numeric types. You're trying to shift UInt8s into a UInt64, which isn't supported. Try converting the shift amount to UInt64 instead:

func addSquareAt(x: UInt8, y: UInt8) {
    squares |= 1 << UInt64(x + y * 8)
}
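
A minimal self-contained version, assuming squares is a UInt64 bitboard:

var squares: UInt64 = 0

func addSquareAt(x: UInt8, y: UInt8) {
    // Both operands of << are now UInt64, so the shift type-checks.
    squares |= 1 << UInt64(x + y * 8)
}

addSquareAt(3, y: 4)    // sets bit 35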

CGImageRef in OS X application?

Did you add ApplicationServices.framework to your target?

Converting image to binary in Swift

A couple of observations:

  1. Make sure you're doing your test on a device with a release build (or at least with optimizations turned on). That alone makes it much faster. On an iPhone 7+ it reduced the conversion of a 1920 × 1080 pixel color image to grayscale from 1.7 seconds to less than 0.1 seconds.

  2. You might want to use DispatchQueue.concurrentPerform to process pixels concurrently. On my iPhone 7+, that made it about twice as fast.

In my experience Core Image filters weren't much faster, but you can consider vImage or Metal if you need more speed. Unless you're dealing with extraordinarily large images, though, the response time of optimized (and possibly concurrent) simple Swift code might be sufficient.

An unrelated observation: I'm not sure how your conversion to black and white works, but often you'd want to calculate the relative luminance of the color pixel (e.g. 0.2126 × red + 0.7152 × green + 0.0722 × blue). Certainly when converting a color image to grayscale you'd do something like that to get an image that more closely represents what the human eye sees, and I'd personally do something like that when converting to black and white, too.

FYI, my Swift 3/4 color-to-grayscale routine looks like:

func blackAndWhite(image: UIImage, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // get information about image

        let imageref = image.cgImage!
        let width = imageref.width
        let height = imageref.height

        // create new bitmap context

        let bitsPerComponent = 8
        let bytesPerPixel = 4
        let bytesPerRow = width * bytesPerPixel
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = Pixel.bitmapInfo
        let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)!

        // draw image to context

        let rect = CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))
        context.draw(imageref, in: rect)

        // manipulate binary data

        guard let buffer = context.data else {
            print("unable to get context data")
            completion(nil)
            return
        }

        let pixels = buffer.bindMemory(to: Pixel.self, capacity: width * height)

        DispatchQueue.concurrentPerform(iterations: height) { row in
            for col in 0 ..< width {
                let offset = row * width + col

                let red = Float(pixels[offset].red)
                let green = Float(pixels[offset].green)
                let blue = Float(pixels[offset].blue)
                let alpha = pixels[offset].alpha
                let luminance = UInt8(0.2126 * red + 0.7152 * green + 0.0722 * blue)
                pixels[offset] = Pixel(red: luminance, green: luminance, blue: luminance, alpha: alpha)
            }
        }

        // return the image

        let outputImage = context.makeImage()!
        completion(UIImage(cgImage: outputImage, scale: image.scale, orientation: image.imageOrientation))
    }
}

struct Pixel: Equatable {
    private var rgba: UInt32

    var red: UInt8 {
        return UInt8((rgba >> 24) & 255)
    }

    var green: UInt8 {
        return UInt8((rgba >> 16) & 255)
    }

    var blue: UInt8 {
        return UInt8((rgba >> 8) & 255)
    }

    var alpha: UInt8 {
        return UInt8((rgba >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        rgba = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: Pixel, rhs: Pixel) -> Bool {
        return lhs.rgba == rhs.rgba
    }
}
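
You'd call it along these lines (a sketch; the image view and source image are assumed):

blackAndWhite(image: originalImage) { grayImage in
    DispatchQueue.main.async {
        self.imageView.image = grayImage
    }
}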

Clearly, if you want to convert to pure black and white rather than grayscale, adjust the algorithm accordingly (as sketched below), but this illustrates a concurrent image-buffer manipulation routine.
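
For example, a hypothetical thresholding variant of the inner loop's luminance lines (128 is an arbitrary cutoff; tune it for your images):

let luminance = 0.2126 * red + 0.7152 * green + 0.0722 * blue
let bw: UInt8 = luminance < 128 ? 0 : 255
pixels[offset] = Pixel(red: bw, green: bw, blue: bw, alpha: alpha)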


While the above is reasonably fast (again, in optimized release builds), using vImage is even faster. The following is adapted from Converting Color Images to Grayscale:

func grayscale(of image: UIImage) -> UIImage? {
    guard var source = sourceBuffer(for: image) else { return nil }

    defer { free(source.data) }

    var destination = destinationBuffer(for: source)

    // Free the destination buffer on every exit path, including the
    // early return below if the matrix multiply fails.
    defer { free(destination.data) }

    // Declare the three coefficients that model the eye's sensitivity
    // to color.
    let redCoefficient: Float = 0.2126
    let greenCoefficient: Float = 0.7152
    let blueCoefficient: Float = 0.0722

    // Create a 1D matrix containing the three luma coefficients that
    // specify the color-to-grayscale conversion.
    let divisor: Int32 = 0x1000
    let fDivisor = Float(divisor)

    var coefficients = [
        Int16(redCoefficient * fDivisor),
        Int16(greenCoefficient * fDivisor),
        Int16(blueCoefficient * fDivisor)
    ]

    // Use the matrix of coefficients to compute the scalar luminance by
    // returning the dot product of each RGB pixel and the coefficients
    // matrix.
    let preBias: [Int16] = [0, 0, 0, 0]
    let postBias: Int32 = 0

    let result = vImageMatrixMultiply_ARGB8888ToPlanar8(
        &source,
        &destination,
        &coefficients,
        divisor,
        preBias,
        postBias,
        vImage_Flags(kvImageNoFlags))

    guard result == kvImageNoError else { return nil }

    // Create a 1-channel, 8-bit grayscale format that's used to
    // generate a displayable image.
    var monoFormat = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 8,
        colorSpace: Unmanaged.passRetained(CGColorSpaceCreateDeviceGray()),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
        version: 0,
        decode: nil,
        renderingIntent: .defaultIntent)

    // Create a Core Graphics image from the grayscale destination buffer.
    let cgImage = vImageCreateCGImageFromBuffer(
        &destination,
        &monoFormat,
        nil,
        nil,
        vImage_Flags(kvImageNoFlags),
        nil)?.takeRetainedValue()
    return cgImage.map { UIImage(cgImage: $0) }
}

func sourceBuffer(for image: UIImage) -> vImage_Buffer? {
    guard let cgImage = image.cgImage else { return nil }

    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue).union(.byteOrder32Big)

    var format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: Unmanaged.passRetained(CGColorSpaceCreateDeviceRGB()),
        bitmapInfo: bitmapInfo,
        version: 0,
        decode: nil,
        renderingIntent: .defaultIntent)

    var sourceImageBuffer = vImage_Buffer()
    vImageBuffer_InitWithCGImage(
        &sourceImageBuffer,
        &format,
        nil,
        cgImage,
        vImage_Flags(kvImageNoFlags))

    return sourceImageBuffer
}

func destinationBuffer(for sourceBuffer: vImage_Buffer) -> vImage_Buffer {
    var destinationBuffer = vImage_Buffer()

    vImageBuffer_Init(
        &destinationBuffer,
        sourceBuffer.height,
        sourceBuffer.width,
        8,
        vImage_Flags(kvImageNoFlags))

    return destinationBuffer
}
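
Usage is then straightforward (a sketch; image is whatever UIImage you're converting):

if let grayImage = grayscale(of: image) {
    imageView.image = grayImage
}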

