How to Initialise CVPixelBufferRef in Swift

Correct way to draw/edit a CVPixelBuffer in Swift in iOS

You need to call CVPixelBufferLockBaseAddress(pixelBuffer, []) before creating the bitmap CGContext and CVPixelBufferUnlockBaseAddress(pixelBuffer, []) after you have finished drawing to the context.

Without locking the pixel buffer, CVPixelBufferGetBaseAddress() returns NULL. This causes your CGContext to allocate new memory to draw into, which is subsequently discarded.

Also double check your colour space. It's easy to mix up your components.

e.g.

let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
// Pick a colour space and alpha info that match the buffer's pixel
// format; these values are a typical choice for a 32BGRA buffer
let colorSpace = CGColorSpaceCreateDeviceRGB()
let alphaInfo = CGImageAlphaInfo.premultipliedFirst

guard
    CVPixelBufferLockBaseAddress(pixelBuffer, []) == kCVReturnSuccess,
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: colorSpace,
                            bitmapInfo: alphaInfo.rawValue)
else {
    return nil
}

context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))

CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

// `adapter` here is presumably an AVAssetWriterInputPixelBufferAdaptor
adapter?.append(pixelBuffer, withPresentationTime: time)

Swift - How do you cast a CVImageBufferRef as a CVPixelBufferRef

EDIT

This answer was given during the Swift beta-test period. It seems the solution is now simpler, as suggested by klinger:

let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)! // the result is optional, so unwrap it

However, I'll leave the previous answer for historical reasons :-)

PREVIOUS ANSWER

Look at the prerelease docs:

https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/BuildingCocoaApps/WorkingWithCocoaDataTypes.html#//apple_ref/doc/uid/TP40014216-CH6-XID_40

Specifically, this statement:

Remapped Types

When Swift imports Core Foundation types, the compiler remaps the
names of these types. The compiler removes Ref from the end of each
type name because all Swift classes are reference types, therefore the
suffix is redundant.

The Core Foundation CFTypeRef type completely remaps to the AnyObject
type. Wherever you would use CFTypeRef, you should now use AnyObject
in your code.

The first thing you might want to do is remove the "Ref" from the end of each type name. However, it's not strictly necessary, since the "Ref" names are typealiased to the "non-Ref" types.

Then, this statement should work. It may need some tuning, since I've never worked with CMSampleBufferGetImageBuffer, and for this reason I'm not sure about the first line (initializing the buffer):

var buf: CMSampleBuffer = // initialize the buffer
var anUnmanaged: Unmanaged<CVImageBuffer> = CMSampleBufferGetImageBuffer(buf)
var returnValue = anUnmanaged.takeUnretainedValue()

Or, more briefly:

var buf: CMSampleBuffer = // initialize the buffer
var anUnmanaged: CVImageBuffer = CMSampleBufferGetImageBuffer(buf).takeRetainedValue()

However, you asked for a CVPixelBuffer.
If the two types are fully compatible (I don't know the underlying API, so I assume that casting between CVPixelBuffer and CVImageBuffer in Objective-C is always safe), there is no automatic way to convert; you have to pass through an unsafe pointer.

The complete code is this:

var buf: CMSampleBuffer = // initialize the buffer
var anUnmanaged: Unmanaged<CVImageBuffer> = CMSampleBufferGetImageBuffer(buf)
var returnValue = anUnmanaged.takeUnretainedValue()

var anOpaque = anUnmanaged.toOpaque()
var pixelBuffer: CVPixelBuffer = Unmanaged<CVPixelBuffer>.fromOpaque(anOpaque).takeUnretainedValue()

I used takeUnretainedValue(), which doesn't consume a retain count, since CMSampleBufferGetImageBuffer() returns an unretained object.

Modifying CVPixelBuffer

Change the lock flags.

let lockFlags = CVPixelBufferLockFlags(rawValue: 0)

guard CVPixelBufferLockBaseAddress(pixelBuffer, lockFlags) == kCVReturnSuccess else {
return pixelBuffer
}

// ...

CVPixelBufferUnlockBaseAddress(pixelBuffer, lockFlags)
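
For completeness, here's a small sketch of what a modification step might look like between those calls (my addition, not from the original answer; it assumes a single-plane format such as 32BGRA):

import CoreVideo
import Foundation

// Zero out every byte of the buffer, i.e. transparent black in BGRA
// (a placeholder for whatever modification you actually need)
func clear(_ pixelBuffer: CVPixelBuffer) {
    let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
    guard CVPixelBufferLockBaseAddress(pixelBuffer, lockFlags) == kCVReturnSuccess,
          let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    memset(baseAddress, 0, CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, lockFlags)
}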

Get pixel value from CVPixelBufferRef in Swift

baseAddress is an untyped mutable pointer; in current Swift it comes through as an UnsafeMutableRawPointer (formerly UnsafeMutablePointer<Void>). You can easily access the memory once you have bound the pointer to a more specific type:

// Bind the raw base address to a typed pointer of the appropriate type
let byteBuffer = baseAddress!.assumingMemoryBound(to: UInt8.self)

// read the data (returns a value of type UInt8)
let firstByte = byteBuffer[0]

// write data
byteBuffer[3] = 90

Make sure you use the correct type (8, 16 or 32 bit unsigned int). It depends on the video format. Most likely it's 8 bit.
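
If in doubt, you can query the format at run time. A small sketch (my addition, using the standard Core Video constants):

let formatType = CVPixelBufferGetPixelFormatType(pixelBuffer)
switch formatType {
case kCVPixelFormatType_32BGRA:
    print("32-bit BGRA: read UInt32 values")
case kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
     kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
    print("bi-planar 4:2:0: read UInt8 values, plane by plane")
default:
    print("some other format")
}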

Update on buffer formats:

You can specify the format when you initialize the AVCaptureVideoDataOutput instance. You basically have the choice of:

  • BGRA: a single plane where the blue, green, red and alpha values for each pixel are packed into one 32-bit integer
  • 420YpCbCr8BiPlanarFullRange: Two planes, the first containing a byte for each pixel with the Y (luma) value, the second containing the Cb and Cr (chroma) values for groups of pixels
  • 420YpCbCr8BiPlanarVideoRange: The same as 420YpCbCr8BiPlanarFullRange but the Y values are restricted to the range 16 – 235 (for historical reasons)

If you're interested in the color values and speed (or rather maximum frame rate) is not an issue, then go for the simpler BGRA format. Otherwise take one of the more efficient native video formats.
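
For reference, here's a minimal sketch of requesting the BGRA format (my addition; it assumes an existing AVCaptureSession named captureSession):

import AVFoundation

let videoOutput = AVCaptureVideoDataOutput()
// Ask for single-plane BGRA frames instead of the native bi-planar format
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}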

If you have two planes, you must get the base address of the plane you want (see the video format example below):

Video format example

let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, [])
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let byteBuffer = baseAddress!.assumingMemoryBound(to: UInt8.self)

// Get the luma value for pixel (43, 17)
let luma = byteBuffer[17 * bytesPerRow + 43]

CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
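
To read the chroma plane as well, a sketch along the same lines (my addition; it assumes a bi-planar 420YpCbCr8 format, where plane 1 holds interleaved Cb/Cr pairs, and it belongs inside the same lock/unlock pair):

let chromaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
let chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
let chromaBuffer = chromaBase!.assumingMemoryBound(to: UInt8.self)

// Get the Cb/Cr values for pixel (43, 17); both coordinates are halved
// because the chroma plane is subsampled 2x2
let cb = chromaBuffer[(17 / 2) * chromaBytesPerRow + (43 / 2) * 2]
let cr = chromaBuffer[(17 / 2) * chromaBytesPerRow + (43 / 2) * 2 + 1]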

BGRA example

let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, [])
let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
// The row stride must be counted in 32-bit units here, hence the / 4
let int32PerRow = CVPixelBufferGetBytesPerRow(pixelBuffer) / 4
let int32Buffer = baseAddress!.assumingMemoryBound(to: UInt32.self)

// Get the BGRA value for pixel (43, 17)
let bgraPixel = int32Buffer[17 * int32PerRow + 43]

CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
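
Since all four channels are packed into that 32-bit value, here's a short sketch of unpacking them (my addition; on little-endian iOS hardware the in-memory byte order B, G, R, A puts blue in the least significant byte):

let blue  = UInt8( bgraPixel        & 0xFF)
let green = UInt8((bgraPixel >> 8)  & 0xFF)
let red   = UInt8((bgraPixel >> 16) & 0xFF)
let alpha = UInt8((bgraPixel >> 24) & 0xFF)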

How to create CVPixelBuffer attributes dictionary in Swift

Just use NSString instead:

let attributes: [NSString: NSNumber] = // ... the rest is the same

That, after all, is what you were really doing in the Objective-C code; it's just that Objective-C was bridge-casting for you. A CFString cannot be the key in an Objective-C dictionary any more than in a Swift dictionary.

Another (perhaps Swiftier) way is to write this:

let attributes: [NSObject: AnyObject] = [
    kCVPixelBufferCGImageCompatibilityKey: true,
    kCVPixelBufferCGBitmapContextCompatibilityKey: true
]

Note that by doing that, we don't have to wrap true in an NSNumber either; that will be taken care of for us by Swift's bridging automatically.
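
To put the attributes to work, here's a minimal sketch of creating a buffer with them in current Swift (my addition; the 640x480 size and BGRA format are illustrative assumptions):

import CoreVideo

let attributes: [String: Any] = [
    kCVPixelBufferCGImageCompatibilityKey as String: true,
    kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
]

var pixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                                 kCVPixelFormatType_32BGRA,
                                 attributes as CFDictionary,
                                 &pixelBuffer)
assert(status == kCVReturnSuccess && pixelBuffer != nil)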

Convert Image to CVPixelBuffer for Machine Learning Swift

You don't need to do a bunch of image mangling yourself to use a Core ML model with an image — the new Vision framework can do that for you.

import Vision
import CoreML

let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
try handler.perform([request])

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}

The WWDC17 session on Vision should have a bit more info — it's tomorrow afternoon.


