Convert CMSampleBufferRef to UIImage

Convert CMSampleBuffer to UIImage

The conversion is simple:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    let image = self.convert(cmage: ciImage)
}

// Convert CIImage to UIImage
func convert(cmage: CIImage) -> UIImage {
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(cmage, from: cmage.extent)!
    let image = UIImage(cgImage: cgImage)
    return image
}

Convert CMSampleBufferRef to UIImage

Your best bet will be to set the capture video data output's videoSettings to a dictionary that specifies the pixel format you want, which you'll need to set to some variation on RGB that CGBitmapContext can handle.

The documentation has a list of all of the pixel formats that Core Video can process. Only a tiny subset of those are supported by CGBitmapContext. The format that the code you found on the internet is expecting is kCVPixelFormatType_32BGRA, but that might have been written for Macs—on iOS devices, kCVPixelFormatType_32ARGB (big-endian) might be faster. Try them both, on the device, and compare frame rates.
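
For reference, here is a minimal Swift sketch of that configuration. It assumes you already have an AVCaptureSession named session with a camera input attached; the output and session names are illustrative, not part of the original answer.

import AVFoundation

// Ask for 32BGRA frames so a CGBitmapContext can consume the pixel data directly.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]

if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}

Swap in kCVPixelFormatType_32ARGB and compare frame rates on the device, as suggested above.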

How to convert a CVImageBufferRef to UIImage

The way that you are passing the baseAddress along presumes that the image data is in the form

ACCC

(where A is alpha and C is some color component: R, G, or B).

If you've set up your AVCaptureSession to capture the video frames in their native format, you're more than likely getting the video data back in planar YUV420 format. To do what you're attempting here, the easiest thing is probably to specify that you want the video frames captured in kCVPixelFormatType_32RGBA. Apple recommends capturing in kCVPixelFormatType_32BGRA if you capture in a non-planar format at all; the reasoning isn't stated, but I can reasonably assume it's due to performance considerations.

Caveat: I've not done this, and am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for this actually working, but I /can/ tell you that the way you are doing things right now reliably will not work due to the pixel format that you are (probably) capturing the video frames as.

Convert a CMSampleBuffer into a UIImage

This is a solution for Swift 3.0 that extends CMSampleBuffer with a computed property returning an optional UIImage.

import AVFoundation
import UIKit

extension CMSampleBuffer {
    var uiImage: UIImage? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }

        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        // Unlock even on the early-return paths below
        defer { CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0)) }

        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        // byteOrder32Little + noneSkipFirst matches a kCVPixelFormatType_32BGRA buffer
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)

        guard let context = CGContext(data: baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo.rawValue) else { return nil }
        guard let cgImage = context.makeImage() else { return nil }

        return UIImage(cgImage: cgImage)
    }
}
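
A minimal usage sketch, matching the delegate callback style used elsewhere in these answers; the imageView property is hypothetical:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // The computed property returns nil when no image buffer is attached.
    guard let image = sampleBuffer.uiImage else { return }

    // UIKit work must happen on the main queue; `imageView` is a placeholder UIImageView.
    DispatchQueue.main.async {
        self.imageView.image = image
    }
}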

UIImage created from CMSampleBufferRef not displayed in UIImageView?

I had the same problem ... but I found this old post, and its method of creating the CGImageRef works!

http://forum.unity3d.com/viewtopic.php?p=300819

Here's a working sample:

The app has an image view member, UIImageView *theImage;

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // ... just an example of how to get an image out of this ...

    CGImageRef cgImage = [self imageFromSampleBuffer:sampleBuffer];
    theImage.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}

// Create a CGImageRef from sample buffer data
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}

How to convert CMSampleBufferRef/CIImage/UIImage into pixels e.g. uint8_t[]

Here are some pointers you can use to search for more info. It's all nicely documented, so you shouldn't have an issue.

int convertCMSampleBufferToPixelArray(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer == NULL) {
        return -1;
    }

    // Get the address of the image buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the size
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Get the bytes per row
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // At `data` you have bytesPerRow * height bytes of image data

    // To get pixel info you can call CVPixelBufferGetPixelFormatType, ...
    // you can call CVImageBufferGetColorSpace and inspect it, ...

    // When you're done, unlock the base address
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return 0;
}

There are a couple of things you should be aware of.

The first one is that the buffer can be planar. Check CVPixelBufferIsPlanar, CVPixelBufferGetPlaneCount, CVPixelBufferGetBytesPerRowOfPlane, etc.
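
As an aside, here is a quick sketch of that planar check, written in Swift for brevity (the same CoreVideo calls exist in C; the function name is illustrative):

import CoreVideo

func describePlanes(of pixelBuffer: CVPixelBuffer) {
    if CVPixelBufferIsPlanar(pixelBuffer) {
        // Planar formats (e.g. YUV 420) keep each plane at its own base address and stride.
        for plane in 0..<CVPixelBufferGetPlaneCount(pixelBuffer) {
            let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, plane)
            let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane)
            let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane)
            print("plane \(plane): \(width)x\(height), \(bytesPerRow) bytes per row")
        }
    } else {
        // Packed (non-planar) formats expose a single base address and stride.
        print("packed buffer: \(CVPixelBufferGetBytesPerRow(pixelBuffer)) bytes per row")
    }
}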

The second one is that you have to calculate the pixel size based on CVPixelBufferGetPixelFormatType. Something like:

OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);

size_t pixelSize;
switch (pixelFormat) {
    case kCVPixelFormatType_32BGRA:
    case kCVPixelFormatType_32ARGB:
    case kCVPixelFormatType_32ABGR:
    case kCVPixelFormatType_32RGBA:
        pixelSize = 4;
        break;
    // + other cases
}

Let's say that the buffer is not planar and:

  • CVPixelBufferGetWidth returns 200 (pixels)
  • Your pixelSize is 4 (calculated bytes per row is 200 * 4 = 800)
  • CVPixelBufferGetBytesPerRow can return anything >= 800

In other words, the pointer you have is not necessarily a pointer to a tightly packed buffer: each row can be padded. If you need the row data you have to do something like this:

uint8_t *data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

// Get the size
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

size_t pixelSize = 4; // Let's pretend it's the calculated pixel size
size_t realRowSize = width * pixelSize;

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

for (size_t row = 0; row < height; row++) {
    // bytesPerRow acts like an offset where the next row starts
    // bytesPerRow can be >= realRowSize
    uint8_t *rowData = data + row * bytesPerRow;

    // realRowSize = how many bytes are available for this row
    // copy them somewhere
}

If you'd like a contiguous buffer, you have to allocate one and copy each row into it. How many bytes to allocate? CVPixelBufferGetDataSize.
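
A minimal Swift sketch of that row-by-row copy, assuming a packed 32-bit format such as kCVPixelFormatType_32BGRA (the function name and the fixed pixelSize are assumptions for illustration):

import CoreVideo
import Foundation

// Copies a packed (non-planar) pixel buffer into a tightly packed Data, dropping row padding.
func packedPixelData(from pixelBuffer: CVPixelBuffer) -> Data? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let pixelSize = 4                      // assumed: 4 bytes per pixel (e.g. 32BGRA)
    let realRowSize = width * pixelSize

    var output = Data(capacity: realRowSize * height)
    for row in 0..<height {
        // bytesPerRow can be larger than realRowSize; copy only the meaningful bytes.
        let rowStart = base.advanced(by: row * bytesPerRow)
        output.append(UnsafeBufferPointer(start: rowStart.assumingMemoryBound(to: UInt8.self),
                                          count: realRowSize))
    }
    return output
}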

Make an UIImage from a CMSampleBuffer

With Swift 3 and iOS 10 AVCapturePhotoOutput:
Imports:

import UIKit
import CoreData
import CoreMotion
import AVFoundation

Create a UIView for the preview and link it to the main class:

  @IBOutlet var preview: UIView!

Create this to set up the camera session (kCVPixelFormatType_32BGRA is important!):

lazy var cameraSession: AVCaptureSession = {
    let s = AVCaptureSession()
    s.sessionPreset = AVCaptureSessionPresetHigh
    return s
}()

lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let previewl: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.cameraSession)
    previewl.frame = self.preview.bounds
    return previewl
}()

func setupCameraSession() {
    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice

    do {
        let deviceInput = try AVCaptureDeviceInput(device: captureDevice)

        cameraSession.beginConfiguration()

        if (cameraSession.canAddInput(deviceInput) == true) {
            cameraSession.addInput(deviceInput)
        }

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true

        if (cameraSession.canAddOutput(dataOutput) == true) {
            cameraSession.addOutput(dataOutput)
        }

        cameraSession.commitConfiguration()

        let queue = DispatchQueue(label: "fr.popigny.videoQueue", attributes: [])
        dataOutput.setSampleBufferDelegate(self, queue: queue)

    }
    catch let error as NSError {
        NSLog("\(error), \(error.localizedDescription)")
    }
}

In viewWillAppear:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    setupCameraSession()
}

In viewDidAppear:

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    preview.layer.addSublayer(previewLayer)
    cameraSession.startRunning()
}

Create a function to capture the output:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

    // Here you collect each frame and process it
    let ts: CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    self.mycapturedimage = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
}

Here is the code that converts a kCVPixelFormatType_32BGRA CMSampleBuffer to a UIImage. The key thing is the bitmapInfo, which must correspond to 32BGRA: 32-bit little-endian byte order with premultiplied-first alpha info:

func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

    // Get the base address of the pixel buffer
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)

    // Get the number of bytes per row for the pixel buffer
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)

    // Get the pixel buffer width and height
    let width = CVPixelBufferGetWidth(imageBuffer!)
    let height = CVPixelBufferGetHeight(imageBuffer!)

    // Create a device-dependent RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Create a bitmap graphics context with the sample buffer data
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    //let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)

    // Create a Quartz image from the pixel data in the bitmap graphics context
    let quartzImage = context?.makeImage()

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

    // Create an image object from the Quartz image
    let image = UIImage(cgImage: quartzImage!)

    return image
}

