How to turn a CVPixelBuffer into a UIImage?
First of all, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view if that's where the data is coming from and you've no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image — a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.
UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.
I've had success just dodging around the issue with:
// Wrap the pixel buffer in a CIImage, then use a CIContext to rasterise
// it into a CGImage over the buffer's full extent.
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
    createCGImage:ciImage
    fromRect:CGRectMake(0, 0,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer))];
UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
// createCGImage returns a +1 reference, so release it once the UIImage owns a copy.
CGImageRelease(videoImage);
Which gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
How can you make a CVPixelBuffer directly from a CIImage instead of a UIImage in Swift?
Create a CIContext and use it to render the CIImage directly to your CVPixelBuffer using CIContext.render(_: CIImage, to buffer: CVPixelBuffer).
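A minimal sketch of that render call. The buffer size, pixel format, and the solid-colour test image are illustrative assumptions, not part of the original answer; in practice the CIImage would come from your own pipeline.

```swift
import CoreImage
import CoreVideo

// Create a destination pixel buffer (640x480 BGRA is an arbitrary choice here).
var pixelBuffer: CVPixelBuffer?
let attrs: [CFString: Any] = [
    kCVPixelBufferCGImageCompatibilityKey: true,
    kCVPixelBufferCGBitmapContextCompatibilityKey: true
]
CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                    kCVPixelFormatType_32BGRA,
                    attrs as CFDictionary, &pixelBuffer)

if let buffer = pixelBuffer {
    // Any CIImage works; a cropped solid colour keeps the example self-contained.
    let ciImage = CIImage(color: .red)
        .cropped(to: CGRect(x: 0, y: 0, width: 640, height: 480))
    // The CIContext rasterises the CIImage's recipe straight into the buffer,
    // with no UIImage or CGImage intermediary.
    let context = CIContext()
    context.render(ciImage, to: buffer)
}
```

Reusing one CIContext across frames is cheaper than creating one per render, since context creation is comparatively expensive.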
Question regarding UIImage - CVPixelBuffer - UIImage conversion
You can also use CGImage objects with Core ML, but you have to create the MLFeatureValue object by hand and then put it into an MLFeatureProvider to give it to the model. But that only takes care of the model input, not the output. Another option is to use the code from my CoreMLHelpers repo.
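A hedged sketch of the by-hand route described above, using the MLFeatureValue image initializer (iOS 13+/macOS 10.15+). The feature name "image" and the 224x224 size are assumptions; use the name and constraint from your model's own input description.

```swift
import CoreML
import CoreGraphics

// Wrap a CGImage as a Core ML image feature and package it as a provider
// the model can consume. Throws if the image can't be converted.
func makeInput(from cgImage: CGImage) throws -> MLFeatureProvider {
    let featureValue = try MLFeatureValue(
        cgImage: cgImage,
        pixelsWide: 224,           // assumed model input width
        pixelsHigh: 224,           // assumed model input height
        pixelFormatType: kCVPixelFormatType_32BGRA,
        options: nil)
    // "image" is a placeholder feature name; match your model's input.
    return try MLDictionaryFeatureProvider(dictionary: ["image": featureValue])
}
```

You would then pass the provider to `model.prediction(from:)` and unpack the output features yourself, which is the part the answer notes is not covered by this approach.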
Displaying a cropped CVPixelBuffer as a UIImage in a SwiftUI view
Solved. Yes, indeed, the wrong line was where I created the UIImage.
This is the correct way of creating a UIImage:
if observationWidthBiggherThan180 {
    // Render the pixel buffer's full extent into a CGImage, then wrap it.
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext(options: nil)
    // Note: the force unwrap will crash if rendering fails; guard it in production code.
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)!
    let myImage = UIImage(cgImage: cgImage)
    sendImageDelegate.sendImage(image: myImage)
}