How can you make a CVPixelBuffer directly from a CIImage instead of a UIImage in Swift?
Create a CIContext and use it to render the CIImage directly into your CVPixelBuffer with CIContext.render(_:to:).
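As a sketch of that approach (the attribute keys and the 32BGRA pixel format are one reasonable choice, not the only one):

```swift
import CoreImage
import CoreVideo

// Sketch: render a CIImage into a newly created CVPixelBuffer.
// `ciImage` is assumed to come from elsewhere in your pipeline.
func makePixelBuffer(from ciImage: CIImage,
                     context: CIContext = CIContext()) -> CVPixelBuffer? {
    let width = Int(ciImage.extent.width)
    let height = Int(ciImage.extent.height)
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width, height,
                                     kCVPixelFormatType_32BGRA,
                                     attrs as CFDictionary,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
    // render(_:to:) rasterises the CIImage recipe straight into the buffer.
    context.render(ciImage, to: buffer)
    return buffer
}
```

Reuse the CIContext across frames if you call this repeatedly; creating one per call is expensive.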
Question regarding UIImage - CVPixelBuffer - UIImage conversion
You can also use CGImage objects with Core ML, but you have to create the MLFeatureValue object by hand and then put it into an MLFeatureProvider to give it to the model. But that only takes care of the model input, not the output.
Another option is to use the code from my CoreMLHelpers repo.
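With a pixel buffer in hand, the by-hand route can be sketched like this; the input name "image" is an assumption, so check your model's description for the real name:

```swift
import CoreML

// Sketch: hand-building a feature provider for a model whose image
// input is assumed to be named "image".
func makeInput(from pixelBuffer: CVPixelBuffer) throws -> MLFeatureProvider {
    let value = MLFeatureValue(pixelBuffer: pixelBuffer)
    return try MLDictionaryFeatureProvider(dictionary: ["image": value])
}
```

You would then pass the provider to `model.prediction(from:)`, which hands back another MLFeatureProvider that you unpack yourself.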
Convert Image to CVPixelBuffer for Machine Learning Swift
You don't need to do a bunch of image mangling yourself to use a Core ML model with an image — the new Vision framework can do that for you.
import Vision
import CoreML

let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
try handler.perform([request])

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
The WWDC17 session on Vision should have a bit more info — it's tomorrow afternoon.
How to turn a CVPixelBuffer into a UIImage?
First of all, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view, if that's where the data is coming from and you've no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.
UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.
I've had success just dodging around the issue with:
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
    createCGImage:ciImage
    fromRect:CGRectMake(0, 0,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer))];
UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);
Which gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
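For reference, here is a Swift sketch of the same CGImage route, assuming the pixel buffer comes from your capture pipeline:

```swift
import CoreImage
import UIKit

// Sketch: CVPixelBuffer -> CIImage -> CGImage -> UIImage,
// with an explicit output rectangle for the rasterisation.
func uiImage(from pixelBuffer: CVPixelBuffer,
             context: CIContext = CIContext()) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let rect = CGRect(x: 0, y: 0,
                      width: CVPixelBufferGetWidth(pixelBuffer),
                      height: CVPixelBufferGetHeight(pixelBuffer))
    guard let cgImage = context.createCGImage(ciImage, from: rect) else { return nil }
    return UIImage(cgImage: cgImage)
}
```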
Convert UIImage to CVImageBufferRef
It sounds like it might be that relationship. Possibly save it as a JPEG in plain RGB instead of a PNG with indexed colors; the indexed palette may be what's breaking the conversion.
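If you do need the conversion itself, a common sketch is to draw the UIImage into a CGContext backed by a fresh pixel buffer; the format and attribute choices here are illustrative, not the only valid ones:

```swift
import UIKit
import CoreVideo

// Sketch: draw a UIImage into a new CVPixelBuffer via a CGContext
// that writes straight into the buffer's base address.
func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32ARGB,
                              attrs as CFDictionary, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
          let cgImage = image.cgImage else { return nil }

    // Drawing decodes the source (indexed PNG or otherwise) into plain RGB.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}
```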
How to convert CMSampleBuffer to OpenCV's Mat instance in swift
First, convert CMSampleBuffer To UIImage.
extension CMSampleBuffer {
    func asUIImage() -> UIImage? {
        guard let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(self) else {
            return nil
        }
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        return convertToUiImage(ciImage: ciImage)
    }

    func convertToUiImage(ciImage: CIImage) -> UIImage? {
        // Note: creating a CIContext per call is expensive; consider
        // reusing a shared context if you process many frames.
        let context = CIContext(options: nil)
        context.clearCaches()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}
Then you can easily convert the UIImage to a Mat, do whatever processing you need, and return a UIImage. OpenCVWrapper.h file:
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
NS_ASSUME_NONNULL_BEGIN
@interface OpenCVWrapper : NSObject
+ (UIImage *)someOperation:(UIImage *)uiImage;
@end
NS_ASSUME_NONNULL_END
OpenCVWrapper.mm file
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/core/types_c.h>
#import "OpenCVWrapper.h"
#import <opencv2/Mat.h>
#include <iostream>
#include <random>
#include <vector>
#include <chrono>
#include <stdint.h>
@implementation OpenCVWrapper

+ (UIImage *)someOperation:(UIImage *)uiImage {
    cv::Mat sourceImage;
    // UIImageToMat / MatToUIImage come from <opencv2/imgcodecs/ios.h>
    UIImageToMat(uiImage, sourceImage);
    // Do whatever you need with sourceImage here
    return MatToUIImage(sourceImage);
}

@end
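To call the wrapper from Swift, expose OpenCVWrapper.h through the project's Objective-C bridging header. A minimal usage sketch, where the sample buffer is assumed to come from a capture delegate and asUIImage() is the CMSampleBuffer extension defined earlier:

```swift
// In the bridging header (e.g. MyApp-Bridging-Header.h):
//   #import "OpenCVWrapper.h"

import UIKit
import CoreMedia

func process(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
    // Convert the frame to UIImage, then hand it to the OpenCV wrapper.
    guard let image = sampleBuffer.asUIImage() else { return nil }
    return OpenCVWrapper.someOperation(image)
}
```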