Video Processing with OpenCV in iOS Swift Project

Video processing with OpenCV in iOS Swift project

This is an update to my initial answer after I had a chance to play with this myself. Yes, it is possible to use CvVideoCamera with a view controller written in Swift. If you just want to use it to display video from the camera in your app, it's really easy.

Add #import <opencv2/highgui/cap_ios.h> to the bridging header. Then, in your view controller:

class ViewController: UIViewController, CvVideoCameraDelegate {
    ...
    var myCamera : CvVideoCamera!

    override func viewDidLoad() {
        ...
        myCamera = CvVideoCamera(parentView: imageView)
        myCamera.delegate = self
        ...
    }
}

The ViewController cannot actually conform to the CvVideoCameraDelegate protocol, but CvVideoCamera won't work without a delegate, so we work around this problem by declaring ViewController to adopt the protocol without implementing any of its methods. This will trigger a compiler warning, but the video stream from the camera will be displayed in the image view.

Of course, you might want to implement the CvVideoCameraDelegate's (only) processImage() method to process video frames before displaying them. The reason you cannot implement it in Swift is that it takes a C++ type, cv::Mat, as its parameter.

So, you will need to write an Objective-C++ class whose instance can be set as camera's delegate. The processImage() method in that Objective-C++ class will be called by CvVideoCamera and will in turn call code in your Swift class. Here are some sample code snippets.
In OpenCVWrapper.h:

// Need this ifdef, so the C++ header won't confuse Swift
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

// This is a forward declaration; we cannot include *-Swift.h in a header.
@class ViewController;

@interface CvVideoCameraWrapper : NSObject
...
-(id)initWithController:(ViewController*)c andImageView:(UIImageView*)iv;
...
@end

In the wrapper implementation, OpenCVWrapper.mm (it's an Objective-C++ class, hence the .mm extension):

#import <opencv2/highgui/cap_ios.h>
using namespace cv;

// Class extension to adopt the delegate protocol
@interface CvVideoCameraWrapper () <CvVideoCameraDelegate>
@end

@implementation CvVideoCameraWrapper
{
    ViewController * viewController;
    UIImageView * imageView;
    CvVideoCamera * videoCamera;
}

-(id)initWithController:(ViewController*)c andImageView:(UIImageView*)iv
{
    self = [super init]; // don't forget to initialize the superclass
    if (self) {
        viewController = c;
        imageView = iv;

        videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
        // ... set up the camera
        ...
        videoCamera.delegate = self;
    }
    return self;
}

// This #ifdef ... #endif is not needed except in special situations
#ifdef __cplusplus
- (void)processImage:(Mat&)image
{
    // Do some OpenCV stuff with the image
    ...
}
#endif
...
@end
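The camera setup elided above usually just assigns a few CvVideoCamera properties before setting the delegate. A minimal sketch, with illustrative values (adjust the position, preset, orientation and FPS to your needs):

videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
videoCamera.defaultFPS = 30;
// Later, call [videoCamera start] to begin capturing and delivering frames.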

Then you put #import "OpenCVWrapper.h" in the bridging header, and the Swift view controller might look like this:

class ViewController: UIViewController {
    ...
    var videoCameraWrapper : CvVideoCameraWrapper!

    override func viewDidLoad() {
        ...
        self.videoCameraWrapper = CvVideoCameraWrapper(controller: self, andImageView: imageView)
        ...
    }
}

See https://developer.apple.com/library/ios/documentation/Swift/Conceptual/BuildingCocoaApps/MixandMatch.html about forward declarations and Swift/C++/Objective-C interop. There is plenty of info on the web about #ifdef __cplusplus and extern "C" (if you need it).

In the processImage() delegate method you will likely need to interact with some OpenCV API, for which you will also have to write wrappers. You can find some info on that elsewhere, for example here: Using OpenCV in Swift iOS

Update 09/03/2019

At the community's request (see comments), the sample code has been placed on GitHub at https://github.com/aperedera/opencv-swift-examples.

Also, as of this writing, the current version of the OpenCV iOS framework no longer allows Swift code to use the header that declares the CvVideoCameraDelegate protocol (it now lives in videoio/cap_ios.h), so you cannot simply include it in the bridging header and declare the view controller to conform to the protocol in order to display camera video in your app.

OpenCV iOS video processing

Here is the conversion that I use: you lock the pixel buffer, create a cv::Mat, process the cv::Mat, then unlock the pixel buffer.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Wrap the buffer in a cv::Mat; no memory is copied.
    cv::Mat image = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);

    // Processing here

    // End processing

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

The above method does not copy any memory, so you do not own the memory; pixelBuffer will free it for you. If you want your own copy of the buffer, just do

cv::Mat copied_image = image.clone();
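Note that wrapping the buffer as CV_8UC4 assumes a four-channel (BGRA) pixel format. If you configure the AVCaptureVideoDataOutput yourself, the setting that guarantees this is roughly the following (videoOutput being your AVCaptureVideoDataOutput):

videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };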

Using OpenCV in Swift iOS

OpenCV is a framework written in C++. Apple's reference tells us that

You cannot import C++ code directly into Swift. Instead, create an Objective-C or C wrapper for C++ code.

so you cannot directly import and use OpenCV in a Swift project, but this is actually not bad at all, because you keep using the C++ syntax of the framework, which is pretty well documented all over the net.

So how do you proceed?

  1. Create a new Objective-C++ class (.h, .mm) for calling C++ OpenCV

OpenCVWrapper.h

#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>

@interface OpenCVWrapper : NSObject

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage;

@end

OpenCVWrapper.mm (use the File -> New... Wizard for Objective-C and rename the .m file to .mm)

#include "OpenCVWrapper.h"
#import "UIImage+OpenCV.h" // See below to create this

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

@implementation OpenCVWrapper : NSObject

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage {
Mat mat = [inputImage CVMat];

// do your processing here
...

return [UIImage imageWithCVMat:mat];
}

@end
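To make the elided processing concrete, here is one illustrative body for that method, using a Gaussian blur (the kernel size is arbitrary):

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage {
    Mat mat = [inputImage CVMat];
    GaussianBlur(mat, mat, cv::Size(5, 5), 0); // blur in place with a 5x5 kernel
    return [UIImage imageWithCVMat:mat];
}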

As an alternative to creating new classes such as the example OpenCVWrapper.h/.mm, you can use Objective-C categories to extend existing Objective-C classes with OpenCV functionality. For instance, a UIImage+OpenCV category:

UIImage+OpenCV.h

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

@interface UIImage (OpenCV)

//cv::Mat to UIImage
+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat;
- (id)initWithCVMat:(const cv::Mat&)cvMat;

//UIImage to cv::Mat
- (cv::Mat)CVMat;
- (cv::Mat)CVMat3; // no alpha channel
- (cv::Mat)CVGrayscaleMat;

@end

UIImage+OpenCV.mm

See https://github.com/foundry/OpenCVSwiftStitch/blob/master/SwiftStitch/UIImage%2BOpenCV.mm
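If you just want the gist without fetching the file, the core of the UIImage-to-cv::Mat direction looks roughly like this (a sketch based on the standard OpenCV iOS conversion; the linked file covers more cases, such as grayscale):

- (cv::Mat)CVMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // target buffer
                                                    cols, rows,    // width, height
                                                    8,             // bits per component
                                                    cvMat.step[0], // bytes per row
                                                    colorSpace,
                                                    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}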


  2. Update the Bridging-Header to make all Objective-C++ classes you created available to Swift by importing the newly created wrappers (#import "OpenCVWrapper.h")

  3. Use your wrapper in your Swift files:

    let image = UIImage(named: "image.jpeg")!
    let processedImage = OpenCVWrapper.processImageWithOpenCV(image)

All Objective-C++ classes imported in the bridging header are available directly from Swift.

Camera view does not appear in a Swift/Objective-C++ (OpenCV) project - iOS 10.3, Xcode 8

I resolved this problem. The solution was simply to connect the UI (Main.storyboard) to ViewController.swift by dragging the specific UI components.

Both paradigms work:

  1. Source code posted above, adapted from https://github.com/akira108/MinimumOpenCVLiveCamera.
    This requires connecting the UIView in Main.storyboard to the previewView (UIView) in ViewController.swift (just drag and drop to create the connection).

  2. Involving the CvVideoCameraDelegate class in the Swift view controller (Video processing with OpenCV in iOS Swift project). Here, I inserted a UIImageView object in Main.storyboard and connected it to previewImage in the ViewController. Because this example requires using a specific OpenCV header within Swift (cap_ios.h), I only tested it with OpenCV 2.4.

iPhone and OpenCV

OpenCV now (since 2012) has an official port for the iPhone (iOS).

You can find all of OpenCV's releases here.

And find install instructions here:

Tutorials & introduction for the new version 3.x

How do I analyze video stream on iOS?

It sounds like you are asking for information about several discrete steps. There are a multitude of ways to do each of them, and if you get stuck on any individual step, it would be a good idea to post a question about it individually.

1: Get video frame

Like chaitanya.varanasi said, the AVFoundation framework is the best way of getting access to a video frame on iOS. If you want something less flexible and quicker, try OpenCV's video capture. The goal of this step is to get access to a pixel buffer from the camera. If you have trouble with this, ask about it specifically.
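For reference, a minimal AVFoundation capture setup might look like the sketch below (error handling omitted; self is assumed to implement AVCaptureVideoDataOutputSampleBufferDelegate):

AVCaptureSession *session = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// Ask for BGRA so the buffer matches CV_8UC4 in the next step.
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[output setSampleBufferDelegate:self queue:dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL)];
[session addOutput:output];

[session startRunning];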

2: Put pixel buffer into OpenCV

This part is really easy. If you get it from OpenCV's video capture, you are already done. If you get it from AVFoundation, you will need to put it into OpenCV like this:

//Buffer is of type CVImageBufferRef, which is what AVFoundation should be giving you
//I assume it is BGRA or RGBA formatted; if it isn't, change CV_8UC4 to the appropriate format

CVPixelBufferLockBaseAddress(Buffer, 0);

int bufferWidth = (int)CVPixelBufferGetWidth(Buffer);
int bufferHeight = (int)CVPixelBufferGetHeight(Buffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(Buffer);

unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(Buffer);

// Wrap the buffer in a cv::Mat; no memory is copied.
// Pass bytesPerRow explicitly, since rows may be padded.
cv::Mat image = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);

//Process image here

//End processing
CVPixelBufferUnlockBaseAddress(Buffer, 0); // unlock the same buffer we locked

Note that I am assuming you plan to do this in OpenCV, since you used its tag. Also, I assume you can get the OpenCV framework to link into your project. If that is an issue, ask a specific question about it.

3: Process image

This part is by far the most open-ended. All you have said about your problem is that you are trying to detect a strong light source. One very quick and easy way of doing that would be to look at the mean pixel value of a greyscale image. If you get the image in colour, you can convert it with cvtColor. Then just call mean (cvAvg in the old C API) on it to get the mean value. Hopefully you can tell whether the light is on by how that value fluctuates.
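A hedged sketch of that idea, assuming a BGRA cv::Mat like the one built in step 2 (the threshold is purely illustrative):

// Returns true if the frame looks bright overall.
static bool isLightOn(const cv::Mat &frame)
{
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGRA2GRAY); // drop colour, keep luminance

    cv::Scalar avg = cv::mean(gray); // avg[0] is the mean grey level, 0..255
    return avg[0] > 128;             // illustrative threshold; tune for your scene
}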

chaitanya.varanasi suggested another option; you should check it out too.

OpenCV is a very large library that can do a wide variety of things. Without knowing more about your problem, I don't know what else to tell you.


