Using OpenCV in Swift iOS

OpenCV is a framework written in C++. Apple's reference tells us:

You cannot import C++ code directly into Swift. Instead, create an Objective-C or C wrapper for C++ code.

so you cannot directly import and use OpenCV in a Swift project. This is actually not bad at all, because you can continue to use the C++ syntax of the framework, which is well documented all over the net.

So how do you proceed?

  1. Create a new Objective-C++ class (.h, .mm) for calling C++ OpenCV

OpenCVWrapper.h

#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>

@interface OpenCVWrapper : NSObject

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage;

@end

OpenCVWrapper.mm (use the File -> New... wizard for Objective-C and rename the .m file to .mm)

#import <opencv2/opencv.hpp> // import OpenCV before any Apple headers to avoid macro clashes
#import "OpenCVWrapper.h"
#import "UIImage+OpenCV.h" // See below to create this

using namespace cv;
using namespace std;

@implementation OpenCVWrapper

+ (UIImage *)processImageWithOpenCV:(UIImage *)inputImage {
    Mat mat = [inputImage CVMat];

    // do your processing here
    ...

    return [UIImage imageWithCVMat:mat];
}

@end

As an alternative to creating new classes such as the example OpenCVWrapper.h/.mm, you can use Objective-C categories to extend existing Objective-C classes with OpenCV functionality. For instance, a UIImage+OpenCV category:

UIImage+OpenCV.h

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

@interface UIImage (OpenCV)

//cv::Mat to UIImage
+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat;
- (id)initWithCVMat:(const cv::Mat&)cvMat;

//UIImage to cv::Mat
- (cv::Mat)CVMat;
- (cv::Mat)CVMat3; // no alpha channel
- (cv::Mat)CVGrayscaleMat;

@end

UIImage+OpenCV.mm

See https://github.com/foundry/OpenCVSwiftStitch/blob/master/SwiftStitch/UIImage%2BOpenCV.mm


  2. Update the bridging header to make all the Objective-C++ classes you created available to Swift by importing your newly created wrappers (#import "OpenCVWrapper.h")

  3. Use your wrapper in your Swift files:

    let image = UIImage(named: "image.jpeg")! // UIImage(named:) returns an optional
    let processedImage = OpenCVWrapper.processImageWithOpenCV(image)

All Objective-C++ classes included in the bridge header are available directly from Swift.
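As a sketch, the bridging header itself can be minimal (the file name is project-specific). Note that headers imported here must be pure Objective-C; they must not pull in C++ headers such as opencv2/opencv.hpp, or Swift compilation will fail:

    // MyProject-Bridging-Header.h (hypothetical name)
    // Only Objective-C may leak through to Swift; keep all
    // C++/OpenCV includes inside the .mm implementation files.
    #import "OpenCVWrapper.h"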

Video processing with OpenCV in an iOS Swift project

This is an update to my initial answer after I had a chance to play with this myself. Yes, it is possible to use CvVideoCamera with a view controller written in Swift. If you just want to use it to display video from the camera in your app, it's really easy.

Import <opencv2/highgui/cap_ios.h> via the bridging header. Then, in your view controller:

class ViewController: UIViewController, CvVideoCameraDelegate {
    ...
    var myCamera: CvVideoCamera!

    override func viewDidLoad() {
        ...
        myCamera = CvVideoCamera(parentView: imageView)
        myCamera.delegate = self
        ...
    }
}

The ViewController cannot actually conform to the CvVideoCameraDelegate protocol, but CvVideoCamera won't work without a delegate, so we work around this problem by declaring ViewController to adopt the protocol without implementing any of its methods. This will trigger a compiler warning, but the video stream from the camera will be displayed in the image view.

Of course, you might want to implement the CvVideoCameraDelegate's (only) processImage() method to process video frames before displaying them. The reason you cannot implement it in Swift is that it uses a C++ type, Mat.

So, you will need to write an Objective-C++ class whose instance can be set as camera's delegate. The processImage() method in that Objective-C++ class will be called by CvVideoCamera and will in turn call code in your Swift class. Here are some sample code snippets.
In OpenCVWrapper.h:

// Need this ifdef, so the C++ header won't confuse Swift
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

// This is a forward declaration; we cannot include *-Swift.h in a header.
@class ViewController;

@interface CvVideoCameraWrapper : NSObject
...
-(id)initWithController:(ViewController*)c andImageView:(UIImageView*)iv;
...
@end

In the wrapper implementation, OpenCVWrapper.mm (it's an Objective-C++ class, hence the .mm extension):

#import <opencv2/highgui/cap_ios.h>

using namespace cv;

// Class extension to adopt the delegate protocol
@interface CvVideoCameraWrapper () <CvVideoCameraDelegate>
@end

@implementation CvVideoCameraWrapper
{
    ViewController *viewController;
    UIImageView *imageView;
    CvVideoCamera *videoCamera;
}

- (id)initWithController:(ViewController *)c andImageView:(UIImageView *)iv
{
    self = [super init];
    if (self) {
        viewController = c;
        imageView = iv;

        videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
        // ... set up the camera
        ...
        videoCamera.delegate = self;
    }
    return self;
}

// This #ifdef ... #endif is not needed except in special situations
#ifdef __cplusplus
- (void)processImage:(Mat &)image
{
    // Do some OpenCV stuff with the image
    ...
}
#endif
...
@end
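The camera set-up elided above usually assigns a few of CvVideoCamera's configuration properties before starting; a sketch (the particular values are assumptions, adjust to your app):

    videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
    videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    videoCamera.defaultFPS = 30;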

Then you put #import "OpenCVWrapper.h" in the bridging header, and the Swift view controller might look like this:

class ViewController: UIViewController {
    ...
    var videoCameraWrapper: CvVideoCameraWrapper!

    override func viewDidLoad() {
        ...
        self.videoCameraWrapper = CvVideoCameraWrapper(controller: self, andImageView: imageView)
        ...
    }
}
See https://developer.apple.com/library/ios/documentation/Swift/Conceptual/BuildingCocoaApps/MixandMatch.html about forward declarations and Swift/C++/Objective-C interop. There is plenty of info on the web about #ifdef __cplusplus and extern "C" (if you need it).

In the processImage() delegate method you will likely need to interact with some OpenCV API, for which you will also have to write wrappers. You can find some info on that elsewhere, for example in the "Using OpenCV in Swift iOS" section above.

Update 09/03/2019

At the community request, see comments, the sample code has been placed on GitHub at https://github.com/aperedera/opencv-swift-examples.

Also, the current (as of this writing) version of the OpenCV iOS framework no longer allows Swift code to use the header that declares the CvVideoCameraDelegate protocol (it now lives in videoio/cap_ios.h). So you can no longer simply include it in the bridging header and declare the view controller to conform to the protocol in order to display camera video in your app.

Integrate OpenCV into Xcode using Swift Language

Swift cannot work with C++ directly, so you need, at a minimum, an Objective-C wrapper to communicate with OpenCV.

For example, I have the following method in my C++ class:

cv::Mat crop(const cv::Mat &src, cv::Rect2f &rect);
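For illustration, a minimal implementation of such a method might clamp the rectangle and copy out the region of interest (a sketch only, not necessarily the author's implementation):

    cv::Mat crop(const cv::Mat &src, cv::Rect2f &rect) {
        cv::Rect r(rect);                          // convert the float rect to integer pixels
        r &= cv::Rect(0, 0, src.cols, src.rows);   // clamp to the image bounds
        return src(r).clone();                     // deep-copy the region of interest
    }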

I then have an Objective-C method that lives in an Objective-C++ file (meaning the file in Xcode has a .mm extension, e.g. MyWrapper.mm).

The function to call this in Obj-C would look something like this:

- (void)crop:(nonnull UIImage *)source
        rect:(CGRect)rect
completionHandler:(nonnull ImageCompletionHandler)completionHandler {
    MAT src = [Conversion cvMatWithImage:source];
    RECT_2F rect_2f = [Conversion cvRectFromRect:rect];
    MAT dst = self.basicPtr->crop(src, rect_2f);

    completionHandler([Conversion imageFromCVMat:dst]);
}
  1. The first Conversion call converts a UIImage into a Mat for OpenCV to process
  2. The second Conversion call converts a CGRect into a cv::Rect2f
  3. The self.basicPtr->crop(src, rect_2f) call uses a pointer to my C++ class to access the crop function above
  4. Finally, a completion handler takes the Mat object OpenCV gives me and converts it into a UIImage for iOS to work with
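The Conversion class here is the author's own helper; a sketch of what it might look like, leaning on the UIImageToMat/MatToUIImage helpers that OpenCV ships in <opencv2/imgcodecs/ios.h> (the MAT/RECT_2F typedefs above are assumed to alias cv::Mat/cv::Rect2f):

    #import <opencv2/imgcodecs/ios.h>
    #import "Conversion.h"

    @implementation Conversion

    + (cv::Mat)cvMatWithImage:(UIImage *)image {
        cv::Mat mat;
        UIImageToMat(image, mat); // OpenCV's built-in UIImage -> cv::Mat helper
        return mat;
    }

    + (cv::Rect2f)cvRectFromRect:(CGRect)rect {
        return cv::Rect2f(rect.origin.x, rect.origin.y,
                          rect.size.width, rect.size.height);
    }

    + (UIImage *)imageFromCVMat:(const cv::Mat &)mat {
        return MatToUIImage(mat); // OpenCV's built-in cv::Mat -> UIImage helper
    }

    @end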

Then, back in Swift land, you need to first have a bridging header for the Objective-C wrapper so that Swift can access it.

Once you have that, then calling the function is trivial:

func crop(with image: UIImage, rect: CGRect) {
    let wrapper = BasicWrapper()
    wrapper.crop(image, rect: rect) { (image) in
        // Handle image
    }
}

It took a lot of work to fill in all those pieces, but if you look around here, you can find all the pieces you need (I know I did).

Using OpenCV functions with Swift 4.2 on iOS

To answer your question:

  • You can use the std::vector<std::vector<cv::Point>> type for your function parameters in the case of contours, or std::vector<cv::Point2f> for points. There are other types too, but I didn't need anything more in my project.

  • Make sure to add #import <opencv2/imgcodecs/ios.h> to your header.

  • Use cv::Mat instead of Mat, and try prefixing cv:: to most of the unrecognised types and functions as well (this mostly works).

Here's what your function should probably look like (tested, compiles without errors):

+ (UIImage *)hdrImaging:(std::vector<std::vector<cv::Point> >)images :(std::vector<std::vector<cv::Point> >)times {
    cv::Mat response;
    cv::Ptr<cv::CalibrateDebevec> calibrate = cv::createCalibrateDebevec();
    calibrate->process(images, response, times);

    cv::Mat hdr;
    cv::Ptr<cv::MergeDebevec> merge_debevec = cv::createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);

    cv::Mat ldr;
    cv::Ptr<cv::TonemapDurand> tonemap = cv::createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);

    cv::Mat fusion;
    cv::Ptr<cv::MergeMertens> merge_mertens = cv::createMergeMertens();
    merge_mertens->process(images, fusion);

    response = fusion * 255;

    return MatToUIImage(response);
}

Make sure you change the parameter types to something that matches your use case; I'm just giving an example.

Hope this helps!

How to reduce the openCV framework size in iOS project

The opencv.framework you quoted is 500 MB, so the size of your project on disk will increase by 500 MB, but the archived app will not grow by 500 MB, because you only use some of the symbols and the rest are stripped by Xcode. If you are still concerned about the size, it is recommended to download the complete OpenCV source from the official website and rebuild it with only the modules you need.
You only need 'core' (Core functionality) and 'imgcodecs' (Image file reading and writing).
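A sketch of such a rebuild using OpenCV's iOS build script (the script path is in the OpenCV repo; the exact flag names vary between versions, so check the script's --help before relying on them):

    # Clone OpenCV and build an iOS framework excluding the modules you don't use.
    git clone https://github.com/opencv/opencv.git
    python opencv/platforms/ios/build_framework.py ios_build \
        --without dnn --without ml --without video --without objdetect

Alternatively, a plain CMake build accepts -DBUILD_LIST=core,imgcodecs to restrict the build to an explicit module list.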


