How to Apply Filter to Video Real-Time Using Swift

There's another alternative: use an AVCaptureSession to create instances of CIImage to which you can apply CIFilters (of which there are loads, from blurs to color correction to VFX).

Here's an example using the ComicBook effect. In a nutshell, create an AVCaptureSession:

let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto

Create an AVCaptureDevice to represent the camera; here I'm using the default (back) camera:

let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

Then create a concrete implementation of the device and attach it to the session. In Swift 2, instantiating AVCaptureDeviceInput can throw an error, so we need to catch that:

do
{
    let input = try AVCaptureDeviceInput(device: backCamera)

    captureSession.addInput(input)
}
catch
{
    print("can't access camera")
    return
}

Now, here's a little 'gotcha': although we don't actually use the AVCaptureVideoPreviewLayer, it's required to get the sample buffer delegate working, so we create one of those:

// although we don't use this, it's required to get captureOutput invoked
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

view.layer.addSublayer(previewLayer)

Next, we create a video output, an AVCaptureVideoDataOutput, which we'll use to access the video feed:

let videoOutput = AVCaptureVideoDataOutput()

Ensuring that self implements AVCaptureVideoDataOutputSampleBufferDelegate, we can set the sample buffer delegate on the video output:

videoOutput.setSampleBufferDelegate(self,
    queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))

The video output is then attached to the capture session:

captureSession.addOutput(videoOutput)

...and, finally, we start the capture session:

captureSession.startRunning()

Because we've set the delegate, captureOutput will be invoked with each frame capture. captureOutput is passed a sample buffer of type CMSampleBuffer and it just takes two lines of code to convert that data to a CIImage for Core Image to handle:

let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

...and that image data is passed to our Comic Book effect which, in turn, is used to populate an image view:

let comicEffect = CIFilter(name: "CIComicEffect")

comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)

let filteredImage = UIImage(CIImage: comicEffect!.valueForKey(kCIOutputImageKey) as! CIImage)

dispatch_async(dispatch_get_main_queue())
{
self.imageView.image = filteredImage
}
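
For completeness, here's roughly how those last snippets sit together in the Swift 2 delegate callback (a sketch, assuming self conforms to AVCaptureVideoDataOutputSampleBufferDelegate and imageView is an outlet on the view controller):

func captureOutput(captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!)
{
    // convert the sample buffer into a CIImage...
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

    // ...run it through the Comic Book filter...
    let comicEffect = CIFilter(name: "CIComicEffect")
    comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)
    let filteredImage = UIImage(CIImage: comicEffect!.valueForKey(kCIOutputImageKey) as! CIImage)

    // ...and display the result on the main queue
    dispatch_async(dispatch_get_main_queue())
    {
        self.imageView.image = filteredImage
    }
}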

I have the source code for this project available in my GitHub repo here.

Recording videos with real-time filters in Swift

I've added some comments to the critical part below:

func captureOutput(_ captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection) {
    autoreleasepool {

        connection.videoOrientation = AVCaptureVideoOrientation.landscapeLeft

        // COMMENT: This line makes sense - this is your pixel buffer from the camera.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // COMMENT: OK, so you turn pixelBuffer into a CIImage...
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

        // COMMENT: And now you've created a CIImage with a filter instruction...
        let filter = CIFilter(name: "Filter")!
        filter.setValue(cameraImage, forKey: kCIInputImageKey)

        let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer)!
        self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        self.currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer)

        if self.isWriting {
            if self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
                // COMMENT: Here's where it gets weird. You've declared a new, empty pixel buffer... but you already have one (pixelBuffer) that contains the image you want to write...
                var newPixelBuffer: CVPixelBuffer? = nil

                // COMMENT: And you grabbed memory from the pool.
                CVPixelBufferPoolCreatePixelBuffer(nil, self.assetWriterPixelBufferInput!.pixelBufferPool!, &newPixelBuffer)

                // COMMENT: And now you wrote an empty pixel buffer back <-- this is what's causing the black frame.
                let success = self.assetWriterPixelBufferInput?.append(newPixelBuffer!, withPresentationTime: self.currentSampleTime!)

                if success == false {
                    print("Pixel Buffer failed")
                }
            }
        }

        // COMMENT: And now you're sending the filtered image back to the screen.
        DispatchQueue.main.async {

            if let outputValue = filter.value(forKey: kCIOutputImageKey) as? CIImage {
                let filteredImage = UIImage(ciImage: outputValue)
                self.imageView.image = filteredImage
            }
        }
    }
}

It looks to me like you're basically getting the screen image, creating a filtered copy, then making a NEW pixel buffer which is empty and writing that out.

If you write the pixelBuffer you grabbed instead of the new one you're creating, you should successfully write the image.

What you need to successfully write out the filtered video is to create a new CVPixelBuffer from a CIImage - that solution already exists here on StackOverflow; I know because I needed that step myself!
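
As a rough sketch of that missing step (not the asker's actual code: ciContext here is an assumed property holding a CIContext created once at setup, since creating one per frame is expensive), inside the isWriting block you would render the filtered image into the buffer you just pulled from the pool and append that instead:

if let filteredImage = filter.outputImage, let buffer = newPixelBuffer {
    // draw the filtered frame into the pool-allocated buffer...
    self.ciContext.render(filteredImage, to: buffer)

    // ...and append that buffer, not the empty one
    let success = self.assetWriterPixelBufferInput!.append(buffer,
        withPresentationTime: self.currentSampleTime!)
    if success == false {
        print("Pixel Buffer failed")
    }
}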

Applying filter to real time camera preview - Swift

There are a few things wrong with the code in your question:

You are using an AVCaptureVideoPreviewLayer, but this transports the pixels captured by the camera directly to the screen, skipping your image processing and CIFilter, and isn't necessary.

Your conformance to AVCaptureVideoDataOutputSampleBufferDelegate is out of date. func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) is now called func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
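
A minimal sketch of the up-to-date callback, assuming the same pixel-buffer-to-CIImage flow as in your code:

func captureOutput(_ output: AVCaptureOutput,
    didOutput sampleBuffer: CMSampleBuffer,
    from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
    // apply your CIFilter to cameraImage and hand the result to your renderer here
}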

Because you won't be using AVCaptureVideoPreviewLayer, you'll need to ask for permission before you can start getting pixels from the camera. This is typically done in viewDidAppear(_:), like so:

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    if AVCaptureDevice.authorizationStatus(for: AVMediaType.video) != .authorized
    {
        AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler:
        { (authorized) in
            DispatchQueue.main.async
            {
                if authorized
                {
                    self.setupInputOutput()
                }
            }
        })
    }
}

Also, if you are supporting rotation you will also need to update the AVCaptureConnection on rotation in your didOutput callback.
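
For example, a sketch of that update at the top of the callback (the exact orientation mapping is an assumption about how you want frames oriented; adapt it to your app):

// at the top of captureOutput(_:didOutput:from:)
switch UIDevice.current.orientation {
case .landscapeLeft:
    connection.videoOrientation = .landscapeRight
case .landscapeRight:
    connection.videoOrientation = .landscapeLeft
case .portraitUpsideDown:
    connection.videoOrientation = .portraitUpsideDown
default:
    connection.videoOrientation = .portrait
}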

After making these changes (full source code) your code worked, producing an image like so:

Screenshot

Swift: Play a local video and apply CIFilter in realtime

The problem is that you re-create and re-assign the video composition to the player item every time the slider value changes. This is very costly and unnecessary. You can do the following instead:

  • Create the filter somewhere outside the composition block and keep a reference to it, for instance in a property.
  • Also, create the composition only once and let it apply the referenced filter (instead of creating a new one with every frame).
  • When the slider value changes, only set the corresponding parameter value of the filter. The next time the composition will render a frame, it will automatically use the new parameter value because it uses a reference to the just-changed filter.

Something like this:

// CIFilter.exposureAdjust() and the .ev property require import CoreImage.CIFilterBuiltins
let exposureFilter = CIFilter.exposureAdjust()

init() {
    // set initial composition
    self.updateComposition()
}

func updateComposition() {
    player.currentItem?.videoComposition = AVVideoComposition(asset: player.currentItem!.asset, applyingCIFiltersWithHandler: { request in
        self.exposureFilter.inputImage = request.sourceImage.clampedToExtent()
        let output = self.exposureFilter.outputImage!.cropped(to: request.sourceImage.extent)
        request.finish(with: output, context: nil)
    })
}

@objc func exposureChanged(slider: UISlider) {
    self.exposureFilter.ev = slider.value
    // we need to re-set the composition if the player is paused to cause an update (see remark below)
    if player.rate == 0.0 {
        self.updateComposition()
    }
}

(By the way, you can just do slider.addTarget(self, action:#selector(exposureChanged(slider:)), for: .valueChanged) to get notified when the slider value changes. No need to evaluate events.)

One final note: There actually is a use case when you want to re-assign the composition, which is when the player is currently paused but you still want to show a preview of the current frame with the filter values changed. Please refer to this technical note from Apple on how to do that.

Apply custom camera filters on live camera preview - Swift

Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.

Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:

  1. Use AVCaptureVideoDataOutput to get live video frames.
  2. Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology (see the sketch after this list).
  3. Draw the textures using Metal (or OpenGL or whatever) with a Metal shader or Core Image filter to do pixel processing on the GPU during your render pass.
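
Here's a minimal sketch of step 2 using a CVMetalTextureCache (the TextureConverter type and its names are made up for illustration; they are not taken from AVCamPhotoFilter):

import CoreVideo
import Metal

// Illustrative only: wrap a CVMetalTextureCache so each captured CVPixelBuffer
// can be exposed to Metal as an MTLTexture.
final class TextureConverter {
    private let textureCache: CVMetalTextureCache

    init?(device: MTLDevice) {
        var cache: CVMetalTextureCache?
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess,
            let createdCache = cache else { return nil }
        textureCache = createdCache
    }

    func makeTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
            textureCache,
            pixelBuffer,
            nil,
            .bgra8Unorm,                          // assumes a BGRA video data output
            CVPixelBufferGetWidth(pixelBuffer),
            CVPixelBufferGetHeight(pixelBuffer),
            0,
            &cvTexture)
        // Step 3 happens elsewhere: feed this texture to your render pass / shader
        // (or wrap the pixel buffer in a CIImage if you prefer Core Image).
        return cvTexture.flatMap(CVMetalTextureGetTexture)
    }
}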

BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.


