Apply Custom Filters to Camera Output

Apply custom filters to camera output

OK, there are several ways to do this, but there is a significant problem with performance. The byte[] from the camera is in YUV format, which has to be converted to some RGB format if you want to display it. This conversion is quite an expensive operation and significantly lowers the output FPS.
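
To give an idea of what that conversion costs, here is the standard NV21-to-ARGB decode loop in plain Java (not taken from the original answer, shown only to illustrate the per-pixel work involved):

// Decodes an NV21 (YUV420SP) preview frame into an ARGB_8888 int array.
// This per-pixel loop is exactly what makes the conversion so slow in Java.
static void decodeYUV420SP(int[] rgb, byte[] yuv, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv[uvp++]) - 128;
                u = (0xff & yuv[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}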

It depends on what you actually want to do with the camera preview, because the best solution is to draw the camera preview without a callback and apply your effects on top of it. That is the usual way to do augmented reality stuff.

But if you really need to display the output manually, there are several ways to do that. Your example does not work for several reasons. First, you are not displaying the image at all. If you call this:

mCamera.setPreviewCallback(new CameraGreenFilter());
mCamera.setPreviewDisplay(null);

then your camera is not displaying a preview at all; you have to display it manually. You also can't do any expensive operations in the onPreviewFrame method, because the lifetime of data is limited: it gets overwritten on the next frame. One hint: use setPreviewCallbackWithBuffer. It's faster, because it reuses one buffer and does not have to allocate new memory on each frame.

So you have to do something like this:

private byte[] cameraFrame;
private byte[] buffer;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    cameraFrame = data;
    camera.addCallbackBuffer(data); // note: addCallbackBuffer(buffer) also has to be called once somewhere before you call mCamera.startPreview()
}

private ByteArrayOutputStream baos;
private YuvImage yuvimage;
private byte[] jdata;
private Bitmap bmp;
private Paint paint;

@Override // from SurfaceView
public void onDraw(Canvas canvas) {
    baos = new ByteArrayOutputStream();
    yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null); // prevX/prevY = preview size

    yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos); // compress the whole preview frame
    jdata = baos.toByteArray();

    bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);

    canvas.drawBitmap(bmp, 0, 0, paint);
    invalidate(); // to call onDraw again
}

To make this work, you need to call setWillNotDraw(false) in the class constructor or somewhere.
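
To tie the pieces together, the setup could look roughly like this (a rough sketch; it assumes the same mCamera, buffer, prevX and prevY fields used in the snippets above, and that the SurfaceView subclass itself implements Camera.PreviewCallback):

// In the SurfaceView subclass constructor:
setWillNotDraw(false); // otherwise onDraw() is never called for a SurfaceView

// Before starting the preview:
buffer = new byte[prevX * prevY * 3 / 2];   // NV21 uses 1.5 bytes per pixel
mCamera.addCallbackBuffer(buffer);          // hand the buffer to the camera once
mCamera.setPreviewCallbackWithBuffer(this); // reuse the same buffer on every frame
mCamera.startPreview();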

In onDraw you can, for example, apply paint.setColorFilter(filter) if you want to modify the colors. A simple example of that is sketched below.
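
For instance, a paint that keeps only the green channel (matching the CameraGreenFilter idea from the question) could look roughly like this; this is a minimal sketch using ColorMatrixColorFilter, and the matrix values are only an illustration:

// e.g. in the constructor: keep only the green channel, zero out red and blue
ColorMatrix cm = new ColorMatrix(new float[] {
        0, 0, 0, 0, 0,   // R' = 0
        0, 1, 0, 0, 0,   // G' = G
        0, 0, 0, 0, 0,   // B' = 0
        0, 0, 0, 1, 0    // A' = A
});
paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(cm));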

So this will work, but the performance will be low (less than 8 FPS), because BitmapFactory.decodeByteArray is slow. You can try to convert the data from YUV to RGB with native code and the Android NDK, but that's quite complicated.

The other option is to use OpenGL ES. You need a GLSurfaceView, where you bind the camera frame as a texture (have the GLSurfaceView implement Camera.PreviewCallback, so you use onPreviewFrame the same way as with a regular surface). But there is the same problem: you need to convert the YUV data. There is one shortcut, though: you can display only the luminance data from the preview (a greyscale image) quite fast, because the first width * height bytes of the NV21 array are pure luminance data without any color. So in onPreviewFrame you use arraycopy to copy that first part of the array, and then you bind the texture like this:

gl.glGenTextures(1, cameraTexture, 0);
int tex = cameraTexture[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE,
        this.prevX, this.prevY, 0, GL10.GL_LUMINANCE,
        GL10.GL_UNSIGNED_BYTE, ByteBuffer.wrap(this.cameraFrame)); // cameraFrame holds only the luminance part of the byte[] from onPreviewFrame

gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);

You can get about 16-18 FPS this way, and you can use OpenGL to apply some filters. I can send you some more code for this if you want, but it's too long to post here...
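
The luminance copy mentioned above could look roughly like this (a sketch; it assumes the same prevX/prevY fields as before and a GLSurfaceView set to RENDERMODE_WHEN_DIRTY, so requestRender() schedules the next draw):

// In the GLSurfaceView variant of onPreviewFrame: keep only the Y (luminance)
// plane, which is the first prevX * prevY bytes of the NV21 frame.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (cameraFrame == null) {
        cameraFrame = new byte[prevX * prevY];
    }
    System.arraycopy(data, 0, cameraFrame, 0, prevX * prevY);
    camera.addCallbackBuffer(data); // give the buffer back for the next frame
    requestRender();                // GLSurfaceView: draw a new frame with the updated texture
}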

For some more info, you can see my similar question, but there is no good solution there either...

Apply custom camera filters on live camera preview - Swift

Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.

Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:

  1. Use AVCaptureVideoDataOutput to get live video frames.
  2. Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
  3. Draw the textures using Metal (or OpenGL or whatever), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass.

BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.

Android Camera Filters

I know it's an old question, but maybe someone will be looking for the same thing and might find this useful....

If you want to apply a filter to the live preview, it's not easy, because the conversion from YUV to RGB is expensive and the performance is low on Android devices. Please check my other answer on this topic and one other similar question.

If you only want to apply a filter to a single captured image, it shouldn't be a problem; it's just a matter of image processing. The code will be similar to the examples in the linked questions.
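
For example, applying a color filter to one captured Bitmap could look roughly like this (a minimal sketch; jpegData stands in for the byte[] you would get from Camera.PictureCallback, and the greyscale matrix is just an example):

// Decode the captured JPEG and redraw it through a Paint with a color filter.
Bitmap src = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);

ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0); // example filter: greyscale

Paint p = new Paint();
p.setColorFilter(new ColorMatrixColorFilter(cm));

Canvas c = new Canvas(out);
c.drawBitmap(src, 0, 0, p);
// 'out' now holds the filtered image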

How to apply filter to Video real-time using Swift

There's another alternative: use an AVCaptureSession to create instances of CIImage to which you can apply CIFilters (of which there are loads, from blurs to color correction to VFX).

Here's an example using the ComicBook effect. In a nutshell, create an AVCaptureSession:

let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto

Create an AVCaptureDevice to represent the camera; here I'm using the back camera:

let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

Then create a concrete implementation of the device and attach it to the session. In Swift 2, instantiating AVCaptureDeviceInput can throw an error, so we need to catch that:

do
{
    let input = try AVCaptureDeviceInput(device: backCamera)

    captureSession.addInput(input)
}
catch
{
    print("can't access camera")
    return
}

Now, here's a little 'gotcha': although we don't actually use the AVCaptureVideoPreviewLayer, it's required to get the sample buffer delegate working, so we create one of those:

// although we don't use this, it's required to get captureOutput invoked
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

view.layer.addSublayer(previewLayer)

Next, we create a video output, AVCaptureVideoDataOutput which we'll use to access the video feed:

let videoOutput = AVCaptureVideoDataOutput()

Ensuring that self implements AVCaptureVideoDataOutputSampleBufferDelegate, we can set the sample buffer delegate on the video output:

videoOutput.setSampleBufferDelegate(self,
    queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))

The video output is then attached to the capture session:

captureSession.addOutput(videoOutput)

...and, finally, we start the capture session:

captureSession.startRunning()

Because we've set the delegate, captureOutput will be invoked with each frame capture. captureOutput is passed a sample buffer of type CMSampleBuffer and it just takes two lines of code to convert that data to a CIImage for Core Image to handle:

let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

...and that image data is passed to our Comic Book effect which, in turn, is used to populate an image view:

let comicEffect = CIFilter(name: "CIComicEffect")

comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)

let filteredImage = UIImage(CIImage: comicEffect!.valueForKey(kCIOutputImageKey) as! CIImage)

dispatch_async(dispatch_get_main_queue())
{
    self.imageView.image = filteredImage
}

I have the source code for this project available in my GitHub repo here.

How to modify (add filters to) the camera stream that WebRTC is sending to other peers/server

I found a way out. Basically, you need to build your own WebRTC pod, and then you can add a hook for using a custom AVCaptureVideoDataOutputSampleBufferDelegate on the videoOutput object. Then handle the sampleBuffer, modify the buffer, and pass it on to WebRTC.

Implementation

Open the file webrtc/sdk/objc/Frameworks/Classes/RTCAVFoundationVideoCapturerInternal.mm

and on the line:

[videoDataOutput setSampleBufferDelegate:self queue:self.frameQueue];

use a custom delegate instead of self.

In that delegate:

class YourDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

        // modify the pixelBuffer
        // build the modifiedSampleBuffer from the modified pixelBuffer

        DispatchQueue.main.async {
            // show the modified buffer to the user
        }

        // To pass the modified buffer to WebRTC (warning: this is Objective-C code;
        // the _capturer object is found in RTCAVFoundationVideoCapturerInternal.mm):
        // _capturer->CaptureSampleBuffer(modifiedSampleBuffer, _rotation);
    }
}

