How to Apply Custom Filters in a Camera [SurfaceView Preview]

Apply custom filters to camera output

OK, there are several ways to do this, but there is a significant performance problem. The byte[] from the camera is in YUV format, which has to be converted to some sort of RGB format if you want to display it. This conversion is quite an expensive operation and significantly lowers the output fps.

It depends on what you actually want to do with the camera preview. The best solution is to let the camera draw the preview itself, without a callback, and render your effects on top of it. That is the usual way to do augmented reality stuff.
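For example, a minimal sketch of that approach (the method and the overlay view are my own illustration, not from the original answer):

// Let the camera render its own preview into a SurfaceView and draw the
// effects on a separate transparent View stacked above it (e.g. in a FrameLayout).
private void startPreviewWithOverlay(Camera camera, SurfaceView preview, View effectsOverlay) {
    try {
        camera.setPreviewDisplay(preview.getHolder()); // the camera draws the preview itself
    } catch (IOException e) {
        Log.e("Preview", "Cannot set preview display", e);
    }
    camera.startPreview();

    // effectsOverlay overrides onDraw() and paints the effect over the live
    // preview; no per-frame YUV conversion is needed for this.
    effectsOverlay.invalidate();
}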

But if you really need to display the output manually, there are several ways to do that. Your example does not work for several reasons. First, you are not displaying the image at all. If you call this:

mCamera.setPreviewCallback(new CameraGreenFilter());
mCamera.setPreviewDisplay(null);

then your camera is not displaying the preview at all; you have to display it manually. And you can't do any expensive operations in the onPreviewFrame method, because the lifetime of data is limited: it's overwritten on the next frame. One hint: use setPreviewCallbackWithBuffer. It's faster, because it reuses one buffer and does not have to allocate new memory on each frame.

So you have to do something like this:

private byte[] cameraFrame;
private byte[] buffer;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    cameraFrame = data;
    camera.addCallbackBuffer(data); // addCallbackBuffer(buffer) also has to be called once, somewhere before you call mCamera.startPreview();
}

private ByteArrayOutputStream baos;
private YuvImage yuvimage;
private byte[] jdata;
private Bitmap bmp;
private Paint paint;

@Override // from SurfaceView
public void onDraw(Canvas canvas) {
    baos = new ByteArrayOutputStream();
    yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null); // prevX/prevY is the preview size

    yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos); // the rect must lie within the preview frame
    jdata = baos.toByteArray();

    bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);

    canvas.drawBitmap(bmp, 0, 0, paint);
    invalidate(); // to call onDraw again
}

To make this work, you need to call setWillNotDraw(false) in the view's constructor or somewhere similar.
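Putting the one-time setup together, a rough sketch might look like this (the field names match the snippets above; the buffer size assumes the default NV21 preview format, and depending on the device you may also need a preview surface or texture):

private void initCameraPreview() {
    mCamera = Camera.open();
    Camera.Size s = mCamera.getParameters().getPreviewSize();
    prevX = s.width;
    prevY = s.height;

    // one reusable buffer, big enough for one NV21 frame (12 bits per pixel)
    buffer = new byte[prevX * prevY * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8];
    mCamera.addCallbackBuffer(buffer);          // must be registered before startPreview()
    mCamera.setPreviewCallbackWithBuffer(this); // this SurfaceView implements Camera.PreviewCallback

    setWillNotDraw(false); // otherwise onDraw() of this SurfaceView is never called
    mCamera.startPreview();
}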

In onDraw you can, for example, apply paint.setColorFilter(filter) if you want to modify the colors; a rough sketch of that is below.
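A simple green-only filter (a sketch using a standard ColorMatrixColorFilter; the matrix values are just an illustration) could be set up once, e.g. in the view's constructor, and then reused by drawBitmap:

// keep only the green channel: the red and blue rows are zeroed out
ColorMatrix greenOnly = new ColorMatrix(new float[] {
        0, 0, 0, 0, 0,   // red row
        0, 1, 0, 0, 0,   // green row
        0, 0, 0, 0, 0,   // blue row
        0, 0, 0, 1, 0}); // alpha row
paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(greenOnly));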

So this will work, but the performance will be low (less than 8 fps), because BitmapFactory.decodeByteArray is slow. You can try to convert the data from YUV to RGB with native code and the Android NDK, but that's quite complicated.
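If you want to skip the JPEG round trip without going native, the commonly circulated plain-Java NV21 decoder below is one option (a sketch of the standard integer-math conversion; it is still too slow for high frame rates, which is why the NDK route is usually suggested):

// NV21 (YUV420SP) to ARGB_8888 conversion in plain Java.
static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            r = Math.max(0, Math.min(r, 262143));
            g = Math.max(0, Math.min(g, 262143));
            b = Math.max(0, Math.min(b, 262143));
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

The resulting int[] can then be turned into a bitmap with Bitmap.createBitmap(rgb, prevX, prevY, Bitmap.Config.ARGB_8888) instead of going through BitmapFactory.decodeByteArray.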

The other option is to use OpenGL ES. You need a GLSurfaceView, where you bind the camera frame as a texture (in the GLSurfaceView implement Camera.PreviewCallback, so you use onPreviewFrame the same way as with a regular surface). But there is the same problem: you need to convert the YUV data. There is one trick, though: you can display only the luminance data from the preview (a greyscale image) quite fast, because the first width*height bytes of the NV21 array are pure luminance data, without colors. So in onPreviewFrame you use arraycopy to copy that luminance part of the array, and then you bind the texture like this:

gl.glGenTextures(1, cameraTexture, 0);
int tex = cameraTexture[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE,
        this.prevX, this.prevY, 0, GL10.GL_LUMINANCE,
        GL10.GL_UNSIGNED_BYTE, ByteBuffer.wrap(this.cameraFrame)); // cameraFrame is the luminance part of the byte[] from onPreviewFrame

gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
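The copy in onPreviewFrame itself could look roughly like this (a sketch; prevX and prevY are assumed to be the preview dimensions, and the GLSurfaceView is assumed to use RENDERMODE_WHEN_DIRTY):

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // only the first prevX * prevY bytes of an NV21 frame are luminance data
    if (cameraFrame == null)
        cameraFrame = new byte[prevX * prevY];
    System.arraycopy(data, 0, cameraFrame, 0, prevX * prevY);
    camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    requestRender();                // trigger a redraw of the GLSurfaceView
}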

You can get about 16-18 fps this way, and you can use OpenGL to apply some filters. I can send you some more code for this if you want, but it's too long to put here...

For some more info, you can see my similar question, but there is not a good solution there either...

Apply custom camera filters on live camera preview - Swift

Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.

Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:

  1. Use AVCaptureVideoDataOutput to get live video frames.
  2. Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
  3. Draw the textures using Metal (or OpenGL or whatever), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass.

BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.

How to apply a filter on MediaPlayer using a SurfaceView?

If you just need a player, this one uses the NDK + OpenGL (when you compile the player, set rgb565 = 0; this enables the alpha channel):

http://code.google.com/p/dolphin-player/

Now, the solution to my problem is below:

http://code.google.com/p/javacv/

import java.io.File;

import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.FFmpegFrameGrabber;

public class TestCV {

    public static void main(String[] args) throws Exception {

        File f = new File("input.mp4");
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(f);
        grabber.start(); // the grabber has to be started before grabbing frames

        final CanvasFrame canvas = new CanvasFrame("My Image");

        canvas.showImage(grabber.grab());
    }
}

After that, just lock the canvas and work with the bytes, like we can do with the camera preview canvas.
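A minimal sketch of that lock/draw/unlock cycle (assuming holder is the SurfaceView's SurfaceHolder and bmp is a frame that has already been converted to a Bitmap):

Canvas c = holder.lockCanvas();
if (c != null) {
    try {
        c.drawBitmap(bmp, 0, 0, null); // work with the pixels here
    } finally {
        holder.unlockCanvasAndPost(c); // always post the canvas back
    }
}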

Thanks to all who helped.

Camera preview on multiple views - initialize/release handling

Here's how I solved it.

There are some problems you need to solve. First, the matrix/color conversion of the OpenCV matrix is a very expensive operation on Android, so it is recommended to do it only once. The frame class below also makes sure that the original matrix is converted only once, and only if the UI actually requests the bitmap.

import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.core.Mat;

import android.graphics.Bitmap;

public interface CameraFrame extends CvCameraViewFrame {
    Bitmap toBitmap();

    @Override
    Mat rgba();

    @Override
    Mat gray();
}

The implementation of this interface looks like this:

private class CameraAccessFrame implements CameraFrame {
    private Mat mYuvFrameData;
    private Mat mRgba;
    private int mWidth;
    private int mHeight;
    private Bitmap mCachedBitmap;
    private boolean mRgbaConverted;
    private boolean mBitmapConverted;

    @Override
    public Mat gray() {
        return mYuvFrameData.submat(0, mHeight, 0, mWidth);
    }

    @Override
    public Mat rgba() {
        if (!mRgbaConverted) {
            Imgproc.cvtColor(mYuvFrameData, mRgba,
                    Imgproc.COLOR_YUV2BGR_NV12, 4);
            mRgbaConverted = true;
        }
        return mRgba;
    }

    @Override
    public Bitmap toBitmap() {
        if (mBitmapConverted)
            return mCachedBitmap;

        Mat rgba = this.rgba();
        Utils.matToBitmap(rgba, mCachedBitmap);
        mBitmapConverted = true;
        return mCachedBitmap;
    }

    public CameraAccessFrame(Mat Yuv420sp, int width, int height) {
        super();
        mWidth = width;
        mHeight = height;
        mYuvFrameData = Yuv420sp;
        mRgba = new Mat();

        this.mCachedBitmap = Bitmap.createBitmap(width, height,
                Bitmap.Config.ARGB_8888);
    }

    public void release() {
        mRgba.release();
        mCachedBitmap.recycle();
    }

    public void invalidate() {
        mRgbaConverted = false;
        mBitmapConverted = false;
    }
}

The matrix conversion is done by the OpenCV utility org.opencv.android.Utils.matToBitmap(converted, bmp). Since we want to receive the camera image only once but display it on multiple views, it is a 1:n relationship. The 1 is the component that receives the image (explained later), while the n is any UI view that wants to use the image. For those UI callbacks, I created this interface:

public interface CameraFrameCallback {
    void onCameraInitialized(int frameWidth, int frameHeight);

    void onFrameReceived(CameraFrame frame);

    void onCameraReleased();
}

It is implemented by CameraCanvasView, which is an Android view. SurfaceView and SurfaceHolder can be found in android.view. The real UI of the view is the Surface. So, when the surface is created (shown on the display), the view registers itself with the CameraAccess (the 1 from the 1:n relationship, shown later). Whenever a new camera image is received by the CameraAccess, it invokes onFrameReceived on all registered callbacks. Since the view is such a callback, it reads the bitmap from the CameraFrame and displays it.

public class CameraCanvasView extends SurfaceView implements CameraFrameCallback, SurfaceHolder.Callback {

    Context context;
    CameraAccess mCamera;
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    Bitmap mBackground;               // the latest camera frame as a bitmap
    Rect mBackgroundSrc = new Rect(); // source rect within the bitmap
    Rect boundingBox = new Rect();    // destination rect on the canvas

    public CameraCanvasView(Context context) {
        super(context);

        this.context = context;

        SurfaceHolder sh = this.getHolder();
        sh.addCallback(this);

        setFocusable(true);

        this.mCamera = CameraAccess.getInstance(context,
                CameraInfo.CAMERA_FACING_BACK);
    }

    @Override
    public void onCameraInitialized(int frameWidth, int frameHeight) {
    }

    @Override
    public void onFrameReceived(CameraFrame frame) {
        this.setBackgroundImage(frame.toBitmap());
    }

    @Override
    public void onCameraReleased() {
        setBackgroundImage(null);
    }

    @Override
    public void surfaceCreated(SurfaceHolder arg0) {
        this.setWillNotDraw(false);
        this.mCamera.addCallback(this);
    }

    @Override
    public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder arg0) {
        this.mCamera.removeCallback(this);
    }

    public void setBackgroundImage(Bitmap image) {
        this.mBackground = image;

        if (image != null)
            this.mBackgroundSrc.set(0, 0, image.getWidth(), image.getHeight());
        else
            this.mBackgroundSrc.setEmpty();

        invalidate();
    }

    @Override
    public void onDraw(Canvas canvas) {
        canvas.drawColor(Color.BLACK);

        if (mBackground != null && !mBackground.isRecycled()) {
            boundingBox.set(0, 0, getWidth(), getHeight()); // scale the frame to fill the view
            canvas.drawBitmap(mBackground, mBackgroundSrc, boundingBox, paint);
        }
    }
}

Finally, you need the camera handler, the 1 in our 1:n relationship. This is the CameraAccess. It handles the camera initialization and registers itself as a callback that is notified by Android whenever a new frame is received; this is the android.hardware.Camera.PreviewCallback, which can be found in Android itself.

The CameraAccess also holds one single CameraAccessFrame with one single OpenCV matrix. Whenever a new image is received, it is written into the existing OpenCV matrix, overwriting the matrix values and invalidating the CameraAccessFrame so that any UI element bound to it gets notified. Overwriting an existing matrix saves you the memory operations of freeing and re-allocating memory, so do not destroy and recreate the matrix; overwrite it.

It is important to mention that the CameraAccess is a logical, invisible component and not a visual Android view. Usually camera images are shown directly on UI elements, and Android needs a surface/view to render on. Since my component is invisible, I need to create a SurfaceTexture manually; the camera will automatically render into that texture.

public class CameraAccess implements Camera.PreviewCallback,
        LoaderCallbackInterface {

    // see http://developer.android.com/guide/topics/media/camera.html for more details

    final static String TAG = "CameraAccess";
    Context context;
    int cameraIndex; // example: CameraInfo.CAMERA_FACING_FRONT or CameraInfo.CAMERA_FACING_BACK
    Camera mCamera;
    int mFrameWidth;
    int mFrameHeight;
    Mat mFrame;
    CameraAccessFrame mCameraFrame;
    List<CameraFrameCallback> mCallbacks = new ArrayList<CameraFrameCallback>();
    boolean mOpenCVloaded;
    byte mBuffer[]; // needed to avoid OpenCV error: "queueBuffer: BufferQueue has been abandoned!"

    private static CameraAccess mInstance;

    public static CameraAccess getInstance(Context context, int cameraIndex) {
        if (mInstance != null)
            return mInstance;

        mInstance = new CameraAccess(context, cameraIndex);
        return mInstance;
    }

    private CameraAccess(Context context, int cameraIndex) {
        this.context = context;
        this.cameraIndex = cameraIndex;

        if (!OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_7, context,
                this)) {
            Log.e(TAG, "Cannot connect to OpenCVManager");
        } else
            Log.d(TAG, "OpenCVManager successfully connected");
    }

    private boolean checkCameraHardware() {
        if (context.getPackageManager().hasSystemFeature(
                PackageManager.FEATURE_CAMERA)) {
            // this device has a camera
            return true;
        } else {
            // no camera on this device
            return false;
        }
    }

    public static Camera getCameraInstance(int cameraIndex) {
        Camera c = null;
        try {
            c = Camera.open(cameraIndex); // attempt to get a Camera instance

            Log.d(TAG, "Camera opened. index: " + cameraIndex);
        } catch (Exception e) {
            // Camera is not available (in use or does not exist)
        }
        return c; // returns null if camera is unavailable
    }

    public void addCallback(CameraFrameCallback callback) {
        // we don't care if the callback is already in the list
        this.mCallbacks.add(callback);

        if (mCamera != null)
            callback.onCameraInitialized(mFrameWidth, mFrameHeight);
        else if (mOpenCVloaded)
            connectCamera();
    }

    public void removeCallback(CameraFrameCallback callback) {
        boolean removed = false;
        do {
            // someone might have added the callback multiple times
            removed = this.mCallbacks.remove(callback);

            if (removed)
                callback.onCameraReleased();

        } while (removed);

        if (mCallbacks.size() == 0)
            releaseCamera();
    }

    @Override
    public void onPreviewFrame(byte[] frame, Camera arg1) {
        mFrame.put(0, 0, frame);
        mCameraFrame.invalidate();

        for (CameraFrameCallback callback : mCallbacks)
            callback.onFrameReceived(mCameraFrame);

        if (mCamera != null)
            mCamera.addCallbackBuffer(mBuffer);
    }

    private void connectCamera() {
        synchronized (this) {
            if (true) { // checkCameraHardware()
                mCamera = getCameraInstance(cameraIndex);

                Parameters params = mCamera.getParameters();
                List<Camera.Size> sizes = params.getSupportedPreviewSizes();

                // Camera.Size previewSize = sizes.get(0);
                Collections.sort(sizes, new PreviewSizeComparer());
                Camera.Size previewSize = null;
                for (Camera.Size s : sizes) {
                    if (s == null)
                        break;

                    previewSize = s;
                }

                // List<Integer> formats = params.getSupportedPictureFormats();
                // params.setPreviewFormat(ImageFormat.NV21);

                params.setPreviewSize(previewSize.width, previewSize.height);
                mCamera.setParameters(params);

                params = mCamera.getParameters();

                mFrameWidth = params.getPreviewSize().width;
                mFrameHeight = params.getPreviewSize().height;

                int size = mFrameWidth * mFrameHeight;
                size = size * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
                mBuffer = new byte[size];

                mFrame = new Mat(mFrameHeight + (mFrameHeight / 2),
                        mFrameWidth, CvType.CV_8UC1);
                mCameraFrame = new CameraAccessFrame(mFrame, mFrameWidth,
                        mFrameHeight);

                SurfaceTexture texture = new SurfaceTexture(0);

                try {
                    mCamera.setPreviewTexture(texture);
                    mCamera.addCallbackBuffer(mBuffer);
                    mCamera.setPreviewCallbackWithBuffer(this);
                    mCamera.startPreview();

                    Log.d(TAG, "Camera preview started");
                } catch (Exception e) {
                    Log.d(TAG,
                            "Error starting camera preview: " + e.getMessage());
                }

                for (CameraFrameCallback callback : mCallbacks)
                    callback.onCameraInitialized(mFrameWidth, mFrameHeight);
            }
        }
    }

    private void releaseCamera() {
        synchronized (this) {
            if (mCamera != null) {
                mCamera.stopPreview();
                mCamera.setPreviewCallback(null);

                mCamera.release();

                Log.d(TAG, "Preview stopped and camera released");
            }
            mCamera = null;

            if (mFrame != null) {
                mFrame.release();
            }

            if (mCameraFrame != null) {
                mCameraFrame.release();
            }

            for (CameraFrameCallback callback : mCallbacks)
                callback.onCameraReleased();
        }
    }

    public interface CameraFrameCallback {
        void onCameraInitialized(int frameWidth, int frameHeight);

        void onFrameReceived(CameraFrame frame);

        void onCameraReleased();
    }

    @Override
    public void onManagerConnected(int status) {
        mOpenCVloaded = true;

        if (mCallbacks.size() > 0)
            connectCamera();
    }

    @Override
    public void onPackageInstall(int operation,
            InstallCallbackInterface callback) {
    }

    private class PreviewSizeComparer implements Comparator<Camera.Size> {
        @Override
        public int compare(Camera.Size arg0, Camera.Size arg1) {
            if (arg0 != null && arg1 == null)
                return -1;
            if (arg0 == null && arg1 != null)
                return 1;

            if (arg0.width < arg1.width)
                return -1;
            else if (arg0.width > arg1.width)
                return 1;
            else
                return 0;
        }
    }
}

Most of the code in CameraAccess is about the initialization and handling of the Android camera. I will not explain any further how a camera is initialized; there is plenty of documentation out there.
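As a rough usage sketch (the Activity and layout below are my own illustration, not part of the original code), two views sharing the same camera could be wired up like this:

public class PreviewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Both views register with the shared CameraAccess singleton when their
        // surfaces are created and unregister when they are destroyed, so the
        // camera is opened once and released when no view needs it anymore.
        LinearLayout layout = new LinearLayout(this);
        layout.setOrientation(LinearLayout.VERTICAL);
        layout.addView(new CameraCanvasView(this), new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f));
        layout.addView(new CameraCanvasView(this), new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f));
        setContentView(layout);
    }
}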

Update, 12 May 2020: On request, I added more details and explanation to the quite long code. In case of any further questions about the other classes, let me know.


