Camera Preview Image Data Processing with Android L and Camera2 API

Since the Camera2 API is very different from the older Camera API, it helps to go through the documentation first.

A good starting point is the camera2basic sample. It demonstrates how to use the Camera2 API, configure an ImageReader to capture JPEG images, and register an ImageReader.OnImageAvailableListener to receive them.

To receive preview frames, add your ImageReader's Surface as a target on the CaptureRequest.Builder you pass to setRepeatingRequest.

Also, set the ImageReader's format to YUV_420_888, which will give you 30fps at 8MP (the documentation guarantees 30fps at 8MP for the Nexus 5).
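
A minimal sketch of that setup might look like the following (assuming an already-open CameraDevice named camera, a background Handler named handler, and chosen previewWidth/previewHeight; error handling is omitted):

ImageReader reader = ImageReader.newInstance(
        previewWidth, previewHeight, ImageFormat.YUV_420_888, /*maxImages*/ 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image != null) {
        // ... process the three YUV planes here ...
        image.close(); // always close the Image, or the reader will stall
    }
}, handler);

camera.createCaptureSession(Arrays.asList(reader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                try {
                    CaptureRequest.Builder builder =
                            camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    builder.addTarget(reader.getSurface());
                    session.setRepeatingRequest(builder.build(), null, handler);
                } catch (CameraAccessException e) {
                    // handle the error
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) { }
        }, handler);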

Android Camera2 API Showing Processed Preview Image

Edit after clarification of the question; original answer at bottom

It depends on where you're doing your processing.

If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
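
A rough sketch of that output path (assuming an existing RenderScript context rs and a Surface named surface taken from your view; this is illustrative, not the HDR Viewfinder's exact code):

Type.Builder tb = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height);
Allocation out = Allocation.createTyped(rs, tb.create(),
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
out.setSurface(surface);
// ... run your processing script, writing its result into `out` ...
out.ioSend(); // pushes the current buffer to the connected Surface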

If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
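
In the Java EGL14 bindings that might look roughly like this (assuming an initialized EGLDisplay display, a chosen EGLConfig config, and a current EGLContext context):

int[] surfaceAttribs = { EGL14.EGL_NONE };
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        display, config, surface, surfaceAttribs, 0);
EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
// ... draw the processed frame with GLES ...
EGL14.eglSwapBuffers(display, eglSurface); // sends the buffer to the screen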

If you're doing native processing, you can use the NDK's ANativeWindow methods: pass a Surface down from Java, convert it with ANativeWindow_fromSurface, and write into it.

If you're doing Java-level processing, that's really slow and you probably don't want to. But if you do, you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
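
A minimal ImageWriter sketch (API 23+; assumes surface is the display Surface you want to feed with Java-side processed pixels):

ImageWriter writer = ImageWriter.newInstance(surface, /*maxImages*/ 2);
Image image = writer.dequeueInputImage();
// ... fill image.getPlanes()[...] with your processed pixels ...
writer.queueInputImage(image); // hands the buffer to the Surface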

Or as you say, draw to an ImageView every frame, but that'll be slow.


Original answer:

If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
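
For example (a sketch; assumes image came from a JPEG-format ImageReader):

ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
image.close();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);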

If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately, there's not yet a convenient API for this.
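
A rough per-pixel sketch of such a conversion (slow, Java-level, and meant only to illustrate the plane and stride layout, not to be production code):

static Bitmap yuvToBitmap(Image image) {
    int w = image.getWidth(), h = image.getHeight();
    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer yBuf = yPlane.getBuffer();
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int[] argb = new int[w * h];
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            int y = yBuf.get(row * yPlane.getRowStride()
                    + col * yPlane.getPixelStride()) & 0xFF;
            // The chroma planes are subsampled 2x2.
            int u = (uBuf.get((row / 2) * uPlane.getRowStride()
                    + (col / 2) * uPlane.getPixelStride()) & 0xFF) - 128;
            int v = (vBuf.get((row / 2) * vPlane.getRowStride()
                    + (col / 2) * vPlane.getPixelStride()) & 0xFF) - 128;
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u + 0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[row * w + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return Bitmap.createBitmap(argb, w, h, Bitmap.Config.ARGB_8888);
}

static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }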

If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
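
Saving a DNG is straightforward with DngCreator (a sketch; characteristics and result are the CameraCharacteristics and TotalCaptureResult for the capture that produced image, and file is your output location):

try (DngCreator dng = new DngCreator(characteristics, result);
     FileOutputStream out = new FileOutputStream(file)) {
    dng.writeImage(out, image);
}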

Real-time image processing and display using Android Camera2 API and ANativeWindow

As mentioned by yakobom, you're trying to copy a YUV_420_888 image directly into an RGBA_8888 destination (that's the default, if you haven't changed it). That won't work with just a memcpy.

You need to actually convert the data, and you need to ensure you don't copy too much: the sample code copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly), so the copy was running far off the end of the buffer.

You also have to account for the row stride reported at the Java level to index correctly into the buffer; Microsoft's YUV format documentation has a useful diagram of stride versus width.

If you only care about the luminance (so grayscale output is enough), just duplicate the luminance channel into each of the R, G, and B channels. The pseudocode is roughly:

uint8_t *outPtr = (uint8_t *) buffer.bits; // ANativeWindow_Buffer output pixels
for (size_t y = 0; y < height; y++) {
    uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
    // The destination has its own stride, reported in pixels for RGBA_8888.
    uint8_t *dstPtr = outPtr + y * buffer.stride * 4;
    for (size_t x = 0; x < width; x++) {
        *(dstPtr++) = *rowPtr; // R
        *(dstPtr++) = *rowPtr; // G
        *(dstPtr++) = *rowPtr; // B
        *(dstPtr++) = 255;     // A: fully opaque for RGBA_8888
        ++rowPtr;
    }
}

You'll need to read srcLumaStride from the Image object (the row stride of its first Plane) and pass it down through JNI as well.
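
On the Java side, that could look like this (nativeProcess is a hypothetical JNI entry point, and the names are illustrative):

Image.Plane luma = image.getPlanes()[0];
// nativeProcess is a hypothetical native method taking the luma buffer,
// its row stride, the frame size, and the output Surface.
nativeProcess(luma.getBuffer(), luma.getRowStride(),
        image.getWidth(), image.getHeight(), outputSurface);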

Previewing processed image with Camera2

An ImageReader gives you a set of ByteBuffers in each Image you acquire from it; you can operate on those either in Java or in native code.

The simplest case is capturing a JPEG and just saving it to disk, but you can also request YUV_420_888 data and then process it however you want.
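
For the JPEG case, saving to disk is just copying plane 0's bytes out (a sketch; file is wherever you want the JPEG written):

try (FileOutputStream out = new FileOutputStream(file)) {
    ByteBuffer buf = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buf.remaining()];
    buf.get(bytes);
    out.write(bytes);
} finally {
    image.close();
}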

Edit in response to comment:

If you've gotten a SurfaceTexture from a TextureView, and passed it to the camera, then you can't intercept the buffers in between.
If you want to modify them, then you need to create an intermediate target that the camera sends buffers to, edit them, and then send them for display to the TextureView.

There are several options for that. Possibly the most efficient is using EGL in the middle:

Camera -> SurfaceTexture -> EGL -> SurfaceTexture -> TextureView

This requires a lot of boilerplate code to create the EGL context, but works well if your edits can be written as an EGL shader.
If I recall correctly, you can render to the SurfaceTexture given by the TextureView by creating an EGLImage from it; then you create another SurfaceTexture to pass to the camera, and use that one in your EGL shader as the texture to render from.
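
The camera-facing half of that pipeline looks roughly like this (a sketch; assumes a GL thread with a current EGL context, and chosen previewWidth/previewHeight):

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture camTexture = new SurfaceTexture(tex[0]);
camTexture.setDefaultBufferSize(previewWidth, previewHeight);
Surface camSurface = new Surface(camTexture); // add this as a camera target
camTexture.setOnFrameAvailableListener(st -> {
    // On the GL thread: call st.updateTexImage(), sample the OES texture
    // in your shader, and render into the EGLSurface that wraps the
    // TextureView's SurfaceTexture.
});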

I'd recommend finding EGL tutorials since this requires quite a bit of code.

Android SurfaceView or Camera2 API for camera preview

I simply followed the Android docs and used the older Camera API. The Camera2 API is fairly new (API 21+), and I want to make sure the app works for all users.


