Replacing glReadPixels() with EGL_KHR_image_base for Faster Pixel Copy

Image processing on Android using OpenGL: glReadPixels() is slow, and I don't understand how to get EGL_KHR_image_base included and working in my project.

Copying pixels with glReadPixels() can be slow, though performance varies significantly with the device and pixel format. Some tests using glReadPixels() to save frames from video data (which also starts out as YUV) found that 96.5% of the time was spent in PNG compression and file I/O on a Nexus 5.

In some cases, the time required goes up substantially if the source and destination formats don't match. On one particular device I found that copying to RGBA, instead of RGB, reduced the time required.
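As a rough illustration, a readback that requests RGBA might look like the sketch below. This is not a definitive implementation; it assumes a GLES2 context is current and that the surface's native format is RGBA_8888 (the `width`/`height` parameters and the `readRgba` name are illustrative):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: read the current framebuffer back as RGBA. Requesting GL_RGB
// instead can trigger a slow per-pixel conversion in the driver on some
// devices, so matching the surface's native format is usually the fast path.
ByteBuffer readRgba(int width, int height) {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    return buf;
}
```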

The EGL calls can work but require non-public API calls. And it's a bit tricky; see e.g. this answer. (I think the comment in the edit would allow it to work, but I never got back around to trying it, and I'm not in a position to do so now.)

What is wrong when I use EGLImage to replace glReadPixels() in an NDK program?

You need a glFinish() call to tell the driver to do the actual drawing. The glReadPixels() call is causing things to work because it forces the rendering to happen -- you've told the driver that you want to read the pixels back, so it pauses until rendering is complete.
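The fix can be as small as one call. A sketch, assuming a GLES2 context is current (`vertexCount` is illustrative):

```java
import android.opengl.GLES20;

// Sketch: without the implicit synchronization that glReadPixels() provides,
// nothing forces the driver to finish rendering before the CPU touches the
// EGLImage-backed buffer. An explicit glFinish() supplies that barrier.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
GLES20.glFinish();  // blocks until all submitted GL commands complete
// ...now it is safe to read the EGLImage-backed buffer from the CPU...
```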

Is it possible to get BGR from MediaCodec?

The frames come out of MediaCodec in whatever format the codec likes to work in. Most digital cameras and video codecs work with YUV format frames. The camera is required to use one of two specific formats when capturing still images, but MediaCodec is more of a free-for-all. If you want the data to be in a specific format, something has to do the conversion.

On Android you can get the hardware to do it for you by involving the GPU, which is required to accept whatever format MediaCodec decides to generate on that device. You latch each frame as an "external" texture by feeding it into a SurfaceTexture, and then tell GLES to render it in RGB on a pbuffer, which you can then access in different ways. For an example that uses glReadPixels(), see ExtractMpegFramesTest.
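The latch-and-read loop described above can be sketched as follows. This assumes an EGL pbuffer surface and context are already current, `st` wraps the SurfaceTexture whose Surface was fed to MediaCodec, and `drawExternalTexture()` is a hypothetical helper that renders the external texture full-screen (Grafika's texture renderer plays this role in ExtractMpegFramesTest):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: latch a decoded frame as an external texture, render it into the
// current pbuffer, and read the result back as RGBA.
ByteBuffer readFrame(SurfaceTexture st, int width, int height) {
    st.updateTexImage();       // latch the newest frame onto the external texture
    drawExternalTexture(st);   // hypothetical: render it into the pbuffer
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    return buf;
}
```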

There are alternatives to glReadPixels() that rely on private native methods to access the pbuffer (e.g. this and this), but using non-public APIs is unwise unless you have complete control of the device.

There may be some newer approaches for accessing the pbuffer that I haven't used, e.g. using native hardware buffers seems potentially useful (or maybe not?).
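One hardware-buffer-based approach (untested by me, offered only as a sketch for API 26+): back an ImageReader with GPU-writable hardware buffers, render into its Surface via an EGL window surface, and pull frames back without glReadPixels(). The `width`/`height` values are illustrative:

```java
import android.graphics.PixelFormat;
import android.hardware.HardwareBuffer;
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

// Sketch: an ImageReader whose buffers the GPU can render into and the
// CPU can read back. Render GLES output into reader.getSurface().
ImageReader reader = ImageReader.newInstance(width, height,
        PixelFormat.RGBA_8888, 2,
        HardwareBuffer.USAGE_GPU_COLOR_OUTPUT | HardwareBuffer.USAGE_CPU_READ_OFTEN);
// ...render a frame into reader.getSurface()...
Image image = reader.acquireLatestImage();
ByteBuffer pixels = image.getPlanes()[0].getBuffer();  // RGBA data
// note: the plane's rowStride may exceed width*4; copy row by row if so
image.close();
```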

Can you get screenshots from SurfaceComposerClient?

Yes, assuming you're running as shell or root, and you don't mind using non-public native APIs (i.e. you don't care if your app breaks every time a new version of the OS rolls out).

The canonical example is screenrecord, introduced in Android 4.4. It creates a virtual display and directs the output to a Surface. For normal operation a MediaCodec input surface receives the output. For the "bugreport" mode introduced in screenrecord v1.1, the output goes to a GLConsumer (roughly equivalent to a SurfaceTexture), which is rendered to a Surface with overlaid text.

Android: Is it possible to create a SurfaceTexture without a SurfaceView?

You can see a number of examples that manipulate Camera output, SurfaceTexture, and EGL in Grafika. The "Continuous capture" activity is one, but it uses the technique you mentioned: to avoid having to create an EGLSurface, it just borrows the one from the nearby SurfaceView.

You do need to have an EGLSurface, but it doesn't need to be a window surface. You can create a 1x1 pbuffer surface and just use that. This is done with the eglCreatePbufferSurface() call; see the EglCore class in Grafika for an example.
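A minimal sketch of that call, modeled on Grafika's EglCore; it assumes `eglDisplay` and `eglConfig` were already set up by the usual eglInitialize() / eglChooseConfig() sequence:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// Sketch: create a 1x1 offscreen pbuffer surface to make a context current
// when no window surface is needed.
EGLSurface createOffscreenSurface(EGLDisplay eglDisplay, EGLConfig eglConfig) {
    int[] attribs = {
            EGL14.EGL_WIDTH, 1,
            EGL14.EGL_HEIGHT, 1,
            EGL14.EGL_NONE
    };
    EGLSurface surface = EGL14.eglCreatePbufferSurface(eglDisplay, eglConfig,
            attribs, 0);
    if (surface == EGL14.EGL_NO_SURFACE) {
        throw new RuntimeException("eglCreatePbufferSurface failed");
    }
    return surface;
}
```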

These examples are in Java, but the Java implementation just wraps the native EGL/GLES calls.

android_createDisplaySurface() is an internal call that, as you discovered, doesn't work on newer devices. Search the NDK for ANativeWindow instead.

Update: for anyone who gets here by searching, android_createDisplaySurface() relied on an internal class called FramebufferNativeWindow, which was marked obsolete in Android 5.0 in this change. The internal OpenGL ES test code that used it was updated with a SurfaceFlinger-based replacement in this change. The original version of the test code required shutting down the Android app framework so it could grab the frame buffer; the newer version just asks SurfaceFlinger for a window that covers the screen and is composited on top of everything else.


