SurfaceTexture's onFrameAvailable() method always called too late

The way SurfaceTexture works makes this a bit tricky to get right.

The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):

if (this thread has a looper) {
    handle events on this thread
} else if (there's a "main" looper) {
    handle events on the main UI thread
} else {
    no events for you
}

The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
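The synchronization described above can be sketched in plain Java (a hypothetical FrameWaiter class, modeled loosely on the wait/notify scheme in ExtractMpegFramesTest; this is not actual Android framework code):

```java
// Hypothetical sketch of the awaitNewImage() synchronization. The waiting
// thread blocks in wait(), so it cannot service a Looper message queue at
// the same time; the frame-available callback must therefore fire on a
// *different* thread, which calls frameAvailable() to wake the waiter.
public class FrameWaiter {
    private final Object frameSyncObject = new Object();
    private boolean frameAvailable = false;

    /** Called from the frame-available callback thread. */
    public void frameAvailable() {
        synchronized (frameSyncObject) {
            frameAvailable = true;
            frameSyncObject.notifyAll();
        }
    }

    /** Blocks until a frame arrives or the timeout expires; false on timeout. */
    public boolean awaitNewImage(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (frameSyncObject) {
            while (!frameAvailable) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false;  // timed out -- the event never arrived
                }
                frameSyncObject.wait(remaining);
            }
            frameAvailable = false;  // consume the frame
            return true;
        }
    }
}
```

If frameAvailable() can only ever be invoked on the same thread that is blocked inside awaitNewImage(), the wait always times out, which is exactly the failure mode described above.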

So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
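The wrapper idea can be sketched in plain Java (a simplified, hypothetical take on what ExtractMpegFramesWrapper does; class and method names here are illustrative, not the actual test code):

```java
// Runs a task on a freshly created thread. A plain new Thread never has a
// Looper, so Looper-delivered callbacks (like frame-available events) are
// forced onto some other thread -- on Android, the main UI thread.
// Exceptions from the worker are captured and re-thrown to the caller.
public class NoLooperRunner {
    public static void runOnNewThread(Runnable task) throws Throwable {
        final Throwable[] thrown = new Throwable[1];
        Thread worker = new Thread(() -> {
            try {
                task.run();
            } catch (Throwable t) {
                thrown[0] = t;   // capture for the calling thread
            }
        }, "no-looper-worker");
        worker.start();
        worker.join();
        if (thrown[0] != null) {
            throw thrown[0];     // propagate failures, as a test wrapper must
        }
    }
}
```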

Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.

If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.

If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)

Modify ExtractMpegFramesTest example to render decoded output on screen

If I correctly understand what you're trying to do, you'd want to decode each frame to a SurfaceTexture, which gives you a GLES "external" texture with the data in it. You could then render that to the TextureView, calling glReadPixels() just before eglSwapBuffers().

You can't read data back once it has been sent to a screen Surface, as the consumer of the data lives in a different process. The efficient video path just passes the "external" texture to the Surface, but that won't work here. Ideally you would clone the external texture reference, forwarding one copy to the display Surface and using the other for rendering to an off-screen buffer that you can pull pixels from. (The Camera2 API can do multi-output tricks like this, but I don't know if it's exposed in MediaCodec. I haven't looked in a while though.)

Why does "Surface frame wait timed out" occur when encoding with ProGuard turned on?

After adding -dontoptimize to the ProGuard file, everything works again. This is obviously not an ideal solution, but it seems to be the only workable one, since the likely culprit is ProGuard optimizing away some of the loops in the encoding/decoding threads. In my particular case I could avoid -dontoptimize and instead use the following line, which doesn't disable all optimizations:

-optimizations !code/removal/advanced,!method/inlining/short,!method/inlining/unique,!method/removal/*,!method/marking/*

Problems with MediaExtractor

Thanks to fadden's comment: I have to keep feeding frames to the decoder, since only the I-frame holds a full picture while the P and B frames hold differences (this is how compression is achieved). So I need to start with an I-frame (which here is the same as a sync frame) and keep feeding the subsequent frames to the decoder to receive the full image.
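The reasoning can be illustrated with a toy model (pure Java, not MediaCodec or MediaExtractor code; the SyncFrameFeed class and framesToFeed helper are hypothetical): to reconstruct frame N, every frame from the nearest preceding sync (I) frame through N must be fed to the decoder.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of why a decoder must be fed starting from a sync frame:
// P/B frames only store differences, so decoding frame N requires every
// frame from the nearest preceding I-frame up to and including N.
public class SyncFrameFeed {
    public static List<Integer> framesToFeed(boolean[] isSyncFrame, int target) {
        int start = target;
        while (start > 0 && !isSyncFrame[start]) {
            start--;  // walk back to the nearest I-frame (sync frame)
        }
        List<Integer> feed = new ArrayList<>();
        for (int i = start; i <= target; i++) {
            feed.add(i);  // feed everything from the I-frame through the target
        }
        return feed;
    }
}
```

Seeking straight to the target and feeding only that one sample is exactly what produces a broken or partial image: the decoder has no I-frame to apply the differences to.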


