How to Use AVSampleBufferDisplayLayer in iOS 8 for RTP H264 Streams with GStreamer


Got it working now. The length field of each NALU does not include the 4-byte length header itself, so I had to subtract 4 from my info.size before using it for my sourceBytes.
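Presumably info.size here is GStreamer's GstMapInfo.size and sourceBytes is the parameter of CMBlockBufferReplaceDataBytes. As a minimal sketch of that length math (the helper name and Swift wrapper are mine; the answer shows no code): wrap one NALU payload in AVCC format, where the 4-byte big-endian prefix carries the payload size only, i.e. the total size minus the header's own 4 bytes.

```swift
import CoreMedia
import Foundation

// Sketch (hypothetical helper, not the poster's code): wrap one H.264
// NALU payload in AVCC format for CoreMedia. The 4-byte big-endian
// length prefix holds the payload size only -- its own 4 bytes are
// excluded, which is the "subtract 4" fix described above.
func makeAVCCBlockBuffer(naluPayload: Data) -> CMBlockBuffer? {
    var avcc = Data(capacity: naluPayload.count + 4)
    var length = UInt32(naluPayload.count).bigEndian   // NOT count + 4
    withUnsafeBytes(of: &length) { avcc.append(contentsOf: $0) }
    avcc.append(naluPayload)

    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil,        // let CoreMedia allocate the backing store
            blockLength: avcc.count,
            blockAllocator: nil,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: avcc.count,
            flags: kCMBlockBufferAssureMemoryNowFlag,
            blockBufferOut: &blockBuffer) == kCMBlockBufferNoErr,
          let buffer = blockBuffer
    else { return nil }

    // Copy the length-prefixed NALU into the freshly allocated block.
    avcc.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        _ = CMBlockBufferReplaceDataBytes(
            with: raw.baseAddress!,
            blockBuffer: buffer,
            offsetIntoDestination: 0,
            dataLength: avcc.count)
    }
    return buffer
}
```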

Display H.264 encoded images via AVSampleBufferDisplayLayer

The AVSampleBufferDisplayLayer class is available as of iOS 8.

Take a look and have fun.
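A minimal sketch of using the layer (class and method names are mine, assuming CMSampleBuffers arrive from a depacketizer such as the GStreamer pipeline discussed above):

```swift
import AVFoundation
import UIKit

// Sketch: host an AVSampleBufferDisplayLayer and feed it H.264 samples.
// The layer decodes and displays the frames itself.
final class StreamView: UIView {
    private let displayLayer = AVSampleBufferDisplayLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        displayLayer.videoGravity = .resizeAspect
        layer.addSublayer(displayLayer)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        displayLayer.frame = bounds
    }

    // Enqueue one frame; drop it if the layer is not ready for more data.
    func display(_ sampleBuffer: CMSampleBuffer) {
        if displayLayer.isReadyForMoreMediaData {
            displayLayer.enqueue(sampleBuffer)
        }
    }
}
```

For a live stream without meaningful presentation timestamps, setting the kCMSampleAttachmentKey_DisplayImmediately attachment on each sample buffer makes the layer render frames as soon as they are decoded.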

AVSampleBufferDisplayLayer renders half of each frame when using high preset

The problem turned out to be that at the AVCaptureSessionPresetHigh preset, a single frame could be split into more than one type 5 (or type 1) NALU, i.e. IDR (or non-IDR) slice. On the receiving side I was combining the SPS, PPS, and type 1 (or type 5) NAL units into a CMSampleBuffer, but ignoring the second part of a split frame, which caused the problem.
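A sketch of the receiving-side setup under my own naming (the answer shows no code): build the format description once from the raw SPS/PPS payloads, then pack all AVCC length-prefixed slices of one frame into a single CMSampleBuffer rather than one buffer per NALU.

```swift
import CoreMedia
import Foundation

// Sketch: format description from raw SPS/PPS payloads (no start codes,
// no length prefixes). Assumes both Data values are non-empty.
func makeFormatDescription(sps: Data, pps: Data) -> CMVideoFormatDescription? {
    var format: CMVideoFormatDescription?
    let status: OSStatus = sps.withUnsafeBytes { (spsRaw: UnsafeRawBufferPointer) in
        pps.withUnsafeBytes { (ppsRaw: UnsafeRawBufferPointer) in
            let pointers: [UnsafePointer<UInt8>] = [
                spsRaw.bindMemory(to: UInt8.self).baseAddress!,
                ppsRaw.bindMemory(to: UInt8.self).baseAddress!,
            ]
            let sizes = [sps.count, pps.count]
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,   // matches the 4-byte AVCC length prefix
                formatDescriptionOut: &format)
        }
    }
    return status == noErr ? format : nil
}

// Sketch: one CMSampleBuffer per frame. frameBlock must already contain
// ALL length-prefixed slice NALUs of the frame, concatenated -- dropping
// the second slice is what produced the half-rendered frames.
func makeSampleBuffer(frameBlock: CMBlockBuffer,
                      format: CMVideoFormatDescription) -> CMSampleBuffer? {
    var sampleBuffer: CMSampleBuffer?
    var sampleSize = CMBlockBufferGetDataLength(frameBlock)
    let status = CMSampleBufferCreate(
        allocator: kCFAllocatorDefault,
        dataBuffer: frameBlock,
        dataReady: true,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: format,
        sampleCount: 1,
        sampleTimingEntryCount: 0,
        sampleTimingArray: nil,
        sampleSizeEntryCount: 1,
        sampleSizeArray: &sampleSize,
        sampleBufferOut: &sampleBuffer)
    return status == noErr ? sampleBuffer : nil
}
```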

In order to recognize whether two successive NAL units belong to the same frame, it is necessary to parse the slice header of the picture NAL units. This requires delving into the specification, but goes more or less like this: the first field of the slice header is first_mb_in_slice, in Exp-Golomb encoding. Next come slice_type and pic_parameter_set_id, also in Exp-Golomb encoding, and finally frame_num, an unsigned integer of (log2_max_frame_num_minus4 + 4) bits (to get the value of log2_max_frame_num_minus4 it is necessary to parse the SPS referenced, via the PPS, by this frame). If two consecutive NAL units have the same frame_num, they are part of the same frame and should be put into the same CMSampleBuffer.
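A rough sketch of that parse with my own helper names, assuming the input starts at the slice header (i.e. after the one-byte NAL unit header) and that emulation-prevention bytes have already been stripped:

```swift
// Bit reader with unsigned Exp-Golomb ("ue(v)") support, just enough
// to reach frame_num in an H.264 slice header.
struct BitReader {
    let data: [UInt8]
    private var bitPos = 0

    init(data: [UInt8]) { self.data = data }

    mutating func readBit() -> UInt32 {
        let bit = (data[bitPos >> 3] >> (7 - (bitPos & 7))) & 1
        bitPos += 1
        return UInt32(bit)
    }

    mutating func readBits(_ count: Int) -> UInt32 {
        var value: UInt32 = 0
        for _ in 0..<count { value = (value << 1) | readBit() }
        return value
    }

    // ue(v): count leading zero bits, then read that many more bits.
    mutating func readUE() -> UInt32 {
        var zeros = 0
        while readBit() == 0 { zeros += 1 }
        return (UInt32(1) << zeros) - 1 + readBits(zeros)
    }
}

// log2MaxFrameNum = log2_max_frame_num_minus4 + 4, taken from the SPS.
func frameNum(sliceHeaderRBSP: [UInt8], log2MaxFrameNum: Int) -> UInt32 {
    var reader = BitReader(data: sliceHeaderRBSP)
    _ = reader.readUE()                       // first_mb_in_slice
    _ = reader.readUE()                       // slice_type
    _ = reader.readUE()                       // pic_parameter_set_id
    return reader.readBits(log2MaxFrameNum)   // frame_num
}
```

If the incoming slice's frame_num matches the previous one's, append its length-prefixed data to the current frame's block instead of starting a new CMSampleBuffer.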


