Extracting H.264 from CMBlockBuffer

I've been struggling with this myself for quite some time now, and have finally figured everything out.

The function CMBlockBufferGetDataPointer gives you access to all the data you need, but there are a few not very obvious things you need to do to convert it to an elementary stream.

AVCC vs Annex B format

The data in the CMBlockBuffer is stored in AVCC format, while elementary streams typically follow the Annex B specification (here is an excellent overview of the two formats). In the AVCC format, the first 4 bytes contain the length of the NAL unit (another word for an H.264 packet). You need to replace this header with the 4 byte start code 0x00 0x00 0x00 0x01, which functions as a separator between NAL units in an Annex B elementary stream (the 3 byte version 0x00 0x00 0x01 works fine too).

Multiple NAL units in a single CMBlockBuffer

The next not very obvious thing is that a single CMBlockBuffer will sometimes contain multiple NAL units. Apple seems to add an additional NAL unit (SEI) containing metadata to every I-frame NAL unit (also called an IDR). This is probably why you are seeing multiple buffers in a single CMBlockBuffer object. However, the CMBlockBufferGetDataPointer function gives you a single pointer with access to all the data. That said, the presence of multiple NAL units complicates the conversion of the AVCC headers: you now have to read the length value contained in each AVCC header to find the next NAL unit, and keep converting headers until you reach the end of the buffer.

Big-Endian vs Little-Endian

The next not very obvious thing is that the AVCC header is stored in big-endian format, while iOS is natively little-endian. So when you read the length value contained in an AVCC header, pass it through the CFSwapInt32BigToHost function first.

SPS and PPS NAL units

The final not very obvious thing is that the data inside the CMBlockBuffer does not contain the parameter NAL units SPS and PPS, which hold configuration parameters for the decoder such as profile, level, resolution and frame rate. These are stored as metadata in the sample buffer's format description and can be accessed via the function CMVideoFormatDescriptionGetH264ParameterSetAtIndex. Note that you have to add the start codes to these NAL units before sending them. The SPS and PPS NAL units do not have to be sent with every new frame; a decoder only needs to read them once, but it is common to resend them periodically, for example before every new I-frame NAL unit.

Code Example

Below is a code example taking all of these things into account.

static void videoFrameFinishedEncoding(void *outputCallbackRefCon,
                                       void *sourceFrameRefCon,
                                       OSStatus status,
                                       VTEncodeInfoFlags infoFlags,
                                       CMSampleBufferRef sampleBuffer) {
    // Check if there were any errors encoding
    if (status != noErr) {
        NSLog(@"Error encoding video, err=%lld", (int64_t)status);
        return;
    }

    // In this example we will use a NSMutableData object to store the
    // elementary stream.
    NSMutableData *elementaryStream = [NSMutableData data];

    // Find out if the sample buffer contains an I-Frame.
    // If so we will write the SPS and PPS NAL units to the elementary stream.
    BOOL isIFrame = NO;
    CFArrayRef attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, 0);
    if (CFArrayGetCount(attachmentsArray)) {
        CFBooleanRef notSync;
        CFDictionaryRef dict = CFArrayGetValueAtIndex(attachmentsArray, 0);
        BOOL keyExists = CFDictionaryGetValueIfPresent(dict,
                                                       kCMSampleAttachmentKey_NotSync,
                                                       (const void **)&notSync);
        // An I-Frame is a sync frame
        isIFrame = !keyExists || !CFBooleanGetValue(notSync);
    }

    // This is the start code that we will write to
    // the elementary stream before every NAL unit
    static const size_t startCodeLength = 4;
    static const uint8_t startCode[] = {0x00, 0x00, 0x00, 0x01};

    // Write the SPS and PPS NAL units to the elementary stream before every I-Frame
    if (isIFrame) {
        CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(sampleBuffer);

        // Find out how many parameter sets there are
        size_t numberOfParameterSets;
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
                                                           0, NULL, NULL,
                                                           &numberOfParameterSets,
                                                           NULL);

        // Write each parameter set to the elementary stream
        for (int i = 0; i < numberOfParameterSets; i++) {
            const uint8_t *parameterSetPointer;
            size_t parameterSetLength;
            CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
                                                               i,
                                                               &parameterSetPointer,
                                                               &parameterSetLength,
                                                               NULL, NULL);

            // Write the parameter set to the elementary stream
            [elementaryStream appendBytes:startCode length:startCodeLength];
            [elementaryStream appendBytes:parameterSetPointer length:parameterSetLength];
        }
    }

    // Get a pointer to the raw AVCC NAL unit data in the sample buffer
    size_t blockBufferLength;
    uint8_t *bufferDataPointer = NULL;
    CMBlockBufferGetDataPointer(CMSampleBufferGetDataBuffer(sampleBuffer),
                                0,
                                NULL,
                                &blockBufferLength,
                                (char **)&bufferDataPointer);

    // Loop through all the NAL units in the block buffer
    // and write them to the elementary stream with
    // start codes instead of AVCC length headers
    size_t bufferOffset = 0;
    static const int AVCCHeaderLength = 4;
    while (bufferOffset < blockBufferLength - AVCCHeaderLength) {
        // Read the NAL unit length
        uint32_t NALUnitLength = 0;
        memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, AVCCHeaderLength);
        // Convert the length value from big-endian to host (little-endian) byte order
        NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
        // Write the start code to the elementary stream
        [elementaryStream appendBytes:startCode length:startCodeLength];
        // Write the NAL unit without the AVCC length header to the elementary stream
        [elementaryStream appendBytes:bufferDataPointer + bufferOffset + AVCCHeaderLength
                               length:NALUnitLength];
        // Move to the next NAL unit in the block buffer
        bufferOffset += AVCCHeaderLength + NALUnitLength;
    }
}

CMBlockBuffer ownership in CMSampleBuffer

CMSampleBuffers and CMBlockBuffers - the objects themselves - follow typical CF Retain/Release semantics. You should hold a retain so long as you need these objects, and assume that the interfaces that accept them do the same. This means you are free to release the CMBlockBuffer as soon as you hand it to a CMSampleBuffer, and you're free to release the CMSampleBuffer after you pass it into the rendering chain.
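For example, if you need to keep a sample buffer alive beyond an encoder callback (say, to send it asynchronously), a retain/release pair is enough. This is a minimal sketch, not from the original answer; mySendQueue is a hypothetical dispatch queue of your own:

// Hypothetical example: keep the sample buffer alive beyond the callback.
CFRetain(sampleBuffer);
dispatch_async(mySendQueue, ^{
    // ... read the data and send it somewhere ...
    CFRelease(sampleBuffer); // balance the retain when finished
});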

The memory pointed to by a CMBlockBuffer created with CMBlockBufferCreateWithMemoryBlock() follows slightly different rules. First, that function does not copy the data pointed to by memoryBlock; it uses that pointer directly. This means the BlockBuffer needs to have some knowledge of how that memory should be managed. This is handled by the fourth and fifth arguments to CMBlockBufferCreateWithMemoryBlock(): if either is non-kCFAllocatorNull/NULL, the BlockBuffer will call the corresponding deallocator when it's done with the memory. This is typically done in the BlockBuffer's Finalize(). If they're both kCFAllocatorNull/NULL (which you have in your code snippet), the BlockBuffer will just drop the pointer on the floor when it's done with the memory.

This means that if you create a CMBlockBuffer using CMBlockBufferCreateWithMemoryBlock() and intend to release your retain on that BlockBuffer after passing it down the rendering pipeline, you should use non-NULL arguments for the allocator/deallocator so the memory can be reclaimed later. The implementation of these allocators/deallocators, of course, depends on the origin of the memoryBlock.
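As an illustration (a sketch under my own assumptions, not from the original answer), one way to hand ownership to the BlockBuffer is to allocate the block with the default CF allocator and pass that same allocator as the blockAllocator argument, so the BlockBuffer frees the memory when it is finalized. Here frameData and frameLength stand in for your own source data:

// Sketch: let the CMBlockBuffer own the memory by giving it a blockAllocator.
size_t length = frameLength;                          // assumed known
void *memoryBlock = CFAllocatorAllocate(kCFAllocatorDefault, length, 0);
memcpy(memoryBlock, frameData, length);

CMBlockBufferRef blockBuffer = NULL;
OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, // structureAllocator
                                                     memoryBlock,         // memoryBlock (not copied)
                                                     length,              // blockLength
                                                     kCFAllocatorDefault, // blockAllocator: frees memoryBlock later
                                                     NULL,                // customBlockSource
                                                     0,                   // offsetToData
                                                     length,              // dataLength
                                                     0,                   // flags
                                                     &blockBuffer);
if (status == kCMBlockBufferNoErr) {
    // ... wrap in a CMSampleBuffer and pass it down the pipeline ...
    CFRelease(blockBuffer); // safe: the BlockBuffer now owns memoryBlock
}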

How can I live stream an encoded CMSampleBuffer and play it on the server side?

Found instructions on how to do it: Extracting H.264 from CMBlockBuffer (see the section above).

You can receive a stream like that with ffmpeg.
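For example (the exact transport is up to you and not part of the original answer), if you push the Annex B stream over TCP you could watch it with ffplay, or remux a captured raw stream into MP4 with ffmpeg, along these lines:

# Listen for a raw Annex B H.264 stream on TCP port 5000 and play it
ffplay -f h264 -i "tcp://0.0.0.0:5000?listen"

# Remux a captured raw elementary stream into an MP4 container
ffmpeg -f h264 -i capture.h264 -c copy capture.mp4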

Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream

First off, it's important to understand that there is no single standard H.264 elementary bitstream format. The specification document does contain an Annex, specifically Annex B, that describes one possible format, but it is not an actual requirement. The standard specifies how video is encoded into individual packets. How these packets are stored and transmitted is left open to the integrator.



1. Annex B

Network Abstraction Layer Units

The packets are called Network Abstraction Layer Units. Often abbreviated NALU (or sometimes just NAL), each packet can be individually parsed and processed. The first byte of each NALU contains the NALU type, specifically bits 3 through 7 (bit 0 is always off, and bits 1-2 indicate whether a NALU is referenced by another NALU).
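As a minimal sketch (not part of the original answer), that header byte can be unpacked with a few bit operations; here nalu is assumed to point at the first byte after the start code or length prefix:

// Unpack the one-byte NALU header.
uint8_t header        = nalu[0];
uint8_t forbidden_bit = (header >> 7) & 0x01;  // bit 0: always off
uint8_t nal_ref_idc   = (header >> 5) & 0x03;  // bits 1-2: referenced by another NALU?
uint8_t nal_unit_type =  header       & 0x1F;  // bits 3-7: NALU type, see the table below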

There are 19 different NALU types defined, separated into two categories, VCL and non-VCL:

  • VCL, or Video Coding Layer packets contain the actual visual information.
  • Non-VCLs contain metadata that may or may not be required to decode the video.

A single NALU, or even a VCL NALU, is NOT the same thing as a frame. A frame can be ‘sliced’ into several NALUs, just like you can slice a pizza. One or more slices are then virtually grouped into an Access Unit (AU) that contains one frame. Slicing does come at a slight quality cost, so it is not often used.

Below is a table of all defined NALUs.

0       Unspecified                                                      non-VCL
1       Coded slice of a non-IDR picture                                 VCL
2       Coded slice data partition A                                     VCL
3       Coded slice data partition B                                     VCL
4       Coded slice data partition C                                     VCL
5       Coded slice of an IDR picture                                    VCL
6       Supplemental enhancement information (SEI)                       non-VCL
7       Sequence parameter set                                           non-VCL
8       Picture parameter set                                            non-VCL
9       Access unit delimiter                                            non-VCL
10      End of sequence                                                  non-VCL
11      End of stream                                                    non-VCL
12      Filler data                                                      non-VCL
13      Sequence parameter set extension                                 non-VCL
14      Prefix NAL unit                                                  non-VCL
15      Subset sequence parameter set                                    non-VCL
16      Depth parameter set                                              non-VCL
17..18  Reserved                                                         non-VCL
19      Coded slice of an auxiliary coded picture without partitioning   non-VCL
20      Coded slice extension                                            non-VCL
21      Coded slice extension for depth view components                  non-VCL
22..23  Reserved                                                         non-VCL
24..31  Unspecified                                                      non-VCL

There are a few NALU types that are helpful to know about:

  • Sequence Parameter Set (SPS). This non-VCL NALU contains information required to configure the decoder, such as profile, level, resolution and frame rate.
  • Picture Parameter Set (PPS). Similar to the SPS, this non-VCL NALU contains information on entropy coding mode, slice groups, motion prediction and deblocking filters.
  • Instantaneous Decoder Refresh (IDR). This VCL NALU is a self-contained image slice. That is, an IDR can be decoded and displayed without referencing any other NALU save the SPS and PPS.
  • Access Unit Delimiter (AUD). An AUD is an optional NALU that can be used to delimit frames in an elementary stream. It is not required (unless otherwise stated by the container/protocol, like TS), and is often not included in order to save space, but it can be useful for finding the start of a frame without having to fully parse each NALU.

NALU Start Codes

One thing a NALU does not contain is its size. Therefore simply concatenating NALUs to create a stream will not work, because you will not know where one stops and the next begins.

The Annex B specification solves this by requiring ‘start codes’ to precede each NALU. A start code is 2 or 3 0x00 bytes followed by a 0x01 byte, e.g. 0x000001 or 0x00000001.

The 4 byte variation is useful for transmission over a serial connection, as it is trivial to byte-align the stream by looking for 31 zero bits followed by a one. If the next bit is 0 (because every NALU starts with a 0 bit), it is the start of a NALU. The 4 byte variation is usually only used for signaling random access points in the stream, such as an SPS, PPS, AUD and IDR, whereas the 3 byte variation is used everywhere else to save space.
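A minimal sketch (not from the original answer) of finding the next NALU in an Annex B stream: scan for the 3 byte pattern, which also matches the last 3 bytes of a 4 byte start code. Here buf and size describe the raw stream and are assumptions of this example:

// Return the offset of the next 0x000001 start code at or after 'offset',
// or -1 if none is found.
static long findStartCode(const uint8_t *buf, size_t size, size_t offset) {
    for (size_t i = offset; i + 3 <= size; i++) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01) {
            return (long)i;
        }
    }
    return -1;
}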

Emulation Prevention Bytes

Start codes work because the three byte sequences 0x000000, 0x000001, 0x000002 and 0x000003 are illegal within a non-RBSP NALU. So when creating a NALU, care is taken to escape these values, which could otherwise be confused with a start code. This is accomplished by inserting an ‘Emulation Prevention’ byte 0x03, so that 0x000001 becomes 0x00000301.

When decoding, it is important to look for and ignore emulation prevention bytes. Because emulation prevention bytes can occur almost anywhere within a NALU, it is often more convenient in documentation to assume they have already been removed. A representation without emulation prevention bytes is called Raw Byte Sequence Payload (RBSP).
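Here is a minimal sketch (not from the original answer) of stripping emulation prevention bytes to recover the RBSP. The caller supplies the NALU payload and an output buffer of at least the same size; both are assumptions of this example:

// Copy 'naluLength' bytes into 'rbsp', dropping every 0x03 that follows
// two zero bytes. Returns the RBSP length.
static size_t nalToRBSP(const uint8_t *nalu, size_t naluLength, uint8_t *rbsp) {
    size_t outLength = 0;
    int zeroCount = 0;
    for (size_t i = 0; i < naluLength; i++) {
        if (zeroCount == 2 && nalu[i] == 0x03) {
            zeroCount = 0;            // skip the emulation prevention byte
            continue;
        }
        zeroCount = (nalu[i] == 0x00) ? zeroCount + 1 : 0;
        rbsp[outLength++] = nalu[i];
    }
    return outLength;
}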

Example

Let's look at a complete example.

0x0000 | 00 00 00 01 67 64 00 0A AC 72 84 44 26 84 00 00
0x0010 | 03 00 04 00 00 03 00 CA 3C 48 96 11 80 00 00 00
0x0020 | 01 68 E8 43 8F 13 21 30 00 00 01 65 88 81 00 05
0x0030 | 4E 7F 87 DF 61 A5 8B 95 EE A4 E9 38 B7 6A 30 6A
0x0040 | 71 B9 55 60 0B 76 2E B5 0E E4 80 59 27 B8 67 A9
0x0050 | 63 37 5E 82 20 55 FB E4 6A E9 37 35 72 E2 22 91
0x0060 | 9E 4D FF 60 86 CE 7E 42 B7 95 CE 2A E1 26 BE 87
0x0070 | 73 84 26 BA 16 36 F4 E6 9F 17 DA D8 64 75 54 B1
0x0080 | F3 45 0C 0B 3C 74 B3 9D BC EB 53 73 87 C3 0E 62
0x0090 | 47 48 62 CA 59 EB 86 3F 3A FA 86 B5 BF A8 6D 06
0x00A0 | 16 50 82 C4 CE 62 9E 4E E6 4C C7 30 3E DE A1 0B
0x00B0 | D8 83 0B B6 B8 28 BC A9 EB 77 43 FC 7A 17 94 85
0x00C0 | 21 CA 37 6B 30 95 B5 46 77 30 60 B7 12 D6 8C C5
0x00D0 | 54 85 29 D8 69 A9 6F 12 4E 71 DF E3 E2 B1 6B 6B
0x00E0 | BF 9F FB 2E 57 30 A9 69 76 C4 46 A2 DF FA 91 D9
0x00F0 | 50 74 55 1D 49 04 5A 1C D6 86 68 7C B6 61 48 6C
0x0100 | 96 E6 12 4C 27 AD BA C7 51 99 8E D0 F0 ED 8E F6
0x0110 | 65 79 79 A6 12 A1 95 DB C8 AE E3 B6 35 E6 8D BC
0x0120 | 48 A3 7F AF 4A 28 8A 53 E2 7E 68 08 9F 67 77 98
0x0130 | 52 DB 50 84 D6 5E 25 E1 4A 99 58 34 C7 11 D6 43
0x0140 | FF C4 FD 9A 44 16 D1 B2 FB 02 DB A1 89 69 34 C2
0x0150 | 32 55 98 F9 9B B2 31 3F 49 59 0C 06 8C DB A5 B2
0x0160 | 9D 7E 12 2F D0 87 94 44 E4 0A 76 EF 99 2D 91 18
0x0170 | 39 50 3B 29 3B F5 2C 97 73 48 91 83 B0 A6 F3 4B
0x0180 | 70 2F 1C 8F 3B 78 23 C6 AA 86 46 43 1D D7 2A 23
0x0190 | 5E 2C D9 48 0A F5 F5 2C D1 FB 3F F0 4B 78 37 E9
0x01A0 | 45 DD 72 CF 80 35 C3 95 07 F3 D9 06 E5 4A 58 76
0x01B0 | 03 6C 81 20 62 45 65 44 73 BC FE C1 9F 31 E5 DB
0x01C0 | 89 5C 6B 79 D8 68 90 D7 26 A8 A1 88 86 81 DC 9A
0x01D0 | 4F 40 A5 23 C7 DE BE 6F 76 AB 79 16 51 21 67 83
0x01E0 | 2E F3 D6 27 1A 42 C2 94 D1 5D 6C DB 4A 7A E2 CB
0x01F0 | 0B B0 68 0B BE 19 59 00 50 FC C0 BD 9D F5 F5 F8
0x0200 | A8 17 19 D6 B3 E9 74 BA 50 E5 2C 45 7B F9 93 EA
0x0210 | 5A F9 A9 30 B1 6F 5B 36 24 1E 8D 55 57 F4 CC 67
0x0220 | B2 65 6A A9 36 26 D0 06 B8 E2 E3 73 8B D1 C0 1C
0x0230 | 52 15 CA B5 AC 60 3E 36 42 F1 2C BD 99 77 AB A8
0x0240 | A9 A4 8E 9C 8B 84 DE 73 F0 91 29 97 AE DB AF D6
0x0250 | F8 5E 9B 86 B3 B3 03 B3 AC 75 6F A6 11 69 2F 3D
0x0260 | 3A CE FA 53 86 60 95 6C BB C5 4E F3

This is a complete AU containing 3 NALUs. As you can see, we begin with a Start code followed by an SPS (SPS starts with 67). Within the SPS, you will see two Emulation Prevention bytes. Without these bytes the illegal sequence 0x000000 would occur at these positions. Next you will see a start code followed by a PPS (PPS starts with 68) and one final start code followed by an IDR slice. This is a complete H.264 stream. If you type these values into a hex editor and save the file with a .264 extension, you will be able to convert it to this image:

[Image: the decoded frame (the Lena test picture)]

Annex B is commonly used in live and streaming formats such as transport streams, over-the-air broadcasts, and DVDs. In these formats it is common to repeat the SPS and PPS periodically, usually preceding every IDR, thus creating a random access point for the decoder. This makes it possible to join a stream already in progress.



2. AVCC

The other common method of storing an H.264 stream is the AVCC format. In this format, each NALU is preceded by its length (in big-endian format). This method is easier to parse, but you lose the byte alignment features of Annex B. Just to complicate things, the length may be encoded using 1, 2 or 4 bytes. This value is stored in a header object, often called ‘extradata’ or ‘sequence header’. Its basic format is as follows:

bits
8         version ( always 0x01 )
8         avc profile ( sps[0][1] )
8         avc compatibility ( sps[0][2] )
8         avc level ( sps[0][3] )
6         reserved ( all bits on )
2         NALULengthSizeMinusOne
3         reserved ( all bits on )
5         number of SPS NALUs (usually 1)

repeated once per SPS:
  16        SPS size
  variable  SPS NALU data

8         number of PPS NALUs (usually 1)

repeated once per PPS:
  16        PPS size
  variable  PPS NALU data

Using the same example above, the AVCC extradata will look like this:

0x0000 | 01 64 00 0A FF E1 00 19 67 64 00 0A AC 72 84 44
0x0010 | 26 84 00 00 03 00 04 00 00 03 00 CA 3C 48 96 11
0x0020 | 80 01 00 07 68 E8 43 8F 13 21 30

You will notice that the SPS and PPS are now stored out of band, that is, separate from the elementary stream data. Storage and transmission of this data is the job of the file container and is beyond the scope of this document. Notice that even though we are not using start codes, emulation prevention bytes are still inserted.

Additionally, there is a new variable called NALULengthSizeMinusOne. This confusingly named variable tells us how many bytes to use to store the length of each NALU (the number of bytes is the value plus one, hence the name). So, if NALULengthSizeMinusOne is set to 0, then each NALU is preceded by a single byte indicating its length. Using a single byte to store the size, the maximum size of a NALU is 255 bytes. That is obviously pretty small; way too small for an entire key frame. Using 2 bytes gives us 64k per NALU. It would work in our example, but is still a pretty low limit. 3 bytes would be perfect, but for some reason is not universally supported. Therefore, 4 bytes is by far the most common, and it is what we used here:

0x0000 | 00 00 02 41 65 88 81 00 05 4E 7F 87 DF 61 A5 8B
0x0010 | 95 EE A4 E9 38 B7 6A 30 6A 71 B9 55 60 0B 76 2E
0x0020 | B5 0E E4 80 59 27 B8 67 A9 63 37 5E 82 20 55 FB
0x0030 | E4 6A E9 37 35 72 E2 22 91 9E 4D FF 60 86 CE 7E
0x0040 | 42 B7 95 CE 2A E1 26 BE 87 73 84 26 BA 16 36 F4
0x0050 | E6 9F 17 DA D8 64 75 54 B1 F3 45 0C 0B 3C 74 B3
0x0060 | 9D BC EB 53 73 87 C3 0E 62 47 48 62 CA 59 EB 86
0x0070 | 3F 3A FA 86 B5 BF A8 6D 06 16 50 82 C4 CE 62 9E
0x0080 | 4E E6 4C C7 30 3E DE A1 0B D8 83 0B B6 B8 28 BC
0x0090 | A9 EB 77 43 FC 7A 17 94 85 21 CA 37 6B 30 95 B5
0x00A0 | 46 77 30 60 B7 12 D6 8C C5 54 85 29 D8 69 A9 6F
0x00B0 | 12 4E 71 DF E3 E2 B1 6B 6B BF 9F FB 2E 57 30 A9
0x00C0 | 69 76 C4 46 A2 DF FA 91 D9 50 74 55 1D 49 04 5A
0x00D0 | 1C D6 86 68 7C B6 61 48 6C 96 E6 12 4C 27 AD BA
0x00E0 | C7 51 99 8E D0 F0 ED 8E F6 65 79 79 A6 12 A1 95
0x00F0 | DB C8 AE E3 B6 35 E6 8D BC 48 A3 7F AF 4A 28 8A
0x0100 | 53 E2 7E 68 08 9F 67 77 98 52 DB 50 84 D6 5E 25
0x0110 | E1 4A 99 58 34 C7 11 D6 43 FF C4 FD 9A 44 16 D1
0x0120 | B2 FB 02 DB A1 89 69 34 C2 32 55 98 F9 9B B2 31
0x0130 | 3F 49 59 0C 06 8C DB A5 B2 9D 7E 12 2F D0 87 94
0x0140 | 44 E4 0A 76 EF 99 2D 91 18 39 50 3B 29 3B F5 2C
0x0150 | 97 73 48 91 83 B0 A6 F3 4B 70 2F 1C 8F 3B 78 23
0x0160 | C6 AA 86 46 43 1D D7 2A 23 5E 2C D9 48 0A F5 F5
0x0170 | 2C D1 FB 3F F0 4B 78 37 E9 45 DD 72 CF 80 35 C3
0x0180 | 95 07 F3 D9 06 E5 4A 58 76 03 6C 81 20 62 45 65
0x0190 | 44 73 BC FE C1 9F 31 E5 DB 89 5C 6B 79 D8 68 90
0x01A0 | D7 26 A8 A1 88 86 81 DC 9A 4F 40 A5 23 C7 DE BE
0x01B0 | 6F 76 AB 79 16 51 21 67 83 2E F3 D6 27 1A 42 C2
0x01C0 | 94 D1 5D 6C DB 4A 7A E2 CB 0B B0 68 0B BE 19 59
0x01D0 | 00 50 FC C0 BD 9D F5 F5 F8 A8 17 19 D6 B3 E9 74
0x01E0 | BA 50 E5 2C 45 7B F9 93 EA 5A F9 A9 30 B1 6F 5B
0x01F0 | 36 24 1E 8D 55 57 F4 CC 67 B2 65 6A A9 36 26 D0
0x0200 | 06 B8 E2 E3 73 8B D1 C0 1C 52 15 CA B5 AC 60 3E
0x0210 | 36 42 F1 2C BD 99 77 AB A8 A9 A4 8E 9C 8B 84 DE
0x0220 | 73 F0 91 29 97 AE DB AF D6 F8 5E 9B 86 B3 B3 03
0x0230 | B3 AC 75 6F A6 11 69 2F 3D 3A CE FA 53 86 60 95
0x0240 | 6C BB C5 4E F3

An advantage to this format is the ability to configure the decoder at the start and jump into the middle of a stream. This is a common use case where the media is available on a random access medium such as a hard drive, and is therefore used in common container formats such as MP4 and MKV.
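To make the framing concrete, here is a small sketch (not from the original answer) that walks the length-prefixed NALUs of an AVCC sample, given the length field size derived from NALULengthSizeMinusOne. The buf, size and lengthSize parameters are assumptions of this example:

// Walk AVCC-framed NALUs. 'lengthSize' is NALULengthSizeMinusOne + 1
// (1, 2 or 4 bytes).
static void walkAVCC(const uint8_t *buf, size_t size, int lengthSize) {
    size_t offset = 0;
    while (offset + lengthSize <= size) {
        uint32_t naluLength = 0;
        for (int i = 0; i < lengthSize; i++) {        // big-endian length field
            naluLength = (naluLength << 8) | buf[offset + i];
        }
        if (offset + lengthSize + naluLength > size) {
            break;                                    // malformed / truncated data
        }
        const uint8_t *nalu = buf + offset + lengthSize;
        // ... 'nalu' points at 'naluLength' bytes of NALU data ...
        offset += lengthSize + naluLength;
    }
}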


