Why Is Audio Coming Up Garbled When Using AVAssetReader with Audio Queue


So here's what I think is happening and also how I think you can fix it.

You're pulling a predefined item out of the iPod (music) library on an iOS device. You are then using an asset reader to collect its buffers and queue those buffers, where possible, in an AudioQueue.

The problem you are having, I think, is that you are setting the audio queue buffer's input format to Linear Pulse Code Modulation (LPCM), while the output settings you are passing to the asset reader output are nil. That means you'll get output that is most likely NOT LPCM, but is instead AIFF, AAC, MP3, or whatever format the song has as it exists in iOS's media library. You can, however, remedy this situation by passing in different output settings.

Try changing

readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];

to:

AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    nil];

readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:outputSettings];

It's my understanding (per Apple's documentation) that passing nil as the output settings parameter gives you samples in the same file format as the original audio track. Even if you have a file that is already LPCM, some other settings might be off, which might cause your problems. At the very least, this will normalize all the reader output, which should make things a bit easier to troubleshoot.
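
On the AudioQueue side, the queue then needs to be created with a format that matches those LPCM settings. As a rough sketch (my addition, not part of the original answer; MyAQOutputCallback is a placeholder for your buffer-refill callback):

// 44.1 kHz, 16-bit signed integer, stereo, interleaved -- matching the reader output settings above
AudioStreamBasicDescription queueFormat = {0};
queueFormat.mFormatID         = kAudioFormatLinearPCM;
queueFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
queueFormat.mSampleRate       = 44100.0;
queueFormat.mChannelsPerFrame = 2;
queueFormat.mBitsPerChannel   = 16;
queueFormat.mBytesPerFrame    = queueFormat.mChannelsPerFrame * (queueFormat.mBitsPerChannel / 8);
queueFormat.mFramesPerPacket  = 1;
queueFormat.mBytesPerPacket   = queueFormat.mBytesPerFrame * queueFormat.mFramesPerPacket;

AudioQueueRef queue;
// MyAQOutputCallback is a hypothetical AudioQueueOutputCallback you supply; check the returned OSStatus in real code
AudioQueueNewOutput(&queueFormat, MyAQOutputCallback, NULL, NULL, NULL, 0, &queue);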

Hope that helps!

Edit:

the reason why I provided nil as a parameter for

AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings];

was because according to the documentation and trial and error, I...

AVAssetReaders do two things: read back an audio file as it exists on disk (e.g. MP3, AAC, AIFF), or convert the audio into LPCM.

If you pass nil as the output settings, it will read the file back as it exists, and in this you are correct. I apologize for not mentioning that an asset reader will only allow nil or LPCM as output settings. I actually ran into that problem myself (it's in the docs somewhere, but requires a bit of digging), but didn't think to mention it here as it wasn't on my mind at the time. So... sorry about that.

If you want to know the AudioStreamBasicDescription (ASBD) of the track you are reading before you read it, you can get it by doing this:

AVURLAsset *uasset = [[AVURLAsset URLAssetWithURL:<#assetURL#> options:nil] retain];
AVAssetTrack *track = [uasset.tracks objectAtIndex:0];
CMFormatDescriptionRef formDesc = (CMFormatDescriptionRef)[[track formatDescriptions] objectAtIndex:0];
const AudioStreamBasicDescription *asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(formDesc);
//because this is a pointer and not a struct we need to move the data into a struct so we can use it
AudioStreamBasicDescription asbd = {0};
memcpy(&asbd, asbdPointer, sizeof(asbd));
//asbd now contains a basic description for the track

You can then convert asbd to binary data in whatever format you see fit and transfer it over the network. You should then be able to start sending audio buffer data over the network and successfully play it back with your AudioQueue.
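
Since AudioStreamBasicDescription is a plain C struct, one simple way to do that (a sketch of mine, not from the original answer, assuming both ends are iOS devices with the same endianness and struct layout) is to wrap its raw bytes in an NSData:

// sender side: wrap the struct's bytes
NSData *asbdData = [NSData dataWithBytes:&asbd length:sizeof(AudioStreamBasicDescription)];
// ...transmit asbdData over your connection...

// receiver side: copy the bytes back into a struct and use it to set up the AudioQueue
AudioStreamBasicDescription receivedASBD = {0};
[asbdData getBytes:&receivedASBD length:sizeof(AudioStreamBasicDescription)];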

I actually had a system like this working not that long ago, but since I couldn't keep the connection alive when the iOS client device went to the background, I wasn't able to use it for my purpose. Still, if all that work lets me help someone else who can actually use the info, it seems like a win to me.

AVAssetReader to AudioQueueBuffer

For some reason, even though every example I've seen of an audio queue using LPCM had

ASBD.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);

for me it turned out that I needed

ASBD.mBitsPerChannel = 2 * bytesPerSample;

for a description of:

ASBD.mFormatID          = kAudioFormatLinearPCM;
ASBD.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
ASBD.mBytesPerPacket    = bytesPerSample;
ASBD.mBytesPerFrame     = bytesPerSample;
ASBD.mFramesPerPacket   = 1;
ASBD.mBitsPerChannel    = 2 * bytesPerSample;
ASBD.mChannelsPerFrame  = 2;
ASBD.mSampleRate        = 48000;

I have no idea why this works, which bothers me a great deal... but hopefully I can figure it all out eventually.

If anyone can explain to me why this works, I'd be very thankful.

How to correctly read decoded PCM samples on iOS using AVAssetReader -- currently incorrect decoding

Currently, I am also working on a project which involves extracting audio samples from the iTunes library into an AudioUnit.

The audio unit render callback is included below for your reference. The input format is set as SInt16StereoStreamFormat.

I have made use of Michael Tyson's circular buffer implementation, TPCircularBuffer, as the buffer storage. Very easy to use and understand. Thanks, Michael!

- (void)loadBuffer:(NSURL *)assetURL_
{
    if (nil != self.iPodAssetReader) {
        [iTunesOperationQueue cancelAllOperations];

        [self cleanUpBuffer];
    }

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                    nil];

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL_ options:nil];
    if (asset == nil) {
        NSLog(@"asset is not defined!");
        return;
    }

    NSLog(@"Total Asset Duration: %f", CMTimeGetSeconds(asset.duration));

    NSError *assetError = nil;
    self.iPodAssetReader = [AVAssetReader assetReaderWithAsset:asset error:&assetError];
    if (assetError) {
        NSLog(@"error: %@", assetError);
        return;
    }

    AVAssetReaderOutput *readerOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset.tracks audioSettings:outputSettings];

    if (![iPodAssetReader canAddOutput:readerOutput]) {
        NSLog(@"can't add reader output... die!");
        return;
    }

    // add output reader to reader
    [iPodAssetReader addOutput:readerOutput];

    if (![iPodAssetReader startReading]) {
        NSLog(@"Unable to start reading!");
        return;
    }

    // Init circular buffer
    TPCircularBufferInit(&playbackState.circularBuffer, kTotalBufferSize);

    __block NSBlockOperation *feediPodBufferOperation = [NSBlockOperation blockOperationWithBlock:^{
        while (![feediPodBufferOperation isCancelled] && iPodAssetReader.status != AVAssetReaderStatusCompleted) {
            if (iPodAssetReader.status == AVAssetReaderStatusReading) {
                // Check if the available buffer space is enough to hold at least one cycle of the sample data
                if (kTotalBufferSize - playbackState.circularBuffer.fillCount >= 32768) {
                    CMSampleBufferRef nextBuffer = [readerOutput copyNextSampleBuffer];

                    if (nextBuffer) {
                        AudioBufferList abl;
                        CMBlockBufferRef blockBuffer;
                        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
                        UInt64 size = CMSampleBufferGetTotalSampleSize(nextBuffer);

                        int bytesCopied = TPCircularBufferProduceBytes(&playbackState.circularBuffer, abl.mBuffers[0].mData, size);

                        if (!playbackState.bufferIsReady && bytesCopied > 0) {
                            playbackState.bufferIsReady = YES;
                        }

                        CFRelease(nextBuffer);
                        CFRelease(blockBuffer);
                    }
                    else {
                        break;
                    }
                }
            }
        }
        NSLog(@"iPod Buffer Reading Finished");
    }];

    [iTunesOperationQueue addOperation:feediPodBufferOperation];
}

static OSStatus ipodRenderCallback (
    void                        *inRefCon,       // A pointer to a struct containing the complete audio data
                                                 // to play, as well as state information such as the
                                                 // first sample to play on this invocation of the callback.
    AudioUnitRenderActionFlags  *ioActionFlags,  // Unused here. When generating audio, use ioActionFlags to indicate silence
                                                 // between sounds; for silence, also memset the ioData buffers to 0.
    const AudioTimeStamp        *inTimeStamp,    // Unused here.
    UInt32                      inBusNumber,     // The mixer unit input bus that is requesting some new
                                                 // frames of audio data to play.
    UInt32                      inNumberFrames,  // The number of frames of audio to provide to the buffer(s)
                                                 // pointed to by the ioData parameter.
    AudioBufferList             *ioData          // On output, the audio data to play. The callback's primary
                                                 // responsibility is to fill the buffer(s) in the
                                                 // AudioBufferList.
)
{
    Audio *audioObject = (Audio *)inRefCon;

    AudioSampleType *outSample = (AudioSampleType *)ioData->mBuffers[0].mData;

    // Zero-out all the output samples first
    memset(outSample, 0, inNumberFrames * kUnitSize * 2);

    if (audioObject.playingiPod && audioObject.bufferIsReady) {
        // Pull audio from circular buffer
        int32_t availableBytes;

        AudioSampleType *bufferTail = TPCircularBufferTail(&audioObject.circularBuffer, &availableBytes);

        memcpy(outSample, bufferTail, MIN(availableBytes, inNumberFrames * kUnitSize * 2));
        TPCircularBufferConsume(&audioObject.circularBuffer, MIN(availableBytes, inNumberFrames * kUnitSize * 2));
        audioObject.currentSampleNum += MIN(availableBytes / (kUnitSize * 2), inNumberFrames);

        if (availableBytes <= inNumberFrames * kUnitSize * 2) {
            // Buffer is running out or playback is finished
            audioObject.bufferIsReady = NO;
            audioObject.playingiPod = NO;
            audioObject.currentSampleNum = 0;

            if ([[audioObject delegate] respondsToSelector:@selector(playbackDidFinish)]) {
                [[audioObject delegate] performSelector:@selector(playbackDidFinish)];
            }
        }
    }

    return noErr;
}

- (void)setupSInt16StereoStreamFormat {

    // The AudioSampleType data type (SInt16 on iOS) is used for the sample data here.
    // This obtains the byte size of the type for use in filling in the ASBD.
    size_t bytesPerSample = sizeof(AudioSampleType);

    // Fill the application audio format struct's fields to define a linear PCM,
    // stereo, interleaved stream at the hardware sample rate.
    SInt16StereoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
    SInt16StereoStreamFormat.mFormatFlags       = kAudioFormatFlagsCanonical;
    SInt16StereoStreamFormat.mBytesPerPacket    = 2 * bytesPerSample; // *** kAudioFormatFlagsCanonical <- implicit interleaved data => (left sample + right sample) per packet
    SInt16StereoStreamFormat.mFramesPerPacket   = 1;
    SInt16StereoStreamFormat.mBytesPerFrame     = SInt16StereoStreamFormat.mBytesPerPacket * SInt16StereoStreamFormat.mFramesPerPacket;
    SInt16StereoStreamFormat.mChannelsPerFrame  = 2; // 2 indicates stereo
    SInt16StereoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
    SInt16StereoStreamFormat.mSampleRate        = graphSampleRate;

    NSLog(@"The stereo stream format for the \"iPod\" mixer input bus:");
    [self printASBD:SInt16StereoStreamFormat];
}

Difference between how AVAssetReader and AudioFileReadPackets read audio

The short answer is that it simply doesn't make sense to have packets of audio data parsed by AudioFileStreamParseBytes. In the docs, AudioFileStreamParseBytes is a function dependent on the existence of an audio file (hence the inAudioFileStream parameter, which is defined as the ID of the parser to which you wish to pass data; the parser ID is returned by the AudioFileStreamOpen function).

So, lesson learned: don't try to pigeonhole iOS functions to fit your situation; it should be the other way around.

What I ended up doing was feeding the data directly to an Audio Queue, without going through all those unnecessary intermediary functions. A more in-depth way would be feeding the data to audio units, but my application didn't need that level of control.
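
To make the "feed it directly to an Audio Queue" idea a bit more concrete, here is a minimal sketch (mine, not from the original answer): queue is assumed to be an AudioQueueRef already created with the track's ASBD, and sampleBuffer a CMSampleBufferRef obtained from copyNextSampleBuffer.

// copy the sample buffer's bytes into an AudioQueue buffer and enqueue it
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t dataLength = CMBlockBufferGetDataLength(blockBuffer);

AudioQueueBufferRef aqBuffer;
AudioQueueAllocateBuffer(queue, (UInt32)dataLength, &aqBuffer);
CMBlockBufferCopyDataBytes(blockBuffer, 0, dataLength, aqBuffer->mAudioData);
aqBuffer->mAudioDataByteSize = (UInt32)dataLength;

// packet descriptions are required for compressed formats (AAC, MP3); pass NULL/0 for LPCM
const AudioStreamPacketDescription *packetDescs = NULL;
size_t packetDescSize = 0;
CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sampleBuffer, &packetDescs, &packetDescSize);
UInt32 numPacketDescs = (UInt32)(packetDescSize / sizeof(AudioStreamPacketDescription));

AudioQueueEnqueueBuffer(queue, aqBuffer, numPacketDescs, packetDescs);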

How do I stream AVAsset audio wirelessly from one iOS device to another?

I will answer your second question first: don't wait for the app to crash. You can stop pulling audio from the track by checking the number of samples that are available in the CMSampleBufferRef you are reading; for example (this code will also be included in the second half of my answer):

CMSampleBufferRef sample;
sample = [readerOutput copyNextSampleBuffer];

// guard against a NULL sample before asking for its sample count
CMItemCount numSamples = sample ? CMSampleBufferGetNumSamples(sample) : 0;

if (!sample || (numSamples == 0)) {
    // handle end of audio track here
    return;
}

Regarding your first question, it depends on the type of audio you are grabbing: it could be either PCM (non-compressed) or VBR (compressed) format. I'm not even going to bother addressing the PCM part, because it's simply not smart to send uncompressed audio data from one phone to another over the network; it's unnecessarily expensive and will clog your networking bandwidth. So we're left with VBR data. For that, you've got to send the contents of the AudioBuffer and the AudioStreamPacketDescription you pulled from the sample. But then again, it's probably best to explain what I'm saying with code:

- (void)broadcastSample
{
    [broadcastLock lock];

    CMSampleBufferRef sample;
    sample = [readerOutput copyNextSampleBuffer];

    // guard against a NULL sample before asking for its sample count
    CMItemCount numSamples = sample ? CMSampleBufferGetNumSamples(sample) : 0;

    if (!sample || (numSamples == 0)) {
        Packet *packet = [Packet packetWithType:PacketTypeEndOfSong];
        packet.sendReliably = NO;
        [self sendPacketToAllClients:packet];
        [sampleBroadcastTimer invalidate];
        [broadcastLock unlock]; // don't leave the lock held on the early-return path
        return;
    }

    NSLog(@"SERVER: going through sample loop");
    Boolean isBufferDataReady = CMSampleBufferDataIsReady(sample);

    CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer(sample);
    AudioBufferList audioBufferList;

    CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                   sample,
                   NULL,
                   &audioBufferList,
                   sizeof(audioBufferList),
                   NULL,
                   NULL,
                   kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                   &CMBuffer
               ),
               "could not read sample data");

    const AudioStreamPacketDescription *inPacketDescriptions;

    size_t packetDescriptionsSizeOut;
    size_t inNumberPackets;

    CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample,
                                                                 &inPacketDescriptions,
                                                                 &packetDescriptionsSizeOut),
               "could not read sample packet descriptions");

    inNumberPackets = packetDescriptionsSizeOut / sizeof(AudioStreamPacketDescription);

    AudioBuffer audioBuffer = audioBufferList.mBuffers[0];

    for (int i = 0; i < inNumberPackets; ++i)
    {
        NSLog(@"going through packets loop");
        SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
        UInt32 dataSize   = inPacketDescriptions[i].mDataByteSize;

        size_t packetSpaceRemaining      = MAX_PACKET_SIZE - packetBytesFilled - packetDescriptionsBytesFilled;
        size_t packetDescrSpaceRemaining = MAX_PACKET_DESCRIPTIONS_SIZE - packetDescriptionsBytesFilled;

        if ((packetSpaceRemaining < (dataSize + AUDIO_STREAM_PACK_DESC_SIZE)) ||
            (packetDescrSpaceRemaining < AUDIO_STREAM_PACK_DESC_SIZE))
        {
            if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                break;
        }

        memcpy((char *)packet + packetBytesFilled,
               (const char *)(audioBuffer.mData + dataOffset), dataSize);

        // encapsulatePacketDescription: returns a malloc'd buffer, so keep the pointer and free it after copying
        char *encapsulatedDescription = [self encapsulatePacketDescription:inPacketDescriptions[i]
                                                               mStartOffset:packetBytesFilled];
        memcpy((char *)packetDescriptions + packetDescriptionsBytesFilled,
               encapsulatedDescription,
               AUDIO_STREAM_PACK_DESC_SIZE);
        free(encapsulatedDescription);

        packetBytesFilled             += dataSize;
        packetDescriptionsBytesFilled += AUDIO_STREAM_PACK_DESC_SIZE;

        // if this is the last packet, then ship it
        if (i == (inNumberPackets - 1)) {
            NSLog(@"woooah! this is the last packet (%d).. so we will ship it!", i);
            if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                break;
        }
    }

    // release what we own: the block buffer retained by the ...WithRetainedBlockBuffer call and the copied sample
    CFRelease(CMBuffer);
    CFRelease(sample);

    [broadcastLock unlock];
}

Some methods I've used in the above code are ones you don't need to worry about, such as adding headers to each packet (I was creating my own protocol; you can create your own). For more info, see this tutorial.

- (BOOL)encapsulateAndShipPacket:(void *)source
              packetDescriptions:(void *)packetDescriptions
                        packetID:(NSString *)packetID
{
    // package Packet
    char *headerPacket = (char *)malloc(MAX_PACKET_SIZE + AUDIO_BUFFER_PACKET_HEADER_SIZE + packetDescriptionsBytesFilled);

    appendInt32(headerPacket, 'SNAP', 0);
    appendInt32(headerPacket, packetNumber, 4);
    appendInt16(headerPacket, PacketTypeAudioBuffer, 8);
    // we use this so that we can add int32s later
    UInt16 filler = 0x00;
    appendInt16(headerPacket, filler, 10);
    appendInt32(headerPacket, packetBytesFilled, 12);
    appendInt32(headerPacket, packetDescriptionsBytesFilled, 16);
    appendUTF8String(headerPacket, [packetID UTF8String], 20);

    int offset = AUDIO_BUFFER_PACKET_HEADER_SIZE;
    memcpy((char *)(headerPacket + offset), (char *)source, packetBytesFilled);

    offset += packetBytesFilled;

    memcpy((char *)(headerPacket + offset), (char *)packetDescriptions, packetDescriptionsBytesFilled);

    NSData *completePacket = [NSData dataWithBytes:headerPacket length:AUDIO_BUFFER_PACKET_HEADER_SIZE + packetBytesFilled + packetDescriptionsBytesFilled];

    NSLog(@"sending packet number %lu to all peers", packetNumber);
    NSError *error;
    if (![_session sendDataToAllPeers:completePacket withDataMode:GKSendDataReliable error:&error]) {
        NSLog(@"Error sending data to clients: %@", error);
    }

    Packet *packet = [Packet packetWithData:completePacket];

    // reset packet
    packetBytesFilled = 0;
    packetDescriptionsBytesFilled = 0;

    packetNumber++;
    free(headerPacket);
    // free(packet); free(packetDescriptions);
    return YES;
}

- (char *)encapsulatePacketDescription:(AudioStreamPacketDescription)inPacketDescription
                          mStartOffset:(SInt64)mStartOffset
{
    // mStartOffset is sent as a 32-bit integer here rather than the original 64-bit value
    char *packetDescription = (char *)malloc(AUDIO_STREAM_PACK_DESC_SIZE);

    appendInt32(packetDescription, (UInt32)mStartOffset, 0);
    appendInt32(packetDescription, inPacketDescription.mVariableFramesInPacket, 4);
    appendInt32(packetDescription, inPacketDescription.mDataByteSize, 8);

    return packetDescription;
}

receiving data:

- (void)receiveData:(NSData *)data fromPeer:(NSString *)peerID inSession:(GKSession *)session context:(void *)context
{
    Packet *packet = [Packet packetWithData:data];
    if (packet == nil)
    {
        NSLog(@"Invalid packet: %@", data);
        return;
    }

    Player *player = [self playerWithPeerID:peerID];

    if (player != nil)
    {
        player.receivedResponse = YES; // this is the new bit
    } else {
        player = [[Player alloc] init]; // assign to the existing variable instead of shadowing it with a new local
        player.peerID = peerID;
        [_players setObject:player forKey:player.peerID];
    }

    if (self.isServer)
    {
        [Logger Log:@"SERVER: we just received packet"];
        [self serverReceivedPacket:packet fromPlayer:player];
    }
    else
        [self clientReceivedPacket:packet];
}

notes:

  1. There are a lot of networking details that I didn't cover here (i.e., in the receiving-data part I used a lot of custom-made objects without expanding on their definitions). I didn't because explaining all of that is beyond the scope of one answer on SO. However, you can follow the excellent tutorial by Ray Wenderlich. He takes his time explaining networking principles, and the architecture I use above is taken almost verbatim from him. HOWEVER THERE IS A CATCH (see the next point).

  2. Depending on your project, GKSession may not be suitable (especially if your project is realtime, or if you need more than 2-3 devices to connect simultaneously); it has a lot of limitations. You will have to dig deeper and use Bonjour directly instead. iPhone Cool Projects has a nice quick chapter with a good example of using Bonjour services. It's not as scary as it sounds (and the Apple documentation is kind of overbearing on that subject); there's a short Bonjour sketch after this list.

  3. I noticed you use GCD for your multithreading. Again, if you are dealing with real time then you don't want to use advanced frameworks that do the heavy lifting for you (GCD is one of them). For more on this subject read this excellent article. Also read the prolonged discussion between me and justin in the comments of this answer.

  4. You may want to check out MTAudioProcessingTap introduced in iOS 6. It can potentially save you some hassle while dealing with AVAssets. I didn't test this stuff though. It came out after I did all my work.

  5. Last but not least, you may want to check out the Learning Core Audio book. It's a widely acknowledged reference on this subject. I remember being as stuck as you were at the point you asked the question. Core Audio is heavy duty, and it takes time to sink in. SO will only give you pointers; you will have to take your time to absorb the material yourself, and then you will figure out how things work. Good luck!
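
As mentioned in note 2, publishing and finding peers over Bonjour directly is less work than it sounds. A minimal sketch (mine, not from the original answer; the "_myaudiostream._tcp." service type and the port are made-up examples):

// server side: advertise a service on the local network
NSNetService *service = [[NSNetService alloc] initWithDomain:@"local."
                                                        type:@"_myaudiostream._tcp."
                                                        name:@""        // empty string = use the device name
                                                        port:52100];    // the port your listening socket is bound to
[service publish];

// client side: browse for it; the delegate gets netServiceBrowser:didFindService:moreComing:
NSNetServiceBrowser *browser = [[NSNetServiceBrowser alloc] init];
browser.delegate = self; // must conform to NSNetServiceBrowserDelegate
[browser searchForServicesOfType:@"_myaudiostream._tcp." inDomain:@"local."];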

Using AudioUnits to play and EQ songs from the music library

I think you would need to use an output-only audio unit with a render callback function. The callback should be responsible for reading/decoding the audio data and applying the EQ effect.

By the way, I don't know if this might be useful in any way, but here it says that there's an already existing EQ audio unit that you could use.
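
To make the output-unit-plus-callback idea concrete, here is a minimal sketch (mine, not from the original answer) of creating a RemoteIO output unit and attaching a render callback to it; MyRenderCallback is a placeholder for the function where you would read/decode samples and apply the EQ:

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
desc.componentSubType      = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit outputUnit;
AudioComponentInstanceNew(comp, &outputUnit);

// attach the render callback to the output unit's input scope, bus 0
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = MyRenderCallback; // placeholder: your AURenderCallback
callbackStruct.inputProcRefCon = NULL;             // pass your own state object here

AudioUnitSetProperty(outputUnit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0,
                     &callbackStruct,
                     sizeof(callbackStruct));

AudioUnitInitialize(outputUnit);
AudioOutputUnitStart(outputUnit);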

Corrupt video capturing audio and video using AVAssetWriter

I figured it out. I was setting the assetWriter.startSession source time to 0, and then subtracting that start time from CACurrentMediaTime() when writing the pixel data.

I changed the assetWriter.startSession source time to CACurrentMediaTime(), and I don't subtract the current time when writing the video frame.

Old start session code:

assetWriter.startWriting()
assetWriter.startSession(atSourceTime: kCMTimeZero)

New code that works:

let presentationStartTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 240)

assetWriter.startWriting()
assetWriter.startSession(atSourceTime: presentationStartTime)

iOS UI is causing a glitch in my audio stream

OK, I wrote a different style of circular buffer that seems to have done the trick: very similar latency and no glitching. I still don't fully understand why this is better, so anyone with experience in this, please share.

Due to very little of this stuff being posted on by Apple, below is my circular buffer implementation, which works well with my VoIP setup. Feel free to use it; any suggestions are welcome, just don't come after me if it doesn't work for you. This time it's an Objective-C class.

Please note that this was designed to be used with the A-law (ALAW) format, not linear PCM. 0xD5 is a byte of silence in A-law; I'm unsure what the equivalent is in PCM, but I'd expect it to be noise.

CircularBuffer.h:

//
// CircularBuffer.h
// clevercall
//
// Created by Simon Mcloughlin on 10/1/2013.
//
//

#import <Foundation/Foundation.h>

@interface CircularBuffer : NSObject

-(int) availableBytes;
-(id) initWithLength:(int)length;
-(void) produceToBuffer:(const void*)data ofLength:(int)length;
-(void) consumeBytesTo:(void *)buf OfLength:(int)length;

@end

CircularBuffer.m:

//
// CircularBuffer.m
// clevercall
//
// Created by Simon Mcloughlin on 10/1/2013.
//
//

#import "CircularBuffer.h"

@implementation CircularBuffer
{
    unsigned int gBufferLength;
    unsigned int gAvailableBytes;
    unsigned int gHead;
    unsigned int gTail;
    void *gBuffer;
}

// Init instance with a certain length and alloc the space
- (id)initWithLength:(int)length
{
    self = [super init];

    if (self != nil)
    {
        gBufferLength = length;
        gBuffer = malloc(length);
        memset(gBuffer, 0xd5, length);

        gAvailableBytes = 0;
        gHead = 0;
        gTail = 0;
    }

    return self;
}

// return the number of bytes stored in the buffer
- (int)availableBytes
{
    return gAvailableBytes;
}

- (void)produceToBuffer:(const void *)data ofLength:(int)length
{
    // if the number of bytes to add to the buffer will go past the end.
    // copy enough to fill to the end
    // go back to the start
    // fill the remaining
    if ((gHead + length) > gBufferLength - 1)
    {
        int remainder = ((gBufferLength - 1) - gHead);
        memcpy(gBuffer + gHead, data, remainder);
        gHead = 0;
        memcpy(gBuffer + gHead, data + remainder, (length - remainder));
        gHead += (length - remainder);
        gAvailableBytes += length;
    }
    // if there is room in the buffer for these bytes add them
    else if ((gAvailableBytes + length) <= gBufferLength - 1)
    {
        memcpy(gBuffer + gHead, data, length);
        gAvailableBytes += length;
        gHead += length;
    }
    else
    {
        //NSLog(@"--- Discarded ---");
    }
}

- (void)consumeBytesTo:(void *)buf OfLength:(int)length
{
    // if the tail is at a point where there is not enough between it and the end to fill the buffer.
    // copy out whats left
    // move back to the start
    // copy out the rest
    if ((gTail + length) > gBufferLength - 1 && length <= gAvailableBytes)
    {
        int remainder = ((gBufferLength - 1) - gTail);
        memcpy(buf, gBuffer + gTail, remainder);
        gTail = 0;
        memcpy(buf + remainder, gBuffer, (length - remainder));
        gAvailableBytes -= length;
        gTail += (length - remainder);
    }
    // if there is enough bytes in the buffer
    else if (length <= gAvailableBytes)
    {
        memcpy(buf, gBuffer + gTail, length);
        gAvailableBytes -= length;
        gTail += length;
    }
    // else play silence
    else
    {
        memset(buf, 0xd5, length);
    }
}

@end

Undefined symbols for _CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer

The linker failing to find a symbol indicates that the library/framework containing that symbol is not listed as a dependency of your build target. In Xcode, select your target, go to 'Build Phases', open 'Link Binary with Libraries' and add CoreMedia.
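
Once the framework is linked, the source file just needs the usual import; with modules enabled (the default in recent Xcode project templates), the @import form will also autolink the framework for you:

#import <CoreMedia/CoreMedia.h>   // classic import; requires CoreMedia under "Link Binary with Libraries"
// or, with modules enabled:
@import CoreMedia;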


