Popping Noise Between AudioQueueBuffers

Why do I get popping noises from my Core Audio program?

I believe you were already zeroing in on, or at least suspected, the cause of the popping you are hearing: discontinuities in your waveform.

My initial hunch was that you were generating the buffers independently (i.e. assuming each buffer starts at time = 0), but I checked your code and it wasn't that. Instead, I suspected some of the calculations in makeWave were at fault. To test this theory, I replaced your makeWave with the following:

func makeWave(offset: Double, numSamples: Int, sampleRate: Float64, frequency: Float64, numChannels: Int) -> [Int16] {
    var data = [Int16]()
    for sample in 0..<numSamples / numChannels {
        // absolute time of this frame in seconds, continuing from `offset`
        let t = offset + Double(sample) / sampleRate
        let value = Double(Int16.max) * sin(2 * Double.pi * frequency * t)
        // write the same sample to every channel of the frame
        for _ in 0..<numChannels {
            data.append(Int16(value))
        }
    }
    return data
}

This function removes the double loop in the original, accepts an offset so it knows which part of the wave it is generating, and changes how the sine wave is sampled.
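To see why the offset matters, here is a minimal driver sketch (the buffer size, frequency, and channel count here are my own illustrative assumptions, not from the original code): each buffer picks up exactly where the previous one left off, so the sine wave has no discontinuity at the buffer boundary.

let sampleRate: Float64 = 44_100
let numChannels = 2
let samplesPerBuffer = 4_096           // total Int16 samples per buffer, all channels
var offset = 0.0                       // running time in seconds

for _ in 0..<3 {
    let buffer = makeWave(offset: offset,
                          numSamples: samplesPerBuffer,
                          sampleRate: sampleRate,
                          frequency: 440,
                          numChannels: numChannels)
    // ... hand `buffer` to the audio queue here ...

    // advance by the number of frames just generated
    offset += Double(samplesPerBuffer / numChannels) / sampleRate
}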

When Player is modified to use this function, you get a lovely steady tone. I'll post the changes to Player soon; I can't in good conscience show the public the quick-and-dirty mess it is now.


Based on your comments below, I refocused on your player. The issue is that the audio buffers expect byte counts, but the slice bounds and some other calculations were based on Int16 element counts. The following version of outputCallback fixes it. Concentrate on the use of the new variable bytesPerChannel.

func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
        print("missing user data in output callback")
        return
    }

    let bytesPerChannel = MemoryLayout<Int16>.size
    let sliceStart = lastIndexRead
    let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize/bytesPerChannel)

    if sliceEnd >= audioData.count {
        player.pointee.running = false
        print("found end of audio data")
        return
    }

    let slice = Array(audioData[sliceStart ..< sliceEnd])
    let sliceCount = slice.count

    print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count, "slice count:", sliceCount)

    // need to be careful to convert from counts of Ints to bytes
    memcpy(inBuffer.pointee.mAudioData, slice, sliceCount*bytesPerChannel)
    inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount*bytesPerChannel)
    lastIndexRead += sliceCount

    // enqueue the buffer, or re-enqueue it if it's a used one
    check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}

I did not look at the Recorder code, but you may want to check whether the same sort of error crept in there.

Change AudioQueueBuffer's mAudioData

You're close!

Try this:

inBuffer.pointee.mAudioData.copyMemory(from: lastItemOfArray, byteCount: Int(numBytes))

or this:

memcpy(inBuffer.pointee.mAudioData, lastItemOfArray, Int(numBytes))

Audio Queue Services was tough enough to work with when it was pure C. Now that we have to do so much bridging to get the API to work with Swift, it's a real pain. If you have the option, try out AVAudioEngine.
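For comparison, here is a minimal AVAudioEngine sketch, assuming a mono 44.1 kHz float format and a one-second 440 Hz tone (all of these values are my own illustrative choices):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let sampleRate = 44_100.0
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
let frameCount = AVAudioFrameCount(sampleRate)   // one second of audio

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)

// fill one buffer with a 440 Hz sine wave
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
buffer.frameLength = frameCount
let samples = buffer.floatChannelData![0]
for frame in 0..<Int(frameCount) {
    samples[frame] = Float(sin(2 * Double.pi * 440 * Double(frame) / sampleRate))
}

do {
    try engine.start()
    player.scheduleBuffer(buffer)   // engine and player must stay alive while playing
    player.play()
} catch {
    print("engine failed to start:", error)
}

Note how much of the pointer bridging disappears: AVAudioPCMBuffer owns its memory, and the engine handles the queueing for you.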


A few other things to check:

Make sure your AudioQueue has the same format that you've defined in your AudioStreamBasicDescription.

var queue: AudioQueueRef?

// assumes userData has already been initialized and configured
AudioQueueNewOutput(&dataFormat, callBack, &userData, nil, nil, 0, &queue)
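For reference, a dataFormat for 16-bit signed mono PCM at 44.1 kHz might look like the sketch below; the concrete field values are my assumptions, so use whatever your ASBD actually declares.

import AudioToolbox

var dataFormat = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,      // one Int16 sample per packet
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)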

Confirm you have allocated and primed the queue's buffers.

let numBuffers = 3

// using forced optionals here for brevity
for _ in 0..<numBuffers {
    var buffer: AudioQueueBufferRef?
    if AudioQueueAllocateBuffer(queue!, userData.bufferByteSize, &buffer) == noErr {
        userData.mBuffers.append(buffer!)
        callBack(inUserData: &userData, inAQ: queue!, inBuffer: buffer!)
    }
}

Consider making your callback a standalone function (the C API expects a function pointer, so the callback can't be a closure that captures state).

func callBack(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    let numBytes: UInt32 = inBuffer.pointee.mAudioDataBytesCapacity
    memcpy(inBuffer.pointee.mAudioData, pcmData, Int(numBytes))
    inBuffer.pointee.mAudioDataByteSize = numBytes

    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}

Also, see if you can get some basic PCM data to play through your audio queue before attempting to bring in the server-side data.

var pcmData: [Int16] = []
for _ in 0..<frameCount {
    pcmData.append(Int16.random(in: Int16.min...Int16.max)) // noise
}

Limit or extend the number of samples that are played

Two possibilities.

  1. High-level: When you start playing, schedule an NSTimer that calls -stop after one second, or use -[NSObject performSelector:withObject:afterDelay:].

  2. Low-level: In RenderTone(), keep track of how many samples have been played so far, persisting that count across calls in an ivar on the view controller, exactly as it already does with theta. In the sample-generation loop, once the sample count is >= 44100, set buffer[frame] to 0 (a sketch follows below).

The fundamental thing to understand is that Core Audio calls your RenderTone() function repeatedly, whenever it needs more audio data to play. It asks for a certain amount of data (inSampleCount), and you must provide exactly that much, no more, no less. If you want it to play silence, fill the buffer with zeros.
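Here is a hedged Swift sketch of that idea (RenderTone is C in the original project; the names samplesPlayed, maxSamples, and renderTone here are my own illustrative stand-ins, not the project's API):

import Foundation

var samplesPlayed = 0                            // persists across render calls, like theta
let maxSamples = 44_100                          // stop after one second at 44.1 kHz
var theta = 0.0
let thetaIncrement = 2.0 * Double.pi * 440.0 / 44_100.0

func renderTone(into buffer: UnsafeMutablePointer<Float32>, sampleCount: Int) {
    for frame in 0..<sampleCount {
        if samplesPlayed >= maxSamples {
            buffer[frame] = 0                    // past the limit: output silence
        } else {
            buffer[frame] = Float32(sin(theta))
            theta += thetaIncrement
            samplesPlayed += 1
        }
    }
}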

AudioQueue ate my buffer (first 15 milliseconds of it)

I'm not familiar with the iPhone audio APIs, but they appear to be similar to others where you generally queue up more than one buffer. That way, when the system finishes processing the first buffer, it can immediately start on the next one (since it's already queued) while the completion callback for the first buffer runs.

Something like:

#include <AudioToolbox/AudioToolbox.h>

AudioQueueRef aq;
AudioQueueBufferRef aq_buffer[2];
AudioStreamBasicDescription asbd;

void aq_callback (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // note that the callback tells us which buffer has been completed, so all
    // we have to do is queue it back up
    OSStatus s = AudioQueueEnqueueBuffer(aq, inBuffer, 0, NULL);
}

void aq_init(void) {
    OSStatus s;

    asbd.mSampleRate = AUDIO_SAMPLES_PER_S;
    asbd.mFormatID = kAudioFormatLinearPCM;
    asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mBytesPerPacket = 1;
    asbd.mFramesPerPacket = 1;
    asbd.mBytesPerFrame = 1;
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel = 8;
    asbd.mReserved = 0;

    int PPM_PACKETS_PER_SECOND = 50;
    // one buffer is as long as one PPM frame
    int BUFFER_SIZE_BYTES = asbd.mSampleRate/PPM_PACKETS_PER_SECOND*asbd.mBytesPerFrame;

    s = AudioQueueNewOutput(&asbd, aq_callback, NULL, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq);
    s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer[0]);
    s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer[1]);

    // put samples in the buffers - fill both before starting
    buffer_data(my_data, aq_buffer[0]);
    buffer_data(my_data, aq_buffer[1]);

    s = AudioQueueStart(aq, NULL);
    s = AudioQueueEnqueueBuffer(aq, aq_buffer[0], 0, NULL);
    s = AudioQueueEnqueueBuffer(aq, aq_buffer[1], 0, NULL);
}

