Webaudio - Seamlessly Playing Sequence of Audio Chunks

WebAudio - seamlessly playing sequence of audio chunks

I've written a small class in TypeScript that serves as a buffer for now. It has a bufferSize property that controls how many chunks it can hold. It's short and self-descriptive, so I'll paste it here. There is much to improve, so any ideas are welcome.

(You can quickly convert it to JS using https://www.typescriptlang.org/play/.)

class SoundBuffer {
    private chunks: Array<AudioBufferSourceNode> = [];
    private isPlaying: boolean = false;
    private startTime: number = 0;
    private lastChunkOffset: number = 0;

    constructor(public ctx: AudioContext, public sampleRate: number, public bufferSize: number = 6, private debug = true) { }

    private createChunk(chunk: Float32Array) {
        var audioBuffer = this.ctx.createBuffer(2, chunk.length, this.sampleRate);
        audioBuffer.getChannelData(0).set(chunk);
        var source = this.ctx.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(this.ctx.destination);
        source.onended = (e: Event) => {
            this.chunks.splice(this.chunks.indexOf(source), 1);
            if (this.chunks.length == 0) {
                this.isPlaying = false;
                this.startTime = 0;
                this.lastChunkOffset = 0;
            }
        };

        return source;
    }

    private log(data: string) {
        if (this.debug) {
            console.log(new Date().toUTCString() + " : " + data);
        }
    }

    public addChunk(data: Float32Array) {
        if (this.isPlaying && (this.chunks.length > this.bufferSize)) {
            this.log("chunk discarded");
            return; // throw away
        } else if (this.isPlaying && (this.chunks.length <= this.bufferSize)) { // schedule & add right now
            this.log("chunk accepted");
            let chunk = this.createChunk(data);
            chunk.start(this.startTime + this.lastChunkOffset);
            this.lastChunkOffset += chunk.buffer.duration;
            this.chunks.push(chunk);
        } else if ((this.chunks.length < (this.bufferSize / 2)) && !this.isPlaying) { // add & don't schedule
            this.log("chunk queued");
            let chunk = this.createChunk(data);
            this.chunks.push(chunk);
        } else { // add & schedule entire buffer
            this.log("queued chunks scheduled");
            this.isPlaying = true;
            let chunk = this.createChunk(data);
            this.chunks.push(chunk);
            this.startTime = this.ctx.currentTime;
            this.lastChunkOffset = 0;
            for (let i = 0; i < this.chunks.length; i++) {
                let chunk = this.chunks[i];
                chunk.start(this.startTime + this.lastChunkOffset);
                this.lastChunkOffset += chunk.buffer.duration;
            }
        }
    }
}
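
For context, here is roughly how I feed it. This is only a sketch: the WebSocket endpoint and the 16-bit little-endian PCM framing are assumptions about my transport, not part of the class itself.

// Rough usage sketch; the endpoint and PCM framing are placeholders.
const ctx = new AudioContext();
const soundBuffer = new SoundBuffer(ctx, ctx.sampleRate, 6);

const ws = new WebSocket("wss://example.com/pcm-stream"); // placeholder URL
ws.binaryType = "arraybuffer";

ws.onmessage = (event: MessageEvent) => {
    // Convert 16-bit signed PCM samples to the [-1, 1] floats the class expects.
    const pcm = new Int16Array(event.data as ArrayBuffer);
    const floats = new Float32Array(pcm.length);
    for (let i = 0; i < pcm.length; i++) {
        floats[i] = pcm[i] / 32768;
    }
    soundBuffer.addChunk(floats);
};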

Web Audio API: How to play a stream of MP3 chunks

No, you can't reuse an AudioBufferSourceNode, and you can't push onto an AudioBuffer. Their lengths are immutable.

This article (http://www.html5rocks.com/en/tutorials/audio/scheduling/) has some good information about scheduling with the Web Audio API. But you're on the right track.
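
A minimal sketch of the pattern that article describes: create a fresh AudioBufferSourceNode for every decoded chunk rather than trying to reuse one (the helper name here is illustrative):

// Sketch: one new AudioBufferSourceNode per decoded AudioBuffer.
// Source nodes are one-shot; create a new one each time you play a chunk.
function playBufferAt(ctx: AudioContext, buffer: AudioBuffer, when: number): AudioBufferSourceNode {
    const source = ctx.createBufferSource(); // cheap to create, used once
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start(when); // schedule against the context clock
    return source;
}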

Webaudio Playback from WebSocket has drop-outs

Is your stream really guaranteed to deliver complete audio files in each network chunk? (decodeAudioData does not work with partial MP3 chunks.)

It seems like (from the code snippet above) you're just relying on network timing to get the stream chunks started at the right time. That's guaranteed not to line up properly; you need to keep a bit of latency in the stream (to handle inconsistent network), and carefully schedule each chunk. The bit above that makes me cringe is source.start() with no time param - that won't keep the chunks scheduled one right after another. You need something like:

var nextStartTime = 0;

function addChunkToQueue( buffer ) {
    if (!nextStartTime) {
        // we've not yet started the queue - just queue this up,
        // leaving a "latency gap" so we're not desperately trying
        // to keep up. Note if the network is slow, this is going
        // to fail. Latency gap here is 1 second.
        nextStartTime = audioContext.currentTime + 1;
    }
    var bsn = audioContext.createBufferSource();
    bsn.buffer = buffer;
    bsn.connect( audioContext.destination );
    bsn.start( nextStartTime );

    // Ensure the next chunk will start at the right time
    nextStartTime += buffer.duration;
}
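
To make that concrete, here is a hedged sketch of the receiving side. The WebSocket endpoint is a placeholder, and it assumes each message is a complete, decodable audio file (per the caveat above); audioContext and addChunkToQueue are the ones defined in the snippet.

// Assumed to exist from the snippet above:
declare const audioContext: AudioContext;
declare function addChunkToQueue(buffer: AudioBuffer): void;

const socket = new WebSocket("wss://example.com/audio"); // placeholder endpoint
socket.binaryType = "arraybuffer";

socket.onmessage = (event: MessageEvent) => {
    // decodeAudioData needs a complete file per message (see above).
    audioContext.decodeAudioData(event.data as ArrayBuffer)
        .then((buffer) => addChunkToQueue(buffer))
        .catch((err) => console.error("decode failed", err));
};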

In addition, depending on how big your chunks are, I'd wonder if garbage collection isn't contributing to the problem. You should check it out in the profiler.

The onended path is not going to work well; it's reliant on JS event handling, and only fires AFTER the audio system is done playing; so there will ALWAYS be a gap.

Finally - this is not going to work well if the sound stream's sample rate does not match the audio device's. decodeAudioData will resample each chunk to the device rate, and the resampled chunk will not have a perfectly matching duration, so it will work, but there will likely be artifacts like clicks at the boundaries of chunks. You need a feature that's not yet spec'ed or implemented - selectable AudioContext sample rates - in order to fix this.
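
For what it's worth, in an implementation that did expose a sample-rate option on the AudioContext constructor, it would look something like the sketch below; treat the option and the 44100 figure as assumptions, not something you can rely on everywhere.

// Sketch: only where an AudioContext sampleRate option is supported.
// Matching the context rate to the stream's rate avoids the resampling step.
const streamSampleRate = 44100; // assumed rate of the incoming stream
const ctx = new AudioContext({ sampleRate: streamSampleRate });
console.log(ctx.sampleRate); // should report 44100 where supported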

Altering the Curve of a WebAudio WaveShaper Node While Playing

I think this is where it goes wrong:

var k = typeof amount === 'number' ? amount : 50

The slider's value is a string, not a number, so k always falls back to 50 - which would explain why it only works the first time. So if you do

makeDistortionCurve(parseInt(distortionAmountSlider.value, 10));

you should be good to go! (Or use parseFloat if you need a float.)
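
Putting it together, a small sketch of updating the curve from the slider while audio is playing. The node, slider, and makeDistortionCurve are taken from the question and assumed to exist already.

// Assumed to exist, as in the question:
declare const distortionNode: WaveShaperNode;
declare const distortionAmountSlider: HTMLInputElement;
declare function makeDistortionCurve(amount: number): Float32Array;

distortionAmountSlider.addEventListener("input", () => {
    // The slider's value is a string; convert it before rebuilding the curve.
    distortionNode.curve = makeDistortionCurve(parseFloat(distortionAmountSlider.value));
});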

When connecting to a merger node, is there any reason to use a number other than 0 as the second argument if the input is not a channel splitter?

In short, no.

Splitter is currently the only node that has multiple outputs, so it's the only node for which you would ever need to specify an output other than 0.

There are scenarios where you would do this with a splitter. For example, imagine how to create a graph that flips stereo channels:

var merger = context.createChannelMerger(2);
var splitter = context.createChannelSplitter(2);

splitter.connect(merger, 0, 1);
splitter.connect(merger, 1, 0);
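
A fuller sketch of that graph, assuming the stereo source is an <audio> element (any stereo source node would do):

// Sketch: flip the stereo channels of an <audio> element (assumed to exist).
const context = new AudioContext();
const audioElement = document.querySelector("audio") as HTMLAudioElement;
const source = context.createMediaElementSource(audioElement);

const splitter = context.createChannelSplitter(2);
const merger = context.createChannelMerger(2);

source.connect(splitter);
splitter.connect(merger, 0, 1); // left output -> right input
splitter.connect(merger, 1, 0); // right output -> left input
merger.connect(context.destination);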

In the future, some other nodes might acquire additional outputs (for example, I've proposed using a separate output for the envelope in a noise gate/expander node), and then there would be other cases (and this answer would change).


