Attaching AudioUnit effects to SCNAudioSource nodes in SceneKit

So after a lot of research into getting any of the available AVAudioUnitEffect* effects into a SceneKit scene, I've finally got a solution that I have tested, tried, and played around with.

The following subclass of AVAudioEngine will:
1-Instantiate an AVAudioEngine with certain configurations
2-Add a few methods to encapsulate error handling and effects-preset loading
3-Provide a wire method that puts every player and effect node into the audio engine graph
4-Create AVAudioPCMBuffer instances with a configured frame count and file format, as a helper method to make these functions easier to call from SceneKit

Note: multi-channel code was not included, as I don't have a 5.1 surround sound system and am already very happy with the HRTF (Head Related Transfer Function) algorithm exposed by the AVAudioEnvironmentNode class. Be aware that this algorithm is the most compute-intensive, though it is a binaural format.

Possible additions:
1-Adding a reverb zone preset switcher, which will require disconnecting the audio engine and rewiring the environment node to a new reverb preset (large hall, small room, etc.)
2-Creating a raycast-based echo transfer dimension from the SceneKit SCNNode list to add more realistic effects. For example: you are at the central bar of a T junction and an enemy is screaming to the left of the top bar of the junction; the sound travels along the raycast leaving the enemy and bounces off a wall that is facing you. The AVAudioUnitDelay class has internal functions to change the early delay, creating the desired echo effect without washing the node with the same effect wherever you are. A rough sketch of this idea follows below.
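
Here is a minimal, untested sketch of that second idea in modern Swift syntax (the class further down is written in the older Swift 2 style). The function name, the 100-unit ray length, and the assumption that one SceneKit unit is one metre are all mine, purely for illustration:

import SceneKit
import AVFoundation

// Hypothetical helper: estimate an echo delay for sound that leaves an
// emitter, bounces off the first wall hit by a ray cast, and reaches the
// listener. iOS is assumed (SCNVector3 fields are Float there).
func applyEchoDelay(from emitter: SCNNode, toward wallDirection: SCNVector3,
                    listener: SCNNode, in scene: SCNScene,
                    delayUnit: AVAudioUnitDelay) {
    let start = emitter.worldPosition
    let end = SCNVector3(x: start.x + wallDirection.x * 100,
                         y: start.y + wallDirection.y * 100,
                         z: start.z + wallDirection.z * 100)
    // First piece of geometry hit along the ray leaving the emitter.
    guard let hit = scene.rootNode.hitTestWithSegment(from: start, to: end, options: nil).first else {
        delayUnit.delayTime = 0
        return
    }
    // Total path: emitter -> wall -> listener, at roughly 343 m/s.
    let toWall = distance(start, hit.worldCoordinates)
    let toListener = distance(hit.worldCoordinates, listener.worldPosition)
    delayUnit.delayTime = TimeInterval((toWall + toListener) / 343.0)
    delayUnit.feedback = 20    // starting values only; tune by ear
    delayUnit.wetDryMix = 30
}

private func distance(_ a: SCNVector3, _ b: SCNVector3) -> Float {
    let dx = Float(a.x - b.x), dy = Float(a.y - b.y), dz = Float(a.z - b.z)
    return sqrt(dx * dx + dy * dy + dz * dz)
}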

The code for the engine subclass:

import Foundation
import SceneKit
import AVFoundation

class AudioLayerEngine: AVAudioEngine {
    var engine: AVAudioEngine!
    var environment: AVAudioEnvironmentNode!
    var outputBuffer: AVAudioPCMBuffer!
    var voicePlayer: AVAudioPlayerNode!
    var multiChannelEnabled: Bool!
    // audio effects
    let delay = AVAudioUnitDelay()
    let distortion = AVAudioUnitDistortion()
    let reverb = AVAudioUnitReverb()

    override init() {
        super.init()
        engine = AVAudioEngine()
        environment = AVAudioEnvironmentNode()

        engine.attachNode(self.environment)
        voicePlayer = AVAudioPlayerNode()
        engine.attachNode(voicePlayer)
        voicePlayer.volume = 1.0
        outputBuffer = loadVoice()
        wireEngine()
        startEngine()
        voicePlayer.scheduleBuffer(self.outputBuffer, completionHandler: nil)
        voicePlayer.play()
    }

    func startEngine() {
        do {
            try engine.start()
        } catch {
            print("error starting the engine")
        }
    }

    func loadVoice() -> AVAudioPCMBuffer {
        let URL = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("art.scnassets/sounds/interface/test", ofType: "aiff")!)
        do {
            let soundFile = try AVAudioFile(forReading: URL, commonFormat: AVAudioCommonFormat.PCMFormatFloat32, interleaved: false)
            outputBuffer = AVAudioPCMBuffer(PCMFormat: soundFile.processingFormat, frameCapacity: AVAudioFrameCount(soundFile.length))
            do {
                try soundFile.readIntoBuffer(outputBuffer)
            } catch {
                print("something went wrong loading the sound file into the buffer")
            }
            print("returning buffer")
            return outputBuffer
        } catch {
            print("something went wrong opening the sound file")
        }
        return outputBuffer
    }

    func wireEngine() {
        loadDistortionPreset(AVAudioUnitDistortionPreset.MultiCellphoneConcert)
        engine.attachNode(distortion)
        engine.attachNode(delay)
        // player -> distortion -> delay -> environment -> hardware output
        engine.connect(voicePlayer, to: distortion, format: self.outputBuffer.format)
        engine.connect(distortion, to: delay, format: self.outputBuffer.format)
        engine.connect(delay, to: environment, format: self.outputBuffer.format)
        engine.connect(environment, to: engine.outputNode, format: constructOutputFormatForEnvironment())
    }

    func constructOutputFormatForEnvironment() -> AVAudioFormat {
        // the hardware output format is exposed on bus 0 of the output node
        let outputChannelCount = self.engine.outputNode.outputFormatForBus(0).channelCount
        let hardwareSampleRate = self.engine.outputNode.outputFormatForBus(0).sampleRate
        let environmentOutputConnectionFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: outputChannelCount)
        multiChannelEnabled = false
        return environmentOutputConnectionFormat
    }

    func loadDistortionPreset(preset: AVAudioUnitDistortionPreset) {
        distortion.loadFactoryPreset(preset)
    }

    func createPlayer(node: SCNNode) {
        let player = AVAudioPlayerNode()
        distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechCosmicInterference)
        engine.attachNode(player)
        engine.attachNode(distortion)
        engine.connect(player, to: distortion, format: outputBuffer.format)
        engine.connect(distortion, to: environment, format: constructOutputFormatForEnvironment())
        // HRTF gives the binaural 3D rendering mentioned above
        player.renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.HRTF
        player.reverbBlend = 0.3
    }
}
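
A minimal usage sketch, in the same Swift 2 era syntax as the class above; the enemy node, its position, and the single shared engine are just illustrative:

import SceneKit

let audioEngine = AudioLayerEngine()

func addSpatializedEnemy(scene: SCNScene) {
    let enemyNode = SCNNode()
    enemyNode.position = SCNVector3(x: 2, y: 0, z: -5)
    scene.rootNode.addChildNode(enemyNode)

    // Wires an AVAudioPlayerNode -> distortion -> environment for this node.
    // Keeping a reference to that player and updating its 3D position from
    // the node's transform each frame would be the natural next step.
    audioEngine.createPlayer(enemyNode)
}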


Looking to change speed of 3D-spatialized audio using SceneKit

I've found an answer to my question, so I wanted to post it here in case others are having the same or similar issues.

It turns out that I needed to explicitly set the audioListener as pointOfView before I could access the audioEnvironmentNode of the AVAudioEngine instance that SceneKit instantiates.

Also, I needed to set the camera on the audioListener before the listener is added to the scene, rather than afterward.

According to Apple's documentation though, if there is only one camera in the scene, which there is in my case, the node that the camera is set on is supposed to automatically default to becoming the pointOfView and thereby also default to being the audioListener.

In practice though, this does not seem to be the case. And since the pointOfView did not seem to be associated with the node my camera was set on, it seemed that I could not access the current audio environment node.
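
For reference, here is a minimal sketch of that listener setup; the node and function names are mine, not from the original project:

import SceneKit

func configureAudioListener(in scnView: SCNView) {
    guard let scene = scnView.scene else { return }

    let listenerNode = SCNNode()
    // Set the camera before the node is added to the scene (see above).
    listenerNode.camera = SCNCamera()
    scene.rootNode.addChildNode(listenerNode)

    // Explicitly make the same node both the point of view and the audio
    // listener so that scnView.audioEnvironmentNode becomes usable.
    scnView.pointOfView = listenerNode
    scnView.audioListener = listenerNode
}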

So the code I have listed in my question does work but with a minor tweak.

let engine = scnView.audioEngine
let environmentNode = scnView.audioEnvironmentNode
let pitch = AVAudioUnitVarispeed()

engine.attach(audioPlayer)
engine.attach(environmentNode)
engine.attach(pitch)

engine.connect(audioPlayer, to: pitch, format: file?.processingFormat)
engine.connect(pitch, to: environmentNode, format: file?.processingFormat)
engine.connect(environmentNode, to: engine.mainMixerNode, format: nil)

Once this is done, then as before, all I need to do is create the SCNAudioPlayer from the returned AVAudioPlayerNode and associate the SCNAudioPlayer with the SCNNode and all is well! :) The audio is presented in 3D based on the SCNNode's position and is easily modifiable using the parameters of the connected AVAudioUnit.
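
Concretely, that last step might look like the following sketch; the emitter node and the playback rate value are assumptions on my part, while audioPlayer, pitch, and file are the ones wired up in the snippet above:

import SceneKit
import AVFoundation

func attachSpatialAudio(to emitterNode: SCNNode,
                        audioPlayer: AVAudioPlayerNode,
                        pitch: AVAudioUnitVarispeed,
                        file: AVAudioFile) {
    audioPlayer.scheduleFile(file, at: nil, completionHandler: nil)

    // Wrapping the engine's player node in an SCNAudioPlayer lets SceneKit
    // spatialize it from the SCNNode's position while the attached
    // AVAudioUnit chain stays fully adjustable.
    let scnPlayer = SCNAudioPlayer(avAudioNode: audioPlayer)
    emitterNode.addAudioPlayer(scnPlayer)

    audioPlayer.play()
    pitch.rate = 1.5   // > 1.0 plays faster/higher, < 1.0 slower/lower
}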

So I hope this helps others, and please have a wonderful day / weekend!

Cheers! :)

ModGirl

What is the difference between AUAudioUnit and AudioUnit

1) I think the best way to describe the difference between AUAudioUnit and AudioUnit is that AUAudioUnit is an Objective-C class and AudioUnit is a set of (C/C++) APIs which together make up the AudioUnit framework.
(BTW AudioUnit is now part of the AudioToolbox framework)

2) AUAudioUnit is an Objective-C class. I don't know if it's possible to link to Objective-C from C/C++, but if so, it won't be very easy to do, and depending on your underlying problem it's probably not even the best thing to do.

3) It really depends on what you intend to do with your application. There are a lot of C/C++ APIs in the CoreAudio & AudioToolbox frameworks, including AudioComponent.h. So if that's what you're looking for, you can use those directly in your C/C++ application.

It will be really helpful to look at the CoreAudio & AudioToolbox headers directly. There's a lot of useful information in there.
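
To make the distinction concrete, here is a small Swift sketch showing both routes to the same component; the built-in delay effect is just an example choice:

import AudioToolbox

// One component description, used by both APIs: Apple's built-in delay effect.
var desc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                     componentSubType: kAudioUnitSubType_Delay,
                                     componentManufacturer: kAudioUnitManufacturer_Apple,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)

// 1) AUAudioUnit: the Objective-C class (AudioUnit v3 API).
let auUnit = try? AUAudioUnit(componentDescription: desc)

// 2) AudioUnit: the C API declared in AudioComponent.h (AudioToolbox).
var unit: AudioUnit?
if let component = AudioComponentFindNext(nil, &desc) {
    let status = AudioComponentInstanceNew(component, &unit)   // OSStatus, noErr on success
    print("AudioComponentInstanceNew status: \(status)")
}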


