How to Use the Core Audio API in Swift

How to use the CoreAudio API in Swift

You can't (currently) use an API requiring a C callback pointer from pure Swift code. Calling Swift functions or methods through a C function pointer is not supported by the current beta 4 language implementation, according to replies on the Swift forum at devforums.apple.com.

UPDATE: The above answer is obsolete as of Swift 2.0, which allows a global function or non-capturing closure to be passed wherever a C function pointer is expected.

One alternative is to put some small trampoline C callback functions in an Objective-C file, which can interoperate with Swift, and have those C functions in turn call a block or closure, which can live in Swift code. Configure the C callbacks with your Swift closures, then pass those C callbacks to the Core Audio functions.
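
With a current Swift toolchain the trampoline is usually unnecessary: a non-capturing closure can be handed straight to Core Audio. A minimal sketch, assuming `audioUnit` is an output unit you have already created and configured elsewhere:

import AudioToolbox

// Since Swift 2.0, a non-capturing closure can be passed wherever a C function
// pointer such as AURenderCallback is expected.
let renderCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp,
                                          inBusNumber, inNumberFrames, ioData in
    // Fill ioData with samples here; keep this code allocation- and lock-free.
    return noErr
}

@discardableResult
func installRenderCallback(on audioUnit: AudioUnit,
                           refCon: UnsafeMutableRawPointer?) -> OSStatus {
    var callbackStruct = AURenderCallbackStruct(inputProc: renderCallback,
                                                inputProcRefCon: refCon)
    return AudioUnitSetProperty(audioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input,
                                0,
                                &callbackStruct,
                                UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}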

How to use Core Audio in Swift language

I think I found the answers, so I will leave them here in case somebody is interested.

  1. The bridging approach is preferred. As invalidname (Chris Adamson) said in his media-frameworks talk, you have to "render unto Caesar the things that are Caesar's, and unto God the things that are God's", i.e. use C for the C API and Swift for the Swifty things.
  2. Talking about performance, I found an article that discusses it. The conclusion is that for primitive types there is no problem with doing the type conversion, calling the C function, and converting back. But for types like String/char*, structs, and more complicated types you could encounter a performance decrease (see the sketch after this list).
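
As a rough illustration of the difference in bridging cost (macOS-only, since it uses the AudioObject API; the property address is only there to have something concrete to pass):

import CoreAudio

// Primitive values and fixed-size structs bridge cheaply: `&` hands the C
// function a pointer to the Swift storage for the duration of the call.
var dataSize: UInt32 = 0
var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultOutputDevice,
    mScope:    kAudioObjectPropertyScopeGlobal,
    mElement:  kAudioObjectPropertyElementMaster)

let status = AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                            &address, 0, nil, &dataSize)

// Strings are different: bridging a Swift String to a char * may allocate and
// copy, which is one reason to keep such conversions off the audio thread.
let label = "output-device"
label.withCString { cLabel in
    _ = cLabel  // cLabel is only valid inside this closure
}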

By the way, don't hesitate to add more things if you think they could help other people.

Can code using Core Audio be compatible across iOS & macOS?

The canonical sample format is now stereo float 32 on iOS too.

macOS supports custom v3 and v2 audio units, while iOS supports custom v3 audio units but only system-provided v2 audio units.

AVAudioEngine and friends wrap much of the Core Audio C API in Swift/ObjC, and I believe there are very few platform differences, if any. I recommend trying AVAudioEngine first, then using the C API if it doesn't meet your needs.
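
As a rough illustration of how little code a basic AVAudioEngine setup takes (the file URL is a placeholder you would supply yourself), and it runs unchanged on iOS and macOS:

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func playFile(at url: URL) throws {
    let file = try AVAudioFile(forReading: url)

    engine.attach(player)
    // Connecting with the file's processing format lets the engine handle any
    // sample-rate or channel-count conversion to the hardware format.
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
}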

Much of the C API is cross-platform, but there are areas where something is supported on macOS only or iOS only. You can look through the headers to see the differences. For example, here are the definitions of the output audio unit sub-types (with documentation removed).

#if !TARGET_OS_IPHONE

CF_ENUM(UInt32) {
    kAudioUnitSubType_HALOutput      = 'ahal',
    kAudioUnitSubType_DefaultOutput  = 'def ',
    kAudioUnitSubType_SystemOutput   = 'sys ',
};

#else

CF_ENUM(UInt32) {
    kAudioUnitSubType_RemoteIO       = 'rioc',
};

#endif

If you want to write a cross-platform wrapper, you have to use preprocessor directives around the platform specifics. Here is a cross-platform function that creates an AudioComponentDescription for an output audio unit using the platform-specific sub-types.

AudioComponentDescription outputDescription() {
    AudioComponentDescription description;
    description.componentType         = kAudioUnitType_Output;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags        = 0;
    description.componentFlagsMask    = 0;

#if TARGET_OS_IPHONE
    description.componentSubType = kAudioUnitSubType_RemoteIO;
#else
    description.componentSubType = kAudioUnitSubType_DefaultOutput;
#endif

    return description;
}
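
The same idea carries over to Swift, where the conditional compilation uses #if os(iOS) instead of TARGET_OS_IPHONE. A sketch of the equivalent wrapper (ignoring tvOS, watchOS, and Mac Catalyst for brevity):

import AudioToolbox

func outputDescription() -> AudioComponentDescription {
    var description = AudioComponentDescription()
    description.componentType         = kAudioUnitType_Output
    description.componentManufacturer = kAudioUnitManufacturer_Apple
    description.componentFlags        = 0
    description.componentFlagsMask    = 0

#if os(iOS)
    description.componentSubType = kAudioUnitSubType_RemoteIO
#else
    description.componentSubType = kAudioUnitSubType_DefaultOutput
#endif

    return description
}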

There are some other audio units that are only supported on iOS or macOS, and the API that manages "system"-level audio interaction is completely different: macOS uses a C API (the Core Audio HAL), while iOS has AVAudioSession.
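
For example, activating a playback session is iOS-only and has no direct macOS counterpart. A sketch, assuming you just want basic playback:

#if os(iOS)
import AVFoundation

func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    // Categories, modes, and interruption handling only exist on iOS;
    // on macOS the equivalent work goes through the Core Audio HAL C API.
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
}
#endif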

I'm sure I'm missing some things :)

Confusion With Audio Stream Formats and Data Types with Core Audio

Given your [Float] data, instead of kAudioFormatFlagIsSignedInteger and 16 bits per channel, you probably want to use kAudioFormatFlagIsFloat and 32 bits per channel (4 bytes per channel, which is 8 bytes per packet and frame for interleaved stereo).

Note that for all recent iOS devices the native audio format is 32-bit float, not 16-bit int, using a native (hardware?) sample rate of 48000, not 44100.
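
Putting those two notes together, a stream description for interleaved stereo 32-bit float at 48 kHz might look like this (a sketch; adjust the channel count and sample rate to your data):

import AudioToolbox

let channels: UInt32 = 2
let bytesPerSample = UInt32(MemoryLayout<Float32>.size)   // 4

var asbd = AudioStreamBasicDescription(
    mSampleRate:       48000.0,
    mFormatID:         kAudioFormatLinearPCM,
    mFormatFlags:      kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
    mBytesPerPacket:   bytesPerSample * channels,          // 8 for interleaved stereo
    mFramesPerPacket:  1,
    mBytesPerFrame:    bytesPerSample * channels,
    mChannelsPerFrame: channels,
    mBitsPerChannel:   bytesPerSample * 8,                 // 32
    mReserved:         0)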

Also, note that Apple recommends not using Swift inside the audio callback context (see 2017 or 2018 WWDC sessions on audio), so your Audio Unit render callback should probably call a C function to do all the work (anything touching ioData or inRefCon).

You might also want to check to make sure your array index does not exceed your array bounds.

Swift Core Audio Learning Resources

Chris Adamson's book is in Objective-C, but covers Core Audio quite well. Ask The Google for his name and you'll find some of his articles.
Many things transfer to Swift fairly easily. My blog has several examples.

Core MIDI is another thing though. Swift support of Core MIDI is still problematic.

CoreAudio: the proper method of reading an AudioFileMarkerList?

Here is the answer, in case anyone comes across this in the future.

// Get the size of the markers property (a list of AudioFileMarker structs).
UInt32 propSize;
UInt32 writable;

[EZAudioUtilities checkResult:AudioFileGetPropertyInfo(self.audioFileID,
                                                       kAudioFilePropertyMarkerList,
                                                       &propSize,
                                                       &writable)
                    operation:"Failed to get the size of the marker list"];

size_t length = NumBytesToNumAudioFileMarkers(propSize);

// Allocate enough space for the markers.
AudioFileMarkerList markers[length];

if (length > 0) {
    // Pull the marker list.
    [EZAudioUtilities checkResult:AudioFileGetProperty(self.audioFileID,
                                                       kAudioFilePropertyMarkerList,
                                                       &propSize,
                                                       &markers)
                        operation:"Failed to get the markers list"];
} else {
    return NULL;
}

//NSLog(@"# of markers: %d\n", markers->mNumberMarkers);

CoreAudio AudioObjectRemovePropertyListener not working in Swift

Yeah, it could actually be a bug, because the listener block cannot be removed by AudioObjectRemovePropertyListenerBlock. However, I found that registering an AudioObjectPropertyListenerProc with the AudioObject works as a workaround in Swift.

// The block-based API fails to remove the listener:
//var queue = dispatch_queue_create("testqueue", nil)
//var listener: AudioObjectPropertyListenerBlock = { _, _ in
//    AudioObjectGetPropertyData(outputDeviceID, &volumeAOPA, 0, nil, &volSize, &volume)
//    print(volume)
//}
//
//AudioObjectAddPropertyListenerBlock(outputDeviceID, &volumeAOPA, queue, listener)
//AudioObjectRemovePropertyListenerBlock(outputDeviceID, &volumeAOPA, queue, listener)

// The proc-based API works:
var data: UInt32 = 0

func listenerProc() -> AudioObjectPropertyListenerProc {
    return { _, _, _, _ in
        AudioObjectGetPropertyData(outputDeviceID, &volumeAOPA, 0, nil, &volSize, &volume)
        print(volume)
        return 0
    }
}

AudioObjectAddPropertyListener(outputDeviceID, &volumeAOPA, listenerProc(), &data)
AudioObjectRemovePropertyListener(outputDeviceID, &volumeAOPA, listenerProc(), &data)

How to use Core Audio's Clock API?

The answer is that there is almost no documentation, and the only reference I found was an Apple mailing-list post stating that it is not a fully developed API.

Instead, if you need audio clock data, register a render notification callback on your generator audio unit, like this:

AudioUnitAddRenderNotify(m_generatorAudioUnit, auRenderCallback, this);

OSStatus auRenderCallback(void                       *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp       *inTimeStamp,
                          UInt32                     inBusNumber,
                          UInt32                     inNumberFrames,
                          AudioBufferList            *ioData)
{
    AudioEngineModel *pAudioEngineModel = (AudioEngineModel *)inRefCon;

    pAudioEngineModel->m_f64SampleTime = inTimeStamp->mSampleTime;

    return noErr;
}

You can get seconds by dividing the mSampleTime by the sampleRate.
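
If you prefer to stay in Swift, the same notification can be installed with a non-capturing closure. A sketch where a heap-allocated Float64 stands in for the AudioEngineModel above, and `generatorUnit` is assumed to be your existing generator audio unit:

import AudioToolbox

// Heap storage whose address can safely be handed to C as the refCon.
let sampleTimeStorage = UnsafeMutablePointer<Float64>.allocate(capacity: 1)
sampleTimeStorage.initialize(to: 0)

let renderNotify: AURenderCallback = { inRefCon, _, inTimeStamp, _, _, _ in
    // Record the latest sample time; keep this callback allocation- and lock-free.
    inRefCon.assumingMemoryBound(to: Float64.self).pointee = inTimeStamp.pointee.mSampleTime
    return noErr
}

@discardableResult
func installClockNotify(on generatorUnit: AudioUnit) -> OSStatus {
    return AudioUnitAddRenderNotify(generatorUnit,
                                    renderNotify,
                                    UnsafeMutableRawPointer(sampleTimeStorage))
}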


