How to Apply Audio Effect to a File and Write to Filesystem - iOS

How to apply filters to a previously recorded sound and save the modified version using AudioKit?

In addition to opening the file with AKFileInput, you have to play it back, and to do so you'll create an AKInstrument. Within that AKInstrument you can process the output with any of AudioKit's signal modifier operations.
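
AKFileInput/AKInstrument is the older AudioKit 2.x API. For reference, a minimal hedged sketch of the same idea on the newer AudioKit 4.x API, assuming AKPlayer, AKReverb, and AudioKit.renderToFile are available; inputURL and outputURL are placeholders for your own file URLs:

import AudioKit
import AVFoundation

// Minimal sketch, assuming AudioKit 4.x: play the file through a
// reverb and render the result straight to disk (iOS 11+).
func applyReverbAndRender(from inputURL: URL, to outputURL: URL) throws {
    guard let player = AKPlayer(url: inputURL) else { return }
    // Any of AudioKit's signal modifiers could sit here; reverb is just an example.
    let reverb = AKReverb(player, dryWetMix: 0.5)
    AudioKit.output = reverb

    // AAC settings make AVAudioFile write an .m4a container.
    let settings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                   AVSampleRateKey: 44_100,
                                   AVNumberOfChannelsKey: 2]
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: settings)

    // renderToFile runs the processing chain faster than real time.
    try AudioKit.renderToFile(outputFile, duration: player.duration) {
        player.play()
    }
}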

Read audio file, perform filters (i.e. reverb), and then write audio file without playback on iOS

In general, the steps are:

  1. Set up an AUGraph like this: AudioFilePlayer -> Reverb/Limiter/EQ/etc. -> GenericOutput
  2. Open the input file and schedule it on the AudioFilePlayer.
  3. Create an output file and repeatedly call AudioUnitRender on the GenericOutput unit, writing each rendered buffer to the output file (the render loop is sketched below).

I'm not sure about the speed of this, but it should be acceptable.

There is a comprehensive example of offline rendering in this thread that covers the setup and rendering process.
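
To make step 3 concrete, here is a hedged Swift sketch of just the pull loop, assuming the AUGraph is already initialized, the input file has been scheduled on the AudioFilePlayer, genericOutput is the GenericOutput node's AudioUnit, and outFile is an ExtAudioFileRef created with ExtAudioFileCreateWithURL:

import AudioToolbox

// Pull rendered audio from the GenericOutput unit slice by slice and
// write each slice to the output file.
func renderOffline(genericOutput: AudioUnit,
                   outFile: ExtAudioFileRef,
                   channels: UInt32,
                   bytesPerFrame: UInt32,
                   totalFrames: Float64) {
    let framesPerSlice: UInt32 = 4096
    var sampleTime: Float64 = 0

    while sampleTime < totalFrames {
        var timeStamp = AudioTimeStamp()
        timeStamp.mFlags = .sampleTimeValid
        timeStamp.mSampleTime = sampleTime

        // Passing nil for mData lets the audio unit render into its own
        // internal buffer. This assumes an interleaved format;
        // non-interleaved audio needs one AudioBuffer per channel.
        var bufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(mNumberChannels: channels,
                                  mDataByteSize: framesPerSlice * bytesPerFrame,
                                  mData: nil))

        var flags = AudioUnitRenderActionFlags()
        AudioUnitRender(genericOutput, &flags, &timeStamp,
                        0, framesPerSlice, &bufferList)
        ExtAudioFileWrite(outFile, framesPerSlice, &bufferList)
        sampleTime += Float64(framesPerSlice)
    }
    // Disposing the file writes the header and closes it.
    ExtAudioFileDispose(outFile)
}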

Render audio file offline using AVAudioEngine

You need to nil outputFile to flush the header and close the m4a file: AVAudioFile finalizes the file only when the object is deallocated, so releasing the last reference is what actually closes it.
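
For context, a minimal sketch of the surrounding offline render loop, assuming engine and player are already attached and connected and the source file has been scheduled on the player; outURL and outputDuration are placeholders:

import AVFoundation

func renderOffline(engine: AVAudioEngine, player: AVAudioPlayerNode,
                   outURL: URL, outputDuration: Double) throws {
    let format = engine.outputNode.outputFormat(forBus: 0)
    try engine.enableManualRenderingMode(.offline, format: format,
                                         maximumFrameCount: 4096)
    try engine.start()
    player.play()

    // AAC settings make AVAudioFile write an .m4a container.
    var outputFile: AVAudioFile? = try AVAudioFile(
        forWriting: outURL,
        settings: [AVFormatIDKey: kAudioFormatMPEG4AAC,
                   AVSampleRateKey: format.sampleRate,
                   AVNumberOfChannelsKey: format.channelCount])

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    let totalFrames = AVAudioFramePosition(outputDuration * format.sampleRate)

    while engine.manualRenderingSampleTime < totalFrames {
        let remaining = totalFrames - engine.manualRenderingSampleTime
        let frames = min(buffer.frameCapacity, AVAudioFrameCount(remaining))
        if try engine.renderOffline(frames, to: buffer) == .success {
            try outputFile?.write(from: buffer)
        }
    }
    player.stop()
    engine.stop()

    // Releasing the last reference flushes the header and closes the file.
    outputFile = nil
}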

iOS: How to intercept and manipulate bytes in AVPlayer

You'll want to use MTAudioProcessingTap for this. It attaches to the AVPlayerItem's AVAudioMix through the audioTapProcessor property of its AVAudioMixInputParameters. In your tap's process callback, use MTAudioProcessingTapGetSourceAudio to grab your buffers. Once you have the buffer reference you can XOR the data.

There's some boilerplate needed to get the AVAudioMix and MTAudioProcessingTap set up properly. Apple's sample code is pretty old, but should still work:
https://developer.apple.com/library/archive/samplecode/AudioTapProcessor/Introduction/Intro.html#//apple_ref/doc/uid/DTS40012324

Also note that it will be easier to do this in Objective-C, for several reasons: interop with your C file is simpler, and reading from and writing to the buffer is much more straightforward in Objective-C. It will also run faster than in Swift. If you are interested in seeing what this would look like in Swift, there is a sample project here:
https://github.com/gchilds/MTAudioProcessingTap-in-Swift
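
To give a sense of the shape of that boilerplate, here is a hedged Swift sketch; makeXORPlayer and the 0x55 key are illustrative names, not part of any API:

import AVFoundation
import MediaToolbox

// Create an AVPlayer whose audio bytes are XOR'd in a processing tap.
func makeXORPlayer(mediaURL: URL) -> AVPlayer? {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil, finalize: nil, prepare: nil, unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio into bufferListInOut.
            guard MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                     flagsOut, nil, numberFramesOut) == noErr
            else { return }
            // XOR every byte of every buffer.
            for buffer in UnsafeMutableAudioBufferListPointer(bufferListInOut) {
                guard let data = buffer.mData else { continue }
                let bytes = data.assumingMemoryBound(to: UInt8.self)
                for i in 0..<Int(buffer.mDataByteSize) { bytes[i] ^= 0x55 }
            }
        })

    var tapOut: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tapOut)
    let asset = AVURLAsset(url: mediaURL)
    guard status == noErr,
          let tap = tapOut?.takeRetainedValue(),
          let track = asset.tracks(withMediaType: .audio).first else { return nil }

    // Wire the tap into the player item through an AVAudioMix.
    let params = AVMutableAudioMixInputParameters(track: track)
    params.audioTapProcessor = tap
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    let item = AVPlayerItem(asset: asset)
    item.audioMix = mix
    return AVPlayer(playerItem: item)
}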

iOS: Process audio from AVPlayer video track

I can think of one fairly simple way to do this.

Basically, you open your video file in an AKPlayer instance and mute the AVPlayer's own audio track. The video's audio then plays through AudioKit, and it's straightforward to lock the video and audio together using a common host-time clock. Pseudo-code of the flow:

import AVFoundation
import AudioKit

// This will represent a common clock using the host time
let audioClock = CMClockGetHostTimeClock()

// your video player
let videoPlayer = AVPlayer(url: videoURL)
videoPlayer.masterClock = audioClock
videoPlayer.automaticallyWaitsToMinimizeStalling = false

....

var audioPlayer: AKPlayer?

// your video-audio player
if let player = try? AKPlayer(url: videoURL) {
    audioPlayer = player
}

// Schedule both players against the same host time so they start in sync.
func schedulePlayback(videoTime: TimeInterval, audioTime: TimeInterval, hostTime: UInt64) {
    audioPlay(at: audioTime, hostTime: hostTime)
    videoPlay(at: videoTime, hostTime: hostTime)
}

func audioPlay(at time: TimeInterval = 0, hostTime: UInt64 = 0) {
    audioPlayer?.play(when: time, hostTime: hostTime)
}

func videoPlay(at time: TimeInterval = 0, hostTime: UInt64 = 0) {
    // Convert the shared host time plus the seek offset into a CMTime
    // the AVPlayer can start at.
    let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
    let cmVTime = CMTimeMakeWithSeconds(time, preferredTimescale: 1_000_000)
    let futureTime = CMTimeAdd(cmHostTime, cmVTime)
    videoPlayer.setRate(1, time: CMTime.invalid, atHostTime: futureTime)
}

You can connect the player up to any AudioKit processing chain in the normal way.

When you want to export your audio, run an AKNodeRecorder on the final output of the processing chain, record it to a file, and then merge that audio back into your video. I'm not sure whether the AudioKit offline processing that is being worked on is ready yet, so you may need to play the audio in real time to capture the processed output.
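
A minimal sketch of that capture step, assuming AudioKit 4.x, where finalMixer stands in for whatever node ends your processing chain:

import AudioKit

// Record the processed output in real time while the video plays.
let recorder = try AKNodeRecorder(node: finalMixer)
try recorder.record()
// ... play the synced video/audio through the chain in real time ...
recorder.stop()

if let processedFile = recorder.audioFile {
    // Merge this file back into the video, e.g. with an
    // AVMutableComposition and AVAssetExportSession.
    print("Processed audio written to \(processedFile.url)")
}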


