Reverse an Audio File Swift/Objective-C

Yes, there is a way you can process, then export, any of the audio files for which there is iOS support.

However, most of these formats (MP3, to name one) are lossy and compressed. You must first decompress the data, apply the transformation, then recompress. Most transformations you will apply to the audio data should be done at the raw PCM level.

Combining these two statements, you do this in a few passes:

  1. convert original file to a kAudioFormatLinearPCM compliant audio file, like AIFF
  2. process that temporary file (reverse its content)
  3. convert the temporary file back to the original format

Just as if you were transforming, say, a compressed JPEG image, the process introduces degradation. The final audio will have, at best, suffered one more lossy compression cycle.

So the strict answer is no: starting from a lossy format, a bit-perfect reversal is not possible.
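The core of step 2 can be sketched as pure in-memory logic, assuming headerless 16-bit mono PCM data (the function name is made up for illustration):

```swift
// Reverse a buffer of 16-bit mono PCM audio, sample by sample.
// Assumption: `pcm` is raw sample data with any file header already
// stripped, as the note above about skipping headers requires.
func reversePCM16Mono(_ pcm: [UInt8]) -> [UInt8] {
    precondition(pcm.count % 2 == 0, "16-bit samples are 2 bytes each")
    var out = [UInt8]()
    out.reserveCapacity(pcm.count)
    var readPoint = pcm.count
    while readPoint > 0 {
        readPoint -= 2                 // step back one sample first,
        out.append(pcm[readPoint])     // then copy its two bytes in order
        out.append(pcm[readPoint + 1]) // so each sample stays intact
    }
    return out
}
```

Reversing whole 2-byte samples, rather than individual bytes, is what keeps each sample's value intact while the sequence plays backwards.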


Just for reference, here is some starter code in Swift 3. It needs further refinement to skip the file headers.

var outAudioFile: AudioFileID?
var pcm = AudioStreamBasicDescription(mSampleRate: 44100.0,
                                      mFormatID: kAudioFormatLinearPCM,
                                      mFormatFlags: kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger,
                                      mBytesPerPacket: 2,
                                      mFramesPerPacket: 1,
                                      mBytesPerFrame: 2,
                                      mChannelsPerFrame: 1,
                                      mBitsPerChannel: 16,
                                      mReserved: 0)

var theErr = AudioFileCreateWithURL(destUrl as CFURL,
                                    kAudioFileAIFFType,
                                    &pcm,
                                    .eraseFile,
                                    &outAudioFile)
if noErr == theErr, let outAudioFile = outAudioFile {
    var inAudioFile: AudioFileID?
    theErr = AudioFileOpenURL(sourceUrl as CFURL, .readPermission, 0, &inAudioFile)

    if noErr == theErr, let inAudioFile = inAudioFile {

        var fileDataSize: UInt64 = 0
        var thePropertySize = UInt32(MemoryLayout<UInt64>.stride)
        theErr = AudioFileGetProperty(inAudioFile,
                                      kAudioFilePropertyAudioDataByteCount,
                                      &thePropertySize,
                                      &fileDataSize)

        if noErr == theErr {
            let dataSize = Int64(fileDataSize)
            // scratch buffer for one 16-bit sample at a time
            let theData = UnsafeMutableRawPointer.allocate(bytes: 2,
                                                           alignedTo: MemoryLayout<UInt8>.alignment)

            var readPoint = dataSize
            var writePoint: Int64 = 0

            while readPoint > 0 {
                // step back one sample before reading, so the first
                // read does not land past the end of the data
                readPoint -= 2
                var bytesToRead = UInt32(2)

                AudioFileReadBytes(inAudioFile, false, readPoint, &bytesToRead, theData)
                AudioFileWriteBytes(outAudioFile, false, writePoint, &bytesToRead, theData)

                writePoint += 2
            }

            theData.deallocate(bytes: 2, alignedTo: MemoryLayout<UInt8>.alignment)

            AudioFileClose(inAudioFile)
            AudioFileClose(outAudioFile)
        }
    }
}

Reverse playback audio file in ios objective c

There is no direct way to achieve this on an audio file.

My suggestion is to create chunks of the audio file and merge them in reverse order into one audio file.

You can use the AVMutableComposition, AVMutableCompositionTrack, and AVAssetExportSession APIs to achieve this.

You can also get more help from the movie-trimming examples in Apple's documentation; the same approach works for an audio file.

iOS reverse audio through AVAssetWriter

Print out the size of each buffer in number of samples (in the "reading" readerOutput while loop), and repeat in the "writing" writerInput for-loop. That way you can see all the buffer sizes and check whether they add up.

For example, you may be missing or skipping a buffer: if (writerInput.readyForMoreMediaData) is false, you "sleep", but then proceed to the next reversedSample in reversedSamples, so that buffer is effectively dropped from the writerInput.
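That accounting can be expressed as a tiny predicate: the sizes logged while writing should be exactly the sizes logged while reading, in reverse order, and the totals must match. A sketch (buffersConserved is a made-up helper; the sizes stand in for logged CMSampleBufferGetNumSamples values):

```swift
// True when no buffer was dropped: the written sizes are the read
// sizes in reverse order, and the total sample counts agree.
func buffersConserved(read: [Int], written: [Int]) -> Bool {
    return written == Array(read.reversed())
        && read.reduce(0, +) == written.reduce(0, +)
}
```

A dropped buffer (e.g. one skipped while readyForMoreMediaData was false) makes the predicate fail immediately.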

UPDATE (based on comments):
I found in the code, there are two problems:

  1. The output settings are incorrect: the input file is mono (1 channel), but the output settings are configured for 2 channels. It should be: [NSNumber numberWithInt:1], AVNumberOfChannelsKey. Compare the info on the output and input files:

(screenshots: output and input file info)


  2. The second problem is that you are reversing 643 buffers of 8192 audio samples each, instead of reversing the index of each audio sample. To see each buffer, I changed your debugging from looking at the size of each sample to looking at the size of the buffer, which is 8192. So line 76 is now: size_t sampleSize = CMSampleBufferGetNumSamples(sample);

The output looks like:

2015-03-19 22:26:28.171 audioReverse[25012:4901250] Reading [0]: 8192
2015-03-19 22:26:28.172 audioReverse[25012:4901250] Reading [1]: 8192
...
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [640]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [641]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [642]: 5056

2015-03-19 22:26:28.651 audioReverse[25012:4901250] Writing [0]: 5056
2015-03-19 22:26:28.652 audioReverse[25012:4901250] Writing [1]: 8192
...
2015-03-19 22:26:29.134 audioReverse[25012:4901250] Writing [640]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [641]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [642]: 8192

This shows that you're reversing the order of each 8192-sample buffer, but within each buffer the audio is still "facing forward". We can see this in a screenshot I took comparing a correct sample-by-sample reversal with your buffer-order reversal:

(screenshot: sample-by-sample reversal vs. buffer-order reversal waveforms)

I think your current scheme can work if you also reverse the samples within each 8192-sample buffer. I personally would not recommend NSArray enumerators for signal processing, but they can work if you operate at the sample level.
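In other words, a correct reversal operates at two levels: reverse the order of the buffers, and reverse the samples inside each one. A sketch with plain arrays standing in for the decoded buffer contents (fullyReverse is a hypothetical name):

```swift
// A full reversal reverses both the buffer order and the samples
// within each buffer; reversing only the outer level leaves every
// chunk playing forward, as the log output above demonstrates.
func fullyReverse(_ buffers: [[Int16]]) -> [[Int16]] {
    return buffers.reversed().map { Array($0.reversed()) }
}
```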

Objective-c/IOS: What's the simplest way to play an audio file backwards

In WAVE data, samples are interleaved. This means the data is organized like this:

Sample 1 Left | Sample 1 Right | Sample 2 Left | Sample 2 right ... Sample n Left | Sample n right

As each sample is 16 bits (2 bytes), one 2-channel sample (i.e. left and right together) is 4 bytes in size.

This way you know that the last sample in a block of wave data starts at:

wavDataSize - 4

You can then load each sample at a time by copying it into a different buffer by starting at the end of the recording and reading backwards. When you get to the start of the wave data you have reversed the wave data and playing will be reversed.
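That read-backwards loop for interleaved 16-bit stereo can be sketched with Int16 values standing in for the raw bytes (reverseStereoFrames is a made-up name); note that the left/right order inside each 4-byte frame is preserved:

```swift
// Reverse interleaved 16-bit stereo audio: walk the L/R frames from
// the end of the data, but keep the channel order within each frame.
func reverseStereoFrames(_ samples: [Int16]) -> [Int16] {
    precondition(samples.count % 2 == 0, "interleaved stereo comes in L/R pairs")
    var out = [Int16]()
    out.reserveCapacity(samples.count)
    var frame = samples.count
    while frame > 0 {
        frame -= 2                     // step back one frame (one L/R pair)
        out.append(samples[frame])     // left channel sample
        out.append(samples[frame + 1]) // right channel sample
    }
    return out
}
```

Reversing the raw bytes instead of whole frames would swap the channels and corrupt every sample value, which is why the frame is the unit of reversal here.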

Edit: If you want easy ways to read wave files on iOS check out the Audio File Services Reference.

Edit 2:

readPoint  = waveDataSize;
writePoint = 0;
while( readPoint > 0 )
{
    readPoint -= 4;
    UInt32 bytesToRead = 4;
    UInt32 sample;
    AudioFileReadBytes( inFile, false, readPoint, &bytesToRead, &sample );
    AudioFileWriteBytes( outFile, false, writePoint, &bytesToRead, &sample );

    writePoint += 4;
}

How to reverse an audio file?

I have worked on a sample app which records what the user says and plays it back backwards. I used Core Audio to achieve this. Link to app code.

As each sample is 16 bits (2 bytes) in size (mono channel), you can load one sample at a time by copying it into a different buffer, starting at the end of the recording and reading backwards. When you get to the start of the data, you have reversed it, and playback will be reversed.

// set up output file
AudioFileID outputAudioFile;

AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 16000.00;
myPCMFormat.mFormatID = kAudioFormatLinearPCM;
myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 16;
myPCMFormat.mBytesPerPacket = 2;
myPCMFormat.mBytesPerFrame = 2;

AudioFileCreateWithURL((__bridge CFURLRef)self.flippedAudioUrl,
                       kAudioFileCAFType,
                       &myPCMFormat,
                       kAudioFileFlags_EraseFile,
                       &outputAudioFile);

// set up input file
AudioFileID inputAudioFile;
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;

theErr = AudioFileOpenURL((__bridge CFURLRef)self.recordedAudioUrl, kAudioFileReadPermission, 0, &inputAudioFile);

UInt32 thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);

UInt32 dataSize = (UInt32)fileDataSize;
void *theData = malloc(dataSize);

// copy one 16-bit sample at a time, from the end of the input
// to the start of the output
UInt32 readPoint = dataSize;
UInt32 writePoint = 0;
while( readPoint > 0 )
{
    readPoint -= 2; // step back one sample before reading
    UInt32 bytesToRead = 2;

    AudioFileReadBytes( inputAudioFile, false, readPoint, &bytesToRead, theData );
    AudioFileWriteBytes( outputAudioFile, false, writePoint, &bytesToRead, theData );

    writePoint += 2;
}

free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);

Can't reverse AVAsset audio properly. The only result is white noise

You get white noise because your input and output file formats are incompatible: different sample rates, different channel counts, and probably other differences. To make this work you need a common (PCM) format mediating between reads and writes. This is a reasonable job for the newish AVAudio frameworks: read from the file into PCM buffers, shuffle the samples, then write from PCM back to the file. This approach is not optimised for large files, since all the data is read into the buffers in one go, but it is enough to get you started.

You can call this method from your getAudioFromVideo completion block. Error handling ignored for clarity.

- (void)readAudioFromURL:(NSURL *)inURL reverseToURL:(NSURL *)outURL {

    // prepare the in and out files
    NSError *error = nil;

    AVAudioFile *inFile = [[AVAudioFile alloc] initForReading:inURL error:&error];

    AVAudioFormat *format = inFile.processingFormat;
    AVAudioFrameCount frameCount = (UInt32)inFile.length;
    NSDictionary *outSettings = @{
        AVNumberOfChannelsKey: @(format.channelCount),
        AVSampleRateKey: @(format.sampleRate)
    };

    AVAudioFile *outFile = [[AVAudioFile alloc] initForWriting:outURL
                                                      settings:outSettings
                                                         error:&error];

    // prepare the forward and reverse buffers
    self.forwardBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                       frameCapacity:frameCount];
    self.reverseBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                       frameCapacity:frameCount];

    // read file into forwardBuffer
    [inFile readIntoBuffer:self.forwardBuffer error:&error];

    // set frameLength of reverseBuffer to forwardBuffer frameLength
    AVAudioFrameCount frameLength = self.forwardBuffer.frameLength;
    self.reverseBuffer.frameLength = frameLength;

    // stride is 1 or 2 depending on interleave format
    NSInteger stride = self.forwardBuffer.stride;

    // iterate over channels
    for (AVAudioChannelCount channelIdx = 0;
         channelIdx < self.forwardBuffer.format.channelCount;
         channelIdx++) {
        float *forwardChannelData = self.forwardBuffer.floatChannelData[channelIdx];
        float *reverseChannelData = self.reverseBuffer.floatChannelData[channelIdx];
        int32_t reverseIdx = 0;

        // iterate over samples, copying into reverseBuffer in reverse order
        // (frameIdx - 1 keeps the first read inside the buffer bounds)
        for (AVAudioFrameCount frameIdx = frameLength; frameIdx > 0; frameIdx--) {
            float sample = forwardChannelData[(frameIdx - 1) * stride];
            reverseChannelData[reverseIdx * stride] = sample;
            reverseIdx++;
        }
    }

    // write reverseBuffer to outFile
    [outFile writeFromBuffer:self.reverseBuffer error:&error];
}

How to Generate Phase Inverse audio file from input audio file in Swift?

My naive approach would be: subtract the average (the bias), multiply by -1, add average back in.
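A sketch of that approach on float samples (phaseInvert is a hypothetical name; for audio already centered on zero, the bias term is a no-op):

```swift
// Naive phase inversion: remove the DC bias (the mean), flip the
// sign of every sample, then restore the bias.
func phaseInvert(_ samples: [Float]) -> [Float] {
    guard !samples.isEmpty else { return [] }
    let mean = samples.reduce(0, +) / Float(samples.count)
    return samples.map { -($0 - mean) + mean }
}
```

Summing the inverted signal with the original cancels everything except the bias, which is one way to check the inversion.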


