iOS - Combine/Concatenate Multiple Audio Files

I have done this. To do the concatenation you first need to load the audio files into AVAssets. Specifically, you'll want to use a subclass of AVAsset called AVURLAsset, which can load up your URL: Loading AVAsset. You can then add each AVAsset to an AVMutableComposition, which is designed to contain multiple AVAssets. Once you've loaded everything into the AVMutableComposition, you can use an AVAssetExportSession to write the composition to a file.
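
A minimal Swift sketch of those steps, assuming an array of local file URLs (the function name and file handling are hypothetical, not from the original answer):

```swift
import AVFoundation

// Pure helper: cumulative start offsets (in seconds) for clips of the given durations.
// This mirrors the "running cursor" used when appending clips below.
func startOffsets(forDurations durations: [Double]) -> [Double] {
    var offsets: [Double] = []
    var total = 0.0
    for duration in durations {
        offsets.append(total)
        total += duration
    }
    return offsets
}

// Hypothetical helper: append each audio file end-to-end and export as M4A.
func concatenateAudioFiles(at urls: [URL], to outputURL: URL) {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
    var cursor = CMTime.zero
    for url in urls {
        let asset = AVURLAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { continue }
        // Take the whole clip and insert it at the running end of the composition.
        try? track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                   of: sourceTrack, at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }
    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetAppleM4A) else { return }
    exporter.outputFileType = .m4a
    exporter.outputURL = outputURL
    exporter.exportAsynchronously {
        // Inspect exporter.status / exporter.error here.
    }
}
```

The export runs asynchronously, so in real code you would hold a reference to the session and report completion back on the main queue.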

Note that AVAssetExportSession doesn't give you much control over the file output (it'll export audio into an M4A file). If you need more control over the type of output, you'll need to use the AVAssetReader and AVAssetWriter classes to perform the export rather than AVAssetExportSession. These classes are much more complex to use than AVAssetExportSession, and it pays here to understand straight C.
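
For a sense of what the reader/writer route involves, here is a rough sketch (the function name is hypothetical, and `canAdd` checks, error handling, and completion signaling are omitted for brevity):

```swift
import AVFoundation

// Sketch: decode an asset's first audio track to Linear PCM and write it into
// a CAF container via AVAssetReader + AVAssetWriter.
func exportToPCM(asset: AVAsset, outputURL: URL) throws {
    guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: sourceTrack,
        outputSettings: [AVFormatIDKey: kAudioFormatLinearPCM])  // decode to PCM
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .caf)
    let writerInput = AVAssetWriterInput(
        mediaType: .audio,
        outputSettings: nil)  // nil = append the decoded samples unmodified
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "audio.export")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let buffer = readerOutput.copyNextSampleBuffer() {
                writerInput.append(buffer)
            } else {
                writerInput.markAsFinished()
                writer.finishWriting { /* inspect writer.status here */ }
                break
            }
        }
    }
}
```

Choosing different `outputSettings` on the writer input is where you get the format control that AVAssetExportSession withholds.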

I will also point out that there are no Apple-provided options to write out MP3 files. You can read them, but you can't write them. It's generally best to stick with an M4A/AAC format.

Concatenate two audio files in Swift and play them

I got your code working by changing two things:

  • the preset name: from AVAssetExportPresetPassthrough to AVAssetExportPresetAppleM4A

  • the output file type: from AVFileTypeWAVE to AVFileTypeAppleM4A

Modify your assetExport declaration like this:

guard let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A) else { return }
assetExport.outputFileType = AVFileTypeAppleM4A

then it will properly merge the files.

It looks like AVAssetExportSession only exports the M4A format and ignores other presets. There may be a way to make it export other formats (by subclassing it?), though I haven't explored this possibility yet.

AVMutableComposition - How to Merge Multiple Audio Recordings with 1 Video Recording

Your approach is correct, but you've mixed up the two parameters that you're using for insertTimeRange, and you're adding the video and audio from your video track multiple times.

The first parameter of insertTimeRange refers to the time range within the original audio asset, not the composition; so assuming that for each audio clip you want to add the entire clip, the time range should always start at .zero, not at startTime. The at: parameter should not be .zero, but rather startTime, the time within the composition where you want to add the audio.

Regarding your video track and your audioFromVideoTrack, I would not add these inside the loop, but rather just once before it. Otherwise you are adding them multiple times (once for each audio item) rather than just once, which can lead to unwanted behavior or the export session failing altogether.

I edited your code but wasn't able to actually test it, so take it with a grain of salt.

guard let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
guard let audioFromVideoCompositionTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
guard let audioModelCompositionTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) else { return }

let videoAsset = AVURLAsset(url: videoURL)
guard let videoTrack = videoAsset.tracks(withMediaType: .video).first else { return }

do {
    // Insert the video (and its own audio) once, before the loop.
    try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: videoTrack, at: .zero)
    if let audioFromVideoTrack = videoAsset.tracks(withMediaType: .audio).first {
        try audioFromVideoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: audioFromVideoTrack, at: .zero)
    }
} catch {
    print("Failed to insert video tracks: \(error)")
}

for audioModel in audioModels {
    let audioAsset = AVURLAsset(url: audioModel.url!)
    // startTime is where the clip goes in the composition; the source range starts at .zero.
    let startTime = CMTime(seconds: audioModel.startTime!, preferredTimescale: 1000)
    do {
        if let audioTrackFromAudioModel = audioAsset.tracks(withMediaType: .audio).first {
            try audioModelCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: audioAsset.duration), of: audioTrackFromAudioModel, at: startTime)
        }
    } catch {
        print("Failed to insert audio clip: \(error)")
    }
}

let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
// ... I know what to do from here

Combine two audio files into one in objective c

Look at the AVFoundation framework... There are many ways, but the simplest one for you might be:

  • create AVAsset for both files (use AVURLAsset subclass),
  • alloc AVMutableComposition (composition),
  • add AVMutableCompositionTrack with type AVMediaTypeAudio to composition

[composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

  • get the track(s) from the first AVAsset and add them to the AVMutableCompositionTrack,
  • get the track(s) from the second AVAsset and append them to the AVMutableCompositionTrack,
  • then create AVAssetExportSession with your composition and export it.

That's a simplified description, but you get the idea. It depends on how many tracks you have, what kind of effects you want to use, etc.

If you do want to see some real code, open the AVMovieExporter example, copy that code, remove the video parts, and leave only the audio.

How to merge two audio files using iPhone SDK?

OPTION-1:

Refer to this link:

Join multiple audio files into one

The answer by invalidname in that post says:

MP3 is a stream format, meaning it doesn't have a bunch of metadata at
the front or end of the file. While this has a lot of downsides, one
of the upsides is that you can concatenate MP3 files together into a
single file and it'll play.

This is pretty much what you're doing by concatenating into an
NSMutableData, the downside of which is that you might run out of
memory. Another option would be to build up the file on disk with
NSFileHandle.

This doesn't work for most file formats (aac/m4a, aif, caf, etc.). MP3
is literally just a stream dumped to disk, with metadata in frame
headers (or, in ID3, tucked between frames), so that's why it works.

OPTION-2:

combine two .caf audio files into a single audio file in iphone

The answer by Midhere in that post:

You can do it using ExtAudioFile services. In the iOS developer library they
have provided two examples that convert one audio file to another format.
In these, they open one audio file for reading and another file for
writing (the converted audio). You can change or update that code to read
from two files and write them to one output file in the same format (caf)
or a compressed format. First you open the first audio file, read every
packet from it, and write it to a new audio file. After finishing the
first audio file, close it and open the second audio file for reading.
Now read every packet from the second audio file, write them to the newly
created audio file, and close both the second audio file and the new
audio file.

Please find the links (1, 2) for these sample codes.
Hope this helps you, and good luck. :)

So try converting it to another format first, and then try combining.

OPTION-3:

Refer to:

Joining two CAF files together

Answer by dineth in this post:

If anyone is keen to know the answer, there is a way to do it. You
have to use the AudioFile API calls. Basically, you'd:

  • create a new audio file using AudioFileCreate with the correct parameters (bit rate etc.),
  • open your first file, read the packets and write them to the newly created file,
  • open your second file and do the same; make sure your counters are not zeroed out after writing the first file,
  • call AudioFileClose, and you're done!

Things to note: for local files, you have to run a method to escape spaces.

That's about it!

OPTION-4:

On a slightly different note:

I think you are recording files in CAF and trying to combine them. In that case, you can try recording your files in some format other than CAF.

Try out this link for that:

iOS: Record audio in other format than caf

Hope this helps.

Mixing two Audio Files using AVComposition on iOS

I'm posting the code that I eventually got working, in case anybody else is trying to do the same thing and would like some code samples. (My problem above, I suspect, was that the audio files weren't being loaded correctly.)

[self showActivityIndicator]; // This code takes a while, so show the user an activity indicator
AVMutableComposition *composition = [AVMutableComposition composition];
NSArray *tracks = @[@"backingTrack", @"RobotR33"];
NSString *audioFileType = @"wav";

for (NSString *trackName in tracks) {
    NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:trackName ofType:audioFileType]];
    AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:url options:nil];

    AVMutableCompositionTrack *audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                     preferredTrackID:kCMPersistentTrackID_Invalid];

    NSError *error = nil;
    [audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration)
                        ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                         atTime:kCMTimeZero
                          error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
    }
}

AVAssetExportSession *assetExport = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                     presetName:AVAssetExportPresetAppleM4A];

NSString *mixedAudio = @"mixedAudio.m4a";
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:mixedAudio];
NSURL *exportURL = [NSURL fileURLWithPath:exportPath];

if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath]) {
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

assetExport.outputFileType = AVFileTypeAppleM4A;
assetExport.outputURL = exportURL;
assetExport.shouldOptimizeForNetworkUse = YES;

[assetExport exportAsynchronouslyWithCompletionHandler:^{
    [self hideActivityIndicator];
    if (assetExport.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Completed successfully");
    } else {
        NSLog(@"Export failed: %@", assetExport.error);
    }
}];

