Only First Track Playing of AVMutableComposition()

AVMutableComposition - Only Playing First Track (Swift)

I was able to figure out an answer to the most important question. In order to play all of the clips together, they need to be in the same track. To do this, move the following line outside of (before) the for loop:

let videoCompositionTrack = videoComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

Here is the full, corrected code:

let playerLayer: AVPlayerLayer = AVPlayerLayer()
lazy var videoPlayer: AVPlayer = AVPlayer()

var videoClips = [AVAsset]()

let videoComposition = AVMutableComposition()
var playerItem: AVPlayerItem!
var lastTime: CMTime = kCMTimeZero

let videoCompositionTrack = videoComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

for clip in videoClips {
    do {
        //Append each clip's video track to the end of the single composition track
        try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clip.duration),
                                                  ofTrack: clip.tracksWithMediaType(AVMediaTypeVideo)[0],
                                                  atTime: lastTime)
        lastTime = CMTimeAdd(lastTime, clip.duration)
    } catch {
        print("Failed to insert track")
    }
}
print("VideoComposition Tracks: \(videoComposition.tracks.count)") // Now shows a single video track

playerItem = AVPlayerItem(asset: videoComposition)
print("PlayerItem Duration: \(playerItem.duration.seconds)") // Shows the duration of all tracks together
print("PlayerItem Tracks: \(playerItem.tracks.count)") // Shows same number of tracks as the VideoComposition Track count

videoPlayer = AVPlayer(playerItem: playerItem)
playerLayer.player = videoPlayer
videoPlayer.volume = 0.0
videoPlayer.play() // Does play all clips sequentially

EDIT: I mentioned earlier that I was still wondering how to play multiple tracks in one asset. That's not how it works, I understand now. A good resource:

https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html

Only First Track Playing of AVMutableComposition()

OK, so for my exact problem, I had to apply specific CGAffineTransform transforms in Swift to get the result we wanted. The version I am posting works with any picture taken or obtained, as well as video.

//This method reads the orientation out of a transform. It is used below to decide how to scale and translate each asset
func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
    var assetOrientation = UIImageOrientation.up
    var isPortrait = false
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .right
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .left
        isPortrait = true
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .up
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .down
    }

    //Return the orientation and whether the asset is portrait
    return (assetOrientation, isPortrait)
}

//Method that builds the layer instruction for each track being edited and applies the transform needed to line that track up properly
func videoCompositionInstructionForTrack(_ track: AVCompositionTrack, _ asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {

    //This method returns the set of instructions for the given track

    //Create the initial instruction
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)

    //This is whatever asset you are about to apply instructions to
    let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

    //Get the original transform of the asset
    var transform = assetTrack.preferredTransform

    //Get the orientation of the asset and determine whether it is portrait or landscape. Video captured with the camera always reports a landscape naturalSize, with the rotation encoded in the preferred transform; this method accounts for that.
    let assetInfo = orientationFromTransform(transform)

    //You need a little background to understand this part.
    /* MyAsset is my original video. I need to combine a lot of other segments, according to the user, into this original video. So I have to make all the other videos fit this size.
       These are the width and height ratios of the original video divided by the new asset's.
    */
    let width = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width/assetTrack.naturalSize.width
    var height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height

    //If it is in portrait
    if assetInfo.isPortrait {

        //We actually change the height variable to divide by the width of the old asset instead of the height, because the dimensions are flipped when the asset is portrait rather than landscape.
        height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.width

        //We apply the transform and scale the image appropriately.
        transform = transform.scaledBy(x: height, y: height)

        //We also have to move the image or video appropriately. Since we scaled it, it could be way off to the side, outside the visible bounds.
        let movement = ((1/height)*assetTrack.naturalSize.height)-assetTrack.naturalSize.height

        //This lines it up dead center on the left side of the screen perfectly. Now we want to center it.
        transform = transform.translatedBy(x: 0, y: movement)

        //This calculates how much black there is. Cut it in half and there you go!
        let totalBlackDistance = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-transform.tx
        transform = transform.translatedBy(x: 0, y: -(totalBlackDistance/2)*(1/height))

    } else {

        //Landscape! We don't need to change the variables; everything defaults that way (iOS prefers landscape items), so we just scale appropriately.
        transform = transform.scaledBy(x: width, y: height)

        //This is a little complicated. Because the asset is landscape, it already fits the height correctly (for me anyway); it was just extra long. Think of this as a ratio: Answer = ((Original height/current asset height)*(current asset width))/(Original width)
        let scale: CGFloat = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width))/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width
        transform = transform.scaledBy(x: scale, y: 1)

        //The asset can be way off the screen again, so we have to move it back. This time we can center it exactly in the middle, because it wasn't flipped (it was landscape). Again, another long algorithm I derived.
        let movement = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width)))/2)*(1/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)
        transform = transform.translatedBy(x: movement, y: 0)
    }

    //This sets the transform on the instruction and returns it so we can apply it to each individual track.
    instruction.setTransform(transform, at: kCMTimeZero)
    return instruction
}

Now that we have those methods, we can apply the appropriate transformations to our assets and get everything fitting nicely.

func merge() {
    if let firstAsset = MyAsset, let newAsset = newAsset {

        //This creates our overall composition, our new video framework
        let mixComposition = AVMutableComposition()

        //One by one you create tracks (could use a loop, but I just had 3 cases)
        let firstTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

        //insertTimeRange throws, so you need a do/catch
        do {
            //Insert a time range into the track. I already calculated my time and call it startTime; this is where you would put your own. The preferredTimescale doesn't have to be 600000, it just controls precision. The at: parameter is not where playback begins within this individual clip, but where the clip starts in the composition as a whole - notice the at: times differ below. You also pass the source track to read from.
            try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)),
                                           of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: kCMTimeZero)
        } catch _ {
            print("Failed to load first track")
        }

        //Create the 2nd track
        let secondTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                         preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

        do {
            //Apply the 2nd time range you have, and pass the correct source track
            try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.endTime-self.startTime),
                                            of: newAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                            at: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000))
            secondTrack.preferredTransform = newAsset.preferredTransform
        } catch _ {
            print("Failed to load second track")
        }

        //We may not use the third track in my case, because the user can edit all the way to the end of the original video, leaving nothing for a third track. But if we do need it, it works the same as the others.
        var thirdTrack: AVMutableCompositionTrack!
        if(self.endTime != controller.realDuration) {
            thirdTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

            //The same pattern again, this time starting at endTime, right after the 2nd track is supposed to end.
            do {
                try thirdTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000), self.controller.realDuration-endTime),
                                               of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                               at: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000))
            } catch _ {
                print("failed")
            }
        }

        //Same thing with audio!
        if let loadedAudioAsset = controller.audioAsset {
            let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: 0)
            do {
                try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.controller.realDuration),
                                               of: loadedAudioAsset.tracks(withMediaType: AVMediaTypeAudio)[0],
                                               at: kCMTimeZero)
            } catch _ {
                print("Failed to load Audio track")
            }
        }

        //Now that we have all of these tracks, we need to apply the layer instructions. If we don't, the tracks could be different sizes. Say my newAsset is 720x1080 and MyAsset is 1440x900 (just examples); it would look a tad funky and possibly not show our new asset at all.
        let mainInstruction = AVMutableVideoCompositionInstruction()

        //Make sure the overall time range matches that of the individual tracks; if not, it could cause errors.
        mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.controller.realDuration)

        //For each track we made, we need an instruction. Could use a loop or do them individually like this.
        let firstInstruction = videoCompositionInstructionForTrack(firstTrack, firstAsset)
        //Honestly, I'm not 100% sure why this is here; it's the one thing I did not look into deeply enough to explain.
        firstInstruction.setOpacity(0.0, at: startTime)

        //Next instruction
        let secondInstruction = videoCompositionInstructionForTrack(secondTrack, self.asset)

        //Again, we may not need the 3rd one, but if we do:
        var thirdInstruction: AVMutableVideoCompositionLayerInstruction!
        if(self.endTime != self.controller.realDuration) {
            secondInstruction.setOpacity(0.0, at: endTime)
            thirdInstruction = videoCompositionInstructionForTrack(thirdTrack, firstAsset)
        }

        //Okay, now that we have all these instructions, we tie them into the main instruction we created above.
        mainInstruction.layerInstructions = [firstInstruction, secondInstruction]
        if(self.endTime != self.controller.realDuration) {
            mainInstruction.layerInstructions += [thirdInstruction]
        }

        //We create a video composition now, slightly different from the AVMutableComposition above.
        let mainComposition = AVMutableVideoComposition()

        //We apply these instructions to it
        mainComposition.instructions = [mainInstruction]

        //The frame duration; change this as necessary (1/30 of a second here, i.e. 30 fps)
        mainComposition.frameDuration = CMTimeMake(1, 30)

        //This is the render size of the video: 720p, 1080p, etc. You set it!
        mainComposition.renderSize = firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize

        //We create an export session (you can't use AVAssetExportPresetPassthrough because we are manipulating the transforms and the quality of the videos, so I just set it to highest)
        guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }

        //Provide the type of file and the URL you want it exported to (I don't have mine posted in this example).
        exporter.outputFileType = AVFileTypeMPEG4
        exporter.outputURL = url

        //Then we tell the exporter to render using our video composition, and it does the work!
        exporter.videoComposition = mainComposition

        //Asynchronous methods FTW!
        exporter.exportAsynchronously(completionHandler: {
            //Do whatever when it finishes!
        })
    }
}

There is a lot going on here, but it has to be done, for my example anyways! Sorry it took so long to post and let me know if you have questions.

AVFoundation - combine videos, but only the first is displayed

Your problem is that by using multiple AVMutableCompositionTracks and inserting each time range at a time after kCMTimeZero, you are causing each subsequent track's media to appear in the composition at kCMTimeZero. You need to use insertEmptyTimeRange: if you want to pursue this route; it will move the media for that particular track forward in time by the duration of the empty range you insert.
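For illustration, a minimal sketch of that empty-range route might look like this. firstAsset and secondAsset are hypothetical placeholder assets, and the Swift 2 era syntax matches the other snippets on this page:

let composition = AVMutableComposition()

//One composition track per clip, as in the approach being discussed
let firstTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let secondTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

do {
    //The first clip starts at time zero in its own track
    try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration),
                                   ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                   atTime: kCMTimeZero)

    //Insert the second clip's media, then insert an empty range at the head of its
    //track; per the description above, this pushes the media later by the empty range's duration
    try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration),
                                    ofTrack: secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                    atTime: kCMTimeZero)
    secondTrack.insertEmptyTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration))
} catch {
    print("Failed to build composition")
}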

Or, a much much easier way would be to use a single AVMutableCompositionTrack.

AVMutableComposition not allowing to add 4 video tracks

/////////////////////////////////////////////
// Add video track 3 to mutable composition//
/////////////////////////////////////////////
let thirdTrack = compostion.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
do {
    try thirdTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset3.duration), ofTrack: videoAsset3.tracksWithMediaType(AVMediaTypeVideo)[0], atTime: videoAsset2.duration)
}
catch {
    print("failed to add third track")
}

You're trying to add the third video at atTime: videoAsset2.duration, a point in time where there may already be video.

Assume Video 1 is 10 seconds and Video 2 is 5 seconds. You're literally trying to insert your 3rd video at 5 seconds in, which is halfway through video 1, where there is already an asset in the composition.

Fortunately it's an easy fix.

Personally, I maintain an insertion time like this:

var insertionTime : CMTime = kCMTimeZero

Then every time you successfully add a video, you can increment the insertionTime

insertionTime = CMTimeAdd(insertionTime, videoAsset1.duration)

And from there, just use

try thirdTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset3.duration), ofTrack: videoAsset3.tracksWithMediaType(AVMediaTypeVideo)[0], atTime: insertionTime)

As a bonus with this approach, you can actually rewrite your code to use a loop and an array of video assets, which makes it more reusable too.
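A rough sketch of that loop, assuming composition is your AVMutableComposition and videoAssets is an array of AVAssets (both names are placeholders):

var insertionTime: CMTime = kCMTimeZero

for asset in videoAssets {
    //One track per asset, matching the layout in the question
    let track = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    do {
        try track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                  ofTrack: asset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                  atTime: insertionTime)
        //Only advance the insertion point after a successful insert
        insertionTime = CMTimeAdd(insertionTime, asset.duration)
    } catch {
        print("Failed to insert asset")
    }
}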

Do all following tracks need to be updated in a mutable composition if a track is removed?

Tracks are not like beads on a rosary that shift up automatically when you pull a few off: yes, you have to update the times manually, both in the mutable composition and in its instructions.
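As a rough sketch of what that manual updating can look like (composition, removedRange, and instructions are hypothetical names, not taken from the question):

//composition is the AVMutableComposition, removedRange is the time range of the clip
//being cut, and instructions is an array of the AVMutableVideoCompositionInstruction
//objects built for the remaining clips
composition.removeTimeRange(removedRange) //media after the range shifts earlier and the duration shrinks

//The video composition's instructions do not follow automatically: every time range
//that started after the removed clip has to be moved back by the removed duration
for instruction in instructions where CMTimeCompare(instruction.timeRange.start, removedRange.end) >= 0 {
    instruction.timeRange.start = CMTimeSubtract(instruction.timeRange.start, removedRange.duration)
}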

AVMutableComposition Not Orienting Video Properly

I had this issue regarding orientation, and this is how I solved it:

AVMutableCompositionTrack *a_compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[a_compositionVideoTrack setPreferredTransform:CGAffineTransformRotate(CGAffineTransformMakeScale(-1, 1), M_PI)];

By rotating and scaling it. It was in Objective-C, but you can convert it easily. You just need to change this:

// Setup the preferred transform
compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

Instead of copying the asset's preferredTransform, give the transformation manually.
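In Swift, that manual transform would look roughly like this (the track name is whatever your composition video track is called):

//Mirror horizontally and rotate by pi, rather than copying the source track's preferredTransform
compositionvideoTrack.preferredTransform = CGAffineTransform(scaleX: -1, y: 1).rotated(by: .pi)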

Change volume of audio track within AVMutableComposition

Here is the code I used to change the volume of a track:

let audioMix: AVMutableAudioMix = AVMutableAudioMix()
var audioMixParam: [AVMutableAudioMixInputParameters] = []

let assetAudioFromVideo: AVAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.audio)[0]

let videoParam: AVMutableAudioMixInputParameters = AVMutableAudioMixInputParameters(track: assetAudioFromVideo)
videoParam.trackID = videoAudioTrack!.trackID

videoParam.setVolume(inputs.levels.videoVolume, at: CMTime.zero)
audioMixParam.append(videoParam)

audioMix.inputParameters = audioMixParam

//...

let assetExport: AVAssetExportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
assetExport.outputFileType = AVFileType.mp4
assetExport.outputURL = savePathUrl as URL
assetExport.shouldOptimizeForNetworkUse = true
assetExport.audioMix = audioMix
assetExport.videoComposition = videoComposition

assetExport.exportAsynchronously {
    //...
}
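If you also want to hear the result in-app before exporting, the same mix can be handed to an AVPlayerItem. A small sketch, where mixComposition, videoComposition, and audioMix are the objects from the snippet above:

//Preview the adjusted volume without exporting: AVPlayerItem accepts the same audio mix
let playerItem = AVPlayerItem(asset: mixComposition)
playerItem.audioMix = audioMix
playerItem.videoComposition = videoComposition
let player = AVPlayer(playerItem: playerItem)
player.play()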

