Occasional Blank Frames After Exporting Asset - AVAssetExportSession

AVMutableComposition issue: black frame at the end of the exported video

AVAssetExportSession has a timeRange property that lets you specify the time range to export. Try giving it a time range slightly shorter than the asset's actual duration (a few nanoseconds less).
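A minimal Swift sketch of that idea. The function name, the preset, and the exact epsilon are illustrative assumptions, not from the original answer:

```swift
import AVFoundation

// Hedged sketch: export with a time range ending slightly before the
// asset's nominal duration, so the export doesn't run past the last
// real frame and pad with a blank one.
func exportTrimmed(asset: AVAsset, to outputURL: URL,
                   completion: @escaping (AVAssetExportSession.Status) -> Void) {
    guard let session = AVAssetExportSession(
        asset: asset,
        presetName: AVAssetExportPresetHighestQuality) else { return }

    session.outputURL = outputURL
    session.outputFileType = .mp4

    // End a few hundred nanoseconds early (epsilon chosen for illustration).
    let epsilon = CMTime(value: 100, timescale: 1_000_000_000)
    let trimmedDuration = CMTimeSubtract(asset.duration, epsilon)
    session.timeRange = CMTimeRange(start: .zero, duration: trimmedDuration)

    session.exportAsynchronously {
        completion(session.status)
    }
}
```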

Performance issues with AVMutableComposition - scaleTimeRange

I have a feeling that playing your videos at 10x using scaleTimeRange:toDuration: simply multiplies your data rate by 10, bringing it up to 10 Mbit/s, which macOS machines can handle but iOS devices cannot.

In other words, you're creating videos that need to play back at 300 frames per second, which is pushing AVPlayer too hard.

If I didn't know about your other question, I would have said that the solution is to export your AVComposition using AVAssetExportSession, which should downsample your high-FPS video to an easier-to-handle 30 fps, and then play that with AVPlayer.

If AVAssetExportSession isn't working, you could try applying the speed-up effect yourself: read the frames from the source video using AVAssetReader and write every tenth frame to the output file using AVAssetWriter (don't forget to set the correct presentation timestamps).
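A hedged Swift sketch of that reader/writer approach, assuming the source starts at time zero and has a single video track; `speedUp` and all parameter names are illustrative, and error handling is omitted for brevity:

```swift
import Foundation
import AVFoundation

// Sketch: keep every `factor`-th frame and rescale its presentation
// timestamp so the kept frames play back to back at normal frame rate.
func speedUp(asset: AVAsset, factor: Int32 = 10, outputURL: URL) throws {
    let videoTrack = asset.tracks(withMediaType: .video)[0]

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: videoTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoTrack.naturalSize.width,
        AVVideoHeightKey: videoTrack.naturalSize.height])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: writerInput, sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    var frameIndex: Int32 = 0
    while let sample = readerOutput.copyNextSampleBuffer() {
        defer { frameIndex += 1 }
        // Keep only every `factor`-th frame.
        guard frameIndex % factor == 0,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        while !writerInput.isReadyForMoreMediaData { usleep(1000) }
        // Divide the original timestamp by `factor` so playback is 10x faster.
        let originalPTS = CMSampleBufferGetPresentationTimeStamp(sample)
        let newPTS = CMTimeMultiplyByRatio(originalPTS,
                                           multiplier: 1, divisor: factor)
        adaptor.append(pixelBuffer, withPresentationTime: newPTS)
    }
    writerInput.markAsFinished()
    writer.finishWriting {}
}
```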

Why don't I get video when exporting a movie using AVAssetExportSession?

Thanks ChrisH, you were right! The export was taking place on another thread, so in the completion handler I needed to dispatch back to the main queue:

case AVAssetExportSessionStatusCompleted: {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Post the notification from the main queue.
    });
    break;
}

AVAssetExportSession export fails non-deterministically with error: Operation Stopped, NSLocalizedFailureReason=The video could not be composed.

What seems to be the cure is making sure the assetTrack parameter of AVMutableVideoCompositionLayerInstruction is not the track from the AVURLAsset object, but the composition track returned by addMutableTrack(withMediaType:preferredTrackID:).

In other words, this line:

let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: sourceVideoTrack)

Should be:

let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)

Argh. Hours of endless frustration because sometimes the first line worked, and sometimes it didn't.

Still would like to award the bounty to someone.

If you can explain why the first line failed non-deterministically, instead of every time, or provide a deeper tutorial into AVMutableComposition and its related classes -- for the purposes of adding text overlays to user-recorded videos -- the bounty is all yours. :)
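For context, a minimal Swift sketch of the working order of operations described above: build the layer instruction from the composition's own track, not from the source asset's track. All names here are illustrative assumptions, and error handling is elided:

```swift
import AVFoundation

// Sketch: insert the source track into the composition, then create the
// layer instruction from the composition track (the fix from this answer).
func makeVideoComposition(for sourceAsset: AVAsset)
        -> (AVMutableComposition, AVMutableVideoComposition)? {
    let composition = AVMutableComposition()
    guard let sourceVideoTrack = sourceAsset.tracks(withMediaType: .video).first,
          let compositionTrack = composition.addMutableTrack(
              withMediaType: .video,
              preferredTrackID: kCMPersistentTrackID_Invalid) else { return nil }

    try? compositionTrack.insertTimeRange(
        CMTimeRange(start: .zero, duration: sourceAsset.duration),
        of: sourceVideoTrack, at: .zero)

    // The crucial part: pass compositionTrack, not sourceVideoTrack.
    let layerInstruction =
        AVMutableVideoCompositionLayerInstruction(assetTrack: compositionTrack)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero,
                                        duration: composition.duration)
    instruction.layerInstructions = [layerInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = [instruction]
    videoComposition.renderSize = sourceVideoTrack.naturalSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    return (composition, videoComposition)
}
```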

AVMutableComposition output freezes at the last frame of the first video

I believe I figured out the problem in your code: you are only creating instructions for the first track. Look at these two lines:

AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];

var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);

AVMutableComposition.TracksWithMediaType returns an array of tracks, so the [0] at the end of the first line grabs only the first track in the composition, which is the first video. As you loop, you are creating instructions for that first video over and over.

Your code, and my unfamiliarity with Xamarin, make this a bit confusing, but I believe you can just do this and it should work:

Change these lines:

AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];

var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);

#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
#endregion

To this:

var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrack);

#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrack);
#endregion

All I did here was get rid of the videoTrackWithMediaType variable you made and use videoTrack instead. There is no need to fetch the corresponding track, since you already created it and still have access to it within the code block where you create the instructions.

AVMutableComposition - concatenated video assets stops after first asset

OK, it was my mistake. I'd put the addMutableTrackWithMediaType: calls inside the for loop, so a fresh pair of composition tracks was created for every asset. Silly me. Fixed as below and it works like a charm!

I'll leave this here just in case anyone else has the same problem.

AVMutableComposition *movie = [AVMutableComposition composition];

// Create the composition tracks once, outside the loop.
AVMutableCompositionTrack *compositionVideoTrack = [movie addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [movie addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

CMTime offset = kCMTimeZero;

for (AVURLAsset *asset in assets) {

    AVAssetTrack *assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
    AVAssetTrack *assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;

    CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
    NSError *error = nil;

    if (![compositionVideoTrack insertTimeRange:timeRange ofTrack:assetVideoTrack atTime:offset error:&error]) {
        NSLog(@"Error adding video track - %@", error);
    }
    if (![compositionAudioTrack insertTimeRange:timeRange ofTrack:assetAudioTrack atTime:offset error:&error]) {
        NSLog(@"Error adding audio track - %@", error);
    }

    // Append each asset after the previous one.
    offset = CMTimeAdd(offset, asset.duration);
}

