Swift 3: issue with AVVideoCompositionCoreAnimationTool to add watermark on video

Seeing as the code works on iOS 9, this is probably a bug in iOS 10.0 where AVAssetExportSessions don't work properly when they have videoComposition set.

Some have reported that things look better in the iOS 10.1 beta and others have worked around the problem.
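
If you need a stopgap until a fix ships, one option (a sketch of my own, not from the answers here) is to detect the affected OS versions at runtime and send those users down a fallback path, such as the custom compositor approach discussed further down:

    // Hypothetical gate: treat 10.0.x as affected and assume 10.1+ behaves again.
    let info = ProcessInfo.processInfo
    let animationToolIsBroken =
        info.isOperatingSystemAtLeast(OperatingSystemVersion(majorVersion: 10, minorVersion: 0, patchVersion: 0)) &&
        !info.isOperatingSystemAtLeast(OperatingSystemVersion(majorVersion: 10, minorVersion: 1, patchVersion: 0))

    if animationToolIsBroken {
        // fall back, e.g. to a custom AVVideoCompositing implementation
    } else {
        // use AVVideoCompositionCoreAnimationTool as usual
    }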

Add a Watermark on Video after merging Video and Audio Asset into one in Swift 3 iOS

I worked on a project and used this code. Maybe this will help you add a watermark.

import UIKit
import AssetsLibrary
import AVFoundation
import Photos
import SpriteKit

enum PDWatermarkPosition {
    case TopLeft
    case TopRight
    case BottomLeft
    case BottomRight
    case Default
}

class PDVideoWaterMarker: NSObject {


    func watermark(video videoAsset: AVAsset, watermarkText text: String, saveToLibrary flag: Bool, watermarkPosition position: PDWatermarkPosition, completion: ((_ status: AVAssetExportSessionStatus?, _ session: AVAssetExportSession?, _ outputURL: URL?) -> ())?) {
        self.watermark(video: videoAsset, watermarkText: text, imageName: nil, saveToLibrary: flag, watermarkPosition: position) { (status, session, outputURL) -> () in
            completion?(status, session, outputURL)
        }
    }

    func watermark(video videoAsset: AVAsset, imageName name: String, watermarkText text: String, saveToLibrary flag: Bool, watermarkPosition position: PDWatermarkPosition, completion: ((_ status: AVAssetExportSessionStatus?, _ session: AVAssetExportSession?, _ outputURL: URL?) -> ())?) {
        self.watermark(video: videoAsset, watermarkText: text, imageName: name, saveToLibrary: flag, watermarkPosition: position) { (status, session, outputURL) -> () in
            completion?(status, session, outputURL)
        }
    }

    private func watermark(video videoAsset: AVAsset, watermarkText text: String?, imageName name: String?, saveToLibrary flag: Bool, watermarkPosition position: PDWatermarkPosition, completion: ((_ status: AVAssetExportSessionStatus?, _ session: AVAssetExportSession?, _ outputURL: URL?) -> ())?) {
        DispatchQueue.global(qos: .default).async {

            // Build a composition from the source video (and audio) tracks.
            let mixComposition = AVMutableComposition()
            let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

            if videoAsset.tracks(withMediaType: AVMediaTypeVideo).count == 0 {
                completion?(nil, nil, nil)
                return
            }

            let clipVideoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]

            self.addAudioTrack(composition: mixComposition, videoAsset: videoAsset as! AVURLAsset)

            do {
                try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: clipVideoTrack, at: kCMTimeZero)
            } catch {
                print(error.localizedDescription)
            }

            let videoSize = clipVideoTrack.naturalSize
            print("videoSize--\(videoSize)")

            // parentLayer holds the video layer plus the watermark layers.
            let parentLayer = CALayer()
            let videoLayer = CALayer()
            parentLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
            videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
            parentLayer.addSublayer(videoLayer)

            if let name = name {
                // Image watermark.
                let watermarkImage = UIImage(named: name)
                let imageLayer = CALayer()
                imageLayer.contents = watermarkImage?.cgImage

                var xPosition: CGFloat = 0.0
                var yPosition: CGFloat = 0.0
                let imageSize: CGFloat = 57.0

                switch position {
                case .TopLeft:
                    xPosition = 0
                    yPosition = 0
                case .TopRight:
                    xPosition = videoSize.width - imageSize - 30
                    yPosition = 30
                case .BottomLeft:
                    xPosition = 0
                    yPosition = videoSize.height - imageSize
                case .BottomRight, .Default:
                    xPosition = videoSize.width - imageSize
                    yPosition = videoSize.height - imageSize
                }

                imageLayer.frame = CGRect(x: xPosition, y: yPosition, width: imageSize, height: imageSize)
                imageLayer.opacity = 0.65
                parentLayer.addSublayer(imageLayer)

                if text != nil {
                    // Text watermark, positioned relative to the image watermark.
                    let titleLayer = CATextLayer()
                    titleLayer.backgroundColor = UIColor.clear.cgColor
                    titleLayer.string = text
                    titleLayer.font = "Helvetica" as CFTypeRef
                    titleLayer.fontSize = 20
                    titleLayer.alignmentMode = kCAAlignmentRight
                    titleLayer.frame = CGRect(x: 0, y: yPosition - imageSize, width: videoSize.width - imageSize / 2 - 4, height: 57)
                    titleLayer.foregroundColor = UIColor.red.cgColor
                    parentLayer.addSublayer(titleLayer)
                }
            }

            // Video composition that renders parentLayer (video plus watermark layers).
            let videoComp = AVMutableVideoComposition()
            videoComp.renderSize = videoSize
            videoComp.frameDuration = CMTimeMake(1, 30)
            videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

            let instruction = AVMutableVideoCompositionInstruction()
            instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
            instruction.backgroundColor = UIColor.gray.cgColor

            let layerInstruction = self.videoCompositionInstructionForTrack(track: compositionVideoTrack, asset: videoAsset)
            instruction.layerInstructions = [layerInstruction]
            videoComp.instructions = [instruction]



            let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
            let dateFormatter = DateFormatter()
            dateFormatter.dateStyle = .long
            dateFormatter.timeStyle = .short
            let date = dateFormatter.string(from: Date())
            let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("watermarkVideo-\(date).mov")

            let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
            exporter?.outputURL = url
            exporter?.outputFileType = AVFileTypeQuickTimeMovie
            exporter?.shouldOptimizeForNetworkUse = false
            exporter?.videoComposition = videoComp


            exporter?.exportAsynchronously() {
                DispatchQueue.main.async {

                    if exporter?.status == AVAssetExportSessionStatus.completed {
                        let outputURL = exporter?.outputURL
                        if flag {
                            // Optionally save the exported video to the photo library.
                            if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL!.path) {
                                PHPhotoLibrary.shared().performChanges({
                                    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL!)
                                }) { saved, error in
                                    if saved {
                                        completion?(AVAssetExportSessionStatus.completed, exporter, outputURL)
                                    }
                                }
                            }
                        } else {
                            completion?(AVAssetExportSessionStatus.completed, exporter, outputURL)
                        }
                    } else {
                        // Export failed or was cancelled.
                        completion?(exporter?.status, exporter, nil)
                    }
                }
            }
        }
    }

    private func addAudioTrack(composition: AVMutableComposition, videoAsset: AVURLAsset) {
        let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
        let audioTracks = videoAsset.tracks(withMediaType: AVMediaTypeAudio)
        for audioTrack in audioTracks {
            try! compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
        }
    }


    private func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
        var assetOrientation = UIImageOrientation.up
        var isPortrait = false
        if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
            assetOrientation = .right
            isPortrait = true
        } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
            assetOrientation = .left
            isPortrait = true
        } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
            assetOrientation = .up
        } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
            assetOrientation = .down
        }

        return (assetOrientation, isPortrait)
    }

    private func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
        let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
        let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

        let transform = assetTrack.preferredTransform
        let assetInfo = orientationFromTransform(transform: transform)

        var scaleToFitRatio = UIScreen.main.bounds.width / 375
        if assetInfo.isPortrait {
            scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.height
            let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
            instruction.setTransform(assetTrack.preferredTransform.concatenating(scaleFactor), at: kCMTimeZero)
        } else {
            let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
            var concat = assetTrack.preferredTransform.concatenating(scaleFactor).concatenating(CGAffineTransform(translationX: 0, y: 0))
            if assetInfo.orientation == .down {
                let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
                let windowBounds = UIScreen.main.bounds
                let yFix = 375 + windowBounds.height
                let centerFix = CGAffineTransform(translationX: assetTrack.naturalSize.width, y: CGFloat(yFix))
                concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
            }
            instruction.setTransform(concat, at: kCMTimeZero)
        }

        return instruction
    }
}
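
For reference, here is a usage sketch of the class above (my own, not part of the original answer; videoURL and the "watermark" image name are placeholders):

    let asset = AVURLAsset(url: videoURL) // videoURL: file URL of your source movie (placeholder)
    let marker = PDVideoWaterMarker()
    marker.watermark(video: asset, imageName: "watermark", watermarkText: "© My App",
                     saveToLibrary: true, watermarkPosition: .TopRight) { status, session, outputURL in
        if status == AVAssetExportSessionStatus.completed, let outputURL = outputURL {
            print("Watermarked video written to \(outputURL)")
        } else {
            print("Export failed: \(String(describing: session?.error))")
        }
    }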

Adding watermark to video is extremely slow

Per the Apple Documentation, try using the class AVAsynchronousCIImageFilteringRequest

Overview


You use this class when creating a composition for Core Image filtering with the init(asset:applyingCIFiltersWithHandler:) method. In that method call, you provide a block to be called by AVFoundation as it processes each frame of video, and the block's sole parameter is an AVAsynchronousCIImageFilteringRequest object. Use that object both to get the video frame image to be filtered and to return a filtered image to AVFoundation for display or export. The listing below shows an example of applying a filter to an asset.

    let filter = CIFilter(name: "CIGaussianBlur")!
    let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

        // Clamp to avoid blurring transparent pixels at the image edges
        let source = request.sourceImage.clampingToExtent()
        filter.setValue(source, forKey: kCIInputImageKey)

        // Vary filter parameters based on video timing
        let seconds = CMTimeGetSeconds(request.compositionTime)
        filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

        // Crop the blurred output to the bounds of the original image
        let output = filter.outputImage!.cropping(to: request.sourceImage.extent)

        // Provide the filter output to the composition
        request.finish(with: output, context: nil)
    })

There is a tutorial in Objective-C that may be a good resource as well.
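
Applied to the watermark case, a rough adaptation might look like this (my own sketch, not from the documentation; asset and the "watermark" image are placeholders, and the watermark is drawn at the frame's origin unless you transform it first):

    let watermarkImage = CIImage(image: UIImage(named: "watermark")!)!
    let overlay = CIFilter(name: "CISourceOverCompositing")!

    let watermarkComposition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        overlay.setValue(watermarkImage, forKey: kCIInputImageKey)                // foreground: the watermark
        overlay.setValue(request.sourceImage, forKey: kCIInputBackgroundImageKey) // background: the video frame
        request.finish(with: overlay.outputImage!, context: nil)
    })

    // Use it for playback or export, e.g. exporter?.videoComposition = watermarkComposition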

Can't show animated CALayer in video using AVVideoCompositionCoreAnimationTool

OK, finally got it to work the way I always wanted.

First off, even though he deleted his comments, thanks to Matt for the link to a working example that helped me piece together what was wrong with my code.

  • First off
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetPassthrough)!

I needed to use AVAssetExportPresetHighestQuality instead of AVAssetExportPresetPassthrough. My guess is that the passthrough preset means you don't do any re-encoding, so setting it to highest (not medium, because my exported video is over 400x400) meant the video actually got re-encoded. I'm guessing this is what was stopping the exported video from containing any of the CALayers I was trying out (even one covering the video in white). The corrected line is shown below.
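
With that change applied, the line above becomes:

    let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!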

  • Secondly (not sure if this really matters, but I'll test it later)
parentLayer.addSublayer(aLayer)

I replaced this with:

videoLayer.addSublayer(aLayer)

Not sure if this really mattered, but my understanding is that videoLayer is the actual animation layer for AVVideoCompositionCoreAnimationTool, and parentLayer is just a container not meant to hold anything more than that; I'm likely wrong, though.

  • Third change I did
let spriteAnimation = CABasicAnimation(keyPath: "frameIndex")
spriteAnimation.fromValue = 1
spriteAnimation.toValue = 4
spriteAnimation.duration = 2.25
spriteAnimation.repeatCount = .infinity
spriteAnimation.autoreverses = false
spriteAnimation.beginTime = AVCoreAnimationBeginTimeAtZero
aLayer.add(spriteAnimation, forKey: nil)

I changed it to this:

let animation = CAKeyframeAnimation(keyPath: #keyPath(CALayer.contentsRect))
animation.duration = 2.25
animation.calculationMode = kCAAnimationDiscrete
animation.repeatCount = .infinity
animation.values = [
    CGRect(x: 0, y: 0, width: 1, height: 1/3.0),
    CGRect(x: 0, y: 1/3.0, width: 1, height: 1/3.0),
    CGRect(x: 0, y: 2/3.0, width: 1, height: 1/3.0)
] as [CGRect]
animation.beginTime = AVCoreAnimationBeginTimeAtZero
animation.fillMode = kCAFillModeBackwards
animation.isRemovedOnCompletion = false
aLayer.add(animation, forKey: nil)

This change was mainly about removing my custom animations for the sprite sheet (since it will always be the same, I wanted a working example first; then I'll generalise it and probably add it to my private UI pod). Most importantly, animation.isRemovedOnCompletion = false: I noticed that removing this makes the animation simply not play on the exported video. So for anyone whose CABasicAnimation is not animating on the video after an export, check whether isRemovedOnCompletion is set correctly on your animation.

I think that's pretty much all the changes I made.

Although I technically answered my own question, the bounty remains for understanding how AVVideoCompositionCoreAnimationTool and AVAssetExportSession actually work, and why I had to make these changes to finally get it working, if anyone is interested in explaining.

Thanks again to Matt, you helped me out by showing me how you did it.

iOS 10.0 - 10.1: AVPlayerLayer doesn't show video after using AVVideoCompositionCoreAnimationTool, only audio

The answer for me in this case is to work around the issue with AVVideoCompositionCoreAnimationTool by using a custom video compositing class implementing the AVVideoCompositing protocol, and a custom composition instruction implementing the AVVideoCompositionInstruction protocol. Because I need to overlay a CALayer on top of the video, I'm including that layer in the composition instruction instance.

You need to set the custom compositor on your video composition like so:

composition.customVideoCompositorClass = CustomVideoCompositor.self

and then set your custom instructions on it:

let instruction = CustomVideoCompositionInstruction(...) // whatever parameters you need and are required by the instruction protocol
composition.instructions = [instruction]

EDIT: Here is a working example of how to use a custom compositor to overlay a layer on a video using the GPU: https://github.com/samsonjs/LayerVideoCompositor ... original answer continues below

As for the compositor itself, you can implement one if you watch the relevant WWDC sessions and check out their sample code. I cannot post the one I wrote here, but I am using CoreImage to do the heavy lifting in processing the AVAsynchronousVideoCompositionRequest, making sure to use an OpenGL CoreImage context for best performance (if you do it on the CPU it will be abysmally slow). You may also need an autorelease pool if you get a memory usage spike during the export.
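
For orientation, here is a bare-bones sketch of that approach. The class name, pixel format choice and the Core Image handling are my own assumptions, not the answerer's actual code; a production compositor needs a GPU-backed CIContext and real error handling.

    class CustomVideoCompositor: NSObject, AVVideoCompositing {

        private let ciContext = CIContext() // prefer a GPU-backed context in production

        var sourcePixelBufferAttributes: [String : Any]? {
            return [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        }

        var requiredPixelBufferAttributesForRenderContext: [String : Any] {
            return [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        }

        func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
            // Nothing to do in this sketch.
        }

        func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
            autoreleasepool {
                guard
                    let trackID = request.sourceTrackIDs.first?.int32Value,
                    let sourceBuffer = request.sourceFrame(byTrackID: trackID),
                    let outputBuffer = request.renderContext.newPixelBuffer()
                else {
                    request.finish(with: NSError(domain: "CustomVideoCompositor", code: -1, userInfo: nil))
                    return
                }

                let frame = CIImage(cvPixelBuffer: sourceBuffer)
                // Composite your overlay CIImage over `frame` here (e.g. with CISourceOverCompositing),
                // then render the result instead of the plain frame.
                ciContext.render(frame, to: outputBuffer)
                request.finish(withComposedVideoFrame: outputBuffer)
            }
        }

        func cancelAllPendingVideoCompositionRequests() {
            // No bookkeeping in this sketch.
        }
    }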

If you're overlaying a CALayer like me, then make sure to set layer.isGeometryFlipped = true when you render that layer out to a CGImage before sending it off to CoreImage. And make sure you cache the rendered CGImage from frame to frame in your compositor.
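
A small helper along those lines might look like this (a sketch of mine, not from the answer); per the advice above, cache the returned CGImage instead of re-rendering it for every frame:

    func renderedWatermark(from layer: CALayer) -> CGImage? {
        layer.isGeometryFlipped = true
        UIGraphicsBeginImageContextWithOptions(layer.bounds.size, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()?.cgImage
    }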

Adding watermark to video: text does not show

I noticed this issue when I was building my own application.

Part of my application adds subtitles to videos. I tried using CATextLayer to render my subtitles onto my AVMutableComposition object, but that failed too. Then I tried using a container as a kind of bridge: I put my CATextLayer into a CALayer, then added that CALayer as a sublayer of the animationTool's parent layer.

let subtitle = CATextLayer()
/*
setup subtitle's properties...
*/
let subtitle_container = CALayer()
/*
setup container's frame/position/rotation/animations<-(such as subtitles fade in / out effects)...
*/
subtitle_container.addSublayer(subtitle)
parentLayer.addSublayer(subtitle_container)

In my application this did the trick, but as for why the bare CATextLayer doesn't render, I still have no idea.

Hope this helps.



Update 21/3/2017

These are pieces of code from my project; I was using Objective-C, not Swift.

RAGE_SubtitleModel *m = subtitleModelArray[i];
// ^ this model contains the frame of the subtitle, its start time and end time.

CGFloat subtitle_startTime = CMTimeGetSeconds(m.timeMapping.target.start);
CGFloat subtitle_duration = CMTimeGetSeconds(m.timeMapping.target.duration);

CALayer *subtitle_container = [CALayer layer];
CATextLayer *subtitle_layer = [CATextLayer layer];
// Font
[subtitle_layer setFont:(__bridge CFTypeRef _Nullable)(m.font.fontName)];
// Font size
[subtitle_layer setFontSize:m.fontSize * MAX(ratioW, ratioH)];
// Background color
[subtitle_layer setBackgroundColor:[UIColor clearColor].CGColor];
// Subtitle color
[subtitle_layer setForegroundColor:m.color.CGColor];
// Subtitle content
[subtitle_layer setString:m.string];
// Subtitle alignment
[subtitle_layer setAlignmentMode:kCAAlignmentLeft];

[subtitle_layer setContentsScale:SCREEN_SCALE];
// Subtitle position
[subtitle_container setFrame:CGRectMake(0, 0, targetSize.width, targetSize.height)];

[subtitle_layer setFrame:CGRectMake(0, 0, m.frame.size.width * ratioW, m.frame.size.height * ratioH)];

[subtitle_layer setAnchorPoint:CGPointMake(.5, .5)];

[subtitle_layer setPosition:CGPointMake(m.frame.origin.x * ratioW + m.frame.size.width * ratioW * .5f,
                                        targetSize.height - m.frame.origin.y * ratioH - m.frame.size.height * ratioH * .5f)];

[subtitle_container addSublayer:subtitle_layer];
[renderingLayer addSublayer:subtitle_container];

As above, I used subtitle_container, which is the CALayer I mentioned, and added the container to renderingLayer, which is the parent layer.

Is it possible that your watermark's frame was incorrect, so you were rendering only the container? I set my container's width and height to my output video's width and height, so I had a full-screen container; after that, adjusting my CATextLayer became much easier.

Is any of this useful to you?

Add watermark to recorded video and save

You've forgotten to add your videoComposition to the AVAssetExportSession:

exporter.outputFileType = AVFileTypeMPEG4 // You had this
exporter.videoComposition = videoComp // but had forgotten this
exporter.exportAsynchronouslyWithCompletionHandler({ // ...
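
In Swift 3 terms, the relevant part of the export setup would look roughly like this (variable names such as mixComposition, outputURL and videoComp are assumed from the question's code):

    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = outputURL
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComp // without this line the watermark layers are ignored
    exporter.exportAsynchronously {
        // check exporter.status and exporter.error here
    }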

