Swift 3: How to Add a Watermark on Video? AVVideoCompositionCoreAnimationTool iOS 10 Issue

Swift 3: issue with AVVideoCompositionCoreAnimationTool to add a watermark on video

Seeing as the code works on iOS 9, this is probably a bug in iOS 10.0 where AVAssetExportSession doesn't work properly when it has a videoComposition set.

Some have reported that things look better in the iOS 10.1 beta and others have worked around the problem.

Add a Watermark on Video after merging Video and Audio Asset into one in Swift 3 iOS

I used this code in a project of mine; maybe it will help you add a watermark.

import UIKit
import AssetsLibrary
import AVFoundation
import Photos
import SpriteKit

enum PDWatermarkPosition {
case TopLeft
case TopRight
case BottomLeft
case BottomRight
case Default
}

class PDVideoWaterMarker: NSObject {

func watermark(video videoAsset:AVAsset, watermarkText text : String, saveToLibrary flag : Bool, watermarkPosition position : PDWatermarkPosition, completion : ((_ status : AVAssetExportSessionStatus?, _ session: AVAssetExportSession?, _ outputURL : URL?) -> ())?) {
self.watermark(video: videoAsset, watermarkText: text, imageName: nil, saveToLibrary: flag, watermarkPosition: position) { (status, session, outputURL) -> () in
completion?(status, session, outputURL)
}
}

func watermark(video videoAsset:AVAsset, imageName name : String, watermarkText text : String , saveToLibrary flag : Bool, watermarkPosition position : PDWatermarkPosition, completion : ((_ status : AVAssetExportSessionStatus?, _ session: AVAssetExportSession?, _ outputURL : URL?) -> ())?) {
self.watermark(video: videoAsset, watermarkText: text, imageName: name, saveToLibrary: flag, watermarkPosition: position) { (status, session, outputURL) -> () in
completion?(status, session, outputURL)
}
}

private func watermark(video videoAsset:AVAsset, watermarkText text : String!, imageName name : String!, saveToLibrary flag : Bool, watermarkPosition position : PDWatermarkPosition, completion : ((_ status : AVAssetExportSessionStatus?, _ session: AVAssetExportSession?, _ outputURL : URL?) -> ())?) {
DispatchQueue.global(qos: DispatchQoS.QoSClass.default).async {

let mixComposition = AVMutableComposition()

let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

if videoAsset.tracks(withMediaType: AVMediaTypeVideo).count == 0 {
completion?(nil, nil, nil)
return
}

let clipVideoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]

self.addAudioTrack(composition: mixComposition, videoAsset: videoAsset as! AVURLAsset)

do {
try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: clipVideoTrack, at: kCMTimeZero)
}
catch {
print(error.localizedDescription)
}

let videoSize = clipVideoTrack.naturalSize
let parentLayer = CALayer()

let videoLayer = CALayer()

parentLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
parentLayer.addSublayer(videoLayer)

var xPosition : CGFloat = 0.0
var yPosition : CGFloat = 0.0
let imageSize : CGFloat = 57.0

switch position {
case .TopLeft:
xPosition = 0
yPosition = 0
case .TopRight:
xPosition = videoSize.width - imageSize - 30
yPosition = 30
case .BottomLeft:
xPosition = 0
yPosition = videoSize.height - imageSize
case .BottomRight, .Default:
xPosition = videoSize.width - imageSize
yPosition = videoSize.height - imageSize
}

if name != nil {
let watermarkImage = UIImage(named: name)
let imageLayer = CALayer()
imageLayer.contents = watermarkImage?.cgImage
imageLayer.frame = CGRect(x: xPosition, y: yPosition, width: imageSize, height: imageSize)
imageLayer.opacity = 0.65
parentLayer.addSublayer(imageLayer)
}

// The text layer is added outside the image branch so that the
// text-only overload also renders its watermark.
if text != nil {
let titleLayer = CATextLayer()
titleLayer.backgroundColor = UIColor.clear.cgColor
titleLayer.string = text
titleLayer.font = "Helvetica" as CFTypeRef
titleLayer.fontSize = 20
titleLayer.alignmentMode = kCAAlignmentRight
// Clamp to 0 so the text stays on-screen for the top positions.
titleLayer.frame = CGRect(x: 0, y: max(yPosition - imageSize, 0), width: videoSize.width - imageSize/2 - 4, height: 57)
titleLayer.foregroundColor = UIColor.red.cgColor
parentLayer.addSublayer(titleLayer)
}

let videoComp = AVMutableVideoComposition()
videoComp.renderSize = videoSize
videoComp.frameDuration = CMTimeMake(1, 30)
videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
instruction.backgroundColor = UIColor.gray.cgColor

let layerInstruction = self.videoCompositionInstructionForTrack(track: compositionVideoTrack, asset: videoAsset)

instruction.layerInstructions = [layerInstruction]
videoComp.instructions = [instruction]

let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
let dateFormatter = DateFormatter()
// Use a fixed format so the file name never contains characters
// (slashes, colons) that are invalid in file paths.
dateFormatter.dateFormat = "yyyyMMdd-HHmmss"
let date = dateFormatter.string(from: Date())

let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("watermarkVideo-\(date).mov")

let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
exporter?.outputURL = url
exporter?.outputFileType = AVFileTypeQuickTimeMovie
exporter?.shouldOptimizeForNetworkUse = false
exporter?.videoComposition = videoComp

exporter?.exportAsynchronously() {
DispatchQueue.main.async {

if exporter?.status == AVAssetExportSessionStatus.completed {
let outputURL = exporter?.outputURL
if flag && UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL!.path) {
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL!)
}) { saved, error in
// Report completion whether or not the save succeeded,
// so the caller is never left waiting.
completion?(AVAssetExportSessionStatus.completed, exporter, outputURL)
}
} else {
completion?(AVAssetExportSessionStatus.completed, exporter, outputURL)
}
} else {
// Export failed or was cancelled.
completion?(exporter?.status, exporter, nil)
}
}
}
}
}

private func addAudioTrack(composition: AVMutableComposition, videoAsset: AVURLAsset) {
let compositionAudioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
for audioTrack in videoAsset.tracks(withMediaType: AVMediaTypeAudio) {
do {
try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
}
catch {
print(error.localizedDescription)
}
}
}

private func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
var assetOrientation = UIImageOrientation.up
var isPortrait = false
if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
assetOrientation = .right
isPortrait = true
} else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
assetOrientation = .left
isPortrait = true
} else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
assetOrientation = .up
} else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
assetOrientation = .down
}

return (assetOrientation, isPortrait)
}

private func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

let transform = assetTrack.preferredTransform
let assetInfo = orientationFromTransform(transform: transform)

// 375 is a hard-coded reference width in points; adjust it to your layout.
var scaleToFitRatio = UIScreen.main.bounds.width / 375
if assetInfo.isPortrait {
scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.height
let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
instruction.setTransform(assetTrack.preferredTransform.concatenating(scaleFactor),
at: kCMTimeZero)
} else {
let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
var concat = assetTrack.preferredTransform.concatenating(scaleFactor)
if assetInfo.orientation == .down {
let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
let windowBounds = UIScreen.main.bounds
let yFix = 375 + windowBounds.height
let centerFix = CGAffineTransform(translationX: assetTrack.naturalSize.width, y: CGFloat(yFix))
concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
}
instruction.setTransform(concat, at: kCMTimeZero)

}

return instruction
}
}

Add watermark to recorded video and save

You've forgotten to add your videoComposition to the AVAssetExportSession:

exporter.outputFileType = AVFileTypeMPEG4 // You had this
exporter.videoComposition = videoComp // but had forgotten this
exporter.exportAsynchronously { // ...

iPhone Watermark on recorded Video.

Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying each captured frame with the watermark image, and finally writing the processed frames to a file using AVAssetWriter.

Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of these steps. I haven't seen any that give code for exactly the effect you want, but you should be able to mix and match pretty easily.

EDIT:

Take a look at these links:

iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.

AVCaptureVideoDataOutput will return images as CMSampleBufferRefs.
Convert them to CGImageRefs using this code:

- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);

CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/* CVBufferRelease(imageBuffer); */ // do not call this!

return newImage;
}

From there you would convert to a UIImage,

  UIImage *img = [UIImage imageWithCGImage:yourCGImage];  

Then use

[img drawInRect:CGRectMake(x, y, width, height)];

to draw the frame into a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you aren't filling up memory with tons of UIImages.
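To make the geometry of that overlay concrete, here is a small pure-Swift sketch of computing where a bottom-right watermark lands inside a frame. The type and function names and the margin value are my own, hypothetical choices, not from the original answer; plain Doubles stand in for CGRect/CGSize so the math is easy to check:

```swift
import Foundation

// Illustrative geometry helper (hypothetical names): where should a
// watermark of watermarkSize be drawn inside a frame of frameSize,
// keeping a fixed margin from the bottom-right corner?
struct Size { var width: Double; var height: Double }
struct Rect { var x: Double; var y: Double; var width: Double; var height: Double }

func bottomRightWatermarkRect(frameSize: Size, watermarkSize: Size, margin: Double = 16) -> Rect {
    return Rect(x: frameSize.width - watermarkSize.width - margin,
                y: frameSize.height - watermarkSize.height - margin,
                width: watermarkSize.width,
                height: watermarkSize.height)
}

let rect = bottomRightWatermarkRect(frameSize: Size(width: 1920, height: 1080),
                                    watermarkSize: Size(width: 120, height: 60))
print(rect.x, rect.y) // 1784.0 1004.0
```

The same rect would then be fed to `drawInRect:` when compositing the watermark over each frame.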

How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.

This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that come in at 20-30 fps is a great way to crash the app.
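One common way to keep memory bounded here is to drain temporaries once per frame with an autorelease pool. A minimal sketch (the `processFrames`/`render` names are my own, and the closure is a stand-in for your actual decode-draw-encode work):

```swift
import Foundation

// Sketch: keep memory flat while processing a stream of frames by draining
// temporary objects on every iteration.
func processFrames(count: Int, render: (Int) -> Int) -> [Int] {
    var results: [Int] = []
    for frameIndex in 0..<count {
        autoreleasepool {
            // In real code: decode the frame, draw the watermark PNG over it,
            // and append the result to an AVAssetWriter input.
            results.append(render(frameIndex))
        }
    }
    return results
}

print(processFrames(count: 3) { $0 * 2 }) // [0, 2, 4]
```

Without the pool, per-frame UIImages can accumulate until the writer catches up, which is exactly the 20-30 fps leak described above.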

How to add a watermark to video (simple way)?

The easiest way I can think of, if a watermark during playback is what you want, is to superimpose a semi-transparent image over your video player.

  • create a UIImageView
  • make sure the UIImageView has the same size and layout constraints as your video
  • make sure the UIImageView is semi-transparent (not opaque, with an appropriate alpha if required)
  • add it as a subview of the player's view, e.g. [moviePlayerViewController.view addSubview:imageView]

and you should be about done.

Hope it helps

EDIT: and here is the correct way to do it:

Add watermark on Video [SO answer]

as pointed out by sanjeet in their comment.

iOS 10.0 - 10.1: AVPlayerLayer doesn't show video after using AVVideoCompositionCoreAnimationTool, only audio

The answer for me in this case is to work around the issue with AVVideoCompositionCoreAnimationTool by using a custom video compositing class implementing the AVVideoCompositing protocol, and a custom composition instruction implementing the AVVideoCompositionInstruction protocol. Because I need to overlay a CALayer on top of the video I'm including that layer in the composition instruction instance.

You need to set the custom compositor on your video composition like so:

composition.customVideoCompositorClass = CustomVideoCompositor.self

and then set your custom instructions on it:

let instruction = CustomVideoCompositionInstruction(...) // whatever parameters you need and are required by the instruction protocol
composition.instructions = [instruction]

EDIT: Here is a working example of how to use a custom compositor to overlay a layer on a video using the GPU: https://github.com/samsonjs/LayerVideoCompositor ... original answer continues below

As for the compositor itself, you can implement one if you watch the relevant WWDC sessions and check out their sample code. I cannot post the one I wrote here, but I am using CoreImage to do the heavy lifting when processing each AVAsynchronousVideoCompositionRequest, making sure to use an OpenGL CoreImage context for best performance (doing it on the CPU is abysmally slow). You may also need an autorelease pool if memory usage spikes during the export.

If you're overlaying a CALayer like me then make sure to set layer.isGeometryFlipped = true when you render that layer out to a CGImage before sending it off to CoreImage. And make sure you cache the rendered CGImage from frame to frame in your compositor.

Can't show animated CALayer in video using AVVideoCompositionCoreAnimationTool

OK, I finally got it to work the way I always wanted.

First off, even though he deleted his comments, thanks to Matt for the link to a working example that helped me piece together what was wrong with my code.

  • First off
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetPassthrough)!

I needed to use AVAssetExportPresetHighestQuality instead of AVAssetExportPresetPassthrough. My guess is that the passthrough preset means no re-encoding is done, so setting it to highest (not medium, because my exported video is larger than 400x400) made the export actually re-encode my video. I'm guessing this is what was stopping the exported video from containing any of the CALayers I was trying (even one covering the whole video in white).

  • Secondly (not sure if this really matters, but I'll check later)
parentLayer.addSublayer(aLayer)

I replaced this with:

videoLayer.addSublayer(aLayer)

Not sure if this really mattered, but my understanding is that videoLayer is the layer AVVideoCompositionCoreAnimationTool actually renders the video into, while parentLayer is just a container not meant to hold more than that; I may well be wrong, though.

  • The third change I made
let spriteAnimation = CABasicAnimation(keyPath: "frameIndex")
spriteAnimation.fromValue = 1
spriteAnimation.toValue = 4
spriteAnimation.duration = 2.25
spriteAnimation.repeatCount = .infinity
spriteAnimation.autoreverses = false
spriteAnimation.beginTime = AVCoreAnimationBeginTimeAtZero
aLayer.add(spriteAnimation, forKey: nil)

I changed it to this:

let animation = CAKeyframeAnimation(keyPath: #keyPath(CALayer.contentsRect))
animation.duration = 2.25
animation.calculationMode = kCAAnimationDiscrete
animation.repeatCount = .infinity
animation.values = [
CGRect(x: 0, y: 0, width: 1, height: 1/3.0),
CGRect(x: 0, y: 1/3.0, width: 1, height: 1/3.0),
CGRect(x: 0, y: 2/3.0, width: 1, height: 1/3.0)
] as [CGRect]
animation.beginTime = AVCoreAnimationBeginTimeAtZero
animation.fillMode = kCAFillModeBackwards
animation.isRemovedOnCompletion = false
aLayer.add(animation, forKey: nil)

This change was mainly about removing my custom animation for the sprite sheet (since it will always be the same, I wanted a working example first; then I'll generalise it and probably add it to my private UI Pod). Most importantly, note animation.isRemovedOnCompletion = false: I noticed that removing this line makes the animation simply not play in the exported video. So for anyone whose CABasicAnimation isn't animating after an export, check that isRemovedOnCompletion is set correctly on your animation.
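The contentsRect keyframe values above slice a vertical three-frame sprite sheet into thirds. Generalising to an N-frame sheet, the values can be generated like this (an illustrative helper of my own, not from the original answer; labelled tuples stand in for CGRect so the arithmetic is easy to verify):

```swift
import Foundation

// Illustrative helper: for a vertical sprite sheet with frameCount equally
// tall frames, produce the normalized (x, y, width, height) contentsRect of
// each frame, matching the hand-written thirds above for frameCount == 3.
func spriteSheetFrames(frameCount: Int) -> [(x: Double, y: Double, width: Double, height: Double)] {
    let frameHeight = 1.0 / Double(frameCount)
    return (0..<frameCount).map { index in
        (x: 0.0, y: Double(index) * frameHeight, width: 1.0, height: frameHeight)
    }
}

let frames = spriteSheetFrames(frameCount: 3)
print(frames.map { $0.y }) // [0.0, 0.3333333333333333, 0.6666666666666666]
```

Each tuple would be wrapped in a CGRect and handed to the CAKeyframeAnimation's values array, with kCAAnimationDiscrete keeping the transitions from interpolating between frames.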

I think that's pretty much all the changes I made.

Although I technically answered my own question, my bounty remains open for anyone interested in explaining how AVVideoCompositionCoreAnimationTool and AVAssetExportSession work, and why I had to make these changes to finally get things working.

Thanks again to Matt; you helped me out by showing me how you did it.


