CGAffineTransform - How to Align Video in Screen Center

How to get two views to be the same width and height using CGAffineTransform

You need to base your transforms on the composited video's output size - its .renderSize.

Based on your other question...

So, if you have two 1280.0 x 720.0 videos, and you want them side-by-side in a 640 x 480 rendered frame, you need to:

  • get the size of the first video
  • scale it to 320 x 480
  • move it to 0, 0

then:

  • get the size of the second video
  • scale it to 320 x 480
  • move it to 320, 0

So your scale transform will be:

let targetWidth = renderSize.width / 2.0
let targetHeight = renderSize.height
let widthScale = targetWidth / sourceVideoSize.width
let heightScale = targetHeight / sourceVideoSize.height

let scale = CGAffineTransform(scaleX: widthScale, y: heightScale)
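
Then concatenate a translation to position each scaled video in its half of the frame. A minimal sketch continuing from the variables above (positive oriented sizes assumed; the transform names are my own):

// first video goes in the left half, so it stays at the origin
let firstVideoTransform = scale

// second video shifts right by half the render width
let move = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0)
let secondVideoTransform = scale.concatenating(move)

Each of those gets passed to setTransform(_:at:) on its layer instruction, as in the full code below.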

That should get you there... except:

In my testing, I took four 8-second videos in landscape orientation.

For reasons unknown to me, the "native" preferredTransforms are:

Videos 1 & 3
[-1, 0, 0, -1, 1280, 720]

Videos 2 & 4
[1, 0, 0, 1, 0, 0]

So, the sizes returned by the recommended track.naturalSize.applying(track.preferredTransform) end up being:

Videos 1 & 3
-1280 x -720

Videos 2 & 4
1280 x 720

which messes with the transforms.

After a little experimentation, if the size is negative, we need to:

  • rotate the transform
  • scale the transform (making sure to use positive widths/heights)
  • translate the transform adjusted for the change in orientation

Here is a complete implementation (without the save-to-disk at the end):

import UIKit
import AVFoundation

class VideoViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemYellow
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let originalVideoURL1 = Bundle.main.url(forResource: "video1", withExtension: "mov"),
              let originalVideoURL2 = Bundle.main.url(forResource: "video2", withExtension: "mov")
        else { return }

        let firstAsset = AVURLAsset(url: originalVideoURL1)
        let secondAsset = AVURLAsset(url: originalVideoURL2)

        let mixComposition = AVMutableComposition()

        // add both video tracks to the composition
        guard let firstTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange1 = CMTimeRangeMake(start: .zero, duration: firstAsset.duration)

        do {
            try firstTrack.insertTimeRange(timeRange1, of: firstAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }

        guard let secondTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange2 = CMTimeRangeMake(start: .zero, duration: secondAsset.duration)

        do {
            try secondTrack.insertTimeRange(timeRange2, of: secondAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }

        let mainInstruction = AVMutableVideoCompositionInstruction()
        mainInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: CMTimeMaximum(firstAsset.duration, secondAsset.duration))

        // the "oriented" source sizes (these may come back negative, depending on preferredTransform)
        var track: AVAssetTrack!

        track = firstAsset.tracks(withMediaType: .video).first
        let firstSize = track.naturalSize.applying(track.preferredTransform)

        track = secondAsset.tracks(withMediaType: .video).first
        let secondSize = track.naturalSize.applying(track.preferredTransform)

        // debugging
        print("firstSize:", firstSize)
        print("secondSize:", secondSize)

        let renderSize = CGSize(width: 640, height: 480)

        var scale: CGAffineTransform!
        var move: CGAffineTransform!

        // first video: left half of the render frame
        let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)

        scale = .identity
        move = .identity

        if (firstSize.width < 0) {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / firstSize.width), y: abs(renderSize.height / firstSize.height))
        move = CGAffineTransform(translationX: 0, y: 0)
        if (firstSize.width < 0) {
            move = CGAffineTransform(translationX: renderSize.width / 2.0, y: renderSize.height)
        }

        firstLayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        // second video: right half of the render frame
        let secondLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)

        scale = .identity
        move = .identity

        if (secondSize.width < 0) {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / secondSize.width), y: abs(renderSize.height / secondSize.height))
        move = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0)
        if (secondSize.width < 0) {
            move = CGAffineTransform(translationX: renderSize.width, y: renderSize.height)
        }

        secondLayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        mainInstruction.layerInstructions = [firstLayerInstruction, secondLayerInstruction]

        let mainCompositionInst = AVMutableVideoComposition()
        mainCompositionInst.instructions = [mainInstruction]
        mainCompositionInst.frameDuration = CMTime(value: 1, timescale: 30)
        mainCompositionInst.renderSize = renderSize

        let newPlayerItem = AVPlayerItem(asset: mixComposition)
        newPlayerItem.videoComposition = mainCompositionInst

        let player = AVPlayer(playerItem: newPlayerItem)
        let playerLayer = AVPlayerLayer(player: player)

        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
        player.seek(to: .zero)
        player.play()

        // video export code goes here...
    }

}

It's possible that the preferredTransforms could also be different for front / back camera, mirrored, etc. But I'll leave that up to you to work out.
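
If you do end up handling more cases, it may help to pull the per-track logic into a small helper. Here's a minimal sketch of that idea (the function name and parameters are my own, not part of the code above); each "slot" is the rectangle within renderSize that one video should fill:

// Hypothetical helper: builds the scale + position transform for one track,
// including the negative-oriented-size handling described above.
func slotTransform(orientedSize: CGSize, slotOrigin: CGPoint, slotSize: CGSize) -> CGAffineTransform {
    var scale: CGAffineTransform = .identity
    if orientedSize.width < 0 {
        // "flipped" track - rotate 180 degrees first
        scale = CGAffineTransform(rotationAngle: .pi)
    }
    scale = scale.scaledBy(x: abs(slotSize.width / orientedSize.width),
                           y: abs(slotSize.height / orientedSize.height))

    var move = CGAffineTransform(translationX: slotOrigin.x, y: slotOrigin.y)
    if orientedSize.width < 0 {
        // compensate for the rotation by translating to the slot's far corner
        move = CGAffineTransform(translationX: slotOrigin.x + slotSize.width,
                                 y: slotOrigin.y + slotSize.height)
    }
    return scale.concatenating(move)
}

With that, the first layer instruction would use slotTransform(orientedSize: firstSize, slotOrigin: .zero, slotSize: CGSize(width: renderSize.width / 2.0, height: renderSize.height)), and the second would use a slotOrigin of (renderSize.width / 2.0, 0).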

Edit

Sample project at: https://github.com/DonMag/VideoTest

Produces (using two 720 x 1280 video clips):

Sample Image

Scale with CGAffineTransform and set the anchor

(a)

Scale and then translate?

Something like :

CGAffineTransform t = CGAffineTransformMakeScale(2, 2);
t = CGAffineTransformTranslate(t, width/2, height/2);
self.transform = t;

(b)

Set the anchor point (which is probably what you want really)

[self layer].anchorPoint = CGPointMake(0.0f, 0.0f);
self.transform = CGAffineTransformMakeScale(2, 2);

(c)

Set the center again to make sure it's in the same place?

CGPoint center = self.center;
self.transform = CGAffineTransformMakeScale(2, 2);
self.center = center;

How do you translate and scale a view without the transforms conflicting with each other?

Understanding transforms

The main thing to realize is that the origin for transforms is the center point of the view rectangle. This is best shown with an example.

First we translate the view. v1 is the view at its original position, v2 is the view at its translated position. p is the padding that you desire (finalPadding in your code). c marks the center point of the view.

+--------------------------------+
| ^ |
| | p |
| v |
| +- v2 --------+ |
| | | |
| | c |<->|
| | | p |
| +-------------+ |
| |
| |
| +- v1 --------+ |
| | | |
| | c | |
| | | |
| +-------------+ |
| |
+--------------------------------+

Next we scale the view. v3 is the view at its scaled position. Note how the center point for v3 remains unchanged while the dimensions of the view around it shrink. Although the dimensions are correct, the positioning of the view and the resulting padding p' are wrong.

+--------------------------------+
| ^ |
| | p' |
| | |
| v |
| +- v3 --+ |
| | c |<---->|
| +-------+ p' |
| |
| |
| +- v1 --------+ |
| | | |
| | c | |
| | | |
| +-------------+ |
| |
+--------------------------------+
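
A quick way to see this center-pinned behavior in code (an illustrative sketch with made-up numbers):

// a 100 x 100 view whose center is at (200, 200)
let v = UIView(frame: CGRect(x: 150, y: 150, width: 100, height: 100))

// scaling keeps the center fixed...
v.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)

// ...so v.center is still (200, 200), while v.frame is now (175, 175, 50, 50)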

A fix for the delta calculations

Now that we know how scaling works, we need to fix the code where you calculate the translation deltas. Here is how to do it right:

CGRect windowFrame = self.window.frame;
CGRect viewFrame = self.platformProgressView.frame;
CGPoint finalCenter = CGPointZero;
finalCenter.x = (windowFrame.size.width
                 - (viewFrame.size.width * finalScale) / 2.0f
                 - finalPadding);
finalCenter.y = (finalPadding
                 + (viewFrame.size.height * finalScale) / 2.0f);
CGPoint viewCenter = self.platformProgressView.center;
CGFloat deltaX = finalCenter.x - viewCenter.x;
CGFloat deltaY = finalCenter.y - viewCenter.y;

Order of transforms

Finally, as you have noted yourself, the order in which transformations are concatenated with CGAffineTransformConcat matters. In your first attempt you have the sequence 1) translate + 2) scale. The result is that the scale transform, which comes later in the sequence, affects the deltas specified for the translate transform.

There are two solutions here: either your own second attempt, where you reverse the sequence so that it becomes 1) scale + 2) translate, or you use the helper function I suggested in the first revision of my answer. So this

self.platformProgressView.transform = CGAffineTransformConcat(
    CGAffineTransformMakeScale(finalScale, finalScale),
    CGAffineTransformMakeTranslation(deltaX, deltaY)
);

is equivalent to

CGAffineTransform CGAffineTransformMakeScaleTranslate(CGFloat sx, CGFloat sy, CGFloat dx, CGFloat dy)
{
    return CGAffineTransformMake(sx, 0.0f, 0.0f, sy, dx, dy);
}

self.platformProgressView.transform = CGAffineTransformMakeScaleTranslate(finalScale, finalScale, deltaX, deltaY);
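
For reference, the same scale-then-translate ordering in Swift (a sketch, assuming the view and the deltas computed above):

// scale first, then translate; concatenating the translation afterwards
// means the deltas are not multiplied by the scale
platformProgressView.transform = CGAffineTransform(scaleX: finalScale, y: finalScale)
    .concatenating(CGAffineTransform(translationX: deltaX, y: deltaY))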

A different solution: Setting the anchor point

If you're unhappy about the center point being the origin for the view's transforms you can change this by setting the anchorPoint property of the view's CALayer. The default anchor point is at 0.5/0.5, which represents the center of the view rectangle. Obviously, the anchor point is not a coordinate but a kind of multiplication factor.

Your original calculation for the translation deltas was correct if we assume that the anchor point is in the upper-left corner of the view. So if you do this

self.platformProgressView.layer.anchorPoint = CGPointMake(0.0f, 0.0f)

you can keep your original calculations.

Reference

There is probably a lot more of information material out there, but the WWDC 2011 video Understanding UIKit Rendering is something that I recently watched and that greatly helped me improve my understanding of the relationship between bounds, center, transform and frame.

Also, if you are going to change the CALayer anchorPoint property, you should probably read the property documentation in the CALayer class reference first.

Cropping AVAsset video with AVFoundation

Here is my interpretation of your question: You are capturing video on a device with a screen ratio of 4:3, thus your AVCaptureVideoPreviewLayer is 4:3, but the video input device captures video in 16:9 so the resulting video is 'larger' than seen in the preview.

If you are simply looking to crop the extra pixels not caught by the preview, then check out this article: http://www.netwalk.be/article/record-square-video-ios. It shows how to crop the video into a square, and you'll only need a few modifications to crop to 4:3 instead. I've tested this; here are the changes I made:

Once you have the AVAssetTrack for the video you will need to calculate a new height.

// we convert the captured height i.e. 1080 to a 4:3 screen ratio and get the new height
CGFloat newHeight = clipVideoTrack.naturalSize.height/3*4;

Then modify these two lines, using newHeight.

videoComposition.renderSize = CGSizeMake(clipVideoTrack.naturalSize.height, newHeight);

CGAffineTransform t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.height, -(clipVideoTrack.naturalSize.width - newHeight)/2 );

So what we've done here is set the renderSize to a 4:3 ratio - the exact dimensions are based on the input device. We then use a CGAffineTransform to translate the video position so that what we saw in the AVCaptureVideoPreviewLayer is what is rendered to our file.

Edit: If you want to put it all together and crop a video based on the device's screen ratio (3:2, 4:3, 16:9), taking the video orientation into account, we need to add a few things.

First here is the modified sample code with a few critical alterations:

// output file
NSString* docFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString* outputPath = [docFolder stringByAppendingPathComponent:@"output2.mov"];
if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath])
[[NSFileManager defaultManager] removeItemAtPath:outputPath error:nil];

// input file
AVAsset* asset = [AVAsset assetWithURL:outputFileURL];

AVMutableComposition *composition = [AVMutableComposition composition];
[composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

// input clip
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

// crop clip to screen ratio
UIInterfaceOrientation orientation = [self orientationForTrack:asset];
BOOL isPortrait = (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown) ? YES: NO;
CGFloat complimentSize = [self getComplimentSize:videoTrack.naturalSize.height];
CGSize videoSize;

if (isPortrait) {
    videoSize = CGSizeMake(videoTrack.naturalSize.height, complimentSize);
} else {
    videoSize = CGSizeMake(complimentSize, videoTrack.naturalSize.height);
}

AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = videoSize;
videoComposition.frameDuration = CMTimeMake(1, 30);

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30) );

// rotate and position video
AVMutableVideoCompositionLayerInstruction* transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

CGFloat tx = (videoTrack.naturalSize.width-complimentSize)/2;
if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationLandscapeRight) {
    // invert translation
    tx *= -1;
}

// t1: rotate and position video since it may have been cropped to screen ratio
CGAffineTransform t1 = CGAffineTransformTranslate(videoTrack.preferredTransform, tx, 0);
// t2/t3: mirror video horizontally
CGAffineTransform t2 = CGAffineTransformTranslate(t1, isPortrait?0:videoTrack.naturalSize.width, isPortrait?videoTrack.naturalSize.height:0);
CGAffineTransform t3 = CGAffineTransformScale(t2, isPortrait?1:-1, isPortrait?-1:1);

[transformer setTransform:t3 atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject: transformer];
videoComposition.instructions = [NSArray arrayWithObject: instruction];

// export
exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality] ;
exporter.videoComposition = videoComposition;
exporter.outputURL=[NSURL fileURLWithPath:outputPath];
exporter.outputFileType=AVFileTypeQuickTimeMovie;

[exporter exportAsynchronouslyWithCompletionHandler:^(void){
    NSLog(@"Exporting done!");

    // added export to library for testing
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]]) {
        [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]
                                    completionBlock:^(NSURL *assetURL, NSError *error) {
            NSLog(@"Saved to album");
            if (error) {

            }
        }];
    }
}];

What we added here is a call to get the new render size of the video, based on cropping its dimensions to the screen ratio. Once we crop the size down, we need to translate the position to recenter the video, so we grab its orientation to move it in the proper direction. This fixes the off-center issue we saw with UIInterfaceOrientationLandscapeLeft. Finally, the t2 and t3 transforms mirror the video horizontally.

And here are the two new methods that make this happen:

- (CGFloat)getComplimentSize:(CGFloat)size {
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat ratio = screenRect.size.height / screenRect.size.width;

    // we have to adjust the ratio for 16:9 screens
    if (ratio == 1.775) ratio = 1.77777777777778;

    return size * ratio;
}

- (UIInterfaceOrientation)orientationForTrack:(AVAsset *)asset {
    UIInterfaceOrientation orientation = UIInterfaceOrientationPortrait;
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];

    if ([tracks count] > 0) {
        AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
        CGAffineTransform t = videoTrack.preferredTransform;

        // Portrait
        if (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {
            orientation = UIInterfaceOrientationPortrait;
        }
        // PortraitUpsideDown
        if (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
            orientation = UIInterfaceOrientationPortraitUpsideDown;
        }
        // LandscapeRight
        if (t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {
            orientation = UIInterfaceOrientationLandscapeRight;
        }
        // LandscapeLeft
        if (t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {
            orientation = UIInterfaceOrientationLandscapeLeft;
        }
    }
    return orientation;
}

These are pretty straightforward. The only thing to note is that in the getComplimentSize: method we have to manually adjust the ratio for 16:9 screens, since the iPhone 5+ resolution is mathematically just shy of true 16:9 (1136 / 640 = 1.775, whereas 16 / 9 ≈ 1.778).

How can you rotate text for UIButton and UILabel in Swift?

I am putting my answer in a similar format to this answer.

Here is the original label:

Sample Image

Rotate 90 degrees clockwise:

yourLabelName.transform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

Sample Image

Rotate 180 degrees:

yourLabelName.transform = CGAffineTransform(rotationAngle: CGFloat.pi)

Sample Image

Rotate 90 degrees counterclockwise:

yourLabelName.transform = CGAffineTransform(rotationAngle: -CGFloat.pi / 2)

Sample Image

Do the same thing to rotate a button. Thankfully the touch events also get rotated so the button is still clickable in its new bounds without having to do anything extra.

yourButtonName.transform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

Notes:

Documentation for CGAffineTransform

The basic format is CGAffineTransform(rotationAngle: CGFloat) where rotationAngle is in radians, not degrees.

There are 2π radians in a full circle (360 degrees). Swift includes the useful constant CGFloat.pi.

  • CGFloat.pi = π = 180 degrees
  • CGFloat.pi / 2 = π/2 = 90 degrees
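
If you prefer to think in degrees, the conversion is a one-liner (a small sketch; 45 is just an example value):

let degrees: CGFloat = 45
let radians = degrees * .pi / 180      // 45 degrees = 0.7853... radians
yourLabelName.transform = CGAffineTransform(rotationAngle: radians)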

Auto Layout:

Auto layout does not work with rotated views. (See Frame vs Bounds for an explanation why.) This problem can be solved by creating a custom view. This answer shows how to do it for a UITextView, but it is the same basic concept for a label or button. (Note that you will have to remove the CGAffineTransformScale line in that answer since you don't need to mirror the text.)

Related

  • How to do transforms on a CALayer?
  • How to apply multiple transforms in Swift
  • CTM transforms vs Affine Transforms in iOS (for translate, rotate, scale)

