Swift 3: How to Pinch to Scale and Rotate UIImageView

Swift 3: How do I pinch to scale and rotate UIImageView?

There are a few problems in your code. First, adopt UIGestureRecognizerDelegate in your view controller and make it the delegate of both gesture recognizers; you also need to implement shouldRecognizeSimultaneouslyWith and return true. Second, when applying the scale, save the view's transform when the pinch begins and apply the scale on top of it:

class DraggableImageView: UIImageView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        backgroundColor = .blue
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        backgroundColor = .green
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let position = touches.first?.location(in: superview) {
            center = position
        }
    }
}


class ViewController: UIViewController, UIGestureRecognizerDelegate {

    var identity = CGAffineTransform.identity

    let firstImageView: DraggableImageView = {
        let iv = DraggableImageView()
        iv.backgroundColor = .red
        iv.isUserInteractionEnabled = true
        return iv
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
        setupViews()

        let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(scale))
        let rotationGesture = UIRotationGestureRecognizer(target: self, action: #selector(rotate))

        pinchGesture.delegate = self
        rotationGesture.delegate = self

        view.addGestureRecognizer(pinchGesture)
        view.addGestureRecognizer(rotationGesture)
    }

    func setupViews() {
        view.addSubview(firstImageView)
        let firstImageWidth: CGFloat = 50
        let firstImageHeight: CGFloat = 50
        firstImageView.frame = CGRect(x: view.frame.midX - firstImageWidth / 2,
                                      y: view.frame.midY - firstImageHeight / 2,
                                      width: firstImageWidth,
                                      height: firstImageHeight)
    }

    @objc func scale(_ gesture: UIPinchGestureRecognizer) {
        switch gesture.state {
        case .began:
            identity = firstImageView.transform
        case .changed, .ended:
            firstImageView.transform = identity.scaledBy(x: gesture.scale, y: gesture.scale)
        default:
            break
        }
    }

    @objc func rotate(_ gesture: UIRotationGestureRecognizer) {
        firstImageView.transform = firstImageView.transform.rotated(by: gesture.rotation)
    }

    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }
}
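The key point is that gesture.scale is always reported relative to the start of the pinch, which is why the transform saved in .began is the base for every update. A number-only sketch (the scales stand in for the transform's diagonal entries; the gesture values are made up):

```swift
// The scale saved when the pinch began (identity => 1.0).
let savedScale = 1.0

// Each .changed callback reports the total pinch scale so far,
// so it must be applied to the saved scale, not the current one.
let afterFirstUpdate = savedScale * 1.5    // gesture.scale == 1.5
let afterSecondUpdate = savedScale * 2.0   // gesture.scale == 2.0 (not 1.5 * 2.0 = 3.0)

print(afterFirstUpdate, afterSecondUpdate)  // 1.5 2.0
```

If you instead applied each update to the current transform, the scale would compound on every .changed callback and the view would blow up or shrink away under your fingers.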

How to save UIImage after Editing with Pan, Rotate and Pinch Gesture

I have a UIView named myView with a transparent background, and I add a UIImageView into myView as a subview. I then apply a UIRotationGestureRecognizer to myView to rotate my image. Here is my @IBAction func for the UIRotationGestureRecognizer:

@IBAction func rotateGesture(sender: UIRotationGestureRecognizer) {
    if let view = sender.view {
        view.transform = view.transform.rotated(by: sender.rotation)
        sender.rotation = 0
    }
}

ios- zoom and rotate a masked UIImageView

To move the image you can use a UIPanGestureRecognizer, and to zoom in/out you can use a UIPinchGestureRecognizer. Maybe the code below can help you.

// In your viewDidLoad:

UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panned:)];
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinched:)];

[self.imageview addGestureRecognizer:pinch];
[self.imageview addGestureRecognizer:pan];

and add the selector methods:

// To zoom in/out
- (void)pinched:(UIPinchGestureRecognizer *)sender {
    if (sender.scale > 1.0f && sender.scale < 2.5f) {
        CGAffineTransform transform = CGAffineTransformMakeScale(sender.scale, sender.scale);
        imageview.transform = transform;
    }
}

// To move the image around
- (void)panned:(UIPanGestureRecognizer *)gesture {
    CGPoint translatedPoint = [gesture translationInView:imageview];

    if (gesture.state == UIGestureRecognizerStateBegan) {
        _firstX = [imageview center].x;
        _firstY = [imageview center].y;
    }
    translatedPoint = CGPointMake(_firstX + translatedPoint.x, _firstY + translatedPoint.y);
    [imageview setCenter:translatedPoint];
}

In your viewDidLoad you should also add your frame as a subview of the image view.

How to merge UIImages while using transform (scale, rotation and translation)?

The problem is that you have multiple geometries (coordinate systems), scale factors, and reference points in play, and it's hard to keep them all straight. You have the root view's geometry, in which the frames of the image view and the sticker view are defined, and you have the geometry of the graphics context, and you don't make them match. The image view's origin is not at the origin of its superview's geometry, because you constrained it to the safe areas, and I'm not sure you're properly compensating for that offset. You try to deal with the scaling of the image in the image view by adjusting the sticker size when drawing the sticker. And you don't properly compensate for the fact that both the sticker's center property and its transform affect its pose (location / scale / rotation).

Let's simplify.

First, let's introduce a “canvas view” as the superview of the image view. The canvas view can be laid out however you want with respect to the safe areas. We'll constrain the image view to fill the canvas view, so the image view's origin will be .zero.

[figure: the new view hierarchy]

Next, we'll set the sticker view's layer.anchorPoint to .zero. This makes the view's transform operate relative to the top-left corner of the sticker view, instead of its center. It also makes view.layer.position (which is the same as view.center) control the position of the top-left corner of the view, instead of controlling the position of the center of the view. We want these changes because they match how Core Graphics draws the sticker image in areaRect when we merge the images.

We'll also set view.layer.position to .zero. This simplifies how we compute where to draw the sticker image when we merge the images.

private func makeStickerView(with image: UIImage, center: CGPoint) -> UIImageView {
    let heightOnWidthRatio = image.size.height / image.size.width
    let imageWidth: CGFloat = 150

    let view = UIImageView(frame: CGRect(x: 0, y: 0, width: imageWidth, height: imageWidth * heightOnWidthRatio))
    view.image = image
    view.clipsToBounds = true
    view.contentMode = .scaleAspectFit
    view.isUserInteractionEnabled = true
    view.backgroundColor = UIColor.red.withAlphaComponent(0.7)
    view.layer.anchorPoint = .zero
    view.layer.position = .zero
    return view
}

This means we need to position the sticker entirely using its transform, so we want to initialize the transform to center the sticker:

@IBAction func resetPose(_ sender: Any) {
    let center = CGPoint(x: canvasView.bounds.midX, y: canvasView.bounds.midY)
    let size = stickerView.bounds.size
    stickerView.transform = .init(translationX: center.x - size.width / 2, y: center.y - size.height / 2)
}
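With the anchor point and position both at .zero, that translation is all that positions the sticker. A quick arithmetic check with made-up sizes (no UIKit needed):

```swift
// Hypothetical sizes: a 400x400 canvas and a 150x100 sticker.
let canvasWidth = 400.0, canvasHeight = 400.0
let stickerWidth = 150.0, stickerHeight = 100.0

// resetPose's translation: canvas center minus half the sticker size.
let tx = canvasWidth / 2 - stickerWidth / 2    // 125
let ty = canvasHeight / 2 - stickerHeight / 2  // 150

// Because anchorPoint == .zero, the sticker's top-left corner lands at
// (tx, ty), which centers the 150x100 sticker in the 400x400 canvas.
print(tx, ty)  // 125.0 150.0
```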

Because of these changes, we have to handle pinches and rotates in a more complex way. We'll use a helper method to manage the complexity:

extension CGAffineTransform {
    func around(_ locus: CGPoint, do body: (CGAffineTransform) -> CGAffineTransform) -> CGAffineTransform {
        var transform = self.translatedBy(x: locus.x, y: locus.y)
        transform = body(transform)
        transform = transform.translatedBy(x: -locus.x, y: -locus.y)
        return transform
    }
}
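The effect of around(_:do:) is easiest to see on a single point. This stand-alone sketch reimplements the translate/transform/translate sandwich with plain arithmetic (so it runs without UIKit): the locus stays fixed while every other point scales around it.

```swift
// Scale a point by `s` around `locus`: move the locus to the origin,
// scale, then move it back — the same sandwich around(_:do:) builds.
func scaled(_ p: (x: Double, y: Double), by s: Double, around locus: (x: Double, y: Double)) -> (x: Double, y: Double) {
    return ((p.x - locus.x) * s + locus.x,
            (p.y - locus.y) * s + locus.y)
}

let locus = (x: 10.0, y: 10.0)
let fixedPoint = scaled(locus, by: 2, around: locus)          // (10.0, 10.0) — unmoved
let origin = scaled((x: 0.0, y: 0.0), by: 2, around: locus)   // (-10.0, -10.0)
print(fixedPoint, origin)
```

This is exactly why pinching feels anchored under the user's fingers: the gesture's location is the locus, and it does not move.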

Then we handle pinch and rotate like this:

@objc private func stickerDidPinch(pincher: UIPinchGestureRecognizer) {
    guard let stickerView = pincher.view else { return }
    stickerView.transform = stickerView.transform.around(pincher.location(in: stickerView)) {
        $0.scaledBy(x: pincher.scale, y: pincher.scale)
    }
    pincher.scale = 1
}

@objc private func stickerDidRotate(rotater: UIRotationGestureRecognizer) {
    guard let stickerView = rotater.view else { return }
    stickerView.transform = stickerView.transform.around(rotater.location(in: stickerView)) {
        $0.rotated(by: rotater.rotation)
    }
    rotater.rotation = 0
}

This also makes scaling and rotating work better than before. In your code, scaling and rotating always happen around the center of the view. With this code, they happen around the center point between the user's fingers, which feels more natural.

Finally, to merge the images, we'll start by scaling the graphics context's geometry the same way imageView scaled its image, because the sticker transform is relative to the imageView size, not the image size. Since we position the sticker entirely using the transform now, and since we've set the image view and sticker view origins to .zero, we don't have to make any adjustments for weird origins.
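The context-scaling step can be sanity-checked with plain arithmetic (the sizes below are made up): scaling by size / viewSize maps view-space coordinates into image pixels.

```swift
// Hypothetical sizes: a 2000x1000 image displayed in a 400x200 image view.
let imageWidth = 2000.0, imageHeight = 1000.0
let viewWidth = 400.0, viewHeight = 200.0

// The same factors merge passes to context.scaleBy(x:y:).
let sx = imageWidth / viewWidth    // 5.0
let sy = imageHeight / viewHeight  // 5.0

// A point in view coordinates maps to image pixels:
let viewPoint = (x: 100.0, y: 50.0)
let pixelPoint = (x: viewPoint.x * sx, y: viewPoint.y * sy)  // (500.0, 250.0)
print(pixelPoint)
```

After this scaling, the sticker transforms (which were recorded in view coordinates) can be concatenated onto the context unchanged.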

extension UIImage {

    func merge(in viewSize: CGSize, with imageTuples: [(image: UIImage, viewSize: CGSize, transform: CGAffineTransform)]) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
        defer { UIGraphicsEndImageContext() }

        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        // Scale the context geometry to match the size of the image view that
        // displayed me, because that's what all the transforms are relative to.
        context.scaleBy(x: size.width / viewSize.width, y: size.height / viewSize.height)

        draw(in: CGRect(origin: .zero, size: viewSize), blendMode: .normal, alpha: 1)

        for imageTuple in imageTuples {
            let areaRect = CGRect(origin: .zero, size: imageTuple.viewSize)

            context.saveGState()
            context.concatenate(imageTuple.transform)

            // Debug tint so the sticker's drawing area is visible.
            context.setBlendMode(.color)
            UIColor.purple.withAlphaComponent(0.5).setFill()
            context.fill(areaRect)

            imageTuple.image.draw(in: areaRect, blendMode: .normal, alpha: 1)

            context.restoreGState()
        }

        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

You can find my fixed version of your test project here.

iOS Swift: Rotate and Scale UIView without resizing

You are resetting the scale transform of the view when applying the rotation transform. Create a property to hold the view's original scale:

var currentScale: CGFloat = 0

When the pinch ends, store the current scale in currentScale. Then, when rotating, apply this scale before applying the rotation:

let scaleTransform = CGAffineTransform(scaleX: currentScale, y: currentScale)

let concatenatedTransform = scaleTransform.rotated(by: newRotation)
self.transform = concatenatedTransform

Since you are using an extension to add the gesture recognizers, you cannot store currentScale as a property. However, you can recover the scale from the view's current transform. Here is how your code would look:

extension UIView {

    var currentScale: CGPoint {
        let a = transform.a
        let b = transform.b
        let c = transform.c
        let d = transform.d

        let sx = sqrt(a * a + b * b)
        let sy = sqrt(c * c + d * d)

        return CGPoint(x: sx, y: sy)
    }

    func addPinchGesture() {
        let pinchGesture = UIPinchGestureRecognizer(target: self,
                                                    action: #selector(handlePinchGesture(_:)))
        self.addGestureRecognizer(pinchGesture)
    }

    @objc func handlePinchGesture(_ sender: UIPinchGestureRecognizer) {
        self.transform = self.transform.scaledBy(x: sender.scale, y: sender.scale)
        sender.scale = 1
    }
}

// ROTATION
extension UIView {

    func addRotationGesture() {
        // RotationGestureRecognizer is a custom UIRotationGestureRecognizer
        // subclass that adds a `lastRotation` property.
        let rotationGesture = RotationGestureRecognizer(target: self,
                                                        action: #selector(handleRotationGesture(_:)))
        self.addGestureRecognizer(rotationGesture)
    }

    @objc func handleRotationGesture(_ sender: RotationGestureRecognizer) {
        switch sender.state {
        case .began:
            // Resume from the rotation stored when the previous gesture ended.
            sender.rotation = sender.lastRotation
        case .changed:
            let scale = CGAffineTransform(scaleX: currentScale.x, y: currentScale.y)
            self.transform = scale.rotated(by: sender.rotation)
        case .ended:
            sender.lastRotation = sender.rotation
        default:
            break
        }
    }
}

I used this answer as a reference for extracting the scale value.
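To check that the currentScale formula survives rotation, build the matrix entries of a scale-then-rotate transform by hand (mirroring what CGAffineTransform stores in a, b, c, d) and recover the scales. This is a pure-math sketch with made-up values:

```swift
import Foundation

// Matrix entries of a transform that scales by (3, 2) and then rotates by
// theta, laid out the way CGAffineTransform stores them (a b / c d).
let theta = Double.pi / 6
let (scaleX, scaleY) = (3.0, 2.0)
let a = scaleX * cos(theta)
let b = scaleX * sin(theta)
let c = -scaleY * sin(theta)
let d = scaleY * cos(theta)

// The extraction used by `currentScale`: the row norms recover the scales
// regardless of the rotation angle, since cos² + sin² == 1.
let sx = sqrt(a * a + b * b)  // ≈ 3.0
let sy = sqrt(c * c + d * d)  // ≈ 2.0
print(sx, sy)
```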

How to get a rotated, zoomed and panned image from an UIImageView at its full resolution?

The following code creates a snapshot of the enclosing view (superview of faceImageView with clipsToBounds set to YES) using a calculated scale factor.

It assumes that the content mode of faceImageView is UIViewContentModeScaleAspectFit and that the frame of faceImageView is set to the enclosingView's bounds.

- (UIImage *)captureView {
    // Scale applied by the user's gestures (read back from the transform).
    float imageScale = sqrtf(powf(faceImageView.transform.a, 2.f) + powf(faceImageView.transform.c, 2.f));
    // Scale applied by UIViewContentModeScaleAspectFit.
    CGFloat widthScale = faceImageView.bounds.size.width / faceImageView.image.size.width;
    CGFloat heightScale = faceImageView.bounds.size.height / faceImageView.image.size.height;
    float contentScale = MIN(widthScale, heightScale);
    float effectiveScale = imageScale * contentScale;

    CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale,
                                    enclosingView.bounds.size.height / effectiveScale);

    NSLog(@"effectiveScale = %0.2f, captureSize = %@", effectiveScale, NSStringFromCGSize(captureSize));

    UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, 1 / effectiveScale, 1 / effectiveScale);
    [enclosingView.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return img;
}

Depending on the current transform, the resulting image will have a different size; for example, when you zoom in, the capture size gets smaller. You can also set effectiveScale to a constant value to get an image of constant size.

Note that your gesture recognizer code does not limit the scale factor, i.e. you can zoom in/out without bounds. That can be dangerous: this capture method can produce very large images when you have zoomed out a long way.

If you have zoomed out, the background of the captured image will be black. If you want it to be transparent, set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO.
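A worked example of the scale bookkeeping above, with made-up numbers: a 3000x2000 photo aspect-fit into a 600x600 view that the user has zoomed to 2x.

```swift
// Hypothetical setup: 3000x2000 image, 600x600 faceImageView, user zoomed 2x.
let imageWidth = 3000.0, imageHeight = 2000.0
let boundsWidth = 600.0, boundsHeight = 600.0
let imageScale = 2.0  // sqrt(a^2 + c^2) of the view's transform

// Aspect-fit uses the smaller of the two axis scales.
let contentScale = min(boundsWidth / imageWidth, boundsHeight / imageHeight)  // 0.2
let effectiveScale = imageScale * contentScale                                // 0.4

// Capture at full resolution: divide the on-screen size by the effective scale.
let captureWidth = boundsWidth / effectiveScale
let captureHeight = boundsHeight / effectiveScale
print(effectiveScale, captureWidth, captureHeight)
```

So the 600-point view is rendered into a 1500x1500 context, which is the on-screen content at the image's native pixel density.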


