Converting UIImageView Touch Coordinates to UIImage Coordinates
Something like this should get you going.
CGPoint imageViewPoint = [touch locationInView:imageView];
float percentX = imageViewPoint.x / imageView.frame.size.width;
float percentY = imageViewPoint.y / imageView.frame.size.height;
CGPoint imagePoint = CGPointMake(image.size.width * percentX, image.size.height * percentY);
This assumes the UIImageView is called imageView, the UITouch is called touch, and the UIImage is called image. Note that this simple ratio only maps correctly when the content mode stretches the image to fill the view (e.g. .scaleToFill); for aspect-fit or aspect-fill you also need to account for the letterboxed or cropped borders.
How to get the coordinates of a point tapped on a UIImageView so that it maps to the same position on all devices (of different sizes/resolutions)?
I have done the same thing you want; please try this, it should help. When you tap on the UIImageView, get the touch point within the UIImageView, then convert it to a percentage of the view's width and height so it works on all devices.
// Add a tap gesture recognizer to your image view
imageView.isUserInteractionEnabled = true
let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapAction))
self.imageView.addGestureRecognizer(tapGestureRecognizer)

@objc func tapAction(sender: UITapGestureRecognizer) {
    let touchPoint = sender.location(in: self.imageView)
    print("Touched point: (\(touchPoint.x), \(touchPoint.y))")
    // Convert the point into a percentage of the view's size
    let xPercent = touchPoint.x * 100 / self.imageView.frame.size.width
    let yPercent = touchPoint.y * 100 / self.imageView.frame.size.height
    print("Touched point as percentage: (\(xPercent), \(yPercent))")
    // Add whatever you want at the tap location, e.g. a marker button
    let btn = UIButton(frame: CGRect(x: touchPoint.x - 15, y: touchPoint.y - 15, width: 30, height: 30))
    btn.layer.cornerRadius = 15
    let markerImage = UIImage(named: "marker.png")
    btn.setImage(markerImage, for: .normal)
    self.imageView.addSubview(btn)
}
How to determine the (x, y) location a user tapped on a UIImage, not the UIImageView?
Yes, by adding a UITapGestureRecognizer to the image view, getting the tap position from that, and converting the tap coordinates to image coordinates:
let img = UIImage(named: "whatever")

// add a tap recognizer to the image view
let tap = UITapGestureRecognizer(target: self, action: #selector(self.tapGesture(_:)))
imgView.addGestureRecognizer(tap)
imgView.isUserInteractionEnabled = true
imgView.image = img
imgView.contentMode = .scaleAspectFit
func convertTapToImg(_ point: CGPoint) -> CGPoint? {
    let xRatio = imgView.frame.width / img.size.width
    let yRatio = imgView.frame.height / img.size.height
    let ratio = min(xRatio, yRatio)
    let imgWidth = img.size.width * ratio
    let imgHeight = img.size.height * ratio
    var tap = point
    var borderWidth: CGFloat = 0
    var borderHeight: CGFloat = 0
    // detect the letterbox border added by .scaleAspectFit
    if ratio == yRatio {
        // border is left and right
        borderWidth = (imgView.frame.size.width - imgWidth) / 2
        if point.x < borderWidth || point.x > borderWidth + imgWidth {
            return nil
        }
        tap.x -= borderWidth
    } else {
        // border is top and bottom
        borderHeight = (imgView.frame.size.height - imgHeight) / 2
        if point.y < borderHeight || point.y > borderHeight + imgHeight {
            return nil
        }
        tap.y -= borderHeight
    }
    let xScale = tap.x / (imgView.frame.width - 2 * borderWidth)
    let yScale = tap.y / (imgView.frame.height - 2 * borderHeight)
    let pixelX = img.size.width * xScale
    let pixelY = img.size.height * yScale
    return CGPoint(x: pixelX, y: pixelY)
}

@objc func tapGesture(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: imgView)
    let imgPoint = convertTapToImg(point)
    print("tap: \(point) -> img \(String(describing: imgPoint))")
}
Convert points from image to UIImageView points, contentMode-aware
Today, I've had some spare time to put together my solution to this problem and publish it on GitHub: UIImageView-GeometryConversion
It's a category on UIImageView that provides methods for converting points and rects from the image to view coordinates and respects all of the 13 different content modes.
Hope you like it!
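For illustration, the image-to-view direction of such a conversion for the aspect-fill case can be sketched like this (a rough sketch of the underlying math only, not the actual category code; the function name is hypothetical):

```swift
import UIKit

// Sketch: convert a point in image coordinates to view coordinates
// when contentMode == .scaleAspectFill (hypothetical helper, not the category API)
func viewPoint(forImagePoint p: CGPoint, imageSize: CGSize, viewSize: CGSize) -> CGPoint {
    // aspect fill scales by the larger of the two ratios
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let scaledSize = CGSize(width: imageSize.width * scale,
                            height: imageSize.height * scale)
    // the scaled image is centered, so part of it may hang off the view
    let xOffset = (viewSize.width - scaledSize.width) / 2
    let yOffset = (viewSize.height - scaledSize.height) / 2
    return CGPoint(x: p.x * scale + xOffset, y: p.y * scale + yOffset)
}
```

The category generalizes this idea to all of UIKit's content modes; the only parts that change per mode are the scale factor and the origin offset.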
How Do I Get the Coordinates of an Animated UIImageView
I was finally able to get the objects to report where they were on screen using the following changes to my touchesBegan method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    CGPoint touchPoint = [touch locationInView:self.view];
    // loop through the items we're interested in being tapped on here
    for (UIImageView *fish in self.swimmers) {
        // use the presentation layer's frame so hit-testing works mid-animation
        CGRect fishFrame = [[fish.layer presentationLayer] frame];
        if (CGRectContainsPoint(fishFrame, touchPoint)) {
            NSLog(@"You hit fish: %ld", (long)fish.tag); // do stuff here
        }
    }
}
Return UIImageView to starting coordinates when tapped after zooming in?
Add a UITapGestureRecognizer to the view. When it fires, set the view's transform back to CGAffineTransformIdentity.
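A minimal sketch of that approach (assuming the zoomed image view is called zoomedView; the name is illustrative):

```swift
// Assumption: zoomedView is the view that was zoomed via its transform
let tap = UITapGestureRecognizer(target: self, action: #selector(resetZoom))
zoomedView.isUserInteractionEnabled = true
zoomedView.addGestureRecognizer(tap)

@objc func resetZoom() {
    UIView.animate(withDuration: 0.3) {
        // clearing the transform animates the view back to its original frame
        self.zoomedView.transform = .identity
    }
}
```

This only works if the zoom was applied via the view's transform; if you changed the frame instead, you would restore the saved starting frame rather than resetting the transform.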
Finding touch location according to image in UIImageView
Try this code. I did not test it, so there may be mistakes, but I hope the comments are enough to guide you to the correct solution.
- (CGPoint)point:(CGPoint)point onImageWithSize:(CGSize)imageSize inImageView:(UIImageView *)view contentMode:(UIViewContentMode)mode
{
    // find the relative image position on the view
    CGPoint imageRelativeOrigin = CGPointZero;
    CGSize imageRelativeSize = view.frame.size;
    if (mode == UIViewContentModeScaleAspectFit) {
        // we expect one of the origin coordinates to have a positive offset
        // compare the aspect ratios
        if (imageSize.width/imageSize.height > view.frame.size.width/view.frame.size.height) {
            // in this case the image width is the same as the view width but the height is smaller
            imageRelativeSize = CGSizeMake(view.frame.size.width, view.frame.size.width*(imageSize.height/imageSize.width));
            CGFloat verticalOffset = (view.frame.size.height - imageRelativeSize.height)*.5f; // this is the visible image offset
            imageRelativeOrigin = CGPointMake(.0f, verticalOffset);
        } else {
            // in this case the image height is the same as the view height but the width is smaller
            imageRelativeSize = CGSizeMake(view.frame.size.height*(imageSize.width/imageSize.height), view.frame.size.height);
            CGFloat horizontalOffset = (view.frame.size.width - imageRelativeSize.width)*.5f; // this is the visible image offset
            imageRelativeOrigin = CGPointMake(horizontalOffset, .0f);
        }
    } else {
        // TODO: add other content modes
    }
    CGPoint relativeImagePoint = CGPointMake(point.x - imageRelativeOrigin.x, point.y - imageRelativeOrigin.y); // note these can now be off the image bounds
    // rescale to the image coordinate system
    CGPoint actualImagePoint = CGPointMake(relativeImagePoint.x*(imageSize.width/imageRelativeSize.width),
                                           relativeImagePoint.y*(imageSize.height/imageRelativeSize.height));
    return actualImagePoint;
}