Cropping CGRect from AVCapturePhotoOutput (resizeAspectFill)

I managed to solve the issue with this code.

private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }

    // previewLayer is the AVCaptureVideoPreviewLayer, configured with
    // resizeAspectFill and a portrait videoOrientation.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)

    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)

    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)

    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }

    return nil
}

Usage of the code in my case:

let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
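The heavy lifting here is done by metadataOutputRectConverted(fromLayerRect:), which returns a rect normalized to 0...1 in the capture output's coordinate space; the Swift function then just scales it by the CGImage's pixel dimensions. That last step is plain arithmetic, sketched below in Python (the function name is mine):

```python
def scale_normalized_rect(output_rect, image_width, image_height):
    """Scale a normalized (0..1) rect to pixel coordinates,
    mirroring the cropRect computation in the Swift code."""
    x, y, w, h = output_rect
    return (x * image_width, y * image_height,
            w * image_width, h * image_height)

# e.g. a rect covering the middle of a 4000x3000 photo:
scale_normalized_rect((0.1, 0.2, 0.5, 0.5), 4000, 3000)
# -> (400.0, 600.0, 2000.0, 1500.0)
```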

metadataOutputRectConverted(fromLayerRect:) in Android

I figured it out with the following code.

private fun cropImage(bitmap: Bitmap, cameraFrame: View, cropRectFrame: View): Bitmap {
    val scaleFactor: Double
    val widthOffset: Double
    val heightOffset: Double

    // Compare aspect ratios to see which axis the centerCrop scale fills.
    if (bitmap.height * cameraFrame.width > bitmap.width * cameraFrame.height) {
        // Bitmap is relatively taller: width fills the frame, height is cropped.
        scaleFactor = bitmap.width.toDouble() / cameraFrame.width.toDouble()
        widthOffset = 0.0
        heightOffset = (bitmap.height - cameraFrame.height * scaleFactor) / 2
    } else {
        // Bitmap is relatively wider: height fills the frame, width is cropped.
        scaleFactor = bitmap.height.toDouble() / cameraFrame.height.toDouble()
        widthOffset = (bitmap.width - cameraFrame.width * scaleFactor) / 2
        heightOffset = 0.0
    }

    val newX = cropRectFrame.left * scaleFactor + widthOffset
    val newY = cropRectFrame.top * scaleFactor + heightOffset
    val width = cropRectFrame.width * scaleFactor
    val height = cropRectFrame.height * scaleFactor

    return Bitmap.createBitmap(bitmap, newX.toInt(), newY.toInt(), width.toInt(), height.toInt())
}
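The Kotlin version reproduces on Android what metadataOutputRectConverted does on iOS: under a centerCrop fill, find the view-to-bitmap scale factor and the crop offsets. The branch logic is pure arithmetic, so it can be checked in isolation; a minimal Python sketch of the same math (names are mine):

```python
def center_crop_mapping(bmp_w, bmp_h, frame_w, frame_h):
    """Return (scale, width_offset, height_offset) for mapping view
    coordinates into bitmap coordinates under a centerCrop fill."""
    if bmp_h * frame_w > bmp_w * frame_h:
        # Bitmap is relatively taller: width fills the frame, height is cropped.
        scale = bmp_w / frame_w
        return scale, 0.0, (bmp_h - frame_h * scale) / 2
    else:
        # Bitmap is relatively wider: height fills the frame, width is cropped.
        scale = bmp_h / frame_h
        return scale, (bmp_w - frame_w * scale) / 2, 0.0

# A 1000x2000 bitmap shown in a 100x100 view: width fills, 500px
# is cropped off the top and bottom of the bitmap.
center_crop_mapping(1000, 2000, 100, 100)
# -> (10.0, 0.0, 500.0)
```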

Cropping an area CGRect from UIGraphicsImageContext

The CGRect called objectBounds has two components, an origin and a size. In order to draw the object correctly as a thumbnail, the code needs to scale the image (to get the size right) and translate the image (to move the origin to {0,0}). So the code looks like this:

- (UIImage *)getThumbnailOfSize:(CGSize)size forObject:(UIBezierPath *)object
{
    // To maintain the aspect ratio, compute the scale factors
    // for x and y, and then use the smaller of the two.
    CGFloat xscale = size.width / object.bounds.size.width;
    CGFloat yscale = size.height / object.bounds.size.height;
    CGFloat scale = (xscale < yscale) ? xscale : yscale;

    // Start a graphics context with the thumbnail size.
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Scale and translate so the object's bounds map into the thumbnail.
    CGContextScaleCTM(context, scale, scale);
    CGContextTranslateCTM(context, -object.bounds.origin.x, -object.bounds.origin.y);

    // Draw the object and grab the resulting image.
    [object stroke];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
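The scale-then-translate idea is easy to verify in isolation: a point p in object coordinates should end up at scale * (p - origin), so the bounds origin lands at {0,0} and the far corner lands inside the thumbnail. A Python sketch of that transform (names are mine, not part of the Objective-C code):

```python
def thumbnail_transform(size, bounds):
    """Return the aspect-fit scale and a point mapper, mirroring the
    CGContextScaleCTM + CGContextTranslateCTM setup above."""
    (tw, th), (x, y, w, h) = size, bounds
    scale = min(tw / w, th / h)  # the smaller of the two axis scales

    def to_thumbnail(px, py):
        # Translate by -origin, then scale (concatenating the scale first
        # means it is applied last to each drawn point).
        return (scale * (px - x), scale * (py - y))

    return scale, to_thumbnail

# A 100x200 object drawn into a 50x50 thumbnail:
scale, f = thumbnail_transform((50, 50), (10, 20, 100, 200))
# scale == 0.25, f(10, 20) == (0.0, 0.0), f(110, 220) == (25.0, 50.0)
```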

AVCapturePhotoOutput iOS Camera Super Dark

It appears as though I had too many overlapping async pieces. I broke the code into separate functions for each major step, async or not, and coordinated them all with a DispatchGroup. That seems to have solved the issue.
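The Swift code isn't shown, but the pattern described, running each async piece independently and only continuing once every one has finished, is exactly what DispatchGroup provides (enter before each task, leave in its completion handler, act in notify or after wait). A rough Python analogue using threads (names are mine):

```python
import threading

def run_all(tasks):
    """Run each task on its own thread and block until all have
    finished, roughly analogous to DispatchGroup enter/leave + wait."""
    results = {}
    threads = []
    for name, fn in tasks.items():
        t = threading.Thread(target=lambda n=name, f=fn: results.__setitem__(n, f()))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()  # like DispatchGroup.wait()
    return results  # safe to use only after every task has completed
```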
