How to Render View into Image Faster

How to render view into image faster?

No, not on iOS 6: there, renderInContext: is the only way, and it is slow because it runs on the CPU. iOS 7 added faster alternatives, covered below.

Ways to render UIKit content

renderInContext:

[view.layer renderInContext:UIGraphicsGetCurrentContext()];
  • Requires iOS 2.0. It runs on the CPU.
  • It doesn't capture views with non-affine transforms, OpenGL, or video content.
  • If an animation is running, you can choose to capture:

    • view.layer, which captures the final frame of the animation.
    • view.layer.presentationLayer, which captures the current frame of the animation.
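
Because renderInContext: draws into whatever context is current, a typical capture wraps it in an image context (a minimal sketch; `view` is any UIView):

UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
// Use view.layer.presentationLayer instead to capture the in-flight animation frame.
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();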

snapshotViewAfterScreenUpdates:

UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
  • Requires iOS 7.
  • It is the fastest method.
  • The view contents are immutable. Not good if you want to apply an effect.
  • It captures all content types (UIKit, OpenGL, or video).
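
Because the result is a UIView rather than a UIImage, a typical use is as a cheap stand-in during a transition (a minimal sketch; the duration and alpha values are placeholders):

UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
snapshot.frame = view.frame;
[view.superview addSubview:snapshot]; // a snapshot only draws once added to a hierarchy
[UIView animateWithDuration:0.3 animations:^{
    snapshot.alpha = 0.0;
} completion:^(BOOL finished) {
    [snapshot removeFromSuperview];
}];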

resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets:

[view resizableSnapshotViewFromRect:rect afterScreenUpdates:YES withCapInsets:edgeInsets];
  • Requires iOS 7.
  • Same as snapshotViewAfterScreenUpdates:, but with stretchable cap insets. The content is also immutable.
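
For example, to snapshot only the top strip of a view and let the result stretch like a resizable image (a sketch; the rect and insets are placeholders):

UIView *strip = [view resizableSnapshotViewFromRect:CGRectMake(0, 0, view.bounds.size.width, 44)
                                 afterScreenUpdates:YES
                                      withCapInsets:UIEdgeInsetsMake(0, 8, 0, 8)];
strip.frame = CGRectMake(0, 0, 2 * view.bounds.size.width, 44); // edges stay fixed, middle stretches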

drawViewHierarchyInRect:afterScreenUpdates:

[view drawViewHierarchyInRect:rect afterScreenUpdates:YES];
  • Requires iOS 7.
  • It draws into the current context.
  • According to WWDC 2013 session 226, it is faster than renderInContext:.
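
Pairing it with an image context produces a UIImage (a minimal sketch):

UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();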

See WWDC 2013 session 226, Implementing Engaging UI on iOS, for details about the new snapshotting APIs.


If it is any help, here is some code to discard capture attempts while one is still running.

It throttles block execution to one at a time and discards the others. From this SO answer.

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
dispatch_queue_t renderQueue = dispatch_queue_create("com.throttling.queue", NULL);

- (void)capture {
    if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
        dispatch_async(renderQueue, ^{
            // capture
            dispatch_semaphore_signal(semaphore);
        });
    }
}

What is this doing?

  • Create a semaphore for one (1) resource.
  • Create a serial queue.
  • DISPATCH_TIME_NOW means the timeout is zero, so dispatch_semaphore_wait returns non-zero immediately on a red light, and the body of the if is skipped.
  • On a green light, run the block asynchronously, then signal the semaphore to turn the light green again.
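
For example, calling capture from a rapid-fire callback such as scrollViewDidScroll: runs at most one render at a time and silently drops the rest (a sketch; the scroll view delegate wiring is assumed):

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    [self capture]; // attempts made while a render is in flight are discarded
}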

How to render an image with an effect faster with UIKit

As far as I understand, you have to apply the same changes to many different images, so the initial setup time is not critical, but each image should be processed as quickly as possible. First of all, it is critical to generate the new images on a background queue/thread.
There are two good ways to quickly process/generate images:

  1. Use CIFilter from Core Image

  2. Use GPUImage library

If you use Core Image, check that you use CIFilter and CIContext properly. Creating a CIContext takes quite a lot of time, but it can be shared between different CIFilters and images, so create the CIContext only once! A CIFilter can also be shared between different images, but since it is not thread safe, you should have a separate CIFilter for each thread.

In my code I have the following:

+ (UIImage *)roundShadowImageForImage:(UIImage *)image {
    // Shared between all calls: CIContext creation is expensive, so do it once.
    static CIContext *_context;
    static CIFilter *_filter;

    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSLog(@"CIContext and CIFilter generating...");
        _context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @NO,
                                                    kCIContextWorkingColorSpace   : [NSNull null] }];

        // roundShadowImage and roundWhiteImage are helper methods of the original class.
        CIImage *roundShadowImage = [CIImage imageWithCGImage:[[self class] roundShadowImage].CGImage];
        CIImage *maskImage = [CIImage imageWithCGImage:[[self class] roundWhiteImage].CGImage];

        _filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"
                             keysAndValues:
                   kCIInputBackgroundImageKey, roundShadowImage,
                   kCIInputMaskImageKey, maskImage, nil];
        NSLog(@"CIContext and CIFilter are generated");
    });

    if (image == nil) {
        return nil;
    }
    NSAssert(_filter, @"Error: CIFilter for cover images is not generated");

    CGSize imageSize = CGSizeMake(image.size.width * image.scale, image.size.height * image.scale);

    // CIContext and CIImage objects are immutable, so they can be shared safely among
    // threads. CIFilter is not thread safe, so each thread gets its own copy.
    CIFilter *filterForThread = [_filter copy];

    // coverSize, coverSide, and extraBorder are constants defined elsewhere in the original class.
    CGAffineTransform imageTransform = CGAffineTransformIdentity;
    if (!CGSizeEqualToSize(imageSize, coverSize)) {
        NSLog(@"Cover image. Resizing image %@ to required size %@", NSStringFromCGSize(imageSize), NSStringFromCGSize(coverSize));
        CGFloat scaleFactor = MAX(coverSide / imageSize.width, coverSide / imageSize.height);
        imageTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    }
    imageTransform = CGAffineTransformTranslate(imageTransform, extraBorder, extraBorder);

    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    ciImage = [ciImage imageByApplyingTransform:imageTransform];

    if (image.hasAlpha) { // hasAlpha is a UIImage category from the original project
        CIImage *ciWhiteImage = [CIImage imageWithCGImage:[self whiteImage].CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"
                                      keysAndValues:
                            kCIInputBackgroundImageKey, ciWhiteImage,
                            kCIInputImageKey, ciImage, nil];
        [filterForThread setValue:filter.outputImage forKey:kCIInputImageKey];
    } else {
        [filterForThread setValue:ciImage forKey:kCIInputImageKey];
    }

    CIImage *outputCIImage = [filterForThread outputImage];
    CGImageRef cgimg = [_context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImage;
}

If you are still not satisfied with the speed, try the GPUImage library. It is very good, and it is also very fast because it uses OpenGL for image generation.
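
For example, filtering a still image takes only a couple of lines (a minimal sketch using GPUImage's quick-filtering API; the sepia filter and image name are placeholders):

#import "GPUImage.h"

UIImage *inputImage = [UIImage imageNamed:@"photo"];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
// GPUImage renders through OpenGL and reads the result back as a UIImage.
UIImage *filteredImage = [sepiaFilter imageByFilteringImage:inputImage];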

How to convert a UIView to an image

An extension on UIView should do the trick.

extension UIView {

    // Using a function since `var image` might conflict with an existing variable
    // (like on `UIImageView`)
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}

Apple discourages using UIGraphicsBeginImageContext starting with iOS 10 and the introduction of the P3 color gamut, because UIGraphicsBeginImageContext is sRGB and 32-bit only. They introduced the new UIGraphicsImageRenderer API, which is fully color managed, block-based, has subclasses for PDFs and images, and automatically manages the context lifetime. Check out WWDC 2016 session 205 for more details (image rendering begins around the 11:50 mark).

To be sure that it works on every device, use #available with a fallback to earlier versions of iOS:

extension UIView {

    // Using a function since `var image` might conflict with an existing variable
    // (like on `UIImageView`)
    func asImage() -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: bounds)
            return renderer.image { rendererContext in
                layer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(self.frame.size)
            self.layer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image!
        }
    }
}

How to take a UIView screenshot faster?

You can reuse the exact same view inside the magnifier view, and just change its position to show the visible words.

iOS: what's the fastest, most performant way to make a screenshot programmatically?

I've found a better method that uses the snapshot API whenever possible.

I hope it helps.

class func screenshot() -> UIImage {
    var imageSize = CGSize.zero

    let orientation = UIApplication.shared.statusBarOrientation
    if orientation.isPortrait {
        imageSize = UIScreen.main.bounds.size
    } else {
        imageSize = CGSize(width: UIScreen.main.bounds.size.height, height: UIScreen.main.bounds.size.width)
    }

    UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
    for window in UIApplication.shared.windows {
        window.drawHierarchy(in: window.bounds, afterScreenUpdates: true)
    }

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image!
}

Wanna know more about iOS 7 Snapshots?

Objective-C version:

+ (UIImage *)screenshot
{
    CGSize imageSize = CGSizeZero;

    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
    if (UIInterfaceOrientationIsPortrait(orientation)) {
        imageSize = [UIScreen mainScreen].bounds.size;
    } else {
        imageSize = CGSizeMake([UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width);
    }

    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, window.center.x, window.center.y);
        CGContextConcatCTM(context, window.transform);
        CGContextTranslateCTM(context, -window.bounds.size.width * window.layer.anchorPoint.x, -window.bounds.size.height * window.layer.anchorPoint.y);
        // Rotate the context so the window renders upright in the image.
        if (orientation == UIInterfaceOrientationLandscapeLeft) {
            CGContextRotateCTM(context, M_PI_2);
            CGContextTranslateCTM(context, 0, -imageSize.width);
        } else if (orientation == UIInterfaceOrientationLandscapeRight) {
            CGContextRotateCTM(context, -M_PI_2);
            CGContextTranslateCTM(context, -imageSize.height, 0);
        } else if (orientation == UIInterfaceOrientationPortraitUpsideDown) {
            CGContextRotateCTM(context, M_PI);
            CGContextTranslateCTM(context, -imageSize.width, -imageSize.height);
        }
        if ([window respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
            // iOS 7+: use the fast snapshot path.
            [window drawViewHierarchyInRect:window.bounds afterScreenUpdates:YES];
        } else {
            // Pre-iOS 7 fallback.
            [window.layer renderInContext:context];
        }
        CGContextRestoreGState(context);
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

