How Is the Filters UIScrollView/UICollectionView in Apple's Photos App Implemented That It Opens So Fast

How is the filters UIScrollView/UICollectionView in Apple's Photos app implemented such that it opens so fast?

The question seems to be how to display the CIImage resulting from a Core Image CIFilter as fast as possible — so fast that it appears instantly when the view controller appears; so fast, in fact, that the user can adjust the CIFilter parameters using sliders and so forth, and the image will redisplay live and keep up with the adjustment.

The answer is to use MetalKit, and in particular an MTKView. The rendering work is moved off onto the device's GPU and is extremely fast: fast enough to come in under the refresh rate of the device's screen, so that there is no discernible lag as the user twiddles the sliders.

I have a simple demonstration where the user applies a custom chain of filters called VignetteFilter:

[Sample image: the filtered photo with a slider beneath it]

As the user slides the slider, the amount of vignetting (the inner circle) changes smoothly. At every instant of sliding, a new filter is applied to the original image and rendered, over and over, keeping in sync with the user's movements.

The view at the bottom, as I said, is an MTKView. MTKView is not hard to work with in this way; it does require some preparation but it's all boilerplate. The only tricky part is actually getting the image to come out where you want it.

Here's the code for my view controller (I'm omitting everything but the slider and the display of the filtered image):

import UIKit
import MetalKit
import CoreImage
import AVFoundation // for AVMakeRect

class EditingViewController: UIViewController, MTKViewDelegate {
    @IBOutlet weak var slider: UISlider!
    @IBOutlet weak var mtkview: MTKView!

    var context : CIContext!
    var displayImage : CIImage! // must be set before viewDidLoad
    let vig = VignetteFilter()
    var queue: MTLCommandQueue!

    // slider value changed
    @IBAction func doSlider(_ sender: Any?) {
        self.mtkview.setNeedsDisplay()
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        // preparation, all pure boilerplate

        self.mtkview.isOpaque = false // otherwise background is black
        // must have a "device"
        guard let device = MTLCreateSystemDefaultDevice() else {
            return
        }
        self.mtkview.device = device

        // mode: draw on demand
        self.mtkview.isPaused = true
        self.mtkview.enableSetNeedsDisplay = true

        self.context = CIContext(mtlDevice: device)
        self.queue = device.makeCommandQueue()

        self.mtkview.delegate = self
        self.mtkview.setNeedsDisplay()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // required by MTKViewDelegate; nothing to do here
    }

    func draw(in view: MTKView) {
        // run the displayImage thru the CIFilter
        self.vig.setValue(self.displayImage, forKey: "inputImage")
        let val = Double(self.slider.value)
        self.vig.setValue(val, forKey: "inputPercentage")
        var output = self.vig.outputImage!

        // okay, `output` is the CIImage we want to display
        // scale it down to aspect-fit inside the MTKView
        var r = view.bounds
        r.size = view.drawableSize
        r = AVMakeRect(aspectRatio: output.extent.size, insideRect: r)
        output = output.transformed(by: CGAffineTransform(
            scaleX: r.size.width/output.extent.size.width,
            y: r.size.height/output.extent.size.height))
        let x = -r.origin.x
        let y = -r.origin.y

        // minimal dance required in order to draw: render, present, commit
        let buffer = self.queue.makeCommandBuffer()!
        self.context!.render(output,
            to: view.currentDrawable!.texture,
            commandBuffer: buffer,
            bounds: CGRect(origin: CGPoint(x: x, y: y), size: view.drawableSize),
            colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(view.currentDrawable!)
        buffer.commit()
    }
}
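
The VignetteFilter itself isn't shown above. Here's a minimal sketch of what such a custom filter class might look like, assuming it wraps the built-in CIVignette filter; the original's actual filter chain may well differ, so treat this as illustration only:

import CoreImage

// A minimal sketch of a custom filter chain. Wrapping CIVignette here is
// an assumption; the answer doesn't show the real implementation.
class VignetteFilter: CIFilter {
    // @objc exposes these to the key-value coding used by setValue(_:forKey:)
    @objc var inputImage: CIImage?
    @objc var inputPercentage: NSNumber = 1.0

    override var outputImage: CIImage? {
        guard let input = inputImage else { return nil }
        return CIFilter(name: "CIVignette", parameters: [
            kCIInputImageKey: input,
            kCIInputIntensityKey: inputPercentage,
            kCIInputRadiusKey: 2.0
        ])?.outputImage
    }
}

Because the properties are exposed to key-value coding, the view controller's setValue(_:forKey:) calls in draw(in:) reach them directly.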

Photos Extension: Can't Save Images Not Oriented Up

I have certainly seen the save fail because the orientation stuff was wrong, but the following architecture currently seems to work for me:

func startContentEditing(with contentEditingInput: PHContentEditingInput, placeholderImage: UIImage) {
    self.input = contentEditingInput
    if let im = self.input?.displaySizeImage {
        self.displayImage = CIImage(image: im, options: [.applyOrientationProperty: true])!
        // ... other stuff depending on what the adjustment data was ...
    }
    self.mtkview.setNeedsDisplay()
}

func finishContentEditing(completionHandler: @escaping ((PHContentEditingOutput?) -> Void)) {
    DispatchQueue.global(qos: .default).async {
        let inurl = self.input!.fullSizeImageURL!
        let output = PHContentEditingOutput(contentEditingInput: self.input!)
        let outurl = output.renderedContentURL
        var ci = CIImage(contentsOf: inurl, options: [.applyOrientationProperty: true])!
        let space = ci.colorSpace!
        // ... apply real filter to `ci` based on user edits ...
        try! CIContext().writeJPEGRepresentation(
            of: ci, to: outurl, colorSpace: space)
        let data = Data() // placeholder: encode whatever describes the user's edits
        output.adjustmentData = PHAdjustmentData(
            formatIdentifier: self.myidentifier, formatVersion: "1.0", data: data)
        completionHandler(output)
    }
}
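
The adjustment data itself is left as "whatever" above. One plausible way to produce it, assuming you round-trip your own Codable edit-state struct (the struct and its field are hypothetical, not from the original answer), would be:

import Foundation

// Hypothetical edit state; the answer doesn't specify a format.
struct EditState: Codable {
    var vignettePercentage: Double
}

let data = try! JSONEncoder().encode(EditState(vignettePercentage: 0.5))
// When editing resumes later, decode it back out of the PHAdjustmentData:
// let restored = try? JSONDecoder().decode(EditState.self, from: data)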

CIColorControls & UISlider w/ Swift 4

The problem is this line:

previewImage.image = UIImage(ciImage: filteredImage)

You cannot magically make a displayable UIImage out of a CIImage by using that initializer. Your image view is coming out empty because you are effectively setting its image to nil by doing that. Instead, you must render the CIImage. See, for example, my answer here.
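
For instance, here's a minimal way to render the CIImage yourself before handing it to the image view (filteredImage and previewImage are the names from the question's code; the important point is to create one CIContext and reuse it rather than making a new one per event):

import UIKit
import CoreImage

let context = CIContext() // make this once and keep it; contexts are expensive
if let cg = context.createCGImage(filteredImage, from: filteredImage.extent) {
    previewImage.image = UIImage(cgImage: cg)
}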

(Apple claims that the image view itself will render the CIImage in this configuration, but I've never seen that claim verified.)

However, we might go even further and point out that if the goal is to let the user move a slider and change the brightness of the image in real time, your approach is never going to work. You cannot use a UIImageView at all for that, because every time the slider moves we must render again, and that is really, really slow. Instead, you should render into a Metal view (as described in my answer here); that is the way to respond in real time to a CIFilter governed by a slider.

Adding pinch zoom to a UICollectionView

The good bit – how to make it work

Some very minor tweaks to the above code have resolved "What Doesn't Work 1" and "What Doesn't Work 2" in the question.

I have added the following lines in to the viewDidLoad method of my UICollectionViewController:

[collectionView setMinimumZoomScale: 0.25];
[collectionView setMaximumZoomScale: 4];

I've also updated the example project so that instead of text labels, the view is made of little circles. As you zoom in and out, these are resized. Here's what it looks like now (zoomed out and zoomed in):

[Images: the grid of circles zoomed out and zoomed in]

During a zoom the views for the circles are not redrawn, but just interpolated from their pre-zoom size. The redraw is postponed until the zoom finishes. Here's a capture of how that looks after zooming in several times:

[Image: interpolation artefacts during a zoom]

It would be great to have the redrawing during zoom happen in a background thread so that the artefacts are less noticeable, but that's well out of the scope of this question and I've not worked on it yet either.

You can find the entire project, with fixes, on Bitbucket, so you can grab the files there.

The bad bit – I don't know why it works

I was hoping that with this question answered, I'd have lots of new certainty about UIScrollView zooming. I don't.

From what I've read about UIScrollView, this "fix" should not have made any difference; it should already have worked in the first place.

A UIScrollView isn't supposed to enable zooming until you give it a delegate that implements viewForZoomingInScrollView:, which I've not done.

Confusion About CIContext, OpenGL and Metal (SWIFT). Does CIContext use CPU or GPU by default?

I started making this a comment, but I think since WWDC '18 this works best as an answer. I'll edit as others more expert than I comment, and I'm willing to delete the entire answer if that's the proper thing to do.

You are on the right track: utilize the GPU when you can and when it's a good fit. Core Image and Metal, while "low-level" technologies that "usually" use the GPU, can use the CPU if that is desired. Core Graphics? It renders things using the CPU.
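
For example, Core Image's CPU fallback is something you can opt into explicitly when you create the context:

import CoreImage

// A GPU-backed context (the usual default on device):
let gpuContext = CIContext()

// Explicitly ask for the CPU via the software-renderer option:
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])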

Images. A UIImage and a CGImage are actual images. A CIImage however, isn't. The best way to think of it is a "recipe" for an image.

I typically (for now; I'll explain in a moment) stick to Core Image, CIFilters, CIImages, and GLKViews when working with filters. Using a GLKView against a CIImage means using OpenGL, with a single CIContext and EAGLContext. It offers almost as good performance as using MetalKit and MTKViews.
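
A rough sketch of that single-context setup, for illustration (the names are mine, and OpenGL ES has since been deprecated):

import GLKit
import CoreImage

// One EAGLContext shared by the GLKView and the CIContext, created once.
let eaglContext = EAGLContext(api: .openGLES2)!
let glkView = GLKView(frame: .zero, context: eaglContext)
let ciContext = CIContext(eaglContext: eaglContext)
// In the GLKViewDelegate's glkView(_:drawIn:) you would then call
// ciContext.draw(filteredImage, in: destRect, from: filteredImage.extent),
// with destRect expressed in the view's drawable (pixel) coordinates.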

As for UIKit and its UIImage and UIImageView, I only use them when needed: saving/sharing/uploading, whatever. Stick to the GPU until then.

....

Here's where it starts getting complicated.

Metal is an Apple proprietary API. Since they own the hardware, including the CPU and GPU, they've optimized it for them. Its "pipeline" is somewhat different from OpenGL's. Nothing major, just different.

Until WWDC '18, using GLKit, including GLKView, was fine. But everything OpenGL was deprecated, and Apple is moving things to Metal. While the performance gain (for now) isn't that great, for something new you may be best off using MTKView, Metal, and CIContext.

Look at the answer @matt gave here for a nice way to use MTKViews.

Paging UIScrollView in increments smaller than frame size

Try making your scrollview less than the size of the screen (width-wise), but uncheck the "Clip Subviews" checkbox in IB. Then, overlay a transparent, userInteractionEnabled = NO view on top of it (at full width), which overrides hitTest:withEvent: to return your scroll view. That should give you what you're looking for. See this answer for more details.
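
A sketch of that overlay in Swift (the class and property names are my own, not from the original answer; note that in practice the override is only consulted when the overlay participates in hit-testing):

import UIKit

// Hypothetical overlay view that forwards all its touches to the scroll view.
class TouchForwardingView: UIView {
    weak var scrollView: UIScrollView?

    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        // Any touch landing on this full-width overlay is handed to the
        // narrower, non-clipping scroll view behind it, so drags anywhere
        // on screen drive the paging.
        if self.point(inside: point, with: event), let scrollView = scrollView {
            return scrollView
        }
        return super.hitTest(point, with: event)
    }
}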


