iOS: Is Core Graphics Implemented on Top of OpenGL?

Core Graphics or OpenGL for a paint app?

Core Graphics. It's easier to use, but it can still do pretty much anything you need.

Will Core Graphics support Metal on Apple devices?

First, Core Graphics doesn't "use" Quartz. "Core Graphics" and "Quartz" are just two names for the same thing. They are equivalent.

Second, Apple doesn't promise what technology Core Graphics uses under the hood. They've occasionally touted the acceleration they were able to accomplish by using some particular technology, but that's marketing — marketing to developers, but marketing nonetheless — not a design contract. They reserve the right and ability to change how Core Graphics is implemented, and have done so often. Any developer who writes code which depends on the specific implementation is risking having their code break with future updates to the OS. Developers should only rely on the design contract in the documentation and headers.

It is very likely that Core Graphics is already using Metal under the hood. It makes no difference to you as a developer or user whether it is or isn't.

Finally, Core Graphics has not been deprecated. That means that there's no reason to expect it to go away, break, or lose functionality any time soon.

Core Graphics Performance on iOS

Core Graphics work is performed by the CPU. The results are then pushed to the GPU. When you call setNeedsDisplay you indicate that the drawing work needs to occur afresh.
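
As a rough sketch of that cycle (the view and property names below are purely illustrative), a Core Graphics-backed view redraws on the CPU every time it is invalidated:

    import UIKit

    // Illustrative only: every time `progress` changes, setNeedsDisplay()
    // schedules a fresh CPU pass through draw(_:); the rasterized result
    // is then handed to the GPU for compositing.
    final class GaugeView: UIView {
        var progress: CGFloat = 0 {
            didSet { setNeedsDisplay() }   // mark the backing store as stale
        }

        override func draw(_ rect: CGRect) {
            guard let ctx = UIGraphicsGetCurrentContext() else { return }
            ctx.setFillColor(UIColor.systemBlue.cgColor)
            // This fill is rasterized by the CPU into the layer's backing store.
            ctx.fill(CGRect(x: 0, y: 0,
                            width: bounds.width * progress,
                            height: bounds.height))
        }
    }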

Assuming that many of your objects retain a consistent shape and merely move around or rotate you should simply call setNeedsLayout on the parent view, then push the latest object positions in that view's layoutSubviews, probably directly to the center property. Merely adjusting positions does not cause a thing to need to be redrawn; the compositor will simply ask the GPU to reproduce the graphic it already has at a different position.
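
A minimal sketch of that pattern (the view and model names are hypothetical):

    import UIKit

    // Hypothetical sketch: each sprite view keeps the content it already drew;
    // moving it only changes where the compositor places the existing texture.
    final class BoardView: UIView {
        var spriteViews: [UIView] = []
        var positions: [CGPoint] = []       // latest model positions

        func positionsDidChange(_ newPositions: [CGPoint]) {
            positions = newPositions
            setNeedsLayout()                // cheap: no redraw requested
        }

        override func layoutSubviews() {
            super.layoutSubviews()
            for (view, point) in zip(spriteViews, positions) {
                view.center = point         // compositor-only work, no draw(_:)
            }
        }
    }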

A more general solution for games might be to ignore center, bounds and frame other than for initial setup. Simply push the affine transforms you want, probably created using some combination of the CGAffineTransform helper functions. That'll allow you to arbitrarily reposition, rotate and scale your objects without CPU intervention — it'll all be GPU work.
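
Something along these lines, assuming a view whose content has already been rendered once (the helper function is hypothetical):

    import UIKit

    // Sketch: position, rotate and scale a view purely through its transform.
    // The content is rasterized once; the GPU applies the transform when
    // compositing, so no CPU drawing happens per frame.
    func place(_ view: UIView, at position: CGPoint, angle: CGFloat, scale: CGFloat) {
        view.transform = CGAffineTransform.identity
            .translatedBy(x: position.x, y: position.y)
            .rotated(by: angle)
            .scaledBy(x: scale, y: scale)
    }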

If you want even more control then each view has a CALayer with its own affineTransform, but layers also have a sublayerTransform that combines with the transforms of their sublayers. So if you're interested in 3D, the easiest way is to load a suitable perspective matrix as the sublayerTransform on the superlayer and then push suitable 3D transforms to the sublayers or subviews.
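
A sketch of that setup (the container/card names and the eye distance are placeholders):

    import UIKit

    // Sketch: the superlayer's sublayerTransform supplies the perspective
    // (the m34 term); each sublayer then gets its own 3D rotation, and the
    // GPU composites the result.
    func applyPerspective(to container: UIView, cards: [UIView]) {
        var perspective = CATransform3DIdentity
        perspective.m34 = -1.0 / 500.0          // "eye distance", tune to taste
        container.layer.sublayerTransform = perspective

        for (index, card) in cards.enumerated() {
            let angle = CGFloat(index) * .pi / 8
            card.layer.transform = CATransform3DMakeRotation(angle, 0, 1, 0) // about the Y axis
        }
    }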

There's a single obvious downside to this approach: if you draw once and then scale up, you'll be able to see the pixels. You can adjust your layer's contentsScale in advance to try to mitigate that, but otherwise you're just going to see the natural consequence of allowing the GPU to proceed with compositing. There's a magnificationFilter property on the layer if you want to switch between linear and nearest filtering; linear is the default.
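
For example, something along these lines (the maxScale parameter is hypothetical; contentsScale normally just matches the screen scale):

    import UIKit

    // Sketch: render the content at a higher density up front so that scaling
    // it up later doesn't expose pixels, and pick the magnification filter.
    func prepareForUpscaling(_ view: UIView, maxScale: CGFloat) {
        view.layer.contentsScale = UIScreen.main.scale * maxScale
        view.layer.magnificationFilter = .linear   // the default; .nearest for a pixelated look
        view.setNeedsDisplay()                     // redraw once at the new density
    }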

Why is UIKit drawing not accelerated by using the graphics processor 'directly'?

To add a few more comments to what Steven wrote:

All drawing on OS X and iOS is eventually done by OpenGL. But there are two ways the drawing of a line can be done:

  1. One is to render the line into a rasterized image on the CPU, and then send the resulting rasterized image to the GPU to show it.

  2. Another is to send the drawing command to the GPU so that the GPU draws it to a rasterized image.

Either way, blending, animation, etc. then work on the resulting rasterized image in the GPU.

If you use OpenGL manually, 2 is what you usually do. I'm not sure which way UIKit drawing such as UIBezierPath takes, but the OS X counterpart, AppKit, uses method 1 unless you opt in to Quartz GL (which was called Quartz 2D Extreme in the past):

  1. Usually, AppKit draws things down to a rasterized image and sends it to the GPU.
  2. With Quartz GL turned on, AppKit sends the drawing commands to the GPU.

But Quartz GL is not turned on by default, due to various technical reasons, which are detailed in the (always fantastic) Ars Technica articles by John Siracusa. See his discussions for 10.4 and for 10.5.

Here is one piece of official documentation on Quartz GL.

Why are OpenGL ES and cocos2D faster than Cocoa Touch / iOS frameworks itself?

cocos2D is built on top of OpenGL. When creating a sprite in cocos2D, you are actually creating a 3D model and applying a texture to it. The 3D model is just a flat square and the camera is always looking straight at it which is why it all appears flat and 2D. But this is why you can do things like scaling and rotating sprites easily - all you are really doing is rotating the 2D square (well, two triangles really) or moving them closer or further away from the camera. But Cocos2D handles all that for you.
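
To make the "flat square is really two triangles" point concrete, here is an illustrative sketch of the vertex data a sprite boils down to (the types and field names are not cocos2D's actual ones):

    // Illustrative only: the geometry behind a "2D" sprite. A unit quad is two
    // triangles, each vertex carrying a position and a texture coordinate.
    struct Vertex {
        var x: Float, y: Float      // position in the sprite's local space
        var u: Float, v: Float      // texture coordinate
    }

    let spriteQuad: [Vertex] = [
        // triangle 1
        Vertex(x: 0, y: 0, u: 0, v: 0),
        Vertex(x: 1, y: 0, u: 1, v: 0),
        Vertex(x: 1, y: 1, u: 1, v: 1),
        // triangle 2
        Vertex(x: 0, y: 0, u: 0, v: 0),
        Vertex(x: 1, y: 1, u: 1, v: 1),
        Vertex(x: 0, y: 1, u: 0, v: 1),
    ]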

OpenGL is designed from the start to pump out 3D graphics very, very quickly. So it is designed to handle shoving points and triangles around. This is then enhanced by 3D rendering hardware which it can use specifically for this. As this is all it does, it can be very optimised for doing all the maths on the points that build up the objects and mapping textures onto those objects. It doesn't have to worry about handling touches or other system things that Cocoa does.

Cocoa Touch doesn't use OpenGL. It may use some hardware acceleration, but it isn't designed for that - it's designed for creating 2D buttons, etc. What it does, it does well, but it has lots of layers to pass through to do what it needs to do, which doesn't make it as efficient as something designed just for graphics (OpenGL).

  • OpenGL is the fastest.
  • cocos2D is slightly slower, but only because there are some wrappers to make your life easier. If you were to do the same thing in raw OpenGL, you might get it faster, but at the cost of flexibility.
  • Core Animation is the slowest.

But they all have their uses and are excellent in their individual niche areas.

Most efficient way to draw part of an image in iOS

I guess you are doing this to display part of an image on the screen, because you mentioned UIImageView. And optimization problems always need to be defined specifically.


Trust Apple for Regular UI stuff

Actually, UIImageView with clipsToBounds is one of the fastest/simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.

Or you can try putting the UIImageView inside of an empty UIView and set clipping on the container view. With this technique, you can transform your image freely by setting the transform property in 2D (scaling, rotation, translation).

If you need 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly will give you very little extra performance, usually not a considerable amount.
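
A minimal sketch of the container-view technique described above (the image and visible rect are placeholders):

    import UIKit

    // Sketch: clip a rectangular region of an image by offsetting a UIImageView
    // inside a clipping container; the container can then be transformed freely.
    func clippedImageView(_ image: UIImage, visibleRect: CGRect) -> UIView {
        let container = UIView(frame: CGRect(origin: .zero, size: visibleRect.size))
        container.clipsToBounds = true              // or container.layer.masksToBounds = true

        let imageView = UIImageView(image: image)
        imageView.frame = CGRect(x: -visibleRect.minX,
                                 y: -visibleRect.minY,
                                 width: image.size.width,
                                 height: image.size.height)
        container.addSubview(imageView)

        // 2D transforms are then free (GPU compositing only):
        container.transform = CGAffineTransform(rotationAngle: .pi / 12)
        return container
    }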

Anyway, you need to know all of the low-level details to use them properly for optimization.


Why is that one of the fastest ways?

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is accelerated by the GPU.

So if you use them properly (I mean, within the designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images to display, you'll get acceptable performance with a UIView implementation because it can get full acceleration from the underlying OpenGL (which means GPU acceleration).

However, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game, you should use OpenGL directly, because CALayer lacks many options for optimization at lower levels. Anyway, at least for optimization of UI stuff, it's incredibly hard to do better than Apple.


Why is your method slower than UIImageView?

What you need to know about here is GPU acceleration. On all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.

IMO, CGImage drawing methods are not implemented with the GPU.
I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure about this. Anyway, I believe CGImage is implemented on the CPU because:

  1. Its API looks like it was designed for the CPU, with features such as a bitmap editing interface and text drawing. They don't fit a GPU interface very well.
  2. The bitmap context interface allows direct memory access. That means its backing storage is located in CPU memory. It may be somewhat different on unified memory architectures (and also with the Metal API), but anyway, the initial design intention of CGImage must have been for the CPU.
  3. Many other recently released Apple APIs mention GPU acceleration explicitly. That implies their older APIs were not accelerated. If there's no special mention, it's usually done on the CPU by default.

So it seems to be done on the CPU. Graphics operations done on the CPU are a lot slower than on the GPU.
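
To illustrate the direct memory access mentioned in point 2 above, a bitmap context can be created directly on top of a buffer you allocate yourself, which only makes sense if the pixels live in CPU memory (a sketch, not a performance recommendation):

    import CoreGraphics

    // Sketch: a CGContext drawing straight into caller-owned CPU memory.
    let width = 64, height = 64, bytesPerPixel = 4
    var pixels = [UInt8](repeating: 0, count: width * height * bytesPerPixel)

    pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * bytesPerPixel,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return }
        ctx.setFillColor(red: 1, green: 0, blue: 0, alpha: 1)
        ctx.fill(CGRect(x: 0, y: 0, width: width, height: height))
    }
    // `pixels` now holds the raw RGBA bytes the CPU just wrote.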

Simply clipping an image and compositing image layers are very simple and cheap operations for the GPU (compared to the CPU), so you can expect the UIKit library to utilize this, because the whole of UIKit is implemented on top of OpenGL.

  • Here's another thread about whether Core Graphics on iOS uses OpenGL or not: iOS: is Core Graphics implemented on top of OpenGL?

About Limitations

Because optimization is a kind of micro-management work, specific numbers and small facts are very important. What counts as a medium size? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will be degraded greatly (I think UIImageView is optimized for images within the limits).
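
If you want the actual limit on a given device rather than a guess, you can ask OpenGL ES for it (a sketch; a context must be made current first, and note that OpenGL ES itself is deprecated on recent iOS versions):

    import OpenGLES

    // Sketch: query the driver's real texture-size limit.
    func maxTextureSize() -> GLint {
        guard let context = EAGLContext(api: .openGLES2) else { return 0 }
        _ = EAGLContext.setCurrent(context)
        var size: GLint = 0
        glGetIntegerv(GLenum(GL_MAX_TEXTURE_SIZE), &size)
        return size
    }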

If you need to display huge images with clipping, you have to use another optimization like CATiledLayer and that's a totally different story.
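
A minimal sketch of the CATiledLayer approach (the image property is a placeholder; tile size and levels of detail would need tuning):

    import UIKit

    // Sketch: a view backed by CATiledLayer, so only the visible tiles of a
    // huge image are drawn, on background threads, as they become visible.
    final class HugeImageView: UIView {
        var image: UIImage?

        override class var layerClass: AnyClass { CATiledLayer.self }

        override func draw(_ rect: CGRect) {
            // Called per tile; Core Graphics clips to the tile's rect,
            // so only the needed portion is actually rasterized.
            image?.draw(in: bounds)
        }
    }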

And don't go to OpenGL unless you want to know every detail of OpenGL. It needs a full understanding of low-level graphics and at least 100 times more code.


About the Future

Though it is not very likely to happen, CGImage stuff (or anything else) doesn't need to be stuck on the CPU only. Don't forget to check the base technology of the API you're using. Still, GPU stuff is a very different monster from the CPU, so API designers usually mention it explicitly and clearly.

iPhone - Custom dynamic drawing using touch, like in the Draw Something game: OpenGL or Core Graphics?

Have you considered using something like Cocos2D to achieve the OpenGL drawing? It's quite intuitive to pick up.

I'm sorry, I have no idea how easy/hard/performance-heavy using Core Graphics would be.

CoreGraphics Alternative?

MonkVG is an OpenVG 1.1-like vector graphics API implementation, currently using an OpenGL ES backend, that should be compatible with any hardware that supports OpenGL ES 2.0, which includes most iOS and Android devices.

This is an open source BSD licensed project that is in active development. At the time of this writing it is in a very early pre-release state (very minimal features implemented). Contributors and sponsors welcome.

It can be found on GitHub: http://github.com/micahpearlman/MonkVG

Also, there are SVG and SWF (Flash) renderers built on top of MonkVG:

MonkSVG
MonkSWF


