How to Create a CGBitmapContext That Works for Retina Display Without Wasting Space on a Regular Display

How do I create a CGBitmapContext that works for a Retina display without wasting space on a regular display?

After doing more research, I found the following solution:

If you have to use CGBitmapContextCreate, two steps will give the context a size and coordinate system tailored to either a standard or a Retina display:

CGFloat scaleFactor = [[UIScreen mainScreen] scale]; // 1.0 on standard displays, 2.0 on Retina
CGSize size = CGSizeMake(768, 768);                  // size in points

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width * scaleFactor,     // width in pixels
                                             size.height * scaleFactor,    // height in pixels
                                             8,                            // bits per component
                                             size.width * scaleFactor * 4, // bytes per row (RGBA)
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace); // the context retains the color space

// Scale the coordinate system so drawing code can keep working in points.
CGContextScaleCTM(context, scaleFactor, scaleFactor);

The sample creates a 768 x 768 point region. On the new (third-generation) iPad that is a 1536 x 1536 pixel bitmap; on an iPad 2 it is 768 x 768 pixels.

A key point is that CGContextScaleCTM(context, scaleFactor, scaleFactor); adjusts the coordinate system, so that any drawing done with Core Graphics calls such as CGContextMoveToPoint automatically comes out right whether the device has a standard or a Retina display.
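If you then need a UIImage out of that context, a minimal sketch (assuming the context and scaleFactor variables from the snippet above) looks like this:

CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage
                                     scale:scaleFactor
                               orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
CGContextRelease(context);
// image.size stays 768 x 768 points on both devices; the backing CGImage is
// 1536 x 1536 pixels on Retina and 768 x 768 pixels on a standard display.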


One more note: UIGraphicsBeginImageContext(CGSizeMake(300, 300)); creates a 300 x 300 pixel bitmap even on a Retina display, while UIGraphicsBeginImageContextWithOptions(CGSizeMake(300, 300), NO, 0.0); creates a 600 x 600 pixel bitmap on a Retina display. Passing 0.0 for the scale tells the call to pick the proper size for a standard or Retina display automatically.
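For comparison, here is a small sketch of the UIGraphicsBeginImageContextWithOptions route, which is usually simpler than building the bitmap context by hand (the red circle is just placeholder drawing, not from the original answer):

UIGraphicsBeginImageContextWithOptions(CGSizeMake(300, 300), NO, 0.0); // 0.0 = use the screen's scale
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
CGContextFillEllipseInRect(ctx, CGRectMake(10, 10, 280, 280));
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// result.scale is 2.0 on Retina and 1.0 otherwise; result.size stays 300 x 300 points.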

Retina display device width does not autofit on iPad when using device-width

Problem solved!

I had to use this meta tag:

<meta name="viewport" content="width=device-width, maximum-scale=1.0, user-scalable=1" />

iOS: How do I support Retina Display with CGLayer?

This is how to draw a CGLayer correctly for all resolutions.

  1. When first creating the layer, you need to calculate the correct bounds by multiplying the dimensions by the scale:

    int width = 25;
    int height = 25;
    CGFloat scale = [self contentScaleFactor];
    CGRect bounds = CGRectMake(0, 0, width * scale, height * scale);
    CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);
  2. You then need to set the correct scale for your layer context:

    CGContextScaleCTM(layerContext, scale, scale);
  3. If the current device has a retina display, all drawing made to the layer will now be drawn twice as large.

  4. When you finally draw the contents of your layer, make sure you use CGContextDrawLayerInRect and supply the unscaled CGRect:

    CGRect bounds = CGRectMake(0, 0, width, height);
    CGContextDrawLayerInRect(context, bounds, layer);

That's it!
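Putting the steps together, a minimal sketch of a drawRect: that uses such a layer could look like the following (the ellipse is just placeholder drawing; it is not from the original answer):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    int width = 25;
    int height = 25;
    CGFloat scale = [self contentScaleFactor];

    // Step 1: create the layer at device-pixel size.
    CGLayerRef layer = CGLayerCreateWithContext(context, CGSizeMake(width * scale, height * scale), NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);

    // Step 2: scale the layer's context so drawing can stay in points.
    CGContextScaleCTM(layerContext, scale, scale);

    // Placeholder drawing into the layer.
    CGContextSetFillColorWithColor(layerContext, [[UIColor blueColor] CGColor]);
    CGContextFillEllipseInRect(layerContext, CGRectMake(0, 0, width, height));

    // Step 4: draw the layer into the view using the unscaled rect.
    CGContextDrawLayerInRect(context, CGRectMake(0, 0, width, height), layer);

    CGLayerRelease(layer);
}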

OpenGL ES 2.0 textures for retina display?

The answer to this question can be found in the question above: "How to create a CGBitmapContext which works for Retina display and not wasting space for regular display?"

What I basically did was multiply the texture and buffer dimensions by the screen's scale factor. Because that alone left the drawing filling only 1/4 of the texture, I had to scale the context by the scale factor as well:

CGContextScaleCTM(context, scaleFactor, scaleFactor);
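As a rough sketch of what that looks like end to end (the 256-point texture size and the GL upload are illustrative assumptions, not code from the original answer):

CGFloat scaleFactor = [[UIScreen mainScreen] scale];
size_t texWidth  = (size_t)(256 * scaleFactor); // texture width in pixels
size_t texHeight = (size_t)(256 * scaleFactor); // texture height in pixels

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *data = calloc(texWidth * texHeight * 4, 1);
CGContextRef context = CGBitmapContextCreate(data, texWidth, texHeight, 8,
                                             texWidth * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Without this, drawing in point coordinates fills only 1/4 of the Retina-sized texture.
CGContextScaleCTM(context, scaleFactor, scaleFactor);

// ... draw the texture content here, in point coordinates ...

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)texWidth, (GLsizei)texHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, data);

CGContextRelease(context);
free(data);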

what is the positioning regularity of coordinates in core graphics in iphone programming?

EDITS/ADDITIONS AT BOTTOM

It sounds like you might be confused about how the coordinate system maps to the pixel grid. When you're drawing into a CGContext, you're drawing into a "continuous" floating-point-based plane. This continuous plane is mapped onto the pixel-grid of the display such that integer values fall on the lines between screen pixels.

In theory, at default scale for any given device (so 1.0 for non-retina, 2.0 for retina), if you drew a rect from 0,0 -> 320,480, with a 100% opacity black stroke with a width of 1.0pt, you could expect the following results:

  • On non-retina, you would have a 1 pixel wide rect around the outside of the screen with 50% opacity.
  • On retina, you would have a 1 pixel wide rect around the outside of the screen with 100% opacity.

This stems from the fact that the zero line is right at the edge of the display, between the first row/col of pixels in the display and the first row/col pixels NOT in the display. When you draw a 1pt line at default scale in that situation, it draws along the center of the path, thus half of it will be drawn off the display, and in your continuous plane, you'll have a line whose width extends from -0.5pt to 0.5pt. On a non-retina display, that will become a 1px line at 50% opacity, and on retina display that will be rendered as a 1px line with 100% opacity.
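To make that concrete, here is a small illustration (mine, not from the original answer) of getting a crisp, full-opacity 1-pixel border on a non-Retina screen by stroking along pixel centers instead of pixel edges:

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
CGContextSetLineWidth(ctx, 1.0);
// Inset the 320 x 480 rect by half the line width so the 1pt stroke covers
// exactly one column/row of pixels on a 1.0-scale display.
CGContextStrokeRect(ctx, CGRectMake(0.5, 0.5, 319.0, 479.0));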

EDITS

OP said:

My goal is to draw some shapes on retina display and that each line
will be 1 pixel width and 100% opacity.

There is nothing about this goal that requires antialiasing to be turned off. If you are seeing alpha blending on horizontal and vertical lines with antialiasing turned on, then you are not drawing the lines in the right places or at the right sizes.

And in that case in function: CGContextMoveToPoint(context, X, Y);
between 2 pixel on X-axis the engine will choose the right one, and on
Y-axis it will choose the higher one. But in the function:
CGContextFillRect(context, someRect); It will fill like it maps on
pixel grid (1 to 1).

The reason you are seeing this confusing behavior is that you have antialiasing turned off. The key to being able to see and understand what's going on here is to leave antialiasing on, and then make the necessary changes until you get it just right. The easiest way to get started is to leave the CGContext's transform unchanged from the default and change the values you're passing to the draw routines. It's true that you can also do some of this work by transforming the CGContext, but that adds a step of math that's done on every coordinate you pass in to any CG routine, and which you can't step into in the debugger, so I highly recommend that you start with the standard transform and with AA left on. You will want to develop a full understanding of how CG drawing works before attempting to mess with the context transform.

Is there some Apple defined method to map graphics on pixel and not on
lines between pixels?

In a word, "no," because CGContexts are not universally bitmaps (for instance, you could be drawing into a PDF -- you can't know from asking the CGContext). The plane you're drawing into is intrinsically a floating-point plane. That's just how it works. It is completely possible to achieve 100% pixel-accurate bitmap drawing using the floating point plane, but in order to do so, you have to understand how this stuff actually works.

You might be able to get bootstrapped faster by taking the default CGContext that you're given and making this call:

CGContextTranslateCTM(context, 0.5, 0.5); 

What this will do is add (0.5, 0.5) to every point you ever pass in to all subsequent drawing calls (until you call CGContextRestoreGState). If you make no other changes to the context, that should make it such that when you draw a line from 0,0 -> 10,0 it will, even with antialiasing on, be perfectly pixel aligned. (See my initial answer above to understand why this is the case.)
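A minimal sketch of that trick, with the save/restore pair made explicit (assuming context is the context handed to your drawing code):

CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.5, 0.5); // shift from pixel edges to pixel centers

// The 1pt stroke is now centered on a pixel row instead of straddling two rows.
// (Line endings are a separate concern; see the line-cap discussion further down.)
CGContextSetLineWidth(context, 1.0);
CGContextMoveToPoint(context, 0.0, 0.0);
CGContextAddLineToPoint(context, 10.0, 0.0);
CGContextStrokePath(context);

CGContextRestoreGState(context); // undo the translation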

Using this 0.5, 0.5 trick may get you started faster, but if you want to work with pixel-accurate CG drawing, you really need to get your head around how floating-point-based graphics contexts work, and how they relate to the bitmaps that may (or may not) back them.

Turning AA off and then nudging values around until they're "right" is just asking for trouble later. For instance, say some day UIKit passes you a context that has a different flip in its transform? In that case, one of your values might round down where it used to round up (because now it's being multiplied by -1.0 when the flip is applied). The same problem can happen with contexts that have been translated into a negative quadrant. Furthermore, you don't know (and can't strictly rely on, version to version) what rounding rule CoreGraphics is going to use when you turn AA off, so if you end up being handed contexts with different CTMs for whatever reason, you'll get inconsistent results.

For the record, the time when you might want to turn antialiasing off would be when you're drawing non-rectilinear paths and you don't want CoreGraphics to alpha blend the edges. You may indeed want that effect, but turning off AA makes it much harder to learn, understand and be sure of what's going on, so I highly recommend leaving it on until you fully understand this stuff, and have gotten all your rectilinear drawing perfectly pixel aligned. Then flip the switch to get rid of AA on non-rectilinear paths.
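For completeness, when you do reach the point of deliberately disabling antialiasing for those non-rectilinear paths, the call is CGContextSetShouldAntialias:

CGContextSetShouldAntialias(context, NO);  // disable AA for subsequent drawing
// ... stroke/fill the non-rectilinear paths ...
CGContextSetShouldAntialias(context, YES); // turn it back on afterwards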

Turning off antialiasing does NOT magically allow a CGContext to be addressed as a bitmap. It takes a floating point plane and adds a rounding step that you can't see (code-wise), can't control, and which is hard to understand and predict. In the end, you still have a floating point plane.

Just a quick example to help clarify:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);

    BOOL doAtDefaultScale = YES;
    if (doAtDefaultScale)
    {
        // Do it by using the right values for the default context
        CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
        CGContextSetLineWidth(ctx, 0.5); // We're working in scaled pixels. 0.5pt => 1.0px
        CGContextStrokeRect(ctx, CGRectMake(25.25, 25.25, 50, 50));
    }
    else
    {
        // Do it by transforming the context
        CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
        CGContextScaleCTM(ctx, 0.5, 0.5); // Back out the default scale
        CGContextTranslateCTM(ctx, 0.5, 0.5); // Offset from edges of pixels to centers of pixels
        CGContextSetLineWidth(ctx, 1.0); // We're working in device pixels now, having backed out the scale.
        CGContextStrokeRect(ctx, CGRectMake(50, 50, 100, 100));
    }
    CGContextRestoreGState(ctx);
}

Make a new single-view application, add a custom view subclass with this drawRect: method, and set the default view in the .xib file to use your custom class. Both branches of this if statement produce the same results on a Retina display: a 100x100 device-pixel, non-alpha-blended square. The if branch does it by using the "right" values for the default scale. The else branch does it by backing out the 2x scale and then translating the plane from being aligned to the edges of device pixels to being aligned with the centers of device pixels. Note how the stroke widths are different (scale factors apply to them too). Hope this helps.

OP replied:

But one note, there is some alpha blend, a little bit. This is the
screenshot with 3200x zoom:

No, really. Trust me. There's a reason for this, and it's NOT anti-aliasing being turned on in the context. (Also, I think you mean 3200% zoom, not 3200x zoom -- at 3200x zoom a single pixel wouldn't fit on a 30" display.)

In the example I gave, we were drawing a rect, so we didn't need to think about line endings, since it's a closed path -- the line is continuous. Now that you're drawing a single segment, you do have to think about line endings to avoid alpha blending. This is the "edge of the pixel" vs. "center of the pixel" thing coming back around. The default line cap style is kCGLineCapButt. kCGLineCapButt means that the end of the line starts exactly where you start drawing. If you want it to behave more like pen -- that is to say, if you put a felt-tip pen down, intending to draw a line 10 units to the right, some amount of ink is going to bleed out to the left of the exact point you pointed the pen at -- you might consider using kCGLineCapSquare (or kCGLineCapRound for a rounded end, but for single-pixel-level drawing that will just drive you mad with alpha-blending, since it will calculate the alpha as 1.0 - 0.5/pi). I've overlaid hypothetical pixel grids on this illustration from Apple's Quartz 2D Programming Guide to illustrate how line endings relate to pixels:

Line endings with hypothetical pixel grids overlaid
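As an illustration of the cap math (my own sketch, assuming ctx is the current context): with kCGLineCapSquare the stroke extends past each endpoint by half the line width, so you can pull the endpoints in by that amount and keep the ends pixel-aligned.

CGContextSetLineWidth(ctx, 0.5);            // one device pixel at 2x scale
CGContextSetLineCap(ctx, kCGLineCapSquare); // square caps extend 0.25pt past each end
CGContextMoveToPoint(ctx, 2.25, 2.25);      // both ends pulled in by half the line width...
CGContextAddLineToPoint(ctx, 6.75, 2.25);
CGContextStrokePath(ctx);                   // ...so the ink still covers exactly 2.0 -> 7.0 in X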

But, I digress. Here's an example; consider the following code:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
    CGContextSetLineWidth(ctx, 0.5); // One device pixel at default scale
    CGContextMoveToPoint(ctx, 2.0, 2.25); // Edge of pixel in X, center of pixel in Y
    CGContextAddLineToPoint(ctx, 7.0, 2.25); // Draw a line 5 points (10 device pixels) long
    CGContextStrokePath(ctx); // Stroke it
    CGContextRestoreGState(ctx);
}

Notice here that instead of moving to 2.25, 2.25, I move to 2.0, 2.25. This means I'm at the edge of the pixel in the X dimension, and the center of the pixel in the Y dimension. Therefore, I won't get alpha blending at the ends of the line. Indeed, zoomed to 3200% in Acorn, I see the following:

Non-alpha-blended 10 device pixel line at 3200%

Now, yes, at some point, way beyond the point of caring (for most folks) you may run into accumulated floating point error if you're transforming values or working with a long pipeline of calculations. I've seen error as significant as 0.000001 creep up in very complex situations, and even an error like that can bite you if you're talking about situations like the difference between 1.999999 vs 2.000001, but that is NOT what you're seeing here.

If your goal is only to draw pixel accurate bitmaps, consisting of axis-aligned/rectilinear elements only, there is NO reason to turn off context anti-aliasing. At the zoom levels and bitmap densities we're talking about in this situation, you should easily remain free of almost all problems caused by 1e-6 or smaller magnitude floating point errors. (In fact in this case, anything smaller than 1/256th of one pixel will not have any effect on alpha blending in an 8-bit context, since such error will effectively be zero when quantized to 8 bits.)

Let me take this opportunity to recommend a book: Programming with Quartz: 2D and PDF Graphics in Mac OS X. It's a great read, and covers all this sort of stuff in great detail.

Create a circle or a disk with antialiasing for retina display

The problem is that you are creating a non-retina graphics context when using UIGraphicsBeginImageContext, as mentioned in the documentation:

This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.

Instead you should be using UIGraphicsBeginImageContextWithOptions to create your image context. You can keep passing NO for the opaque parameter if you want an image that supports transparency (same as what you are implicitly doing now).

In most cases you can pass 0.0 for the scale factor. This sets the scale factor to that of the device's main screen. Again, as mentioned in the documentation:

If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.


So, in short, you should create your image context like this:

UIGraphicsBeginImageContextWithOptions(rectForDisk.size, false, 0.0) // false for Swift
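Put together in Objective-C, a minimal sketch might look like this (rectForDisk and the fill color are placeholders for whatever the original code uses):

CGRect rectForDisk = CGRectMake(0, 0, 100, 100); // placeholder size, in points

UIGraphicsBeginImageContextWithOptions(rectForDisk.size, NO, 0.0); // 0.0 = main screen's scale
[[UIColor orangeColor] setFill];
[[UIBezierPath bezierPathWithOvalInRect:rectForDisk] fill];        // stays antialiased on Retina
UIImage *diskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();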

