Drawing pixels on the screen using CoreGraphics in Swift

There are a couple of changes required:
1. The crash is caused by a memory overrun.
2. You are creating an image from the newly created bitmap context and then drawing it back into that same context, instead of into the current drawing context.

Use this modified drawRect function:

override func drawRect(rect: CGRect) {
    let width = 200
    let height = 300
    let boundingBox = CGRectMake(0, 0, CGFloat(width), CGFloat(height))

    // createBitmapContext is the helper from the original question; it must
    // allocate a 32-bit bitmap of exactly width * height pixels.
    let context = createBitmapContext(width, height)

    // Fill the raw pixel buffer: 4 bytes per pixel, in the byte order the
    // bitmap context was created with (here B, G, R, A).
    let data = CGBitmapContextGetData(context)
    let pixels = UnsafeMutablePointer<CUnsignedChar>(data)
    var n = 0
    for var j = 0; j < height; j++ {
        for var i = 0; i < width; i++ {
            pixels[n++] = 0   // B
            pixels[n++] = 255 // G
            pixels[n++] = 0   // R
            pixels[n++] = 255 // A
        }
    }

    // Create an image from the offscreen bitmap and draw it into the
    // *current* drawing context, not back into the bitmap context.
    let image = CGBitmapContextCreateImage(context!)
    if let currentContext = UIGraphicsGetCurrentContext() {
        // Optional debugging aid: dump the generated image to disk
        // (this path only exists when running in the simulator).
        UIImagePNGRepresentation(UIImage(CGImage: image!))?.writeToFile("/Users/admin/Desktop/aaaaa.png", atomically: true)
        CGContextDrawImage(currentContext, boundingBox, image)
    }
}
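For reference, here is the same approach against the current Swift and Core Graphics API (a sketch, assuming a UIView subclass; it creates its own RGBA bitmap context instead of calling the question's createBitmapContext helper):

override func draw(_ rect: CGRect) {
    let width = 200
    let height = 300
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // 8 bits per component, 4 bytes per pixel, bytes in R, G, B, A order.
    guard let bitmapContext = CGContext(data: nil,
                                        width: width,
                                        height: height,
                                        bitsPerComponent: 8,
                                        bytesPerRow: width * 4,
                                        space: colorSpace,
                                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
          let data = bitmapContext.data else { return }

    // Fill every pixel with opaque green.
    let pixels = data.bindMemory(to: UInt8.self, capacity: width * height * 4)
    var n = 0
    for _ in 0..<(width * height) {
        pixels[n] = 0;   n += 1  // R
        pixels[n] = 255; n += 1  // G
        pixels[n] = 0;   n += 1  // B
        pixels[n] = 255; n += 1  // A
    }

    // Make a CGImage from the offscreen bitmap and draw it into the current context.
    if let image = bitmapContext.makeImage(),
       let currentContext = UIGraphicsGetCurrentContext() {
        currentContext.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    }
}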

Is this code drawing at the point or pixel level? How to draw retina pixels?

[Note: the code in the GitHub example calculates the gradient on a points basis, not on a pixel basis. -Fattie]

The code is working in pixels. First, it fills a simple raster bitmap buffer with the pixel color data. That obviously has no notion of an image scale or unit other than pixels. Next, it creates a CGImage from that buffer (in a bit of an odd way). CGImage also has no notion of a scale or unit other than pixels.

The issue comes in where the CGImage is drawn. Whether scaling is done at that point depends on the graphics context and how it has been configured. There's an implicit transform in the context that converts from user space (points, more or less) to device space (pixels).

The -drawInContext: method ought to convert the rect using CGContextConvertRectToDeviceSpace() to get the rect for the image. Note that the unconverted rect should still be used for the call to CGContextDrawImage().

So, for a 2x Retina display context, the original rect will be in points. Let's say 100x200. The image rect will be doubled in size to represent pixels, 200x400. The draw operation will draw that to the 100x200 rect, which might seem like it would scale the large, highly-detailed image down, losing information. However, internally, the draw operation will scale the target rect to device space before doing the actual draw, and fill a 200x400 pixel area from the 200x400 pixel image, preserving all of the detail.
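A rough sketch of that flow, assuming a CALayer subclass; makePixelImage(width:height:) is a hypothetical helper that fills a CGImage pixel by pixel:

override func draw(in ctx: CGContext) {
    // bounds is in points; convert it to device space to learn how many pixels are needed.
    let deviceRect = ctx.convertToDeviceSpace(bounds)
    let pixelWidth = Int(deviceRect.width)
    let pixelHeight = Int(deviceRect.height)

    guard let image = makePixelImage(width: pixelWidth, height: pixelHeight) else { return }

    // Draw with the unconverted rect: the context's transform maps points back to pixels,
    // so on a 2x display the 200x400 pixel image fills a 200x400 pixel area and keeps its detail.
    ctx.draw(image, in: bounds)
}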

Drawing on the retina display using CoreGraphics - Image pixelated

You need to replace UIGraphicsBeginImageContext with

if (UIGraphicsBeginImageContextWithOptions != NULL) {
    // Available on iOS 4+: a scale of 0.0 means "use the device's screen scale".
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
    // Fallback for iOS 3.x, where only the 1x variant exists.
    UIGraphicsBeginImageContext(size);
}

UIGraphicsBeginImageContextWithOptions was introduced in iOS 4. If you're going to run this code on iOS 3.x devices, you need to weak-link the UIKit framework. If your deployment target is iOS 4 or higher, you can just call UIGraphicsBeginImageContextWithOptions without any additional checking.
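In Swift, with an iOS 4 or later deployment target, the check disappears; a minimal sketch, assuming size is already defined:

// A scale of 0.0 means "use the main screen's scale" (2.0 or 3.0 on Retina devices).
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
// ... draw here ...
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()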

How to render to offscreen bitmap then blit to screen using Core Graphics

To render into an offscreen context and save it as a CGImageRef:

void *bitmapData = calloc(height, bytesPerLine);
CGContextRef offscreen = CGBitmapContextCreate(bitmapData, ...);  // width, height, bitsPerComponent, bytesPerLine, colorSpace, bitmapInfo
// draw stuff into offscreen
CGImageRef image = CGBitmapContextCreateImage(offscreen);
CFRelease(offscreen);
free(bitmapData);

To draw it on the screen:

- (void)drawRect:(CGRect)rect {
    // image is the CGImageRef created above, kept around in an ivar or property.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, rect, image);
}

You could also just save the image in the view's layer's contents property (view.layer.contents = image), or use a UIImageView.
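A Swift version of the same pattern might look like this (a sketch, assuming a UIView subclass; passing nil for data lets Core Graphics own the pixel buffer, so the calloc/free pair is not needed):

class BlitView: UIView {
    private var cachedImage: CGImage?

    func renderOffscreen(width: Int, height: Int) {
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard let offscreen = CGContext(data: nil,
                                        width: width,
                                        height: height,
                                        bitsPerComponent: 8,
                                        bytesPerRow: width * 4,
                                        space: colorSpace,
                                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return }

        // Draw whatever you need into the offscreen context.
        offscreen.setFillColor(UIColor.blue.cgColor)
        offscreen.fill(CGRect(x: 0, y: 0, width: width, height: height))

        cachedImage = offscreen.makeImage()
        setNeedsDisplay()
    }

    // Blit the cached image to the screen.
    override func draw(_ rect: CGRect) {
        guard let image = cachedImage,
              let context = UIGraphicsGetCurrentContext() else { return }
        context.draw(image, in: bounds)
    }
}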

Generate Image from Pixel Array (fast)

The most performant way to do that is to use a Metal compute function.

Apple has good documentation illustrating GPU programming (see the sketch after these links):

  • Performing Calculations on a GPU

  • Processing a Texture in a Compute Function
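For a concrete starting point, a minimal compute setup in Swift might look like this (a sketch only; the fillGreen kernel, the 256x256 size, and the 8x8 threadgroup are arbitrary choices for illustration):

import Metal

let kernelSource = """
#include <metal_stdlib>
using namespace metal;

kernel void fillGreen(texture2d<float, access::write> output [[texture(0)]],
                      uint2 gid [[thread_position_in_grid]]) {
    if (gid.x >= output.get_width() || gid.y >= output.get_height()) { return; }
    output.write(float4(0.0, 1.0, 0.0, 1.0), gid);  // opaque green
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("Metal is not available") }

let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "fillGreen")!)

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                          width: 256, height: 256,
                                                          mipmapped: false)
descriptor.usage = [.shaderWrite, .shaderRead]
let texture = device.makeTexture(descriptor: descriptor)!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setTexture(texture, index: 0)

// One thread per pixel, rounded up to whole 8x8 threadgroups.
let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)
let groups = MTLSize(width: (256 + 7) / 8, height: (256 + 7) / 8, depth: 1)
encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
// texture now holds the generated pixels; display it with an MTKView or wrap it with CIImage(mtlTexture:options:).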

Best way to change pixels in iOS, swift

Manipulating individual pixels, copying the entire memory buffer into a CGContext, and then creating a UIImage from that context is going to be inefficient, as you are discovering.

You can continue to improve and optimize a CoreGraphics canvas approach by being more efficient about what part of your offscreen is copied onto screen. You can detect the pixels that have changed and only copy the minimum bounding rectangle of those pixels onto screen. This approach may be good enough for your use case where you are only filling in areas with colors.

Instead of copying the entire offscreen, copy just the changed area:

if let cgImage = image.cgImage {
    self.context?.draw(cgImage, in: CGRect(x: diffX, y: diffY, width: diffWidth, height: diffHeight))
}

It is up to you to determine the changed rectangle and when to update the screen.
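One way to structure that is to accumulate a dirty rectangle as pixels change and only push that patch to the screen (a sketch, assuming the offscreen image and the destination context share the same pixel coordinate space; the names are placeholders):

// Bounding box of everything touched since the last screen update.
var dirtyRect: CGRect = .null

func markPixelChanged(x: Int, y: Int) {
    dirtyRect = dirtyRect.union(CGRect(x: x, y: y, width: 1, height: 1))
}

// Called from a CADisplayLink (or after each stroke): copy only the changed area.
func flushChanges(from image: UIImage, into context: CGContext) {
    guard !dirtyRect.isNull, let cgImage = image.cgImage else { return }
    if let patch = cgImage.cropping(to: dirtyRect) {
        context.draw(patch, in: dirtyRect)
    }
    dirtyRect = .null
}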

Here is an example of a painting app that uses CoreGraphics, CoreImage and CADisplayLink. The code is a bit old, but the concepts are still valid and will serve as a good starting point. You can see how the changes are accumulated and drawn to the screen using a CADisplayLink.

If you want to introduce various types of ink and paint effects, a CoreGraphics approach is going to be more challenging. You will want to look at Apple's Metal API. A good tutorial is here.

Click and drag the CGContext drawing to position it anywhere on the screen

Check out this library: https://github.com/luiyezheng/JLStickerTextView
I think it is what you are looking for.


