iOS: Ambiguous Use of init(CGImage)

I solved it on my own.

It turns out I was capitalizing cgImage wrong. The code should really read:

let personciImage = CIImage(cgImage: imageView.image!.cgImage!)

This throws no errors.
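
For contrast, the Swift 2-era spelling that triggers the ambiguity error would have looked like this (reconstructed for illustration, not quoted from the original post):

    // Pre-Swift 3 spelling, which now fails with "ambiguous use of init(CGImage:)":
    // let personciImage = CIImage(CGImage: imageView.image!.CGImage!)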

Ambiguous use of 'init'

I found the solution to my issue: I was using the same method name with different parameters in an Objective-C class

- (instancetype)initWithCoordinate:(CLLocationCoordinate2D)coordinate
Dragable:(BOOL)isDragable
updateCoordinate:(updateCoordinate)updateCoordinate;

and

- (instancetype)initWithCoordinate:(CLLocationCoordinate2D)coordinate
Dragable:(BOOL)isDragable;

I just used NS_SWIFT_NAME to change the names and it works fine:

NS_SWIFT_NAME(init(withUpdateCoordinateAndCoordinate:isDragable:withUpdateCoordinate:));

and

NS_SWIFT_NAME(init(withCoordinate:isDragable:));
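
With those renames, the two initializers no longer collide in Swift. A hypothetical call site (the class name MarkerAnnotation is assumed here, since the original post doesn't name the class):

    let simple = MarkerAnnotation(withCoordinate: coordinate, isDragable: true)

    let updating = MarkerAnnotation(withUpdateCoordinateAndCoordinate: coordinate,
                                    isDragable: true,
                                    withUpdateCoordinate: { newCoordinate in
        print("marker moved to \(newCoordinate)")
    })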

Cut image in pieces Swift 3 / Ambiguous use of init(CGImage:scale:orientation:)

Just wanted to make sure that Rob's comment gets highlighted, since that seems to be the correct answer. To add to it, as of Swift 4 the method signature remains as Rob described.

Rob:
"In Swift 3, the first label to that function is now cgImage:, not CGImage:. See init(cgImage:scale:orientation:)."

For example:

let resultUIImg = UIImage(cgImage: someCGImg!, scale: origUIImg.scale, orientation: origUIImg.imageOrientation)

Swift 3 - Ambiguous use of init

Try this:

    return ImageProcessor.imageFromARGB32Bitmap(Data(bytes: pixelBuffer), width: framebufferwidth, height: framebufferheight)

(Assuming ImageProcessor.imageFromARGB32Bitmap takes Data as its first parameter.)

You don't need to create an UnsafePointer from an Array of UInt8.
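
For example, a minimal sketch (the buffer contents are hypothetical; in later Swift versions Data(pixelBuffer) is the preferred spelling):

    let pixelBuffer: [UInt8] = [0xFF, 0x00, 0x00, 0xFF] // one ARGB32 pixel
    let data = Data(bytes: pixelBuffer)                 // no UnsafePointer needed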

Slow performance of CGImage averaging

There are a few options:

  1. Parallelize the routine:

    You can improve performance with concurrentPerform, which moves the processing to multiple cores. In its simplest form, you can just replace your outer for loop with concurrentPerform:

    extension CGImage {
        func average(with secondImage: CGImage) -> CGImage? {
            guard
                width == secondImage.width,
                height == secondImage.height
            else {
                return nil
            }

            let colorSpace = CGColorSpaceCreateDeviceRGB()
            let bytesPerPixel = 4
            let bitsPerComponent = 8
            let bytesPerRow = bytesPerPixel * width
            let bitmapInfo = RGBA32.bitmapInfo // RGBA32 is the pixel struct from the original question

            guard
                let context1 = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
                let context2 = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
                let buffer1 = context1.data,
                let buffer2 = context2.data
            else {
                return nil
            }

            context1.draw(self, in: CGRect(x: 0, y: 0, width: width, height: height))
            context2.draw(secondImage, in: CGRect(x: 0, y: 0, width: width, height: height))

            let imageBuffer1 = buffer1.bindMemory(to: UInt8.self, capacity: width * height * 4)
            let imageBuffer2 = buffer2.bindMemory(to: UInt8.self, capacity: width * height * 4)

            DispatchQueue.concurrentPerform(iterations: height) { row in // i.e. a parallelized version of `for row in 0 ..< height {`
                var offset = row * bytesPerRow
                for _ in 0 ..< bytesPerRow {
                    let byte1 = imageBuffer1[offset]
                    let byte2 = imageBuffer2[offset]

                    imageBuffer1[offset] = byte1 / 2 + byte2 / 2
                    offset += 1 // increment after the read/write so the first byte of each row isn't skipped
                }
            }

            return context1.makeImage()
        }
    }

    A few other observations:

    • Because you're doing the same calculation on every byte, you might simplify this further, getting rid of casts, shifts, masks, etc. I also moved repetitive calculations out of the inner loop.

    • As a result, I’m using the UInt8 type and iterating through bytesPerRow.

    • FWIW, I’ve defined this as a CGImage extension, which is invoked as:

      let combinedImage = image1.average(with: image2)

    • Right now, we’re striding through the pixels by row in the pixel array. You can play around with changing this to process multiple pixels per iteration of concurrentPerform, though I didn’t see a material change when I did that.

    I found that concurrentPerform was many times faster than the non-parallelized for loop. Unfortunately, the nested for loops are only a small part of the overall processing time of the entire function (e.g. once you include the overhead of building these two pixel buffers, the overall performance is only 40% faster than the non-optimized rendition). On a well-spec’ed 2018 MBP, it processes 10,000 × 10,000 px images in under half a second.

  2. The other alternative is the Accelerate vImage library.

    This library offers a wide variety of image-processing routines and is a good library to familiarize yourself with if you’re going to be processing large images. I don’t know whether its alpha-compositing algorithm is mathematically identical to an “average the byte values” algorithm, but it might be sufficient for your purposes. It has the virtue of replacing your nested for loops with a single API call, and it opens the door to a far wider variety of image compositing and manipulation routines:

    import Accelerate

    extension CGImage {
        func averageVimage(with secondImage: CGImage) -> CGImage? {
            let bitmapInfo: CGBitmapInfo = [.byteOrder32Little, CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)]
            let colorSpace = CGColorSpaceCreateDeviceRGB()

            guard
                width == secondImage.width,
                height == secondImage.height,
                let format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: colorSpace, bitmapInfo: bitmapInfo)
            else {
                return nil
            }

            guard var sourceBuffer = try? vImage_Buffer(cgImage: self, format: format) else { return nil }
            defer { sourceBuffer.free() }

            guard var sourceBuffer2 = try? vImage_Buffer(cgImage: secondImage, format: format) else { return nil }
            defer { sourceBuffer2.free() }

            guard var destinationBuffer = try? vImage_Buffer(width: width, height: height, bitsPerPixel: 32) else { return nil }
            defer { destinationBuffer.free() }

            // Blend with a constant alpha of 127/255, i.e. roughly a 50/50 average of the two images
            guard vImagePremultipliedConstAlphaBlend_ARGB8888(&sourceBuffer, Pixel_8(127), &sourceBuffer2, &destinationBuffer, vImage_Flags(kvImageNoFlags)) == kvImageNoError else {
                return nil
            }

            return try? destinationBuffer.createCGImage(format: format)
        }
    }
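
    As with the first approach, this is invoked as a CGImage extension method:

        let combinedImage = image1.averageVimage(with: image2)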

    Anyway, I found the performance here to be similar to the concurrentPerform algorithm.

  3. For giggles and grins, I also tried rendering the images with CGBitmapInfo.floatComponents and used BLAS’s catlas_saxpby for a one-line call to average the two vectors. It worked well but, unsurprisingly, was slower than the above integer-based routines.
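
    A minimal sketch of that idea, assuming two equal-length Float pixel buffers (the names floatPixels1 and floatPixels2 are hypothetical, not from the original answer):

        import Accelerate

        // Computes floatPixels1 = 0.5 * floatPixels2 + 0.5 * floatPixels1 in place,
        // i.e. an element-wise average of the two float vectors.
        catlas_saxpby(Int32(floatPixels1.count), 0.5, floatPixels2, 1, 0.5, &floatPixels1, 1)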

CIImage Convert to UIImage

You can call the initializer explicitly to resolve this issue:

imgurl.image = UIImage.init(ciImage: transformedImage)

CGImage to MPSTexture or MPSImage

Why not construct the MTLTexture from the CVPixelBuffer directly? It's much quicker!

Do this once at the beginning of your program:

// declare this somewhere, so we can re-use it
var textureCache: CVMetalTextureCache?

// create the texture cache object
guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
print("Error: could not create a texture cache")
return false
}

Do this once you have your CVPixelBuffer:

let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)

var texture: CVMetalTexture?
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache!,
                                          pixelBuffer, nil, .bgra8Unorm,
                                          width, height, 0, &texture)

if let texture = texture {
    metalTexture = CVMetalTextureGetTexture(texture)
}

Now metalTexture contains an MTLTexture object with the contents of the CVPixelBuffer.
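
If you ultimately need an MPSImage rather than a bare MTLTexture, a minimal sketch (assuming a .bgra8Unorm texture, hence four feature channels) is:

    import MetalPerformanceShaders

    if let metalTexture = metalTexture {
        // MPSImage can wrap an existing MTLTexture without copying
        let mpsImage = MPSImage(texture: metalTexture, featureChannels: 4)
        // feed mpsImage to your MPS kernels...
    }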

Ambiguous use of 'value'

The problem is that the use of .value is ambiguous to Swift, as it could either refer to your value property, or to any of NSObject's value(for...) family of methods for Key-Value Coding.

I don't believe there's an easy way of disambiguating this by just using Swift syntax (given that your value property is typed as Any – which the methods can also be typed as).

Although amusingly, you can actually use Key-Value Coding itself to get the value:

let fetcher = wrapper?.value(forKeyPath: #keyPath(ObjectWrapper.value)) as? Fetcher<UIImage>

But honestly, the easiest solution would be just to rename your value property to something else (base maybe?).
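
For instance, a minimal sketch of that rename (ObjectWrapper and Fetcher are stand-ins for the question's types):

    class ObjectWrapper: NSObject {
        // Renamed from `value` so it no longer collides with NSObject's
        // value(for...) Key-Value Coding methods.
        let base: Any

        init(base: Any) {
            self.base = base
            super.init()
        }
    }

    // Now resolves without ambiguity:
    let fetcher = wrapper?.base as? Fetcher<UIImage>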

If a filter is applied to a PNG where height > width, it rotates the image 90 degrees. How can I efficiently prevent this?

I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" error.

It turned out that Xcode was auto-populating the code as 'CIImage:scale:orientation:' when it should have been 'ciImage:scale:orientation:'. The very vague error left a new dev like me scratching my head for 3 days. (This was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that to explain.)

With that knowledge I was able to formulate the code below for my new output:

if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}

This code replaces the if let output in the OP.


