Metal - Resize Video Buffer Before Passing to Custom Kernel Filter

Core Image filter with custom Metal kernel doesn't work

Your sampling coordinates are off.

Samplers in Core Image use relative coordinates, that is, (0,0) corresponds to the upper-left corner and (1,1) to the lower-right corner of the whole input image.

So try something like this:

float4 eight_bit(sampler image, sampler palette_image, float paletteSize) {
    float4 color = image.sample(image.coord());
    // initial offset to land in the middle of the first palette pixel
    float2 firstPaletteCoord = float2(1.0 / (2.0 * paletteSize), 0.5);
    float4 returnColor = palette_image.sample(firstPaletteCoord);
    float dist = distance(color, returnColor);
    for (int i = 1; i < int(paletteSize); ++i) {
        // step i pixels further into the palette
        float2 paletteCoord = firstPaletteCoord + float2(float(i) / paletteSize, 0.0);
        float4 paletteColor = palette_image.sample(paletteCoord);
        float tempDist = distance(color, paletteColor);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = paletteColor;
        }
    }
    return returnColor;
}
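
For completeness, here's a rough sketch of how this kernel could be invoked from the host side. The names (inputImage, paletteImage, paletteSize) are placeholders for your own values, and the library loading assumes the default .metallib in the app bundle:

NSURL *libraryURL = [NSBundle.mainBundle URLForResource:@"default" withExtension:@"metallib"];
NSData *libraryData = [NSData dataWithContentsOfURL:libraryURL];

NSError *error;
CIKernel *kernel = [CIKernel kernelWithFunctionName:@"eight_bit" fromMetalLibraryData:libraryData error:&error];

// inputImage and paletteImage are CIImages; paletteSize is the number of colors in the palette row.
CIImage *outputImage = [kernel applyWithExtent:inputImage.extent
                                   roiCallback:^CGRect(int index, CGRect destRect) {
                                       // index 0 is the source image, index 1 the palette;
                                       // the palette is sampled in full for every output pixel.
                                       return index == 1 ? paletteImage.extent : destRect;
                                   }
                                     arguments:@[inputImage, paletteImage, @(paletteSize)]];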

Rewriting an Android OpenGL filter to Metal (for CIFilter)

Here's the Metal source for a kernel that attempts to replicate the filter you described:

#include <metal_stdlib>
#include <CoreImage/CoreImage.h>

using namespace metal;

extern "C" {
namespace coreimage {

float4 sketch(sampler src, float texelWidth, float texelHeight, float intensity40) {
    float size = 1.25f + (intensity40 / 100.0f) * 2.0f;

    // Find the minimum and maximum luminance in the neighborhood around the current pixel.
    float minVal = 1.0f;
    float maxVal = 0.0f;
    for (float x = -size; x < size; ++x) {
        for (float y = -size; y < size; ++y) {
            float4 color = src.sample(src.coord() + float2(x * texelWidth, y * texelHeight));
            float val = (color.r + color.g + color.b) / 3.0f;
            maxVal = max(maxVal, val);
            minVal = min(minVal, val);
        }
    }

    float range = 5.0f * (maxVal - minVal);

    float4 outColor = float4(pow(1.0f - range, size * 1.5f));
    outColor = float4((outColor.r + outColor.g + outColor.b) / 3.0f > 0.75f ? float3(1.0f) : outColor.rgb, 1.0f);
    return outColor;
}

}
}

I assume you're already familiar with the basics of how to correctly build Metal shaders into a library that can be loaded by Core Image (the .metal source needs to be compiled with the -fcikernel compiler flag and linked into a .metallib with the -cikernel linker flag).

You can instantiate your kernel at runtime by loading the default Metal library and requesting the "sketch" function (the name is arbitrary, so long as it matches the kernel source):

NSURL *libraryURL = [NSBundle.mainBundle URLForResource:@"default" withExtension:@"metallib"];
NSData *libraryData = [NSData dataWithContentsOfURL:libraryURL];

NSError *error;
CIKernel *kernel = [CIKernel kernelWithFunctionName:@"sketch" fromMetalLibraryData:libraryData error:&error];

You can then apply this kernel to an image by wrapping it in your own CIFilter subclass, or just invoke it directly:

CIImage *outputImage = [kernel applyWithExtent:CGRectMake(0, 0, width, height)
roiCallback:^CGRect(int index, CGRect destRect)
{ return destRect; }
arguments:@[inputImage, @(1.0f/width), @(1.0f/height), @(60.0f)]];

I've tried to select sensible defaults for each of the arguments (the first of which should be an instance of CIImage), but of course these can be adjusted to taste.
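
If you go the subclass route instead, a minimal sketch could look like the following. The class name, property names, and the cached-kernel pattern are my own choices rather than anything from the original code:

@interface SketchFilter : CIFilter
@property (retain, nonatomic) CIImage *inputImage;
@property (nonatomic) CGFloat inputIntensity;
@end

@implementation SketchFilter

+ (CIKernel *)sketchKernel {
    // Load the kernel once and reuse it across filter instances.
    static CIKernel *kernel = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSURL *libraryURL = [NSBundle.mainBundle URLForResource:@"default" withExtension:@"metallib"];
        NSData *libraryData = [NSData dataWithContentsOfURL:libraryURL];
        NSError *error;
        kernel = [CIKernel kernelWithFunctionName:@"sketch" fromMetalLibraryData:libraryData error:&error];
    });
    return kernel;
}

- (CIImage *)outputImage {
    if (self.inputImage == nil) {
        return nil;
    }
    CGRect extent = self.inputImage.extent;
    return [[SketchFilter sketchKernel] applyWithExtent:extent
                                            roiCallback:^CGRect(int index, CGRect destRect) { return destRect; }
                                              arguments:@[self.inputImage,
                                                          @(1.0 / extent.size.width),
                                                          @(1.0 / extent.size.height),
                                                          @(self.inputIntensity)]];
}

@end

You would then use it like any other filter: instantiate it, set inputImage and inputIntensity, and read outputImage.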

Setting the interpolation algorithm / quality when transforming a CIImage

I researched a few techniques and settled on an approach that produces output images I'm pretty happy with. It was also a lot more convenient to stay within Core Image, which can optimize a sequence of image manipulations, than to jump back and forth between Core Image and Core Graphics.

I used the Lanczos Scale Transform filter to shrink the image smoothly:

// `image` is a CIImage
CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scaleFilter setValue:image forKey:kCIInputImageKey];
[scaleFilter setValue:@(scale) forKey:kCIInputScaleKey];
CIImage *scaledImage = scaleFilter.outputImage;

The other important thing was to make sure all of the geometry was calculated in pixels and not points. Working with pixels produces a significantly higher-quality image compared to working with points.
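
For example (illustrative only; targetPointWidth is a placeholder for whatever size in points you are laying out against):

// Derive the Lanczos scale factor from pixel dimensions, not point dimensions.
CGFloat screenScale = UIScreen.mainScreen.scale;
CGFloat targetPixelWidth = targetPointWidth * screenScale;
CGFloat scale = targetPixelWidth / image.extent.size.width; // `image` is the input CIImage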

The Sharpen Luminance filter can help define the detail in the resized photo:

CIFilter *sharpenFilter = [CIFilter filterWithName:@"CISharpenLuminance"];
[sharpenFilter setValue:scaledImage forKey:kCIInputImageKey];
[sharpenFilter setValue:@(0.1) forKey:kCIInputSharpnessKey];
CIImage *sharpenedImage = sharpenFilter.outputImage;

Finally, the JPEG compression level really made a difference on some images. Around 0.9 it produced quite clear images, whereas 0.75 introduced some visible artifacts.
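
As a sketch of that final step (the rendering details may differ in your setup; 0.9 is the quality value that worked well here):

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:sharpenedImage fromRect:sharpenedImage.extent];
UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

// Encode as JPEG; ~0.9 kept the result clear, 0.75 showed artifacts on some images.
NSData *jpegData = UIImageJPEGRepresentation(finalImage, 0.9);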

Metal equivalent to OpenGL mix

The problem is that greenCoord (which, by the way, was only a good variable name in the other question you asked) is the relative coordinate of the current pixel and has nothing to do with the absolute input resolution.

If you want a replacement for your iResolution, use src.size() instead.

And it seems you need your input coordinates in absolute (pixel) units. You can achieve that by adding a destination parameter to the inputs of your kernel like so:

float4 cornerRadius(sampler src, destination dest) {
    // pixel position in the output buffer, in absolute coordinates
    const float2 destCoord = dest.coord();
    const float2 srcSize = src.size();

    const float t = 0.5;
    const float radius = min(srcSize.x, srcSize.y) * t;
    const float2 halfRes = 0.5 * srcSize;

    const float b = udRoundBox(destCoord - halfRes, halfRes, radius);

    const float3 c = mix(float3(1.0, 0.0, 0.0), float3(0.0, 0.0, 0.0), smoothstep(0.0, 1.0, b));

    return float4(c, 1.0);
}
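
Note that udRoundBox needs to be defined above the kernel in the same Metal file. I'm assuming the usual unsigned-distance rounded-box helper ported from the GLSL original; adjust it if your version differs:

// Unsigned distance from point p to a box with half-extent b and corner radius r.
float udRoundBox(float2 p, float2 b, float r) {
    return length(max(fabs(p) - b, float2(0.0))) - r;
}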

Lanczos scale not working when scaleKey greater than some value

Here's what Apple said:

This scenario exposes a bug in Core Image. The bug occurs when rendering requires an intermediate buffer that has a dimension greater than the GPU texture limits (4096) AND the input image fits into these limits. This happens with any filter that is performing a convolution (blur, lanczos) on an input image that has width or height close to the GL texture limit.

Note: the render is successful if one of the dimensions of the input image is increased to 4097.

Replacing CILanczosScaleTransform with CIAffineTransform (lower quality) or resizing the image with CG are possible workarounds for the provided sample code.
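
For reference, the affine-transform workaround is essentially a one-liner (lower quality than Lanczos, but it avoids the oversized intermediate buffer; scale is the same factor you would have passed to kCIInputScaleKey):

CIImage *scaledImage = [inputImage imageByApplyingTransform:CGAffineTransformMakeScale(scale, scale)];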

CIGaussianBlur image size

The issue isn't that it's not blurring all of the image, but rather that the blur is extending the boundary of the image, making the image larger, and it's not lining up properly as a result.

To keep the image the same size, after the line:

CIImage *resultImage    = [gaussianBlurFilter valueForKey: @"outputImage"];

You can grab the CGRect for a rectangle the size of the original image in the center of this resultImage:

// note, adjust rect because blur changed size of image

CGRect rect = [resultImage extent];
rect.origin.x += (rect.size.width - viewImage.size.width ) / 2;
rect.origin.y += (rect.size.height - viewImage.size.height) / 2;
rect.size = viewImage.size;

And then use CIContext to grab that portion of the image:

CIContext *context      = [CIContext contextWithOptions:nil];
CGImageRef cgimg = [context createCGImage:resultImage fromRect:rect];
UIImage *blurredImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);

Alternatively, on iOS 7 you can download iOS_UIImageEffects.zip from Apple's UIImageEffects sample code and grab the UIImage+ImageEffects category, which provides a few new methods:

- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;

So, to blur an image and lighten it (giving that "frosted glass" effect), you can then do:

UIImage *newImage = [image applyLightEffect];

Interestingly, Apple's code does not employ CIFilter, but rather calls vImageBoxConvolve_ARGB8888 from the vImage high-performance image processing framework. This technique is illustrated in the WWDC 2013 video Implementing Engaging UI on iOS.


