Convert OpenGL Shader to Metal (Swift) to Be Used in CIFilter

Convert OpenGL shader to Metal (Swift) to be used in CIFilter

I gave it a try. Here's the kernel code:

#include <metal_stdlib>
using namespace metal;
#include <CoreImage/CoreImage.h>

extern "C" { namespace coreimage {

float4 vhs(sampler_h src, float time, float amount) {
const float magnitude = sin(time) * 0.1 * amount;

float2 greenCoord = src.coord(); // this is alreay in relative coords; no need to devide by image size

const float split = 1.0 - fract(time / 2.0);
const float scanOffset = 0.01;
float2 redCoord = float2(greenCoord.x + magnitude, greenCoord.y);
float2 blueCoord = float2(greenCoord.x, greenCoord.y + magnitude);
if (greenCoord.y > split) {
greenCoord.x += scanOffset;
redCoord.x += scanOffset;
blueCoord.x += scanOffset;
}

float r = src.sample(redCoord).r;
float g = src.sample(greenCoord).g;
float b = src.sample(blueCoord).b;

return float4(r, g, b, 1.0);
}

}}

And here are some slight adjustments to outputImage in your filter:

override var outputImage: CIImage? {
    guard let inputImage = self.inputImage else { return nil }

    // could be filter parameters
    let inputTime: NSNumber = 60
    let inputAmount: NSNumber = 0.3

    // You need to tell the kernel the region of interest of the input image,
    // i.e. what region of input pixels you need to read for a given output region.
    // Since you sample pixels to the right and below the center pixel, you need
    // to extend the ROI accordingly.
    let magnitude = CGFloat(sin(inputTime.floatValue) * 0.1 * inputAmount.floatValue)
    let inputExtent = inputImage.extent

    let roiCallback: CIKernelROICallback = { (_, rect) -> CGRect in
        return CGRect(x: rect.minX, y: rect.minY,
                      width: rect.width + (magnitude + 0.01) * inputExtent.width, // scanOffset
                      height: rect.height + magnitude * inputExtent.height)
    }

    return self.kernel.apply(extent: inputExtent,
                             roiCallback: roiCallback,
                             arguments: [inputImage, inputTime, inputAmount])
}
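
In case it's useful, here's roughly how the kernel property used above can be created. This is just a sketch; the lazy-loading approach and the "default.metallib" resource name are assumptions about your project setup:

// Hypothetical property on your CIFilter subclass that loads the "vhs" kernel once.
private lazy var kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
    let data = try! Data(contentsOf: url)
    return try! CIKernel(functionName: "vhs", fromMetalLibraryData: data)
}()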

Rewriting an Android OpenGL filter to Metal (for CIFilter)

Here's the Metal source for a kernel that attempts to replicate your described filter:

#include <metal_stdlib>
#include <CoreImage/CoreImage.h>

using namespace metal;

extern "C" {
namespace coreimage {

    float4 sketch(sampler src, float texelWidth, float texelHeight, float intensity40) {
        float size = 1.25f + (intensity40 / 100.0f) * 2.0f;

        float minVal = 1.0f;
        float maxVal = 0.0f;
        for (float x = -size; x < size; ++x) {
            for (float y = -size; y < size; ++y) {
                float4 color = src.sample(src.coord() + float2(x * texelWidth, y * texelHeight));
                float val = (color.r + color.g + color.b) / 3.0f;
                if (val > maxVal) {
                    maxVal = val;
                } else if (val < minVal) {
                    minVal = val;
                }
            }
        }

        float range = 5.0f * (maxVal - minVal);

        float4 outColor(pow(1.0f - range, size * 1.5f));
        outColor = float4((outColor.r + outColor.g + outColor.b) / 3.0f > 0.75f ? float3(1.0f) : outColor.rgb, 1.0f);
        return outColor;
    }

}
}

I assume you're already familiar with the basics of how to correctly build Metal shaders into a library that can be loaded by Core Image.
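
A reminder in case it helps: kernels that include CoreImage.h need to be compiled with the -fcikernel flag and linked with -cikernel (in Xcode, add these to the "Other Metal Compiler Flags" and "Other Metal Linker Flags" build settings). The command-line equivalent looks roughly like this, with the file names being placeholders:

xcrun metal -fcikernel MyKernels.ci.metal -o MyKernels.ci.air
xcrun metallib -cikernel MyKernels.ci.air -o MyKernels.ci.metallib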

You can instantiate your kernel at runtime by loading the default Metal library and requesting the "sketch" function (the name is arbitrary, so long as it matches the kernel source):

NSURL *libraryURL = [NSBundle.mainBundle URLForResource:@"default" withExtension:@"metallib"];
NSData *libraryData = [NSData dataWithContentsOfURL:libraryURL];

NSError *error;
CIKernel *kernel = [CIKernel kernelWithFunctionName:@"sketch" fromMetalLibraryData:libraryData error:&error];

You can then apply this kernel to an image by wrapping it in your own CIFilter subclass, or just invoke it directly:

CIImage *outputImage = [kernel applyWithExtent:CGRectMake(0, 0, width, height)
                                   roiCallback:^CGRect(int index, CGRect destRect) { return destRect; }
                                     arguments:@[inputImage, @(1.0f/width), @(1.0f/height), @(60.0f)]];

I've tried to select sensible defaults for each of the arguments (the first of which should be an instance of CIImage), but of course these can be adjusted to taste.

How do I convert OpenGLES shaders to Metal compatible ones?

There are many ways to do what you want:

1) You can use MoltenGL to seamlessly convert your GLSL shaders to MSL.

2) You can use open-source shader cross-compilers like: krafix, pmfx-shader, etc.


I would like to point out that, based on my experience, you will get better performance if you rewrite the shaders yourself.

Confusion About CIContext, OpenGL and Metal (SWIFT). Does CIContext use CPU or GPU by default?

I started making this a comment, but I think since WWDC '18 this works best as an answer. I'll edit as others more expert than I comment, and I'm willing to delete the entire answer if that's the proper thing to do.

You are on the right track - utilize the GPU when you can and it's a good fit. CoreImage and Metal, while "low-level" technologies that "usually" use the GPU, can use the CPU if that is desired. CoreGraphics? It renders things using the CPU.

Images. A UIImage and a CGImage are actual images. A CIImage however, isn't. The best way to think of it is a "recipe" for an image.

I typically - for now, I'll explain in a moment - stick to CoreImage, CIFilters, CIImages, and GLKViews when working with filters. Using a GLKView against a CIImage means using OpenGL and a single CIContext and EAGLContext. It offers almost as good performance as using MetalKit or MTKViews.

As for using UIKit and its UIImage and UIImageView, I only do so when needed - saving/sharing/uploading, whatever. Stick to the GPU until then.

....

Here's where it starts getting complicated.

Metal is an Apple proprietary API. Since they own the hardware - including the CPU and GPU - they've optimized it for their devices. Its "pipeline" is somewhat different from OpenGL's. Nothing major, just different.

Until WWDC '18, using GLKit, including GLKView, was fine. But all things OpenGL were deprecated, and Apple is moving everything to Metal. While the performance gain (for now) isn't that great, for anything new you may be best off using MTKView, Metal, and CIContext.

Look at the answer @matt gave here for a nice way to use MTKViews.
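
For what it's worth, creating a GPU-backed CIContext with Metal (and, if you ever need one, a CPU-backed context) only takes a couple of lines. A minimal sketch:

import CoreImage
import Metal

// GPU: a CIContext backed by the system's default Metal device
let device = MTLCreateSystemDefaultDevice()!
let gpuContext = CIContext(mtlDevice: device)

// CPU: explicitly request the software renderer if you ever need it
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])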

Metal equivalent to OpenGL mix

The problem is that greenCoord (which was only a good variable name for the other question you asked, by the way) is the relative coordinate of the current pixel and has nothing to do with the absolute input resolution.

If you want a replacement for your iResolution, use src.size() instead.

And it seems you need your input coordinates in absolute (pixel) units. You can achieve that by adding a destination parameter to the inputs of your kernel like so:

float4 cornerRadius(sampler src, destination dest) {
    const float2 destCoord = dest.coord(); // pixel position in the output buffer in absolute coordinates
    const float2 srcSize = src.size();

    const float t = 0.5;
    const float radius = min(srcSize.x, srcSize.y) * t;
    const float2 halfRes = 0.5 * srcSize;

    const float b = udRoundBox(destCoord - halfRes, halfRes, radius);

    const float3 c = mix(float3(1.0, 0.0, 0.0), float3(0.0, 0.0, 0.0), smoothstep(0.0, 1.0, b));

    return float4(c, 1.0);
}
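
This assumes the udRoundBox helper from your original GLSL shader is also defined above the kernel in the same Metal file. If you haven't ported it yet, the usual rounded-box distance function translates to Metal almost verbatim:

float udRoundBox(float2 p, float2 b, float r) {
    return length(max(abs(p) - b, float2(0.0))) - r;
}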

Is it feasible to convert a QOpenGLWidget subclass into one that uses Metal instead of OpenGL?

Yes, it is possible to do what you want. It probably won't be a straightforward transition due to the fact that the code you posted uses very old, deprecated features of OpenGL. Also, you might be better off just using CoreGraphics for the simple drawing you're doing. (It looks like a number of solid-colored quads are being drawn. That's very easy and fairly efficient in CoreGraphics.) Metal seems like overkill for this job. That said, here are some ideas.

Metal is an inherently Objective-C API, so you will need to wrap the Metal code in some sort of wrapper. There are a number of ways you could write such a wrapper. You could make an Objective-C class that does your drawing and call it from your C++/Qt class. (You'll need to put your Qt class into a .mm file so the compiler treats it as Objective-C++ to call Objective-C code.) Or you could make your Qt class be an abstract class that has an implementation pointer to the class that does the real work. On Windows and Linux it could point to an object that does OpenGL drawing. On macOS it would point to your Objective-C++ class that uses Metal for drawing.

This example of mixing OpenGL and Metal might be informative for understanding how the 2 are similar and where they differ. Rather than having a context where you set state and make draw calls like in OpenGL, in Metal you create a command buffer with the drawing commands and then submit them to be drawn. Like with more modern OpenGL programming where you have vertex arrays and apply a vertex and fragment shader to every piece of geometry, in Metal you will also submit vertices and use a fragment and vertex shader for drawing.
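
To make that comparison concrete, here's a minimal sketch of the Metal draw path in Swift (the pipeline state, vertex buffer, and MTKView are assumed to have been created elsewhere; the same calls are available from Objective-C):

import Metal
import MetalKit

// One frame of drawing: build a command buffer, encode the draw calls, submit.
func draw(in view: MTKView, commandQueue: MTLCommandQueue,
          pipelineState: MTLRenderPipelineState, vertexBuffer: MTLBuffer) {
    guard let descriptor = view.currentRenderPassDescriptor,
          let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }

    encoder.setRenderPipelineState(pipelineState)
    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 6) // one quad = two triangles
    encoder.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}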

To be honest, though, that sounds like a lot of work. (But it is certainly possible to do.) If you did it in CoreGraphics it would look something like this:

virtual void paintCG()
{
    const float meterWidth = [...];
    const float top = [...];

    CGRect backgroundRect = CGRectMake(...);
    CGContextClearRect(ctx, backgroundRect);

    float x = 0.0f;
    for (int i = 0; i < numMeters; i++)
    {
        const float y = _meterHeight[i];

        CGContextSetRGBFillColor(ctx, _meterColorRed[i], _meterColorGreen[i], _meterColorBlue[i], 1.0f);
        CGRect meterRect = CGRectMake(x, y, meterWidth, _meterHeight[i]);
        CGContextFillRect(ctx, meterRect);

        x += meterWidth;
    }
}

It just requires that you have a CGContextRef, which I believe you can get from whatever window you're drawing into. If the window is an NSWindow, then you can call:

NSGraphicsContext *nsContext = [window graphicsContext];
CGContextRef ctx = nsContext.CGContext;

This seems easier to write and maintain than using Metal in this case.

How to apply a Vignette CIFilter to a live camera feed in iOS?

Your step 2 is way too slow to support real-time rendering... and it looks like you're missing a couple of steps. For your purpose, you would typically:

Setup:

  1. create a pool of CVPixelBuffers using CVPixelBufferPoolCreate
  2. create a Metal texture cache using CVMetalTextureCacheCreate
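
For reference, a sketch of that one-time setup might look like the following (the pixel format and dimensions here are assumptions; match them to your capture output):

import CoreVideo
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Pool of CVPixelBuffers that filtered frames will be rendered into
var pixelBufferPool: CVPixelBufferPool?
let bufferAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey as String: 1920,
    kCVPixelBufferHeightKey as String: 1080,
    kCVPixelBufferMetalCompatibilityKey as String: true
]
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, bufferAttributes as CFDictionary, &pixelBufferPool)

// Texture cache that wraps CVPixelBuffers in Metal textures cheaply
var textureCache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)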

For each frame:


  1. Convert CMSampleBuffer > CVPixelBuffer > CIImage
  2. Pass that CIImage through your filter pipeline
  3. Render the output image into a CVPixelBuffer from the pool created in step 1
  4. Use CVMetalTextureCacheCreateTextureFromImage to create a Metal texture from your filtered CVPixelBuffer

If set up correctly, all these steps will make sure your image data stays on the GPU, as opposed to travelling from GPU to CPU and back to GPU for display.
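
Here's a rough sketch of what that per-frame path can look like in Swift (the function and parameter names are placeholders; it assumes the pool, texture cache, a Metal-backed CIContext, and your filter were created during setup):

import AVFoundation
import CoreImage
import CoreVideo
import Metal

func filteredTexture(for sampleBuffer: CMSampleBuffer,
                     filter: CIFilter,
                     ciContext: CIContext,
                     pixelBufferPool: CVPixelBufferPool,
                     textureCache: CVMetalTextureCache) -> MTLTexture? {
    // 1. CMSampleBuffer > CVPixelBuffer > CIImage
    guard let inputBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let inputImage = CIImage(cvPixelBuffer: inputBuffer)

    // 2. Pass the CIImage through the filter pipeline
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    guard let outputImage = filter.outputImage else { return nil }

    // 3. Render the output into a CVPixelBuffer drawn from the pool
    var outputBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &outputBuffer)
    guard let renderTarget = outputBuffer else { return nil }
    ciContext.render(outputImage, to: renderTarget)

    // 4. Wrap the filtered buffer in a Metal texture via the texture cache
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, nil,
                                              .bgra8Unorm,
                                              CVPixelBufferGetWidth(renderTarget),
                                              CVPixelBufferGetHeight(renderTarget),
                                              0, &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}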

The good news is that all of this is demonstrated in the AVCamPhotoFilter sample code from Apple: https://developer.apple.com/library/archive/samplecode/AVCamPhotoFilter/Introduction/Intro.html#//apple_ref/doc/uid/TP40017556. In particular, see the RosyCIRenderer class and its superclass FilterRenderer.


