How to Display an MTKView with the rgba16Float MTLPixelFormat

How to display an MTKView with the rgba16Float MTLPixelFormat?

So, it looks like you are very close to the complete solution, but what you have is not quite correct. Here is a Metal function that converts an sRGB value to a linear value, which you can then write from your Metal shader (I still suggest that you write to an sRGB texture, but you can also write to a 16-bit float texture). Note that sRGB is not a simple 2.2 gamma curve.

// Convert a non-linear sRGB component to a linear value.
// Note that normV must be normalized to the range [0.0, 1.0].

static inline
float sRGB_nonLinearNormToLinear(float normV)
{
  if (normV <= 0.04045f) {
    normV *= (1.0f / 12.92f);
  } else {
    const float a = 0.055f;
    const float gamma = 2.4f;
    normV = (normV + a) * (1.0f / (1.0f + a));
    normV = pow(normV, gamma);
  }

  return normV;
}
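
For completeness, here is a minimal Swift sketch (not from the original answer; the color-space choice is an assumption) of configuring the MTKView itself for a 16-bit float drawable:

import MetalKit

// Minimal sketch: request an rgba16Float drawable and tag it with an
// extended linear sRGB color space so the linear values the shader writes
// are interpreted correctly (the colorspace property is macOS-only).
let device = MTLCreateSystemDefaultDevice()!
let mtkView = MTKView(frame: .zero, device: device)
mtkView.colorPixelFormat = .rgba16Float
mtkView.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)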

How to display very large Metal-based textures in an NSScrollView?

I received a suggestion from an Apple engineer on the developer forums which was to place an MTKView inside the document view of the NSScrollView, i.e.:

NSScrollView
  NSClipView
    NSView        <-- document view
      MTKView

This makes sense, since Metal and AppKit don't really talk to one another. With this scheme, one can adjust the document view's size so the scroll bars reflect the full content, while keeping the MTKView no larger than what is visible in the clip view. Not the ideal solution, since there is a lot of NSRect math involved, but certainly doable. (I haven't had time to implement it yet, but will update this answer with any useful information that comes up.)
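
As a rough illustration of that rect math (the class and selector names below are my own, not from the forum reply), the document view can watch the clip view and keep the MTKView pinned to the visible rect:

import AppKit
import MetalKit

final class TiledDocumentView: NSView {
    let metalView = MTKView(frame: .zero)

    override func viewDidMoveToSuperview() {
        super.viewDidMoveToSuperview()
        guard metalView.superview == nil,
              let clipView = enclosingScrollView?.contentView else { return }
        addSubview(metalView)
        // Ask the clip view to post a notification whenever it scrolls or resizes.
        clipView.postsBoundsChangedNotifications = true
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(visibleRectChanged),
                                               name: NSView.boundsDidChangeNotification,
                                               object: clipView)
    }

    @objc private func visibleRectChanged(_ note: Notification) {
        // Keep the MTKView no larger than the portion of the (potentially huge)
        // document view that is actually on screen; the renderer draws only that tile.
        metalView.frame = visibleRect
        metalView.needsDisplay = true
    }
}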

The reply on the forum suggested looking at the code that synchronizes two scroll views as a good starting point.

How can I find the brightest point in a CIImage (in Metal maybe)?

Check out the filters in the CICategoryReduction category (like CIAreaAverage). They return images that are just a few pixels tall, containing the reduction result. But you still have to render them before you can read the values in your Swift function.
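
For instance, a minimal Swift sketch (the filter choice, CIAreaMaximum, and the placeholder image are assumptions for illustration; it yields the brightest value, not its position) of rendering a reduction result so it can be read on the CPU:

import CoreImage

let context = CIContext()
// Placeholder input; substitute the CIImage you want to analyze.
let inputImage = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))

// CIAreaMaximum reduces the whole extent to a single pixel holding the
// per-channel maximum, i.e. the brightest value (but not its coordinates).
let filter = CIFilter(name: "CIAreaMaximum", parameters: [
    kCIInputImageKey: inputImage,
    kCIInputExtentKey: CIVector(cgRect: inputImage.extent)
])!
let reduced = filter.outputImage!

// Render the 1x1 result into a CPU-side buffer so the value can be read.
var pixel = [UInt8](repeating: 0, count: 4)
context.render(reduced,
               toBitmap: &pixel,
               rowBytes: 4,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: .RGBA8,
               colorSpace: CGColorSpaceCreateDeviceRGB())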

The problem with using this approach for your problem is that you don't know beforehand how many coordinates you will be returning. Core Image needs to know the extent of the output when it calls your kernel, though. You could assume a static maximum number of coordinates, but that all sounds tedious.

I think you are better off using the Accelerate APIs to iterate over the pixels of your image (parallelized, super efficiently) on the CPU to find the corresponding coordinates.

You could do a hybrid approach where you do the per-pixel heavy math on the GPU with Core Image and then do the analysis on the CPU using Accelerate. You can even integrate the CPU part into your Core Image pipeline using a CIImageProcessorKernel.
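
A rough sketch of that CPU side (the function name and the single-channel render format are assumptions): render the image into a float buffer and let vDSP locate the maximum:

import Accelerate
import CoreImage

// Finds the brightest pixel of a single-channel (e.g. luminance) CIImage by
// rendering it to a Float buffer and scanning it with vDSP.
func brightestPoint(in image: CIImage, context: CIContext) -> CGPoint {
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)

    var pixels = [Float](repeating: 0, count: width * height)
    pixels.withUnsafeMutableBytes { buffer in
        context.render(image,
                       toBitmap: buffer.baseAddress!,
                       rowBytes: width * MemoryLayout<Float>.stride,
                       bounds: image.extent,
                       format: .Rf,   // one 32-bit float channel per pixel
                       colorSpace: nil)
    }

    // vDSP_maxvi returns the maximum value and its flat index in one pass.
    var maxValue: Float = 0
    var maxIndex: vDSP_Length = 0
    vDSP_maxvi(pixels, 1, &maxValue, &maxIndex, vDSP_Length(pixels.count))

    // Note: depending on how the bitmap rows are ordered relative to Core
    // Image's coordinate space, the y value may need to be flipped.
    return CGPoint(x: Int(maxIndex) % width, y: Int(maxIndex) / width)
}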


