iOS GLSL. How to Create an Image Histogram Using a GLSL Shader

iOS GLSL. Is There A Way To Create An Image Histogram Using a GLSL Shader?

Yes, there is. It's arguably not the best approach, but it is the best one available on iOS, since OpenCL is not supported. You'll lose some elegance, and your code will probably not be as straightforward, but almost all OpenCL features can be achieved with shaders.

If it helps, the DirectX 11 SDK comes with an FFT example for compute shaders. See the DX11 August SDK Release Notes.

Create depth buffer histogram texture with GLSL

Luckily there's a trick: vertex shaders can sample textures too. So you can issue a lot of GL_POINTS, each corresponding to an individual fragment in the depth texture, then in the vertex shader you can read from the depth texture to determine the transformed position of the point. In your fragment shader for the points just plot a value with a suitable alpha to cause the accumulation you desire.

So, you've got the vertex shader reading one texture, the fragment shader not reading any textures and you're using the normal render-to-texture mechanism to write to your histogram.

Luminance histogram calculation in GPU-android opengl es 3.0

I'm using C and desktop GL, but here's the gist of it :

Vertex shader

#version 330
layout (location = 0) in vec2 inPosition;

void main()
{
    int x = ...; // compute the bin (0 to 255) from inPosition

    gl_Position = vec4(
        -1.0 + ((x + 0.5) / 128.0),
        0.5,
        0.0,
        1.0
    );
}

Fragment shader

#version 330
out vec4 outputColor;

void main()
{
    outputColor = vec4(1.0, 1.0, 1.0, 1.0);
}

Init:

glGenTextures(1, &tex);
glGenFramebuffers(1, &fbo);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex); /* bind before setting parameters */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32F, 256, 1);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);

Drawing :

/* Upload data */
glBufferData(GL_ARRAY_BUFFER, num_input_data * 2 * sizeof(float), input_data_ptr, GL_STREAM_DRAW);

/* Clear buffer */
const float zero[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glClearBufferfv(GL_COLOR, 0, zero);

/* Accumulate counts with additive blending */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);

/* Init viewport */
glViewport(0, 0, 256, 1);

/* Draw */
glDrawArrays(GL_POINTS, 0, num_input_data);

For brevity I've only included the init code for the result buffer; all the VBO/VAO init and binding has been skipped.

OpenGL ES 2.0 shader examples for image processing?

I'll assume you have a simple uncontroversial vertex shader, as it's not really relevant to the question, such as:

attribute vec4 position;
attribute vec2 texCoord0;
uniform mat4 modelviewProjectionMatrix;
uniform mat4 textureMatrix;
varying vec2 texCoordVarying;

void main()
{
    gl_Position = modelviewProjectionMatrix * position;
    texCoordVarying = vec2(textureMatrix * vec4(texCoord0, 0.0, 1.0));
}

So that does much the same as ES 1.x would if lighting was disabled, including the texture matrix that hardly anyone ever uses.

I'm not a Photoshop expert, so please forgive my statements of what I think the various tools do — especially if I'm wrong.

I think I'm right to say that the levels tool effectively stretches (and clips) the brightness histogram? In that case an example shader could be:

varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

const mediump mat4 rgbToYuv = mat4( 0.257,  0.439, -0.148, 0.06,
                                    0.504, -0.368, -0.291, 0.5,
                                    0.098, -0.071,  0.439, 0.5,
                                    0.0,    0.0,    0.0,   1.0);

const mediump mat4 yuvToRgb = mat4( 1.164,  1.164,  1.164, -0.07884,
                                    2.018, -0.391,  0.0,    1.153216,
                                    0.0,   -0.813,  1.596,  0.53866,
                                    0.0,    0.0,    0.0,    1.0);

uniform mediump float centre, range;

void main()
{
    lowp vec4 srcPixel = texture2D(tex2D, texCoordVarying);
    lowp vec4 yuvPixel = rgbToYuv * srcPixel;

    yuvPixel.r = ((yuvPixel.r - centre) * range) + 0.5;

    gl_FragColor = yuvToRgb * yuvPixel;
}

You'd control that by setting the centre of the range you want to let through (which will be moved to the centre of the output range) and the total range you want to let through (1.0 for the entire range, 0.5 for half the range, etc).

One thing of interest is that I switch from the RGB input space to a YUV colour space for the intermediate adjustment. I do that using a matrix multiplication. I then adjust the brightness channel, and apply another matrix that transforms back from YUV to RGB. To me it made most sense to work in a luma/chroma colour space and from there I picked YUV fairly arbitrarily, though it has the big advantage for ES purposes of being a simple linear transform of RGB space.

I understand that the curves tool also remaps brightness, but according to some function f(x) = y that is monotonically increasing (so it intersects any horizontal or vertical line exactly once) and is set in the interface as a curve from bottom left to top right.

Because GL ES isn't fantastic with data structures and branching is to be avoided where possible, I'd suggest the best way to implement that is to upload a 256x1 luminance texture where the value at 'x' is f(x). Then you can just map through the secondary texture, e.g. with:

... same as before down to ...
lowp vec4 yuvPixel = rgbToYuv * srcPixel;

yuvPixel.r = texture2D(lookupTexture, vec2(yuvPixel.r, 0.0)).r;

... and as above to convert back to RGB, etc ...

You're using a spare texture unit to index a lookup table, effectively. On iOS devices that support ES 2.0 you get at least eight texture units so you'll hopefully have one spare.

Hue/saturation adjustments are more painful to show because the mapping from RGB to HSV involves a lot of conditionals, but the process is basically the same — map from RGB to HSV, perform the modifications you want on H and S, map back to RGB and output.

Based on a quick Google search, this site offers some downloadable code that includes some Photoshop functions (though not curves or levels, as far as I can see) and, significantly, supplies example implementations of the functions RGBToHSL and HSLToRGB. It's for desktop GLSL, which has more predefined variables, types and functions, but you shouldn't have any big problems working around that. Just remember to add precision modifiers and supply your own replacements for any absent functions.

Implementation limit of active vertex shader samplers on the iPhone

I had the exact same question, so I asked a couple of Apple's OpenGL ES engineers this at WWDC. According to them, the support for sampling from a texture within a vertex shader on certain devices in iOS 4.x was a bug, and this was removed in iOS 5.x.

It has never been officially supported, and this new error message is just describing why this fails. On iOS 5.x, and most devices running iOS 4.x, you'd just get a black screen if you tried this, with no warnings. All they've done is add some explanation for this behavior.

Get OpenGL histogram of masked texture

glGetHistogram is deprecated since OpenGL 3.1 anyway.

Using compute shaders or occlusion queries would be a better idea.

GPUImage - Custom Histogram Generator

The GPUImageHistogramFilter produces a 256x3 image where the center 256x1 line of that contains the red, green, and blue values for the histogram packed in the RGB channels. iOS doesn't support a 1-pixel-high render framebuffer, so I have to pad it out to three pixels high.

The GPUImageHistogramGenerator creates the visible histogram overlay you see in the sample applications, and it does that by taking in the 256x3 image and rendering an output image using a custom shader that colors in bars whose height depends on the input color value. It's a quick, on-GPU implementation.

If you want to do something more custom that doesn't use a shader, you can extract the histogram values using a GPUImageRawDataOutput and pulling out the RGB components of the center 256x1 line. From there, you could draw the rest of your interface overlay, although something done using Core Graphics may chew a lot of processing power to update on every frame.

Texture lookup in vertex shader behaves differently on iPad device vs iPad simulator - OpenGL ES 2.0

I just tested this as well. Using iOS 4.3, you can do a texture lookup on the vertex shader on both the device and the simulator. There is a bit of strangeness though (which is maybe why it's not "official" as szu mentioned). On the actual device (I tested on the iPad 2) you have to do a lookup on the fragment shader as well as on the vertex shader. That is, if you are not actually using it on the fragment shader, you'll still have to reference it in some way. Here's a trivial example where I'm passing in a texture and using the red pixel to reposition the y value of the vertex by a little bit:

/////fragment shader
precision mediump float; //required in ES 2.0 fragment shaders
uniform sampler2D tex; //necessary even though not actually used

void main() {
    vec4 notUsed = texture2D(tex, vec2(0.0, 0.0)); //necessary even though not actually used
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}

/////vertex shader
attribute vec4 position;
attribute vec2 texCoord;
uniform sampler2D tex;

void main() {
    float offset = texture2D(tex, texCoord).x;
    offset = (offset - 0.5) * 2.0; //map 0->1 to -1 to +1
    float posx = position.x;
    float posy = position.y + offset / 8.0;

    gl_Position = vec4(posx, posy, 0.0, 1.0);
}

I have a slightly fuller write-up of this at http://www.mat.ucsb.edu/a.forbes/blog/?p=453

CIKernel White Pixel with GLSL

The pixel is "white" if each of the three color channels is >= 1.0. This can be checked by testing whether the sum of the color channels is 3.0. Of course, it has to be ensured that the three color channels are limited to 1.0 first:

bool is_white = dot(vec3(1.0), clamp(lightCol.rgb, 0.0, 1.0)) > 2.999;

or

float white = step(2.999, dot(vec3(1.0), clamp(lightCol.rgb, 0.0, 1.0))); 

In this case, min(vec3(1.0), lightCol.rgb) can be used instead of clamp(lightCol.rgb, 0.0, 1.0), too.

If it is known that each of the three color channels is <= 1.0, the expression can be simplified:

dot(vec3(1.0), lightCol.rgb) > 2.999



Note, in this case the dot product calculates:

1.0*lightCol.r + 1.0*lightCol.g + 1.0*lightCol.b

and luma can be calculated as follows:

float luma = dot(vec3(0.2126, 0.7152, 0.0722), lightCol.rgb);

