OpenGL: Define Vertex Position in Pixels

OpenGL define vertex position in pixels

This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway, the standard OpenGL pipeline is as follows:

  1. The vertex position is transformed from object-space (local to some object) into world-space (with respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world.

  2. Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera by which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.

  3. Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to achieve a true perspective view (with farther-away objects appearing smaller). But in your case of a 2D view it will probably be an orthographic projection, which does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.

  4. After these 3 (or 2) transformations (and then following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.

  5. In a final step the viewport transformation is applied and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which gives the vertex's final position in the framebuffer and therefore its pixel coordinates.

When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.

The main thing to keep in mind is that after the modelview and projection transforms, every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1]-box determines your visible scene after these two transformations.

So from your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations? In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back to the [0,w]x[0,h]x[0,1]-box.

These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have covered the essentials. The documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v" = v' / v'(3)) after the v' = P x M x v step.

EDIT: Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline a bit more practical and detailed.

How can I specify vertices in pixels in OpenGL?

If you want to use pixel coordinates for your rendering, it's pretty easy to do so using an orthographic matrix, which you can create using glOrtho. Assuming your window is 800x600 you could use the following code:

// Set your projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity(); // glOrtho multiplies the current matrix, so reset it first
glOrtho(0, 800, 0, 600, -1, 1);
// Restore the default matrix mode
glMatrixMode(GL_MODELVIEW);

glOrtho expects the parameters to be 'left, right, bottom, top, near, far', so this will actually put the origin at the lower left (most OpenGL coordinate systems have Y increase as you move up). However, if you want the origin in the upper left, as is common with most pixel-based drawing systems, you'd want to swap the bottom and top parameters.

This will let you call glVertex2f with pixel coordinates instead of the default clip coordinates. Note that you don't have to call a special function to convert from an int to a float; C and C++ will both do an implicit conversion, e.g. glVertex2f(420, 300);

Read vertex positions as pixels in Three.js

Solved. Quite a few things were wrong with the jsfiddle mentioned in the question.

  • width * height should be equal to the vertex count. A PlaneBufferGeometry with 4 by 4 segments results in 25 vertices. 3 by 3 results in 16. Always (w + 1) * (h + 1).
  • The positions in the vertex shader need a nudge of 1.0 / width.
  • The vertex shader needs to know about width and height, they can be passed in as uniforms.
  • Each vertex needs an attribute with its index so it can be correctly mapped.
  • Each position should be one pixel in the resulting texture.
  • The resulting texture should be drawn as gl.POINTS with gl_PointSize = 1.0.

Working jsfiddle: https://jsfiddle.net/brunoimbrizi/m0z8v25d/13/

OpenGL vertex coordinates for perspective projection

Initially I misunderstood the difference between orthographic and perspective projections. As I understand it now, for a perspective projection all vertices are mapped into NDC first, and are then moved, scaled, etc. with the model matrix. Pixel-perfect rendering can only be achieved at some constant depth or with an orthographic projection; it is not useful for 3D with a perspective projection.

Modify single vertex position in OpenGL

You can simply modify the data in your buffer(s). The data is still there, the VAO is only pointing to it.

So, at the beginning of your rendering cycle, simply modify your box_vertices array as desired, then call the same thing as when you're putting data in for the first time:

glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(box_vertices), box_vertices, GL_STATIC_DRAW);

Your array is very small, so performance should not be a concern. If it is, be sure to modify the buffer only if the shape has actually changed, consider using GL_DYNAMIC_DRAW instead of GL_STATIC_DRAW, or use glBufferSubData to replace only a part of the buffer (I don't have a good example for this handy, unfortunately).

EDIT: The code to modify the buffer does not have to be inside the drawing loop, of course. It can be anywhere (for example in a mouse or keyboard handler). I suggested putting it inside the loop because the way the code is structured, there is nowhere else to place it.

OpenGL vertex at NDC (-1.0, -1.0) is at what sub-pixel position?

(-1,-1) is at the bottom-left corner of the bottom-left pixel.

(+1,+1) is at the top-right corner of the top-right pixel.

OpenGL subimages using pixel coordinates

I ended up doing this in the vertex shader. I passed in the vec4 as a uniform to the vertex shader, as well as the size of the image, and used the below calculation:

// convert pixel coordinates to vertex coordinates
float widthPixel = 1.0f / u_imageSize.x;
float heightPixel = 1.0f / u_imageSize.y;

float startX = u_sourceRect.x, startY = u_sourceRect.y, width = u_sourceRect.z, height = u_sourceRect.w;
v_texCoords = vec2(widthPixel * startX + width * widthPixel * texPos.x, heightPixel * startY + height * heightPixel * texPos.y);

v_texCoords is a varying that the fragment shader uses to map the texture.

How to transform vertex coordinates into screen pixel coordinates?

If I see correctly, the z-value you put into gluProject is the same value you put into glTranslate. But you still draw the polygon using the vertex (-1, -1, 0); the z translation comes from the glTranslate call (which in turn modifies the modelview matrix). But this matrix is also used in gluProject, so what actually happens is that you translate by z two times (not exactly, as the first translation is further distorted by the rotation). So pass in the same vertex you also draw the polygon with, which would be (-1, -1, 0) and not (-1, -1, z).

Keep in mind that gluProject does the same thing as OpenGL's transformation pipeline (like explained in my answer to your other nearly exact same question), so you have to feed it with the same values you feed the OpenGL pipeline with (your polygon's vertices) if you want the same results (the polygon covering the screen).


