OpenGL Scale Single Pixel Line

As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture, and I do not know of any workaround that works. In such cases the only way around this is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course that is much, much slower than direct rendering to texture. So here is the deal:

  1. set low res view

    do not change the resolution of your window, just the glViewport values. Then render your scene in the low resolution (in just a fraction of the screen space)

  2. copy rendered screen into texture

  3. set target resolution view
  4. render the texture

    do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before! Otherwise you would get flickering.

Here is the C++ source for this:

void gl_draw()
{
    // render resolution and multiplier
    const int xs=320,ys=200,m=2;

    // [low res render pass]
    glViewport(0,0,xs,ys);
    glClearColor(0.0,0.0,0.0,1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);
    // 50 random lines (100 vertices, 2 per line)
    RandSeed=0x12345678;
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_LINES);
    for (int i=0;i<100;i++)
        glVertex2f(2.0*Random()-1.0,2.0*Random()-1.0);
    glEnd();

    // [multiplied resolution render pass]
    static bool _init=true;
    static GLuint txrid=0;  // texture id (static so it survives between frames)
    BYTE map[xs*ys*3];      // RGB
    // init texture
    if (_init) // you should also delete the texture on app exit ...
    {
        // create texture
        _init=false;
        glGenTextures(1,&txrid);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D,txrid);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST); // must be nearest !!!
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_REPLACE); // GL_COPY is not a valid env mode
        glDisable(GL_TEXTURE_2D);
    }
    // copy low res screen to CPU memory
    glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map);
    // and then to GPU texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
    // set multiplied resolution view
    glViewport(0,0,m*xs,m*ys);
    glClear(GL_COLOR_BUFFER_BIT);
    // render low res screen as texture
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    glFlush();
    SwapBuffers(hdc); // swap buffers only here !!!
}

And preview:

[preview image: the 50 random lines rendered at low resolution and upscaled]
I tested this on some Intel HD graphics (god knows which version) I got at my disposal, and it works (while the standard render-to-texture approaches do not).

Pixel perfect: choosing the right correction (OpenGL)

Check that you've passed correct screen size into shader.

When you create a window, you define its outer size, but the client area, where the rendering context is placed, can be smaller when the window has a header/decorations.

So you should either query the client area size with GetClientRect(), or find the "correct" window size by calling AdjustWindowRect() before creating the window - this gives you the window size that will have a client area of the size you wanted initially.

Lines when attempting to color each pixel of the screen?

That's a rounding issue, because you set up the projection in a rather odd way:

glm::ortho(0.f,screen_width-1.f,0.f,screen_height-1.f,1.f,-1.f)

In OpenGL's window space, pixels are an actual area (squares of size 1x1).

So, if your screen is for example 4 pixels wide, this is what you get in terms of pixels:

+--+--+--+--+
|0 |1 |2 |3 |    integer pixel coords

However, the space you work in is continuous (in principle, at least), and OpenGL's window space will go from 0 to 4 (not 3), which makes perfect sense, as there are four pixels and each pixel is one unit wide:

0  1  2  3  4    OpenGL window space
+--+--+--+--+
|0 |1 |2 |3 |    integer pixel coords

Using OpenGL's window space conventions, the pixel centers lie at half-integer coordinates.

Now you're not drawing in OpenGL window space directly, but you can set up the transformation to basically undo the viewport transform (which goes from NDC to window space) to get a pixel-exact mapping. But you didn't do that. What you did instead is:

  3/4   9/4
0    6/4    3    your coordinate system
+--+--+--+--+
|0 |1 |2 |3 |    integer pixel coords

And you draw at full integers in that weird system, which just does not map well to the actual pixels.

So you should do

glm::ortho(0.f, screen_width, 0.f, screen_height, 1.f, -1.f)

However, since you seem to be drawing at integer positions, you would then draw exactly at the pixel corners, and might still see some rounding issues (and the rounding direction is up to the implementation, as per the spec).

So you'd better shift it so that integer coordinates lie at pixel centers:

glm::ortho(-0.5f, screen_width-0.5f, -0.5f, screen_height-0.5f, 1.f, -1.f)


   0.5   2.5
-0.5  1.5   3.5       your coordinate system
 +--+--+--+--+
 |0 |1 |2 |3 |        integer pixel coords

OpenGL: How to fix missing corner pixel in rect (lines or line loop)

OpenGL gives a lot of leeway in how implementations rasterize lines. It mandates some desirable properties, but those do not prevent gaps when mixing x-major ('horizontal') and y-major ('vertical') lines.

  • First thing: the "spirit of the spec" is to rasterize half-open lines, i.e. include the first vertex and exclude the final one. For that reason you should ensure that each vertex appears exactly once as a source and once as a destination:

      _idxBuf.bufferData<GLuint>({0,1,1,2,2,3,3,0},BufferUsage::StaticDraw);

    This is contrary to your attempt to draw "top to bottom and left to right".

    GL_LINE_LOOP already does that, though, and you say that it doesn't solve the problem. Indeed it is not guaranteed to, because you mix x-major and y-major lines here, but you should still follow the rule in order for the next point to work.

  • Next, I bet some of your vertices fall right between the pixels, i.e. the fractional part of the window coordinates is exactly zero. When rasterizing such primitives, the differences between implementations (or, in our case, between x-major and y-major lines on the same implementation) become prominent.

    To solve that you can snap your vertices to the pixel grid:

      // xy - vertex in window coordinates, i.e. same space as gl_FragCoord
      xy = floor(xy) + 0.5;

    You can do this either in C++, or in the vertex shader. In either case you'll need to apply the projection and viewport transformations, and then undo them so that OpenGL can re-apply them afterwards. It's ugly, I know.

The only bullet-proof way to rasterize pixel-perfect lines, however, is to render triangles to cover the shape (either each line individually or the entire rectangle) and compute the coverage analytically from gl_FragCoord.xy in the fragment shader.

OpenGL ES Pixel Art - Scaling

I'm not sure if my question was unclear, but to draw pixels to the screen you have to create a texture, pass the pixel data into it, and then render that texture onto the screen. It is the equivalent of glDrawPixels.

The code would be:

#define W 255,255,255
#define G 192,192,192
//8 x 8 tile with 3 bytes for each pixel RGB format
GLubyte pixels[8 * 8 * 3] = {
W,W,W,W,W,W,W,W,
W,W,G,G,G,W,W,W,
W,G,W,W,W,G,W,W,
W,G,W,W,W,G,W,W,
W,W,G,G,G,W,W,W,
W,G,W,W,W,G,W,W,
W,G,W,W,W,G,W,W,
W,W,G,G,G,W,W,W
};

Somewhere in setup:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 8, 8, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Then draw the texture as usual:

glActiveTexture(GL_TEXTURE0);
glUniform1i([program uniformLocation:@"s_texture"], 0);
glBindTexture(GL_TEXTURE_2D, tex);

glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, v);
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE, 0, t);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, i);

How do I make lines scale when using GLOrtho?

Well, you can either scale the width yourself using glLineWidth, or you can emulate the line as two triangles (which scale with the projection automatically).

Drawing pixels in OpenGL

From the comments, it sounds like you're trying to use OpenGL in place of a really old graphics library that is required for a class. Computer graphics have changed so much that what you're trying to do is unreasonable in modern OpenGL, so try the following approach for the current assignment and your later ones:

Disclaimer: This is not a reasonable way to draw a line in modern OpenGL

  1. Create a 2D array of some arbitrary size, large enough that the entire line can be drawn on it.
  2. Draw the line using the original function, but have it call some setPixel function that changes elements in that array.
  3. Once you're done drawing the line (or doing whatever else future assignments have you do), create an OpenGL texture from that array. An excellent guide is available here: https://open.gl/textures

Some rough pseudocode:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_2D, 0);

Then you would create a quad (really just two triangles) that draws this texture to the screen. Following the open.gl guide should work well, since they already do this for their textures. Pulling from their provided code (which does a bit extra, all correct and following the spec though):

GLfloat vertices[] = {
    // Position    Color             Texcoords
    -0.5f,  0.5f,  1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // Top-left
     0.5f,  0.5f,  0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // Top-right
     0.5f, -0.5f,  0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // Bottom-right
    -0.5f, -0.5f,  1.0f, 1.0f, 1.0f, 0.0f, 1.0f  // Bottom-left
};

GLuint elements[] = {
    0, 1, 2,
    2, 3, 0
};

// Set up buffer objects, create shaders, initialize GL, etc.

//drawing
//bind buffers, enable attrib arrays, etc
glBindTexture(GL_TEXTURE_2D, tex);

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

