How to Render Offscreen on OpenGL

How to render offscreen on OpenGL?

It all starts with glReadPixels, which you use to transfer the pixels stored in a specific buffer on the GPU to main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer to read from. As is usual with OpenGL, the buffer to read from is part of the current state, which you can set with glReadBuffer.

So a very basic offscreen rendering method would look something like the following. I use C++ pseudo-code, so it may contain errors, but it should make the general flow clear:

//Before swapping buffers
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data.data());

This reads the current back buffer (usually the buffer you're drawing to). You should call it before swapping the buffers. Note that you can also perfectly well read the back buffer with the above method, clear it, and draw something totally different before swapping. Technically you can also read the front buffer, but this is often discouraged, as implementations are allowed to make optimizations that may leave the front buffer containing rubbish.

There are a few drawbacks to this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping in the back buffer, but it doesn't feel right. Besides that, the front and back buffers are optimized for displaying pixels, not for reading them back. That's where Framebuffer Objects come into play.

Essentially, an FBO lets you create a framebuffer other than the default one (the FRONT and BACK buffers), which allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render and read back. With this, the code above would become something like the following. Again, this is pseudo-code, so don't kill me if I mistyped or forgot some statements.

//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);

//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);

//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing: bind the FBO for reading too, since glReadPixels reads from the read framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data.data());
// Return to onscreen rendering (resets both draw and read targets):
glBindFramebuffer(GL_FRAMEBUFFER, 0);

This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
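If you do want the depth buffer and the render-to-texture variant, a minimal sketch could look like the following. This is my own illustration, not part of the original answer; the names (tex, depth_buf) and the GL_RGBA8/GL_DEPTH_COMPONENT24 formats are assumptions:

//Render-to-texture sketch; names and formats are assumptions.
GLuint fbo, tex, depth_buf;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

//Color attachment: an ordinary 2D texture we can sample later.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

//Depth attachment: a renderbuffer, since we never sample the depth values.
glGenRenderbuffers(1, &depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buf);

//Always verify completeness before rendering.
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; //handle the error

After rendering, tex can be bound and sampled like any other texture, which is the "security camera" case mentioned earlier.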

Finally, you can use pixel buffer objects to make glReadPixels asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:

//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);

//Deinit:
glDeleteBuffers(1,&pbo);

//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer: it is now an offset into the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//... use pixel_data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);

The part in caps is essential. If you just issue a glReadPixels into a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
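A common way to guarantee there is useful work between the read and the map is to ping-pong between two PBOs: issue the read into one buffer while mapping the other, so the transfer of frame N overlaps the processing of frame N-1. A sketch of that pattern (the two-buffer scheme and names are mine, not from the original answer):

//Double-buffered (ping-pong) PBO read-back sketch.
GLuint pbos[2];
glGenBuffers(2, pbos);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
}

int index = 0;
//Each frame:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[index]);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0); //starts the async transfer
//Map the *other* PBO, which received its data a frame ago and is likely finished.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[1 - index]);
void* ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr) {
    //process the previous frame's pixels...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
index = 1 - index; //swap roles for the next frame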

Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use this as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
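If you don't want to hard-code GL_BGRA, you can ask the implementation for its preferred read-back format. As far as I know these queries are core since OpenGL 4.1 (and available in OpenGL ES); treat this as an optional refinement:

//Query the implementation's preferred read-back format/type pair.
GLint read_format = GL_BGRA, read_type = GL_UNSIGNED_BYTE;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &read_format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &read_type);
//Use the queried pair when your code can consume it:
glReadPixels(0, 0, width, height, (GLenum)read_format, (GLenum)read_type, 0);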

When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; you should just use GL_FRAMEBUFFER in that case.

OpenGL offscreen render

I just had a look into the source code I wrote for Windows. As it was a study for production code (and hence uses other parts of our production code), I cannot provide it as is. What I present here is a stripped version which should show how it works:

// standard C/C++ header:
#include <iostream>

// Windows header:
#include <Windows.h>

using namespace std;

int main(int argc, char **argv)
{
  if (argc < 3) {
    cerr << "USAGE: " << argv[0]
      << " FILE [FILES...] IMG_FILE" << endl;
    return -1;
  }
  // Import Scene Graph
  // excluded: initialize importers
  // excluded: import 3d files
#ifdef _WIN32
  // Window Setup
  // set window properties
  enum { Width = 1024, Height = 768 };
  WNDCLASSEX wndClass; memset(&wndClass, 0, sizeof wndClass);
  wndClass.cbSize = sizeof(WNDCLASSEX);
  wndClass.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC | CS_DBLCLKS;
  wndClass.lpfnWndProc = &DefWindowProc;
  wndClass.cbClsExtra = 0;
  wndClass.cbWndExtra = 0;
  wndClass.hInstance = 0;
  wndClass.hIcon = 0;
  wndClass.hCursor = LoadCursor(0, IDC_ARROW);
  wndClass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
  wndClass.lpszMenuName = 0;
  wndClass.lpszClassName = "WndClass";
  wndClass.hIconSm = 0;
  RegisterClassEx(&wndClass);
  // style the window and remove the caption bar (WS_POPUP)
  DWORD style = WS_CLIPSIBLINGS | WS_CLIPCHILDREN | WS_POPUP;
  // Create the window. Position and size it.
  HWND hwnd = CreateWindowEx(0,
    "WndClass",
    "",
    style,
    CW_USEDEFAULT, CW_USEDEFAULT, Width, Height,
    0, 0, 0, 0);
  HDC hdc = GetDC(hwnd);
  // Windows OpenGL Setup
  PIXELFORMATDESCRIPTOR pfd; memset(&pfd, 0, sizeof pfd);
  pfd.nSize = sizeof(pfd);
  pfd.nVersion = 1;
  pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
  pfd.iPixelType = PFD_TYPE_RGBA;
  pfd.cColorBits = 32;
  pfd.cDepthBits = 16;
  pfd.cStencilBits = 8;
  pfd.iLayerType = PFD_MAIN_PLANE;
  // get the best available match of pixel format for the device context
  int iPixelFormat = ChoosePixelFormat(hdc, &pfd);
  // make that the pixel format of the device context
  SetPixelFormat(hdc, iPixelFormat, &pfd);
  // create the context
  HGLRC hGLRC = wglCreateContext(hdc);
  wglMakeCurrent(hdc, hGLRC);
#endif // _WIN32
  // OpenGL Rendering Setup
  /* excluded: init our private OpenGL binding as
   * the Microsoft API for OpenGL is stuck <= OpenGL 2.0
   */
  // create Render Buffer Object (RBO) for colors
  GLuint rboColor = 0;
  glGenRenderbuffers(1, &rboColor);
  glBindRenderbuffer(GL_RENDERBUFFER, rboColor);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, Width, Height);
  glBindRenderbuffer(GL_RENDERBUFFER, 0);
  // create Render Buffer Object (RBO) for depth
  GLuint rboDepth = 0;
  glGenRenderbuffers(1, &rboDepth);
  glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, Width, Height);
  glBindRenderbuffer(GL_RENDERBUFFER, 0);
  // create Frame Buffer Object (FBO)
  GLuint fbo = 0;
  glGenFramebuffers(1, &fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  // attach RBOs to FBO
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_RENDERBUFFER, rboColor);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
    GL_RENDERBUFFER, rboDepth);
  // GL Rendering Setup
  // excluded: prepare our GL renderer
  glViewport(0, 0, Width, Height);
  glClearColor(0.525f, 0.733f, 0.851f, 1.0f);
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
  /* compute projection matrix from
   * - field of view (property fov)
   * - aspect ratio of view
   * - near/far clip distance (properties dNear and dFar).
   */
  const DegreeD fov(30.0); // DegreeD is a helper type from our (excluded) code
  const double dNear = 0.1, dFar = 100.0;
  const double ar = (double)Width / Height;
  const double d = ::tan(fov / 2.0) * 2.0 * dNear;
  // excluded: construct a projection matrix for perspective view
  // excluded: determine bounding sphere of 3D scene
  // excluded: compute camera and view matrix from the bounding sphere of scene
  // excluded: OpenGL rendering of 3d scene
  // read image from render buffer
  // excluded: prepare image object to store read-back
  //Image::Object img(4, Image::BottomToTop);
  //img.set(Width, Height, Image::RGB24);
  //const size_t bytesPerLine = (3 * Width * 4 + 3) / 4;
  //glReadPixels(0, 0, Width, Height, GL_RGB, GL_UNSIGNED_BYTE, img.getData());
  // store image
  const string filePath = argv[argc - 1];
  // excluded: export image in a supported image file format
  // clean-up
  // excluded: clean-up of 3D scene (incl. OpenGL rendering add-ons)
  glDeleteFramebuffers(1, &fbo);
  glDeleteRenderbuffers(1, &rboColor);
  glDeleteRenderbuffers(1, &rboDepth);
#ifdef _WIN32
  wglMakeCurrent(NULL, NULL);
  wglDeleteContext(hGLRC);
  ReleaseDC(hwnd, hdc);
  DestroyWindow(hwnd);
#endif // _WIN32
  // done
  return 0;
}

I didn't check whether this compiles as-is above. It is stripped out of code which compiles and runs on Windows 10 on my side.


A Note about OpenGL and Windows:

I did the GL binding myself because the Microsoft Windows OpenGL API does not support OpenGL 3.0 or higher. (I could've used a library like glfw instead.) This means I have to assign function addresses to function pointers (with the correct function prototypes) so that I can call OpenGL functions properly using C function calls.

The availability of the functions is guaranteed if I have appropriate hardware and the appropriate drivers installed. (There are ways to check whether the driver provides certain functions.)

If such a bound function call fails (e.g. with a segmentation fault), possible reasons could be:

  1. The signature of the called function is wrong. (I used headers downloaded from khronos.org to guarantee correct prototypes. Hopefully, the driver provider did as well.)

  2. The function does not exist in the driver. (I use functions which are part of the OpenGL standard supported by the installed driver. The driver supports OpenGL 4.x, but I need only OpenGL 3.x, at least until now.)

  3. The function pointers have to be initialized before I use them. (I have written an initialization which is not exposed in the code. This is where I placed the comment /* excluded: init our private OpenGL binding as the Microsoft API for OpenGL is stuck <= OpenGL 2.0 */.)

To illustrate this, some code examples:

In my OpenGL init function, I do:

glGenFramebuffers
    = (PFNGLGENFRAMEBUFFERSPROC)wglGetProcAddress(
        "glGenFramebuffers");

and the header provides:

extern PFNGLGENFRAMEBUFFERSPROC glGenFramebuffers;

PFNGLGENFRAMEBUFFERSPROC is provided by the glext.h I downloaded from khronos.org:

typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSPROC) (GLsizei n, GLuint *framebuffers);

wglGetProcAddress() is provided by the Microsoft Windows API.
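
Since wglGetProcAddress can fail, it is worth checking its result before calling through the pointer. The Khronos wiki notes that some implementations signal failure not only with NULL but also with the values 1, 2, 3, or -1. A defensive loader sketch (getGLProc is a hypothetical helper of mine):

static void* getGLProc(const char* name)
{
    void* p = (void*)wglGetProcAddress(name);
    // Some implementations signal failure with small non-zero values as well.
    if (p == 0 || p == (void*)0x1 || p == (void*)0x2 ||
        p == (void*)0x3 || p == (void*)-1)
        return nullptr;
    return p;
}

// Usage, assuming a current OpenGL context:
// glGenFramebuffers = (PFNGLGENFRAMEBUFFERSPROC)getGLProc("glGenFramebuffers");
// if (!glGenFramebuffers) { /* report the missing function */ }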

A Note about OpenGL and Linux:

If the hardware and the installed driver support the desired OpenGL standard, functions can be used as usual by

  1. including the necessary headers (e.g. #include <GL/gl.h>)

  2. linking the necessary libraries (e.g. -lGL -lGLU).


derhass commented:

There is absolutely no guarantee that GL 3.x functions are exported by whatever libGL.so one is using, and even if they are exported, there is no guarantee that the function is supported (i.e. Mesa uses the same frontend lib for all driver backends, but each driver may only support a subset of the functions). You have to use the extension mechanism on both platforms.
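
To make derhass' point concrete: on Linux the analogous mechanism is glXGetProcAddress (or glXGetProcAddressARB). A minimal sketch, assuming GLX is the window-system binding in use:

#include <GL/glx.h>   // declares glXGetProcAddress
#include <GL/glext.h> // declares PFNGLGENFRAMEBUFFERSPROC

PFNGLGENFRAMEBUFFERSPROC glGenFramebuffers_ptr =
    (PFNGLGENFRAMEBUFFERSPROC)glXGetProcAddress(
        (const GLubyte*)"glGenFramebuffers");
// Caution: glXGetProcAddress may return a non-NULL pointer even for functions
// the driver does not actually support, so also check the GL version or
// extension string before calling through it.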

I'm not able to provide a simple recommendation for how to handle this, nor do I have valuable practical experience with it. So, I want to provide at least these links (from khronos.org) which I found via a Google search:

  • Load OpenGL Functions

  • OpenGL Context

  • OpenGL Loading Library.

Python OpenGL How to render off-screen correctly

First of all, the framebuffer needs a color buffer:

def setupSelfDefineFBO(program, image):
    fbWidth, fbHeight = image.width, image.height

    # Setup framebuffer
    framebuffer = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer)

    # Setup colorbuffer
    colorbuffer = glGenRenderbuffers(1)
    glBindRenderbuffer(GL_RENDERBUFFER, colorbuffer)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, fbWidth, fbHeight)
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorbuffer)

    # Setup depthbuffer
    depthbuffer = glGenRenderbuffers(1)
    glBindRenderbuffer(GL_RENDERBUFFER, depthbuffer)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, fbWidth, fbHeight)
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbuffer)

    # check status
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER)
    if status != GL_FRAMEBUFFER_COMPLETE:
        print("Error in framebuffer activation")

    # [...]

The size of the viewport has to be adapted to the size of the framebuffer (glViewport):

def setupSelfDefineFBO(program, image):
    # [...]

    glViewport(0, 0, fbWidth, fbHeight)

    # [...]

The binding point between the texture object and the texture sampler uniform is the texture unit. You have to assign the texture unit to the texture sampler uniform rather than the texture object (name) id (0 for GL_TEXTURE0, 1 for GL_TEXTURE1, ...):

def setupSelfDefineFBO(program, image):
    # [...]

    glActiveTexture(GL_TEXTURE0)
    glBindTexture(GL_TEXTURE_2D, aTexture)
    loc = glGetUniformLocation(program, "Texture")
    glUniform1i(loc, 0)  # <---- texture unit 0, not the texture name

    # [...]

To run the shader program, you have to draw some geometry. The application sets up the vertex coordinates and attributes, but the geometry is never drawn. Draw a screen-space quad with a GL_TRIANGLE_STRIP primitive:

from PIL import Image  # Image below is PIL's Image module

def setupSelfDefineFBO(program, image):
    # [...]

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
    saveImageFromFBO(fbWidth, fbHeight)

    glBindFramebuffer(GL_FRAMEBUFFER, GL_NONE)
    glViewport(0, 0, 512, 512)

    # [...]

def saveImageFromFBO(width, height):
    glReadBuffer(GL_COLOR_ATTACHMENT0)
    glPixelStorei(GL_PACK_ALIGNMENT, 1)
    data = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    image = Image.new("RGB", (width, height), (0, 0, 0))
    image.frombytes(data)
    image = image.transpose(Image.FLIP_TOP_BOTTOM)  # OpenGL rows are bottom-to-top
    image.save('9_result.jpg')

Offscreen rendering with OpenGL version 1.2

The legacy (pre-FBO) mechanism for off-screen rendering in OpenGL is pbuffers. On Windows, for example, they are defined by the WGL_ARB_pbuffer extension. They are also available on other platforms, like Mac, Android, etc.

Aside from the ancientness of pbuffers, there is a fundamental difference to FBOs: pbuffer support is an extension of the window system interface of OpenGL, while FBO support is a feature of OpenGL itself.

One corollary of this is that rendering to a pbuffer happens in a separate context. Where for an FBO you simply create a new OpenGL framebuffer object and render to it using regular OpenGL calls within the same context, in the pbuffer case you set up a context to render to an off-screen surface instead of a window surface. The off-screen surface is then the primary framebuffer of this context, and you do all rendering to the pbuffer in this separate context.

That being said, please think twice before using pbuffers. FBO support, at least in an extension form, has been around for a long time. I have a hard time imagining writing new software that needs to run on hardware old enough to not support FBOs.

FBOs are superior to pbuffers in almost every possible way. The main benefits are that they can be used without platform-specific APIs, and that they do not require additional contexts.

The only case supported by pbuffers that is not easily supported by FBOs is pure off-screen rendering where you do not want to create a window. For example, if you want to render images on a server that may not have a display, pbuffers are actually a useful mechanism.
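For reference, the WGL call sequence for a pbuffer looks roughly like this. It is only a sketch: it assumes the WGL_ARB_pixel_format and WGL_ARB_pbuffer entry points have already been loaded via wglGetProcAddress, and all error handling is omitted:

// Rough WGL pbuffer sketch (entry points assumed loaded, error checks omitted).
const int fmt_attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int pixel_format; UINT num_formats;
wglChoosePixelFormatARB(hdc, fmt_attribs, NULL, 1, &pixel_format, &num_formats);

const int pb_attribs[] = { 0 };
HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, pixel_format, width, height, pb_attribs);
HDC   pbuffer_dc = wglGetPbufferDCARB(pbuffer);
HGLRC pbuffer_rc = wglCreateContext(pbuffer_dc);
wglMakeCurrent(pbuffer_dc, pbuffer_rc); // all rendering now targets the pbuffer

// ... render, glReadPixels ...

wglMakeCurrent(NULL, NULL);
wglDeleteContext(pbuffer_rc);
wglReleasePbufferDCARB(pbuffer, pbuffer_dc);
wglDestroyPbufferARB(pbuffer);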

Offscreen rendering to Framebuffer

glReadPixels reads data from the framebuffer, thus the target for the framebuffer binding has to be GL_READ_FRAMEBUFFER, not GL_DRAW_FRAMEBUFFER:

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo_off);

Draw(true);

GLubyte pixel_color[4];

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_off);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, 800 - y - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel_color);

Or, equivalently, bind to GL_FRAMEBUFFER, which sets both the draw and the read targets:

glBindFramebuffer(GL_FRAMEBUFFER, fbo_off);

Draw(true);

GLubyte pixel_color[4];

glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, 800 - y - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel_color);

I recommend enabling Debug Output to find OpenGL errors, e.g.:

#include <iostream>

void GLAPIENTRY DebugCallback(
    unsigned int source,
    unsigned int type,
    unsigned int id,
    unsigned int severity,
    int length,
    const char *message,
    const void *userParam)
{
    std::cout << message << std::endl;
}

void init_opengl_debug() {
    glDebugMessageCallback(&DebugCallback, nullptr);
    glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, nullptr, GL_TRUE);
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
}

OpenGL Framebuffer Offscreen Rendering

This might not be your only problem, but from the part you show, the texture coordinates of two vertices are swapped. The 3rd of the 4 vertices is the top-right corner, with coordinates (x2, y2), so it should have texture coordinates (1.0f, 1.0f):

glTexCoord2f(0.0f, 0.0f); glVertex2f(x1, y1);
glTexCoord2f(1.0f, 0.0f); glVertex2f(x2, y1);
glTexCoord2f(1.0f, 1.0f); glVertex2f(x2, y2);
glTexCoord2f(0.0f, 1.0f); glVertex2f(x1, y2);

You're also saying that you call glOrtho() after glMatrixMode(GL_MODELVIEW). glOrtho() sets a projection matrix, so it should normally be called after glMatrixMode(GL_PROJECTION).
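
For clarity, a minimal sketch of the corrected matrix setup (the ortho bounds here are placeholders, not values from the question):

// Projection setup belongs on the projection matrix stack.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, width, 0.0, height, -1.0, 1.0); // placeholder bounds

// Switch back for object/camera transforms.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();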


