How to Create an OpenGL Context via DRM (Linux)


Yes, you need the KMS stack. Here is a simple example under Linux; it uses OpenGL ES, but the steps to make it work against the desktop OpenGL API are simple.

In the EGL attributes, set EGL_RENDERABLE_TYPE to EGL_OPENGL_BIT,

and tell EGL which API to bind to:

eglBindAPI(EGL_OPENGL_API);
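
Put together, the OpenGL-specific parts look something like this (a minimal sketch; dpy and config stand for the display and config you already obtained, and the rest of the EGL setup stays exactly as in the ES version):

/* In the config attribute list, request a desktop OpenGL renderable: */
static const EGLint attribs[] = {
    /* ... color/depth sizes etc. as before ... */
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, /* instead of EGL_OPENGL_ES2_BIT */
    EGL_NONE
};

eglBindAPI(EGL_OPENGL_API); /* call this before eglCreateContext */
EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);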

Be sure to have recent kernel drivers and the mesa-dev, libdrm-dev and libgbm-dev packages. These pieces of code are portable to Android; it's just not so easy to silence the default Android graphics stack.

Note: I had trouble with the 32-bit version, but I still don't know why. Those libraries are actively developed, so I'm not sure it wasn't a bug.

Note 2: depending on your GLSL version, the float precision qualifier may or may not be supported:

precision mediump float;
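
One portable pattern (a sketch, assuming your host program is C and passes the shader source as a string) is to guard the qualifier so the same source compiles as both desktop GLSL and GLSL ES:

/* The GL_ES macro is predefined by GLSL ES compilers, so the
   precision statement is only seen where it is actually valid. */
static const char *fragment_src =
    "#ifdef GL_ES\n"
    "precision mediump float;\n"
    "#endif\n"
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "}\n";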

Note 3: if you get a permission failure on /dev/dri/card0, grant access with:

sudo chmod 666 /dev/dri/card0

or add the current user to the video group with

sudo adduser $USER video

You may also setgid your executable with the group set to video (maybe the best option).
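
Whichever way you grant access, it can help to fail early with a readable message instead of a cryptic EGL error. Here is a hypothetical helper (not part of libdrm or any other library mentioned here) that probes the node up front:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: open the DRM node read/write and point out
   EACCES explicitly, since that is the usual permission failure. */
static int open_drm_node(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        fprintf(stderr, "open(%s): %s%s\n", path, strerror(errno),
                errno == EACCES ? " (fix permissions or join the video group)" : "");
    return fd;
}

int main(void)
{
    return open_drm_node("/dev/dri/card0") < 0 ? 1 : 0;
}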

OpenGL Without GUI


Is it possible to compile a program that uses the OpenGL libraries or directly uses the GPU driver to draw to the screen?

Yes. With the EGL API this has been formalized; it works best with NVidia GPUs and their proprietary drivers. NVidia describes it on their dev blog here: https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/

Essentially the steps are:

Create an OpenGL context for a PBuffer:

#include <EGL/egl.h>

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static const int pbufferWidth = 9;
static const int pbufferHeight = 9;

static const EGLint pbufferAttribs[] = {
    EGL_WIDTH, pbufferWidth,
    EGL_HEIGHT, pbufferHeight,
    EGL_NONE,
};

int main(int argc, char *argv[])
{
    // 1. Initialize EGL
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    EGLint major, minor;

    eglInitialize(eglDpy, &major, &minor);

    // 2. Select an appropriate configuration
    EGLint numConfigs;
    EGLConfig eglCfg;

    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    // 3. Create a surface
    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg,
                                                 pbufferAttribs);

    // 4. Bind the API
    eglBindAPI(EGL_OPENGL_API);

    // 5. Create a context and make it current
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT,
                                         NULL);

    eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);

    // from now on use your OpenGL context

    // 6. Terminate EGL when finished
    eglTerminate(eglDpy);
    return 0;
}

and then go about the rest as per usual. Or you can even ditch the PBuffer completely and just use OpenGL managed resources, i.e. render to framebuffer objects. To that end you can omit creating the surface and just make the context current without one.

Here's an example of using EGL without a display and without an EGL surface, with an OpenGL managed framebuffer.

#include <GL/glew.h>
#include <EGL/egl.h>

#include <unistd.h>
#include <stdlib.h>
#include <assert.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#include <math.h>
#include <stdio.h>

namespace render
{
    int width, height;
    float aspect;

    void init();
    void display();

    int const fbo_width = 512;
    int const fbo_height = 512;

    GLuint fb, color, depth;

    void *dumpbuf;
    int dumpbuf_fd;
}

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

int main(int argc, char *argv[])
{
    // 1. Initialize EGL
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    // 2. Select an appropriate configuration
    EGLint numConfigs;
    EGLConfig eglCfg;

    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    // 3. Bind the API
    eglBindAPI(EGL_OPENGL_API);

    // 4. Create a context and make it current, without any surface
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT,
                                         NULL);

    eglMakeCurrent(eglDpy, EGL_NO_SURFACE, EGL_NO_SURFACE, eglCtx);

    glewInit();
    // from now on use your OpenGL context
    render::init();
    render::display();

    // 5. Terminate EGL when finished
    eglTerminate(eglDpy);
    return 0;
}

void CHECK_FRAMEBUFFER_STATUS()
{
    GLenum status;
    status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
    switch(status) {
    case GL_FRAMEBUFFER_COMPLETE:
        break;

    case GL_FRAMEBUFFER_UNSUPPORTED:
        /* choose different formats */
        break;

    default:
        /* programming error; will fail on all hardware */
        throw "Framebuffer Error";
    }
}

namespace render
{
    float const light_dir[] = {1, 1, 1, 0};
    float const light_color[] = {1, 0.95, 0.9, 1};

    void init()
    {
        glGenFramebuffers(1, &fb);
        glGenTextures(1, &color);
        glGenRenderbuffers(1, &depth);

        glBindFramebuffer(GL_FRAMEBUFFER, fb);

        // color attachment: a plain RGB8 texture
        glBindTexture(GL_TEXTURE_2D, color);
        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_RGB8,
                     fbo_width, fbo_height,
                     0,
                     GL_RGBA,
                     GL_UNSIGNED_BYTE,
                     NULL);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);

        // depth attachment: a 24-bit renderbuffer
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, fbo_width, fbo_height);
        glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);

        GLint red_bits, green_bits, blue_bits, alpha_bits;

        glGetIntegerv(GL_RED_BITS, &red_bits);
        glGetIntegerv(GL_GREEN_BITS, &green_bits);
        glGetIntegerv(GL_BLUE_BITS, &blue_bits);
        glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);

        fprintf(stderr, "FBO format R%dG%dB%dA%d\n",
                (int)red_bits,
                (int)green_bits,
                (int)blue_bits,
                (int)alpha_bits);

        CHECK_FRAMEBUFFER_STATUS();

        // dump file for the rendered image
        dumpbuf_fd = open("/tmp/fbodump.rgb", O_CREAT|O_SYNC|O_RDWR, S_IRUSR|S_IWUSR);
        assert(-1 != dumpbuf_fd);
        dumpbuf = malloc(fbo_width*fbo_height*3);
        assert(dumpbuf);
    }

    // named display() to match the declaration above and the call in main()
    void display()
    {
        glBindTexture(GL_TEXTURE_2D, 0);
        glEnable(GL_TEXTURE_2D);
        glBindFramebuffer(GL_FRAMEBUFFER, fb);

        glViewport(0, 0, fbo_width, fbo_height);

        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-1, 1, -1, 1, -1, 1);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // draw a single color-ramp triangle
        glBegin(GL_TRIANGLES);
        glColor3f(1, 0, 0);
        glVertex3f(1, 0, 0);

        glColor3f(0, 1, 0);
        glVertex3f(0, 1, 0);

        glColor3f(0, 0, 1);
        glVertex3f(0, 0, 1);
        glEnd();

        // read back the FBO contents and dump them to the file
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        glReadPixels(0, 0, fbo_width, fbo_height, GL_RGB, GL_UNSIGNED_BYTE, dumpbuf);
        lseek(dumpbuf_fd, 0, SEEK_SET); // offset first, then whence
        write(dumpbuf_fd, dumpbuf, fbo_width*fbo_height*3);
    }
}

How can I use OpenGL to render to memory without requiring any windowing system library?

Depending on the operating system you're using and the availability of drivers, you can do pure, headless, GPU accelerated OpenGL rendering using EGL. Nvidia has a nice developer blog about how to do it at https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/

The gist of it is to create an EGL context on a display device without associating it with an output. Source (copied directly from the linked article):

#include <EGL/egl.h>

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static const int pbufferWidth = …;
static const int pbufferHeight = …;

static const EGLint pbattr[] = {
    EGL_WIDTH, pbufferWidth,
    EGL_HEIGHT, pbufferHeight,
    EGL_NONE,
};

int main(int argc, char *argv[])
{
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);
    EGLint numConfigs;
    EGLConfig eglCfg;

    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);
    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg, pbattr);
    eglBindAPI(EGL_OPENGL_API);
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);

    do_opengl_stuff();

    eglTerminate(eglDpy);
    return 0;
}

If you don't have access to EGL, but your OS and your GPU are supported by Linux DRM/DRI, you can go the KMS/GBM route and work with framebuffer objects obtained through the extension mechanism (well, with Mesa you can just use them as if they weren't extensions, even with OpenGL-1.x). The kmscube demo has a "surfaceless" mode, which demonstrates doing exactly that.
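
For reference, a heavily abbreviated sketch of that route, modeled loosely on what kmscube does. It assumes Mesa with the EGL_MESA_platform_gbm, EGL_KHR_no_config_context and EGL_KHR_surfaceless_context extensions, and a render node at /dev/dri/renderD128 (all of these are assumptions to verify on your system):

#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void)
{
    /* Open a DRM render node and wrap it in a GBM device. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    struct gbm_device *gbm = gbm_create_device(fd);

    /* Get an EGL display for the GBM device (EGL 1.5 entry point;
       older stacks use eglGetPlatformDisplayEXT instead). */
    EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_MESA, gbm, NULL);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_API);

    /* With EGL_KHR_no_config_context and EGL_KHR_surfaceless_context we
       need neither an EGLConfig nor an EGLSurface; all rendering goes
       into our own framebuffer objects. */
    EGLContext ctx = eglCreateContext(dpy, EGL_NO_CONFIG_KHR, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    /* ... create FBOs and render, as in the example further up ... */

    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    return 0;
}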

In short: EGL is the "clean" way to do it. KMS is the "hacky" way to do it.

Another option, probably completely outside of your scope right now, would be to use Vulkan, where, strictly speaking, headless rendering is the "default", and the methods for getting stuff on-screen are actual extensions to the specification (see the sketch after this list):

    VK_KHR_wayland_surface
    VK_KHR_xcb_surface
    VK_KHR_xlib_surface
    VK_KHR_win32_surface
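
To illustrate the point, a minimal sketch: creating a Vulkan instance with no WSI (window system integration) extensions at all is perfectly valid, and compute or offscreen rendering can proceed from there (error handling omitted):

#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
        /* No VK_KHR_*_surface extensions requested: the instance is
           headless; getting anything on screen would require them. */
    };
    VkInstance instance;
    vkCreateInstance(&info, NULL, &instance);

    /* ... enumerate physical devices, render offscreen ... */

    vkDestroyInstance(instance, NULL);
    return 0;
}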

OpenGL without X.org in Linux

Update (Sep. 17, 2017):

NVIDIA recently published an article detailing how to use OpenGL on headless systems, which is a very similar use case to the one the question describes.

In summary:

  • Link to libOpenGL.so and libEGL.so instead of libGL.so. (Your linker options should therefore be -lOpenGL -lEGL.)
  • Call eglGetDisplay, then eglInitialize to initialize EGL.
  • Call eglChooseConfig with the config attribute EGL_SURFACE_TYPE set to EGL_PBUFFER_BIT.
  • Call eglCreatePbufferSurface, then eglBindAPI(EGL_OPENGL_API);, then eglCreateContext and eglMakeCurrent.

From that point on, do your OpenGL rendering as usual, and you can blit your pixel buffer surface wherever you like. This supplementary article from NVIDIA includes a basic example and an example for multiple GPUs. The PBuffer surface can also be replaced with a window surface or a pixmap surface, according to the application's needs.
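
The multi-GPU part of those articles relies on the EGL_EXT_device_enumeration and EGL_EXT_platform_device extensions. Roughly, it looks like this (a sketch, with error handling omitted and availability of the extensions assumed):

#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void)
{
    /* The extension entry points must be fetched at run time. */
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    /* Enumerate the EGL devices (roughly: the GPUs) in the system. */
    EGLDeviceEXT devices[8];
    EGLint numDevices;
    eglQueryDevicesEXT(8, devices, &numDevices);

    /* Open a display on the first device; a multi-GPU application
       would pick a different index per rendering process. */
    EGLDisplay dpy =
        eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);

    EGLint major, minor;
    eglInitialize(dpy, &major, &minor);
    /* ... continue with eglChooseConfig etc. as above ... */
    eglTerminate(dpy);
    return 0;
}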

I regret not doing more research on this on my previous edit, but oh well. Better answers are better answers.


Since my answer in 2010, there have been a number of major shakeups in the Linux graphics space. So, an updated answer:

Today, nouveau and the other DRI drivers have matured to the point where OpenGL support is stable and performs reasonably well in general. With the introduction of the EGL API in Mesa, it's now possible to write OpenGL and OpenGL ES applications even on Linux desktops.

You can write your application to target EGL, and it can be run without the presence of a window manager or even a compositor. To do so, you would call eglGetDisplay, eglInitialize, and ultimately eglCreateContext and eglMakeCurrent, instead of the usual GLX calls that do the same.

I do not know the specific code path for working without a display server, but EGL accepts both X11 displays and Wayland displays, and I do know it is possible for EGL to operate without one. You can create GL ES 1.1, ES 2.0, ES 3.0 (if you have Mesa 9.1 or later), and OpenGL 3.1 (Mesa 9.0 or later) contexts. Mesa has not yet (as of Sep. 2013) implemented OpenGL 3.2 Core.

Notably, on the Raspberry Pi and on Android, EGL and GL ES 2.0 (1.1 on Android < 3.0) are supported by default. On the Raspberry Pi, I don't think Wayland yet works (as of Sep. 2013), but you do get EGL without a display server using the included binary drivers. Your EGL code should also be portable (with minimal modification) to iOS, if that interests you.


Below is the outdated, previously accepted post:

I'd like to open an OpenGL context without X in linux. Is there any way at all to do it?

I believe Mesa provides a framebuffer target. If it provides any hardware acceleration at all, it will only be with hardware for which there are open source drivers that have been adapted to support such a use.

Gallium3D is also immature, and support for this isn't even on the roadmap, as far as I know.

I'd like to get a solution that works with nvidia cards.

There isn't one. Period.

NVIDIA only provides an X driver, and the Nouveau project is still immature; it doesn't support the kind of use you're looking for, as its developers are currently focused only on the X11 driver.

What is the relationship between EGL and OpenGL?

There is no relationship between OpenGL and EGL. EGL generally does not run on desktops, and there is no ability to create a desktop OpenGL context through EGL.

OpenGL contexts are instead created and managed by platform-specific APIs. On Windows, the WGL API is used. On X11-based platforms, GLX is used. And so forth.
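
For contrast with the EGL examples above, the GLX way of creating a (legacy) context on X11 looks roughly like this (a sketch; error handling omitted):

#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *xdpy = XOpenDisplay(NULL); /* connect to the X server */

    /* Pick a double-buffered RGBA visual and create a context for it. */
    static int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(xdpy, DefaultScreen(xdpy), attribs);
    GLXContext ctx = glXCreateContext(xdpy, vi, NULL, True);

    /* Unlike EGL's surfaceless paths, GLX needs an X drawable
       to make the context current. */
    Window win = XCreateSimpleWindow(xdpy, DefaultRootWindow(xdpy),
                                     0, 0, 640, 480, 0, 0, 0);
    glXMakeCurrent(xdpy, win, ctx);

    /* ... OpenGL calls ... */

    glXMakeCurrent(xdpy, None, NULL);
    glXDestroyContext(xdpy, ctx);
    XCloseDisplay(xdpy);
    return 0;
}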

There was some noise last year from Khronos about creating a version of EGL that could work on the desktop and create OpenGL contexts, but thus far, nothing has come of it.


