Displaying SVG in OpenGL Without Intermediate Raster

Displaying SVG in OpenGL without intermediate raster

Qt can do this.

QSvgRenderer can take an SVG and paint it onto a QGLWidget.
It's possible you'll need to fiddle around with paintEvent() a bit if you want to draw anything else on the QGLWidget besides the SVG.
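
A minimal sketch of that approach (hypothetical class and file names; assumes Qt with the QtSvg and QtOpenGL modules):

#include <QGLWidget>
#include <QPainter>
#include <QSvgRenderer>

// Hypothetical widget: paints an SVG straight onto a QGLWidget with QPainter,
// so the SVG ends up as GL primitives rather than an intermediate raster.
class SvgGLWidget : public QGLWidget
{
public:
    explicit SvgGLWidget(const QString &svgFile, QWidget *parent = 0)
        : QGLWidget(parent), m_renderer(svgFile) {}

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);                  // QPainter on a QGLWidget issues GL calls
        painter.setRenderHint(QPainter::Antialiasing);
        // ... any other drawing you want under/over the SVG goes here ...
        m_renderer.render(&painter, rect());     // paint the SVG into the widget
    }

private:
    QSvgRenderer m_renderer;
};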

Rendering SVG with OpenGL (and OpenGL ES)

From http://shivavg.svn.sourceforge.net/viewvc/shivavg/trunk/src/shPipeline.c?revision=14&view=markup :

static void shDrawVertices(SHPath *p, GLenum mode)
{
  int start = 0;
  int size = 0;

  /* We separate vertex arrays by contours to properly
     handle the fill modes */
  glEnableClientState(GL_VERTEX_ARRAY);
  glVertexPointer(2, GL_FLOAT, sizeof(SHVertex), p->vertices.items);

  while (start < p->vertices.size) {
    size = p->vertices.items[start].flags;
    glDrawArrays(mode, start, size);
    start += size;
  }

  glDisableClientState(GL_VERTEX_ARRAY);
}

So it draws with vertex arrays rather than an intermediate raster. I'd suggest writing your own SVG parser (or using a pre-made one) and forwarding the calls to ShivaVG.
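
For example, once your parser has reduced an SVG path to segment commands and coordinates, forwarding them to ShivaVG is just standard OpenVG calls. A sketch (the triangle data is made up, and the header path may differ in your ShivaVG checkout):

#include <VG/openvg.h>   // OpenVG header as shipped with ShivaVG (path may differ)

// Hypothetical parser output: a closed triangle path.
static const VGubyte segs[] = {
    VG_MOVE_TO_ABS, VG_LINE_TO_ABS, VG_LINE_TO_ABS, VG_CLOSE_PATH
};
static const VGfloat coords[] = {
    10.0f, 10.0f,     // move to
    90.0f, 10.0f,     // line to
    50.0f, 80.0f      // line to
};

void drawParsedPath()
{
    VGPath path = vgCreatePath(VG_PATH_FORMAT_STANDARD, VG_PATH_DATATYPE_F,
                               1.0f, 0.0f, 0, 0, VG_PATH_CAPABILITY_ALL);
    vgAppendPathData(path, sizeof(segs), segs, coords);

    VGPaint fill = vgCreatePaint();
    const VGfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
    vgSetParameterfv(fill, VG_PAINT_COLOR, 4, red);
    vgSetPaint(fill, VG_FILL_PATH);

    vgDrawPath(path, VG_FILL_PATH);   // this is what ends up in shDrawVertices() above

    vgDestroyPaint(fill);
    vgDestroyPath(path);
}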

You still have the problem that ShivaVG is in C (not Java) and creates an OpenGL context (not OpenGL ES, if I read the code correctly). So even if you compile it using Android's NDK, you'll have to modify the code (for instance, I've seen a few glVertex3f calls around, but they don't seem to be strictly needed... hope for the best). The other option, of course, is to port the code from C to Java. Maybe not as painful as you might imagine.

Good luck!

Rendering Vector Graphics in OpenGL?

Let me expand on Greg's answer.

It's true that Qt has an SVG renderer class, QSvgRenderer. Also, any drawing that you do in Qt can be done on any "QPaintDevice"; here, we're interested in the following paint devices:

  • A Qt widget;
  • In particular, a GL-based Qt widget (QGLWidget);
  • A Qt image

So, if you decide to use Qt, your options are:

  1. Stop using your current method of setting up the window (and GL context), and start using QGLWidget for all your rendering, including the SVG rendering. This might be a pretty small change, depending on your needs. QGLWidget isn't particularly limiting in its capabilities.
  2. Use QSvgRenderer to render to a QImage, then put the data from that QImage into a GL texture (as you normally would), and render it any way you want (e.g. on a textured quad drawn with GL_QUADS). This might have worse performance than the other method, but it requires the least change to your code; a sketch of this approach follows the list.
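
A rough sketch of option 2 (the helper function name is made up; QGLWidget::convertToGLFormat handles row order and byte swizzling for you):

#include <QGLWidget>
#include <QImage>
#include <QPainter>
#include <QSvgRenderer>

// Hypothetical helper: rasterize an SVG into a QImage, then upload it as a GL texture.
GLuint svgToTexture(const QString &svgFile, int w, int h)
{
    QImage image(w, h, QImage::Format_ARGB32);
    image.fill(0);                                   // transparent black background

    QSvgRenderer renderer(svgFile);
    QPainter painter(&image);
    painter.setRenderHint(QPainter::Antialiasing);   // antialiasing happens in software here
    renderer.render(&painter);
    painter.end();

    QImage glImage = QGLWidget::convertToGLFormat(image);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glImage.width(), glImage.height(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, glImage.bits());
    return tex;   // draw it on a textured quad, scaled however you like
}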

Wondering what QGLWidget does exactly? Well, when you issue Qt rendering commands to a QGLWidget, they're translated to GL calls for you. And this also happens when the rendering commands are issued by the SVG renderer. So in the end, your SVG is going to end up being rendered via a bunch of GL primitives (lines, polygons, etc).

This has a disadvantage. Different videocards implement OpenGL slightly differently, and Qt does not (and can not) account for all those differences. So, for example, if your user has a cheap on-board Intel videocard, then his videocard doesn't support OpenGL antialiasing, and this means your SVG will also look aliased (jaggy), if you render it directly to a QGLWidget. Going through a QImage avoids such problems.

You can use the QImage method when you're zooming in realtime, too. It just depends on how fast you need it to be. You may need careful optimizations such as reusing the same QImage, and enabling clipping for your QPainter.

2D Vector graphic renderer for OpenGL

I would not bother with anything OpenVG, not even with MonkVG, which is probably the most modern, albeit incomplete, implementation. The OpenVG committee folded in 2011, and most if not all implementations are abandonware or, at best, legacy software.

Since 2011, the state of the art is Mark Kilgard's baby, NV_path_rendering, which is currently only a vendor (Nvidia) extension, as you might have guessed already from its name. There is a lot of material on it:

  • https://developer.nvidia.com/nv-path-rendering Nvidia hub, but some material on the landing page is not the most up-to-date
  • http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/opengl/gpupathrender.pdf Siggraph 2012 paper
  • http://on-demand.gputechconf.com/gtc/2014/presentations/S4810-accelerating-vector-graphics-mobile-web.pdf GTC 2014 presentation
  • http://www.opengl.org/registry/specs/NV/path_rendering.txt official extension doc

NV_path_rendering is now used by Google's Skia library behind the scenes, when available. (Nvidia contributed the code in late 2013 and 2014.)

And to answer a more specific point raised in the comments, you can mix path rendering with other OpenGL (3D) stuff, as demoed at:

  • https://www.youtube.com/watch?v=FVYl4o1rgIs
  • https://www.youtube.com/watch?v=yZBXGLlmg2U

You can of course load SVGs and such (https://www.youtube.com/watch?v=bCrohG6PJQE). The extension also supports the PostScript syntax for paths.
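
For example, the extension's "stencil, then cover" idiom for an SVG-syntax path looks roughly like this (a sketch; it assumes the NV_path_rendering entry points are already resolved by your extension loader and that a stencil buffer is available):

#include <cstring>

void drawSvgStarNV()
{
    // Build a path object straight from an SVG path string.
    const char *svg = "M100,180 L40,10 L190,120 L10,120 L160,10 Z";
    GLuint path = glGenPathsNV(1);
    glPathStringNV(path, GL_PATH_FORMAT_SVG_NV,
                   (GLsizei)std::strlen(svg), svg);

    // Step 1: "stencil" -- write fill coverage into the stencil buffer.
    glStencilFillPathNV(path, GL_COUNT_UP_NV, 0x1F);

    // Step 2: "cover" -- shade the covered pixels (here with a flat color).
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_NOTEQUAL, 0, 0x1F);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    glColor3f(0.2f, 0.6f, 1.0f);
    glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);

    glDeletePathsNV(path, 1);
}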

An upstart with even less (or downright no) vendor support or academic glitz is NanoVG (https://github.com/memononen/nanovg), which is actively developed and maintained. Given the number of 2D libraries over OpenGL that have come and gone over time, you're taking a big bet using something not supported by a major vendor, in my humble opinion.

OpenGL Raster Transformation not yielding expected results

After a lengthy chat, a few issues were exposed:

  1. GL_RGB pixels are not well liked by glReadPixels (...) and glDrawPixels (...)

    • Using the default pixel store, GL tries to store each row it reads on a 4-byte boundary; 3-byte RGB pixels make an absolute mess out of this and many other things in OpenGL.

    • You must call glPixelStorei (GL_PACK_ALIGNMENT, 1) before glReadPixels (...) with a GL_RGB format, or it may pad the end of each row in the output with extra bytes to satisfy 4-byte row alignment. Left unchecked, the default alignment will eventually lead to a memory overrun.

    • Likewise, you need to use glPixelStorei (GL_UNPACK_ALIGNMENT, 1) before glDrawPixels (...) so that it does not try to skip bytes to maintain 4-byte alignment while reading rows from your input data.

  2. Your array was declared incorrectly; you were allocating an array of pointers.

    • A pointer to a GLubyte is considerably larger than a GLubyte itself (4-8x as large depending on the compiler / CPU).

    • The proper way to declare your pixel array is GLubyte data [115 * 35 * 3], to store 115 * 35 pixels, each 3 bytes.

  3. The raster position is in object-space coordinates, but the position in glReadPixels (...) is in window-space coordinates.

    • Without an appropriate projection matrix, viewport, etc. the scale between coordinate systems (object and window) will not match, and you cannot accurately say that your triangle is 115 pixels tall and 35 pixels wide (those are its object-space dimensions).
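
Putting those points together, a corrected read-back/draw-back sketch (reusing the 35 x 115 pixel size from the discussion; the positions are placeholders):

// An array of bytes, not of pointers: 35 * 115 pixels, 3 bytes (GL_RGB) each.
GLubyte data[115 * 35 * 3];

// RGB rows are not naturally 4-byte aligned, so relax both alignments to 1 byte.
glPixelStorei(GL_PACK_ALIGNMENT,   1);   // used by glReadPixels
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // used by glDrawPixels

// Read a 35-wide by 115-tall region; x and y here are window-space pixels.
glReadPixels(0, 0, 35, 115, GL_RGB, GL_UNSIGNED_BYTE, data);

// Draw it back at the current raster position, which is set in object space.
glRasterPos2f(0.0f, 0.0f);
glDrawPixels(35, 115, GL_RGB, GL_UNSIGNED_BYTE, data);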

Passing parameters to OpenGL display lists

Well, yes, the bound texture is global state, and if you bind a texture and then call a display list, the texture should still be bound when the list executes.
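
To illustrate: a sketch in legacy fixed-function GL (textureA and textureB are placeholders for textures you have already created):

// Record only the geometry in the display list; no texture binding inside it.
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
glEndList();

// Because the bound texture is global state, the list is effectively
// "parameterized" by whatever texture you bind before calling it.
glBindTexture(GL_TEXTURE_2D, textureA);
glCallList(list);
glBindTexture(GL_TEXTURE_2D, textureB);
glCallList(list);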

But it's better to stop using display lists and use vertex arrays / VBOs instead; I recommend them!

Convert Flash art to OpenGL-ready vector format?

I would recommend converting the SWFs to .svg format, either in Illustrator or with this online tool. I actually used a small Java console app to automate this part for what I had to do recently; it worked pretty well.

Once that's done, you can use Qt to do the rendering bit for you as commented on here.

Video as voxels in OpenGL

Check out this tutorial on setting up a 3D texture.

If you then render slices through the texture array with the appropriate UVW coordinates, you will get what you are after.
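
As a rough illustration (legacy GL; volumeTex is assumed to be a GL_TEXTURE_3D object already filled with the stacked video frames):

// Draw a stack of quads, each sampling a different depth (W) of the 3D texture,
// so the video frames appear as slices through a volume.
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, volumeTex);

const int slices = 64;
for (int i = 0; i < slices; ++i) {
    float w = (i + 0.5f) / slices;     // depth into the volume (time), 0..1
    float z = -1.0f + 2.0f * w;        // spread the slices along the view axis
    glBegin(GL_QUADS);
        glTexCoord3f(0, 0, w); glVertex3f(-1, -1, z);
        glTexCoord3f(1, 0, w); glVertex3f( 1, -1, z);
        glTexCoord3f(1, 1, w); glVertex3f( 1,  1, z);
        glTexCoord3f(0, 1, w); glVertex3f(-1,  1, z);
    glEnd();
}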


