Limitation on Texture Size? Android OpenGL ES 2.0

Limitation on texture size? Android OpenGL ES 2.0

There is a hardware limitation on texture sizes. To look them up manually, you can go to a site such as glbenchmark.com (here showing details for the Google Galaxy Nexus).

To automatically find the maximum size from your code, you can use something like:

int[] max = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, max, 0); //put the maximum texture size in the array.

(This is for GL10, but the same query exists as a static method on GLES20.)
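On GLES20 the equivalent is the static call `GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, max, 0)`. As a sketch of how you might use the queried value (the helper class and method names below are my own, not part of any API), you can clamp a requested texture dimension to the device limit, falling back to the spec-guaranteed minimum of 64 if the query failed:

```java
final class TextureLimits {
    // The ES 2.0 spec guarantees an implementation supports at least 64x64.
    static final int SPEC_MINIMUM = 64;

    // queriedMax is the value read via glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...);
    // anything below the spec minimum is treated as a failed query.
    static int clampToLimit(int requested, int queriedMax) {
        int limit = Math.max(queriedMax, SPEC_MINIMUM);
        return Math.min(requested, limit);
    }
}
```

For example, requesting a 4096-pixel edge on a device that reports 2048 yields 2048.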

When processing or editing an image on Android you usually work with an instance of Bitmap. This holds the uncompressed pixel values of your image and is thus resolution dependent. However, it is recommended that you use compressed textures in your OpenGL applications, as this improves memory-use efficiency (note that you cannot modify these compressed textures).

From the previous link:

Texture compression can significantly increase the performance of your
OpenGL application by reducing memory requirements and making more
efficient use of memory bandwidth. The Android framework provides
support for the ETC1 compression format as a standard feature [...]
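To put numbers on the savings: ETC1 stores each 4x4 block of texels in 64 bits (8 bytes), versus 2 bytes per texel for RGB565 or 4 for RGBA8888. A small sketch (the class name is mine) of the encoded size:

```java
final class Etc1Size {
    // ETC1 compresses each 4x4 texel block into 8 bytes (64 bits),
    // i.e. half a byte per texel -- a 4:1 saving over RGB565.
    static int encodedSizeBytes(int width, int height) {
        int blocksX = (width + 3) / 4;  // round up to whole blocks
        int blocksY = (height + 3) / 4;
        return blocksX * blocksY * 8;
    }
}
```

A 256x256 texture encodes to 32 KB, versus 128 KB stored as RGB565.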

You should take a look at this document which contains many good practices and hints about texture loading and usage. The author explicitly writes:

Best practice: Use ETC for texture compression.

Best practice: Make sure your geometry and texture resolutions are
appropriate for the size they're displayed at. Don't use a 1k x 1k
texture for something that's at most 500 pixels wide on screen. The
same for geometry.

Get Maximum OpenGL ES 2.0 Texture Size Limit in Android

The maximum pbuffer size is unfortunately not related to the maximum texture size (although the two may happen to be equal).

I believe the best way to obtain the maximum texture size is to create a GL context (the same context on which you will actually use these textures) and query GL_MAX_TEXTURE_SIZE.

There is a strong reason behind this: the OpenGL driver is not initialized for the current process before surface (and context) creation. Some drivers perform underlying HW/SKU detection on initialization and calculate maximum surface sizes depending on hardware capabilities.

Furthermore, the maximum texture size is permitted to vary depending on the context (and on the EGLConfig the context was created with).

And one more thing: eglGetConfigs will give you all EGLConfigs, including those from the default software Android renderer, or those from an OpenGL ES 1.1 CM hardware driver (if there are separate drivers for 1.1 and 2.0 on the target platform). Drivers are fairly independent in the graphics stack and can return different maximums.

Minimum required Texture Size for compliance with OpenGL-ES 2.0 on Android?

Yes, any GLES2 implementation must support textures of at least 64 pixels in both width and height.
You can query the actual maximum texture size with the glGetIntegerv function, using the GL_MAX_TEXTURE_SIZE enum.

See the official spec, page 141, table 6.20.

Android OpenGL ES 2.0 Only Limited To 16 Textures In Memory?

The total number of texture objects is not normally limited, at least not within any reasonable range. Theoretically you will run out of IDs that can be represented by a GLuint at some point, but you will run out of memory long before that happens. So the only practical limit is normally the amount of memory used for the texture data.

However, the number of texture units is very much limited. And from a quick look at your code, that's what you run into. From your texture loading code:

glActiveTexture(GL_TEXTURE0 + index);
glBindTexture(GL_TEXTURE_2D, texture[index]);

What you're trying to do is keep all textures bound, using a different texture unit for each. Then when you draw, you select which texture unit the shader samples from:

glUniform1i(uTextureUnit, index);

This is a perfectly valid approach... until you run out of texture units. Which is exactly what happens.

The maximum number of texture units is implementation dependent, and can be queried with:

GLint maxUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);

The minimum for this value is 8. So unless you check the value and find more, you can only rely on having 8 texture units.
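In Java on Android the same query goes through `GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_IMAGE_UNITS, units, 0)`. A hedged sketch (the names are mine, not any API's) of how the queried value might gate the keep-everything-bound approach, falling back to the guaranteed minimum of 8:

```java
final class TextureUnits {
    // GLES 2.0 guarantees at least 8 fragment-shader texture image units.
    static final int SPEC_MINIMUM = 8;

    // queried is the value read via glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, ...);
    // anything below the spec minimum is treated as a failed query.
    static int usableUnits(int queried) {
        return Math.max(queried, SPEC_MINIMUM);
    }

    // True if keeping all textures bound to distinct units can work at all.
    static boolean canKeepAllBound(int textureCount, int queried) {
        return textureCount <= usableUnits(queried);
    }
}
```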

If you need more than 8 textures, and want your code to run reliably across devices, your somewhat unconventional approach of keeping all textures bound will not work.

The easiest approach is to do what most people do: bind the texture you want to use before drawing. For this you can always use texture unit 0, so you can remove all calls to glActiveTexture() and simply place a bind call in the Draw_Polygon() method in place of the glUniform1i() call:

glBindTexture(GL_TEXTURE_2D, texture[index]);

Android devices GL_MAX_TEXTURE_SIZE limitation, safe texture size

1024x1024 is about the safest you can go on any device, especially older ones. Newer devices shouldn't have any problem, although I've seen recent devices (I recall a Galaxy Nexus; the newest ICS update fixed it) render white quads with 2048x1024 textures.

If you're targeting new devices and want to stay compatible with older ones, it shouldn't hurt to split your tilesets. After all, you aren't likely to incur too many context switches if you use two or three spritesheets for backgrounds, etc.
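If you do split a tileset, a quick back-of-the-envelope for how many sheets you need (assuming square tiles packed into square sheets; the helper is my own sketch, not any engine's API):

```java
final class Tilesets {
    // How many sheetSize x sheetSize sheets are needed to hold
    // tileCount square tiles of tileSize pixels each.
    static int sheetsNeeded(int tileCount, int tileSize, int sheetSize) {
        int perRow = sheetSize / tileSize;
        int perSheet = perRow * perRow;
        return (tileCount + perSheet - 1) / perSheet; // ceiling division
    }
}
```

For example, 100 tiles of 64x64 fit on two 512x512 sheets (64 tiles per sheet).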

Determining Max/Min Texture Size Limit in Android OpenGLES

I realize this is an old post but your problem is that you haven't properly initialized the EGLContext.

Usually you'd want to use a GLSurfaceView or a TextureView to actually include your GL content in the View hierarchy. The GLSurfaceView will handle a lot of things for you like properly creating the EGLContext and managing a render thread. The TextureView requires a bit more manual work.

Once you have a context through either of these means, you can use :

GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, size, 0);

This assumes you have bound the OpenGL ES 2.0 API. Once you have properly created your EGLContext and can execute EGL and GLES calls, you should be able to query the max texture size.

You can read Romain Guy's post about using a TextureView the way you would a GLSurfaceView for the nitty-gritty details of managing your own EGLContext (https://groups.google.com/d/msg/android-developers/U5RXFGpAHPE/IqHeIeGXhr0J):

GLSurfaceView handles GL setup for you, which TextureView will not do.
A TextureView can be used as the native window when you create an EGL
surface. Here is an example (the interesting part is the call to
eglCreateWindowSurface()):

@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    mRenderThread = new RenderThread(getResources(), surface);
    mRenderThread.start();
}

private static class RenderThread extends Thread {
    private static final String LOG_TAG = "GLTextureView";

    static final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
    static final int EGL_OPENGL_ES2_BIT = 4;

    private volatile boolean mFinished;

    private final Resources mResources;
    private final SurfaceTexture mSurface;

    private EGL10 mEgl;
    private EGLDisplay mEglDisplay;
    private EGLConfig mEglConfig;
    private EGLContext mEglContext;
    private EGLSurface mEglSurface;
    private GL mGL;

    RenderThread(Resources resources, SurfaceTexture surface) {
        mResources = resources;
        mSurface = surface;
    }

    private static final String sSimpleVS =
            "attribute vec4 position;\n" +
            "attribute vec2 texCoords;\n" +
            "varying vec2 outTexCoords;\n" +
            "\nvoid main(void) {\n" +
            " outTexCoords = texCoords;\n" +
            " gl_Position = position;\n" +
            "}\n\n";
    private static final String sSimpleFS =
            "precision mediump float;\n\n" +
            "varying vec2 outTexCoords;\n" +
            "uniform sampler2D texture;\n" +
            "\nvoid main(void) {\n" +
            " gl_FragColor = texture2D(texture, outTexCoords);\n" +
            "}\n\n";

    private static final int FLOAT_SIZE_BYTES = 4;
    private static final int TRIANGLE_VERTICES_DATA_STRIDE_BYTES = 5 * FLOAT_SIZE_BYTES;
    private static final int TRIANGLE_VERTICES_DATA_POS_OFFSET = 0;
    private static final int TRIANGLE_VERTICES_DATA_UV_OFFSET = 3;
    private final float[] mTriangleVerticesData = {
        // X, Y, Z, U, V
        -1.0f, -1.0f, 0.0f, 0.0f, 0.0f,
         1.0f, -1.0f, 0.0f, 1.0f, 0.0f,
        -1.0f,  1.0f, 0.0f, 0.0f, 1.0f,
         1.0f,  1.0f, 0.0f, 1.0f, 1.0f,
    };

    @Override
    public void run() {
        initGL();

        FloatBuffer triangleVertices = ByteBuffer.allocateDirect(mTriangleVerticesData.length
                * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
        triangleVertices.put(mTriangleVerticesData).position(0);

        int texture = loadTexture(R.drawable.large_photo);
        int program = buildProgram(sSimpleVS, sSimpleFS);

        int attribPosition = glGetAttribLocation(program, "position");
        checkGlError();

        int attribTexCoords = glGetAttribLocation(program, "texCoords");
        checkGlError();

        int uniformTexture = glGetUniformLocation(program, "texture");
        checkGlError();

        glBindTexture(GL_TEXTURE_2D, texture);
        checkGlError();

        glUseProgram(program);
        checkGlError();

        glEnableVertexAttribArray(attribPosition);
        checkGlError();

        glEnableVertexAttribArray(attribTexCoords);
        checkGlError();

        // A sampler uniform takes a texture *unit* index, not the texture
        // object name; the texture was bound on unit 0 in loadTexture().
        glUniform1i(uniformTexture, 0);
        checkGlError();

        while (!mFinished) {
            checkCurrent();

            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            checkGlError();

            glClear(GL_COLOR_BUFFER_BIT);
            checkGlError();

            // drawQuad
            triangleVertices.position(TRIANGLE_VERTICES_DATA_POS_OFFSET);
            glVertexAttribPointer(attribPosition, 3, GL_FLOAT, false,
                    TRIANGLE_VERTICES_DATA_STRIDE_BYTES, triangleVertices);

            // Texture coordinates have 2 components (U, V), not 3.
            triangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
            glVertexAttribPointer(attribTexCoords, 2, GL_FLOAT, false,
                    TRIANGLE_VERTICES_DATA_STRIDE_BYTES, triangleVertices);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

            if (!mEgl.eglSwapBuffers(mEglDisplay, mEglSurface)) {
                throw new RuntimeException("Cannot swap buffers");
            }
            checkEglError();

            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                // Ignore
            }
        }

        finishGL();
    }

    private int loadTexture(int resource) {
        int[] textures = new int[1];

        glActiveTexture(GL_TEXTURE0);
        glGenTextures(1, textures, 0);
        checkGlError();

        int texture = textures[0];
        glBindTexture(GL_TEXTURE_2D, texture);
        checkGlError();

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        Bitmap bitmap = BitmapFactory.decodeResource(mResources, resource);

        GLUtils.texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap, GL_UNSIGNED_BYTE, 0);
        checkGlError();

        bitmap.recycle();

        return texture;
    }

    private int buildProgram(String vertex, String fragment) {
        int vertexShader = buildShader(vertex, GL_VERTEX_SHADER);
        if (vertexShader == 0) return 0;

        int fragmentShader = buildShader(fragment, GL_FRAGMENT_SHADER);
        if (fragmentShader == 0) return 0;

        int program = glCreateProgram();
        glAttachShader(program, vertexShader);
        checkGlError();

        glAttachShader(program, fragmentShader);
        checkGlError();

        glLinkProgram(program);
        checkGlError();

        int[] status = new int[1];
        glGetProgramiv(program, GL_LINK_STATUS, status, 0);
        if (status[0] != GL_TRUE) {
            String error = glGetProgramInfoLog(program);
            Log.d(LOG_TAG, "Error while linking program:\n" + error);
            glDeleteShader(vertexShader);
            glDeleteShader(fragmentShader);
            glDeleteProgram(program);
            return 0;
        }

        return program;
    }

    private int buildShader(String source, int type) {
        int shader = glCreateShader(type);

        glShaderSource(shader, source);
        checkGlError();

        glCompileShader(shader);
        checkGlError();

        int[] status = new int[1];
        glGetShaderiv(shader, GL_COMPILE_STATUS, status, 0);
        if (status[0] != GL_TRUE) {
            String error = glGetShaderInfoLog(shader);
            Log.d(LOG_TAG, "Error while compiling shader:\n" + error);
            glDeleteShader(shader);
            return 0;
        }

        return shader;
    }

    private void checkEglError() {
        int error = mEgl.eglGetError();
        if (error != EGL10.EGL_SUCCESS) {
            Log.w(LOG_TAG, "EGL error = 0x" + Integer.toHexString(error));
        }
    }

    private void checkGlError() {
        int error = glGetError();
        if (error != GL_NO_ERROR) {
            Log.w(LOG_TAG, "GL error = 0x" + Integer.toHexString(error));
        }
    }

    private void finishGL() {
        mEgl.eglDestroyContext(mEglDisplay, mEglContext);
        mEgl.eglDestroySurface(mEglDisplay, mEglSurface);
    }

    private void checkCurrent() {
        if (!mEglContext.equals(mEgl.eglGetCurrentContext()) ||
                !mEglSurface.equals(mEgl.eglGetCurrentSurface(EGL10.EGL_DRAW))) {
            if (!mEgl.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext)) {
                throw new RuntimeException("eglMakeCurrent failed "
                        + GLUtils.getEGLErrorString(mEgl.eglGetError()));
            }
        }
    }

    private void initGL() {
        mEgl = (EGL10) EGLContext.getEGL();

        mEglDisplay = mEgl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        if (mEglDisplay == EGL10.EGL_NO_DISPLAY) {
            throw new RuntimeException("eglGetDisplay failed "
                    + GLUtils.getEGLErrorString(mEgl.eglGetError()));
        }

        int[] version = new int[2];
        if (!mEgl.eglInitialize(mEglDisplay, version)) {
            throw new RuntimeException("eglInitialize failed " +
                    GLUtils.getEGLErrorString(mEgl.eglGetError()));
        }

        mEglConfig = chooseEglConfig();
        if (mEglConfig == null) {
            throw new RuntimeException("eglConfig not initialized");
        }

        mEglContext = createContext(mEgl, mEglDisplay, mEglConfig);

        mEglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, mEglConfig, mSurface, null);

        if (mEglSurface == null || mEglSurface == EGL10.EGL_NO_SURFACE) {
            int error = mEgl.eglGetError();
            if (error == EGL10.EGL_BAD_NATIVE_WINDOW) {
                Log.e(LOG_TAG, "createWindowSurface returned EGL_BAD_NATIVE_WINDOW.");
                return;
            }
            throw new RuntimeException("createWindowSurface failed "
                    + GLUtils.getEGLErrorString(error));
        }

        if (!mEgl.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext)) {
            throw new RuntimeException("eglMakeCurrent failed "
                    + GLUtils.getEGLErrorString(mEgl.eglGetError()));
        }

        mGL = mEglContext.getGL();
    }

    EGLContext createContext(EGL10 egl, EGLDisplay eglDisplay, EGLConfig eglConfig) {
        int[] attrib_list = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
        return egl.eglCreateContext(eglDisplay, eglConfig, EGL10.EGL_NO_CONTEXT, attrib_list);
    }

    private EGLConfig chooseEglConfig() {
        int[] configsCount = new int[1];
        EGLConfig[] configs = new EGLConfig[1];
        int[] configSpec = getConfig();
        if (!mEgl.eglChooseConfig(mEglDisplay, configSpec, configs, 1, configsCount)) {
            throw new IllegalArgumentException("eglChooseConfig failed " +
                    GLUtils.getEGLErrorString(mEgl.eglGetError()));
        } else if (configsCount[0] > 0) {
            return configs[0];
        }
        return null;
    }

    private int[] getConfig() {
        return new int[] {
            EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL10.EGL_RED_SIZE, 8,
            EGL10.EGL_GREEN_SIZE, 8,
            EGL10.EGL_BLUE_SIZE, 8,
            EGL10.EGL_ALPHA_SIZE, 8,
            EGL10.EGL_DEPTH_SIZE, 0,
            EGL10.EGL_STENCIL_SIZE, 0,
            EGL10.EGL_NONE
        };
    }

    void finish() {
        mFinished = true;
    }
}

You could also have gone the NDK route, which would likely be a more straightforward transition from Objective-C.

Opengl-es Drawing to texture

Yes, absolutely. As long as the texture has a color-renderable format (which in ES 2.0 are only RGBA4, RGB5_A1 and RGB565), it can be used as a render target. Depending on the limits of your GPU, this can be significantly larger than the display resolution.

This means that you are primarily limited by the maximum texture size. You can query that value with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...).

There is another limit that comes into play. GL_MAX_VIEWPORT_DIMS defines the maximum viewport size you can set. You can't render to a surface larger than these values.

Putting these two together, this would give the maximum size of a texture you can render to:

GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);

GLint maxViewportSize[2] = {0};
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxViewportSize);

GLint maxRenderWidth = maxTexSize;
if (maxRenderWidth > maxViewportSize[0]) {
    maxRenderWidth = maxViewportSize[0];
}

GLint maxRenderHeight = maxTexSize;
if (maxRenderHeight > maxViewportSize[1]) {
    maxRenderHeight = maxViewportSize[1];
}

Using these values, you can create a texture of size maxRenderWidth x maxRenderHeight, and use it as an FBO attachment. Remember to set the viewport to the same size as well before you start rendering.
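Expressed in Java for Android (the two input values are assumed to come from the glGetIntegerv queries shown above; the helper class name is my own), the same min-of-both-limits combination is:

```java
final class RenderTargetLimits {
    // maxTexSize from GL_MAX_TEXTURE_SIZE, maxViewport from GL_MAX_VIEWPORT_DIMS.
    // Returns {width, height} of the largest texture you can render to.
    static int[] maxRenderTargetSize(int maxTexSize, int[] maxViewport) {
        return new int[] {
            Math.min(maxTexSize, maxViewport[0]),
            Math.min(maxTexSize, maxViewport[1])
        };
    }
}
```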

I checked the limits on a couple of tablets with GPUs from two different major vendors. On both of them, the maximum texture size and maximum viewport size were the same (4096 on one, 8192 on the other). It wouldn't surprise me if this is very common, but it's definitely not guaranteed by the spec.

ES 2.0 allows the maximum texture size to be as small as 64. But you will find the limit to be much larger on anything halfway recent. 2048 is at the lower end of what you see on any reasonably current GPU, and 4096/8192 is common. The maximum viewport dimensions are guaranteed to be at least as large as the display.

Clarification needed for opengl texture size requirements

Maximum texture size: I guess it's device dependent, and something around 2048x2048 should be safe on halfway modern devices. Correct? Does this apply to the texture atlas or to the whole texture object?

The GLES2 spec dictates that the maximum texture size must be 64x64 or better. The vast majority of Android devices and all iOS GLES2 devices support 2048x2048 or better. If you can drop down to 1024x1024, you would be safe on pretty much all Android devices. For desktop targets, 2048x2048 should be fine, I think.

Power-of-two: OpenGL ES 2.0 supports textures with edge lengths that are not powers of two. But I've heard it might be safer to use such old-fashioned power-of-two sizes. Correct?

The thing to watch out for with non-power-of-two textures is that mipmapping and texture wrapping can be restricted. For that reason, I use power-of-two textures pretty much all the time. Using texture atlases is a good way to improve performance and reduce texture wastage as many awkwardly shaped textures can be combined into a single large power-of-two texture.
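When packing sub-images into a power-of-two atlas, each one's pixel rectangle maps to normalized UV coordinates. A minimal sketch (my own helper, not any engine's API):

```java
final class Atlas {
    // Converts a sub-image's pixel rectangle inside an atlas of
    // atlasW x atlasH pixels into normalized UVs {u0, v0, u1, v1}.
    static float[] uvRect(int x, int y, int w, int h, int atlasW, int atlasH) {
        return new float[] {
            (float) x / atlasW,
            (float) y / atlasH,
            (float) (x + w) / atlasW,
            (float) (y + h) / atlasH
        };
    }
}
```

For example, the top-left 512x512 region of a 1024x1024 atlas maps to {0, 0, 0.5, 0.5}.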

Bitmap size: Does the size of the bitmap (e.g. player_sprite.png) that is loaded into the game have anything to do with this maximum texture size or this power-of-two thing?

Not sure I understand the question. If you're going to render the texture, the PNG will presumably be loaded by the engine and put into an OpenGL texture, at which point it is subject to the rules on maximum texture size and power-of-two dimensions.

HW accelerated activity - how to get OpenGL texture size limit?

Currently the minimum limit is 2048px (i.e., the hardware must support textures of at least 2048x2048). In ICS we will introduce a new API on the Canvas class that gives you this information:

Canvas.getMaximumBitmapWidth() and Canvas.getMaximumBitmapHeight().
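Once you have those two values, a simple check (the helper and method names are mine; the two Canvas getters are the real ICS-era APIs) tells you whether a Bitmap can be drawn on a hardware-accelerated canvas:

```java
final class HwBitmapCheck {
    // maxW/maxH are assumed to come from Canvas.getMaximumBitmapWidth()
    // and Canvas.getMaximumBitmapHeight() on a hardware-accelerated canvas.
    static boolean fitsHardwareCanvas(int bmpW, int bmpH, int maxW, int maxH) {
        return bmpW <= maxW && bmpH <= maxH;
    }
}
```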


