Read texture bytes with glReadPixels?
glReadPixels reads from framebuffers, not textures. To read a texture object directly you would use glGetTexImage, but it isn't available in OpenGL ES :(
If you want to read the contents of your texture, you can attach it to an FBO (FrameBuffer Object) and use glReadPixels:
//Generate a new FBO. It will contain your texture.
glGenFramebuffersOES(1, &offscreen_framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreen_framebuffer);
//Create the texture
glGenTextures(1, &my_texture);
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
//Bind the texture to your FBO
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, my_texture, 0);
//Check that the framebuffer is complete
GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if(status != GL_FRAMEBUFFER_COMPLETE_OES) {
printf("failed to make complete framebuffer object %x", status);
}
Then, whenever you want to read from your texture, you only need to call glReadPixels:
//Bind the FBO
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreen_framebuffer);
// set the viewport as the FBO won't be the same dimension as the screen
glViewport(0, 0, width, height);
GLubyte* pixels = (GLubyte*) malloc(width * height * sizeof(GLubyte) * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
//Bind your main FBO again
glBindFramebufferOES(GL_FRAMEBUFFER_OES, screen_framebuffer);
// set the viewport as the FBO won't be the same dimension as the screen
glViewport(0, 0, screen_width, screen_height);
How can I read data with glReadPixels?
Your main problem happens before glReadPixels(). The primary issue is with the way you use glTexImage2D():
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_UNSIGNED_BYTE, fb);
The GL_UNSIGNED_BYTE value for the 8th argument specifies that the data passed in consists of unsigned bytes. However, the values in your buffer are floats. So your float values are interpreted as bytes, which can't possibly end well, because the two are completely different formats, with different sizes and memory layouts.
Now, you might be tempted to do this instead:
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_FLOAT, fb);
This would work in desktop OpenGL, which supports implicit format conversions as part of specifying texture data. But it is not supported in OpenGL ES. In ES 2.0, GL_FLOAT is not even a legal value for the type argument. In ES 3.0, it is legal, but only for internal formats that actually store floats, like GL_RGBA16F or GL_RGBA32F. It is an error to use it in combination with the unsized GL_RGBA internal format (3rd argument).
So unless you use float textures in ES 3.0 (which consume much more memory), you need to convert your original data to bytes. If you have float values between 0.0 and 1.0, you can do that by multiplying them by 255 and rounding to the nearest integer. Then you can read them back as bytes with glReadPixels(), and should get the same values again.
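A minimal sketch of that conversion in plain C (the function name and signature are placeholders, not part of any GL API):

```c
#include <stddef.h>

/* Convert normalized floats in [0.0, 1.0] to unsigned bytes in [0, 255].
   Values are clamped first so out-of-range input cannot overflow a byte. */
static void floats_to_bytes(const float *src, unsigned char *dst, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        float v = src[i];
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        dst[i] = (unsigned char)(v * 255.0f + 0.5f); /* round to nearest */
    }
}
```

The resulting byte buffer can then be uploaded with glTexImage2D using GL_UNSIGNED_BYTE as the type.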
glReadPixels directly to a texture
You can use glCopyTexImage2D to copy from the back buffer:
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexImage2D(GL_TEXTURE_2D, level, internalFormat, x, y, width, height, border);
OpenGL ES 2.0 always copies from the back buffer (or front buffer for single-buffered configurations). Using OpenGL ES 3.0, you can specify the source for the copy with:
glReadBuffer(GL_BACK);
In light of ClayMontgomery's answer (glCopyTexImage2D is slow), you might find that using glCopyTexSubImage2D with a correctly sized and formatted texture is faster, because it writes into the pre-allocated texture instead of allocating a new buffer each time. If this is still too slow, you should try doing as he suggests and render to a framebuffer (although you'll also need to draw a quad to the screen using the framebuffer's texture to get the same results).
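A sketch of that approach (assuming textureID, width and height from above; x and y are the copy origin in the read buffer):

```c
/* One-time setup: allocate the texture storage at its final size/format. */
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Per frame: copy from the current read buffer into the existing
   storage -- no reallocation, unlike glCopyTexImage2D. */
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, width, height);
```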
How can I read the texture data so I can edit it?
You have two possibilities. The texture is attached to a framebuffer, so you can either read the pixels from the framebuffer or read the texture image from the texture object.
The pixels of the framebuffer can be read with glReadPixels. Bind the framebuffer for reading and read the pixels:
glBindFramebuffer(GL_FRAMEBUFFER, ssaoFBO);
glReadBuffer(GL_COLOR_ATTACHMENT0); // read from the FBO's color attachment
glReadPixels(0, 0, width, height, format, type, pixels);
The texture image can be read with glGetTexImage. Bind the texture and read the data:
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glGetTexImage(GL_TEXTURE_2D, 0, format, type, pixels);
In both cases format and type define the pixel format of the target data.
e.g. If you want to store the pixels to a buffer with 4 color channels and 1 byte per channel, then format = GL_RGBA and type = GL_UNSIGNED_BYTE.
The size of the target buffer has to be width * height * 4.
e.g.
#include <vector>
int width = ...;
int height = ...;
std::vector<GLubyte> pixels(width * height * 4); // 4 because of RGBA * 1 byte
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
or
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
Note, if the size in bytes of one row of the image is not divisible by 4, then the GL_PACK_ALIGNMENT parameter has to be set to adapt the alignment requirements for the start of each pixel row.
e.g. for a tightly packed GL_RGB image:
int width = ...;
int height = ...;
std::vector<GLubyte> pixels(width * height * 3); // 3 because of RGB * 1 byte
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
getting a crash while using glReadPixels to map texture in memory
You are unbinding your GL_PIXEL_PACK_BUFFER before the glReadPixels call, so the NULL pointer will be interpreted as an address in your client memory space, and trying to copy the data there will almost certainly result in a crash.
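A sketch of the correct ordering (pbo is a placeholder name; its storage must have been allocated with glBufferData, large enough for width * height * 4 bytes):

```c
/* Keep the PBO bound while glReadPixels runs: the last argument is then
   interpreted as an offset into the PBO, not a client-memory pointer. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

/* Map the buffer to access the data, then unmap and unbind. */
void *data = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                              width * height * 4, GL_MAP_READ_BIT);
/* ... use data ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```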
OpenGL glReadPixels returns 0
The 5th and 6th parameters (format and type) of glReadPixels specify the format and data type of the target pixel data.
Since you want to read into a buffer with the element data type GLubyte, the type has to be GL_UNSIGNED_BYTE.
Change your code like this:
glReadPixels(360, 240, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixels);
Or read the data into a buffer of type GLfloat:
GLfloat pixels[3];
glReadPixels(360, 240, 1, 1, GL_RGB, GL_FLOAT, pixels);
Note, what your code does is read 12 bytes (sizeof(GLfloat) * 3) into a buffer with a size of 3 bytes (GLubyte pixels[3]). This means only part of the floating-point value representing the red color channel fits into the buffer; the rest overwrites adjacent memory out of bounds.