Where Is the Official Documentation for CVOpenGLESTexture Method Types?

Where is the official documentation for CVOpenGLESTexture method types?

Unfortunately, there really isn't any documentation on these new functions. The best you're going to find right now is in the CVOpenGLESTextureCache.h header file, where you'll see a basic description of the function parameters:

/*!
@function CVOpenGLESTextureCacheCreate
@abstract Creates a new Texture Cache.
@param allocator The CFAllocatorRef to use for allocating the cache. May be NULL.
@param cacheAttributes A CFDictionaryRef containing the attributes of the cache itself. May be NULL.
@param eaglContext The OpenGLES 2.0 context into which the texture objects will be created. OpenGLES 1.x contexts are not supported.
@param textureAttributes A CFDictionaryRef containing the attributes to be used for creating the CVOpenGLESTexture objects. May be NULL.
@param cacheOut The newly created texture cache will be placed here
@result Returns kCVReturnSuccess on success
*/
CV_EXPORT CVReturn CVOpenGLESTextureCacheCreate(
CFAllocatorRef allocator,
CFDictionaryRef cacheAttributes,
void *eaglContext,
CFDictionaryRef textureAttributes,
CVOpenGLESTextureCacheRef *cacheOut) __OSX_AVAILABLE_STARTING(__MAC_NA,__IPHONE_5_0);

The more difficult elements are the attributes dictionaries, which unfortunately you need to find examples of in order to use these functions properly. Apple has the GLCameraRipple and RosyWriter examples that show off how to use the fast texture upload path with BGRA and YUV input color formats. Apple also provided the ChromaKey example at WWDC (which may still be accessible along with the videos) that demonstrated how to use these texture caches to pull information from an OpenGL ES texture.

I just got this fast texture upload working in my open source GPUImage framework, so I'll lay out what I was able to parse out of it. First, I create a texture cache using the following code:

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &coreVideoTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

where the context referred to is an EAGLContext configured for OpenGL ES 2.0.
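
For reference, here's a minimal sketch of creating such a context. The GPUImageOpenGLESContext wrapper above is specific to GPUImage; EAGLContext itself is the standard API:

// CVOpenGLESTextureCacheCreate() requires an OpenGL ES 2.0 context;
// 1.x contexts are not supported, per the header comments above.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];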

I use this cache to keep video frames from the iOS device camera in video memory, with the following code:

CVPixelBufferLockBaseAddress(cameraFrame, 0);

CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_RGBA, bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

if (!texture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}

outputTexture = CVOpenGLESTextureGetName(texture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Do processing work on the texture data here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

CVOpenGLESTextureCacheFlush(coreVideoTextureCache, 0);
CFRelease(texture);
outputTexture = 0;

This creates a new CVOpenGLESTextureRef, representing an OpenGL ES texture, from the texture cache. This texture is based on the CVImageBufferRef passed in by the camera. The OpenGL ES texture name is then retrieved from the CVOpenGLESTextureRef, and the appropriate parameters are set for it (which seemed to be necessary in my processing). Finally, I do my work on the texture and clean up when I'm done.

This fast upload process makes a real difference on iOS devices. It took the upload and processing of a single 640x480 frame of video on an iPhone 4S from 9.0 ms down to 1.8 ms.

I've heard that this works in reverse, as well, which might allow for the replacement of glReadPixels() in certain situations, but I've yet to try this.

How can I rewrite the GLCameraRipple sample without using iOS 5.0-specific features?

The iOS 5.0 fast texture upload capabilities can make for very fast uploading of camera frames and extraction of texture data, which is why Apple uses them in their latest sample code. For camera data, I've seen 640x480 frame upload times go from 9 ms to 1.8 ms using these iOS 5.0 texture caches on an iPhone 4S, and for movie capturing I've seen more than a fourfold improvement when switching to them.

That said, you still might want to provide a fallback for stragglers who have not yet updated to iOS 5.x. I do this in my open source image processing framework by using a runtime check for the texture upload capability:

+ (BOOL)supportsFastTextureUpload
{
    return (CVOpenGLESTextureCacheCreate != NULL);
}
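
The comparison against NULL only behaves on iOS 4.x because the symbol ends up weak-linked there, which the availability macros handle automatically once your deployment target is set below iOS 5.0. A sketch of the branch, assuming the method lives on your camera class (as it does on GPUImageVideoCamera in GPUImage):

if ([GPUImageVideoCamera supportsFastTextureUpload])
{
    // iOS 5.0+: use the CVOpenGLESTextureCache path from the first answer
}
else
{
    // iOS 4.x: fall back to the glTexImage2D() upload shown below
}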

If this returns NO, I use the standard upload process that we have had since iOS 4.0:

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = (int)CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = (int)CVPixelBufferGetHeight(cameraFrame);

CVPixelBufferLockBaseAddress(cameraFrame, 0);

glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

// Do your OpenGL ES rendering here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

One quirk in GLCameraRipple's upload process is that it uses YUV planar frames (split into Y and UV images) instead of one BGRA image. I get pretty good performance from my BGRA uploads, so I haven't seen the need to work with YUV data myself. You could either modify GLCameraRipple to use BGRA frames and the above code, or rework what I have above into YUV planar data uploads, along the lines of the sketch below.
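
If you do go the YUV route with the texture caches, a rough sketch of the two-plane upload (modeled on what GLCameraRipple does; this assumes the camera is configured for biplanar YUV output, such as kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, and reuses the coreVideoTextureCache from earlier):

CVOpenGLESTextureRef luminanceTexture = NULL, chrominanceTexture = NULL;

// Plane 0: full-resolution Y (luminance) as a one-channel texture
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTexture);

// Plane 1: half-resolution interleaved CbCr as a two-channel texture
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth / 2, bufferHeight / 2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chrominanceTexture);

A fragment shader then samples both textures and performs the YUV-to-RGB conversion.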

glTexSubImage2D - GL_INVALID_OPERATION

Ah... I fixed it immediately after posting this question: I had to change the format argument from GL_RGBA to GL_BGRA.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);

Hope it helps someone.

By the way, if you want to write AR apps, consider using CVOpenGLESTextureCache instead of glTexSubImage2D. It's supposed to be faster.

Copy a Texture to a Pixel Buffer (CVPixelBufferRef)

For reading back data from OpenGL ES on iOS, you basically have two routes: using glReadPixels(), or using the texture caches (iOS 5.0+ only).

The fact that you just have a texture ID, and access to nothing else, is a little odd and limits your choices here. If you have no way of setting which texture the third-party API uses, you're going to need to re-render that texture to an offscreen framebuffer and extract its pixels from there, either with glReadPixels() or with the texture caches. To do this, you'd use an FBO sized to the same dimensions as your texture, a simple quad (two triangles making up a rectangle), and a passthrough shader that just displays each texel of your texture in the output framebuffer; a sketch of this follows.
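
Here's a compressed sketch of that re-render-and-read-back path. The framebuffer, passthrough program, attribute and uniform handles, and the width/height are all assumed to be set up already; also note that glReadPixels() in OpenGL ES 2.0 only guarantees the GL_RGBA / GL_UNSIGNED_BYTE combination, so a BGRA pixel buffer may need a read-format extension or a channel swizzle:

glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glViewport(0, 0, width, height);

// Draw the third-party texture over a full-screen quad with a passthrough shader
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, thirdPartyTextureID);
glUseProgram(passthroughProgram);
glUniform1i(inputTextureUniform, 0);

static const GLfloat squareVertices[] = { -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f };
static const GLfloat textureCoordinates[] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f };

glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glVertexAttribPointer(textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);
glEnableVertexAttribArray(positionAttribute);
glEnableVertexAttribArray(textureCoordinateAttribute);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// glReadPixels() route: copy straight into the pixel buffer's bytes
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);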

At that point, you can just use glReadPixels() to pull your bytes back into the internal byte array of your CVPixelBufferRef, or preferably use the texture caches to eliminate the need for that read. I describe how to set up the caching for that approach in this answer, as well as how to feed that into an AVAssetWriter. You'll need to set your offscreen FBO to use the CVPixelBufferRef's associated texture as a render target for this to work.

However, if you have the means of setting what ID to use for this rendered texture, you can avoid having to re-render it to grab its pixel values. Set up the texture caching like I describe in the above-linked answer and pass the texture ID for that pixel buffer into the third-party API you're using. It will then render into the texture that's associated with the pixel buffer, and you can record from that directly. This is what I use to accelerate the recording of video from OpenGL ES in my GPUImage framework (with the glReadPixels() approach as a fallback for iOS 4.x).
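
For the texture-cache route, the core of the setup looks roughly like this. This is a sketch, assuming the coreVideoTextureCache from the first answer and a generated, bound offscreen framebuffer; the essential detail is the kCVPixelBufferIOSurfacePropertiesKey attribute, without which the pixel buffer and the texture won't share memory:

// The pixel buffer must be IOSurface-backed for the cache to map it to a texture
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs, &renderTarget);

// Wrap the pixel buffer in an OpenGL ES texture
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach that texture as the color target of the offscreen FBO; anything
// rendered into the FBO now lands directly in the pixel buffer's memory
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

// (CFRelease empty, attrs, and renderTexture when you're done with them)
CFRelease(attrs);
CFRelease(empty);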

iOS CVImageBuffer distorted from AVCaptureSessionDataOutput with AVCaptureSessionPresetPhoto

This was a doozy.

As Lio Ben-Kereth pointed out, the padding is 48, as you can see from the debugger:

(gdb) po pixelBuffer
<CVPixelBuffer 0x2934d0 width=852 height=640 bytesPerRow=3456 pixelFormat=BGRA
# => 3456 - 852 * 4 = 48

Desktop OpenGL can compensate for this row padding via GL_UNPACK_ROW_LENGTH, but OpenGL ES cannot (see discussions of OpenGL subtexturing for more detail).

So here is how I'm doing it in OpenGL ES:

// pixelBuffer (a CVImageBufferRef) containing the raw image data is passed in

/* ... */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);

int frameWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
int frameHeight = (int)CVPixelBufferGetHeight(pixelBuffer);

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t extraBytes = bytesPerRow - frameWidth * 4;

// The buffer's base address must already be locked (CVPixelBufferLockBaseAddress)
GLubyte *pixelBufferAddr = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);

if ([[captureSession sessionPreset] isEqualToString:AVCaptureSessionPresetPhoto])
{
    // Allocate the texture storage, then upload one unpadded row at a time
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);

    for (int h = 0; h < frameHeight; h++)
    {
        GLubyte *row = pixelBufferAddr + h * (frameWidth * 4 + extraBytes);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, h, frameWidth, 1, GL_BGRA, GL_UNSIGNED_BYTE, row);
    }
}
else
{
    // No row padding in this preset, so the whole buffer can go up in one call
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
}
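
An alternative that avoids the per-row loop (my own suggestion, not something from the original code, and it assumes you can adjust your texture coordinates): upload the padded rows at their full width in a single call, then crop by scaling the S coordinate when sampling:

int paddedWidth = (int)(bytesPerRow / 4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, paddedWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);

// Sample only the left frameWidth texels: use S coordinates in [0, sMax]
GLfloat sMax = (GLfloat)frameWidth / (GLfloat)paddedWidth;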

Before, I was using AVCaptureSessionPresetMedium and getting 30 fps. With AVCaptureSessionPresetPhoto I'm getting 16 fps on an iPhone 4. The per-row sub-texture looping does not seem to affect the frame rate.

I'm using an iPhone 4 on iOS 5.

How to disable mod_deflate in apache2?

You could set the environment variable no-gzip for that directory/type of file:

# for URL paths that begin with "/foo/bar/"
SetEnvIf Request_URI ^/foo/bar/ no-gzip=1

# for files that end with ".py"
<FilesMatch \.py$>
SetEnv no-gzip 1
</FilesMatch>
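
The same variable also works per-directory, for example from an .htaccess file (assuming AllowOverride permits SetEnv in that directory):

# .htaccess in the directory you want served uncompressed
SetEnv no-gzip 1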

