Simplified Screen Capture: Record Video of Only What Appears Within the Layers of a UIView

Programmatically capture video of screen

Check out ScreenCaptureView; it has video-recording support built in (see the link below).

What it does is save the contents of a UIView to a UIImage. The author suggests you can save a video of the app in use by passing those frames through AVCaptureSession.

I believe it hasn't been tested with an OpenGL subview, but assuming it works, you should be able to modify it slightly to include audio, and then you'd be set.
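Based on the description above, the per-frame capture boils down to rendering the view's layer into an image context. A minimal Swift sketch of that step (the helper name is mine, not the library's):

import UIKit

/// Renders a view's layer hierarchy into a UIImage, roughly what a
/// screen-capture view does once per frame. (Hypothetical helper.)
func snapshotImage(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        // renderInContext: captures the layer tree, but not OpenGL/video layers.
        view.layer.render(in: context.cgContext)
    }
}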

Screen capture video in iOS programmatically

The link is not dead: http://codethink.no-ip.org/wordpress/archives/673

If you check the comments, there is also some code that will mix audio & video and save the result as a QuickTime movie.

If you still can't access the link, there is someone selling the same code on Binpress:

http://www.binpress.com/app/ios-screen-capture-view/1038
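If those resources disappear too, the general shape of the "mix audio & video into a QuickTime movie" approach is an AVAssetWriter with one video and one audio input. A rough sketch (not the linked code; sizes, settings, and file names are illustrative, and error handling is omitted):

import AVFoundation

// Write rendered frames plus captured audio into a .mov file.
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("capture.mov")
let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280
])
videoInput.expectsMediaDataInRealTime = true
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput,
                                                              sourcePixelBufferAttributes: nil)

// nil output settings = pass the captured audio samples through as-is.
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
audioInput.expectsMediaDataInRealTime = true

writer.add(videoInput)
writer.add(audioInput)
writer.startWriting()
writer.startSession(atSourceTime: .zero)

// Per captured frame: append a CVPixelBuffer built from the view snapshot.
// pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: frameTime)
// Per audio sample buffer from an AVCaptureAudioDataOutput:
// audioInput.append(sampleBuffer)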

Screen capture using renderInContext of presentationLayer not working

EDIT: Hooray! Apple finally added this in iOS 7:

[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:NO];

Wrap it between the begin/end image context calls!
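In Swift, the whole thing looks roughly like this (a minimal sketch; `view` stands for whichever view you want to capture):

import UIKit

// Wrap the draw call between the begin/end image context calls
// to get the rendered hierarchy back as a UIImage.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0)
view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()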

------------ pre iOS7 ---------------
From Apple's documentation for CALayer, under renderInContext:, they clearly say:

"Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties."

The only options for me are to use a server-side solution OR to hand-code the animation frame by frame, manually applying the transformation at time t (as sketched below).
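For the hand-coded route, the idea is: for each output timestamp t, compute the animation's value yourself, apply it to the layer, and only then render. A hedged Swift sketch (the helper name, the linear timing, and the example rotation are placeholders, not a drop-in solution):

import UIKit

// For each output frame, apply the transform the animation would have
// at time t, then render the layer tree. Linear interpolation stands in
// for whatever timing curve the real animation uses.
func renderFrame(of view: UIView, at t: CFTimeInterval, duration: CFTimeInterval) -> UIImage {
    let progress = CGFloat(min(max(t / duration, 0), 1))
    let angle = progress * .pi * 2          // example: a full rotation over `duration`
    view.layer.setAffineTransform(CGAffineTransform(rotationAngle: angle))

    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        view.layer.render(in: context.cgContext)
    }
}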

How to superimpose views over each captured frame inside CVImageBuffer, realtime not post process

The best way to achieve your goal is to use the Metal framework. A Metal camera is good for minimising the impact on the device's limited computational resources, and if you want the lowest-overhead access to the camera sensor, an AVCaptureSession is a really good start.

You need to grab each frame's data from the CMSampleBuffer (you're right) and then convert each frame to an MTLTexture. AVCaptureSession will continuously deliver frames from the device's camera via a delegate callback.

All of the overlays must be converted to MTLTextures too. Then you can composite all the texture layers with an "over" operation.
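If you'd rather not write your own Metal blend pipeline for the "over" step, one low-effort alternative (my suggestion, not the tutorial's approach) is to let Core Image do the source-over composite and render the result into a Metal texture:

import CoreImage
import Metal

// Composite an overlay texture over the camera texture with Core Image's
// source-over operator, rendering into `outputTexture` (which needs
// .renderTarget usage). The function name is illustrative.
func composite(camera: MTLTexture,
               overlay: MTLTexture,
               into outputTexture: MTLTexture,
               using ciContext: CIContext,
               commandBuffer: MTLCommandBuffer) {
    guard
        let cameraImage = CIImage(mtlTexture: camera, options: nil),
        let overlayImage = CIImage(mtlTexture: overlay, options: nil)
    else { return }

    let result = overlayImage.composited(over: cameraImage)
    let bounds = CGRect(x: 0, y: 0, width: outputTexture.width, height: outputTexture.height)
    ciContext.render(result,
                     to: outputTexture,
                     commandBuffer: commandBuffer,
                     bounds: bounds,
                     colorSpace: CGColorSpaceCreateDeviceRGB())
}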

You'll find all the necessary info in the four-part Metal Camera series.

And here's a link to a blog post: About Compositing in Metal.

Also, here's a code excerpt that shows working with AVCaptureSession frames in Metal:

import AVFoundation
import CoreVideo
import Metal

// Excerpt from the sample-buffer delegate path; it assumes it runs inside
// a method that receives `sampleBuffer` and can return or throw.
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    // Handle an error here (the sample buffer carries no pixel data).
    return
}

// Texture cache for converting frame images to textures
var textureCache: CVMetalTextureCache?

// `MTLDevice` for initializing texture cache
let metalDevice = MTLCreateSystemDefaultDevice()

guard
    let metalDevice = metalDevice,
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textureCache) == kCVReturnSuccess,
    let textureCache = textureCache
else {
    // Handle an error here (failed to create the texture cache).
    return
}

let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)

// `pixelFormat` (e.g. .bgra8Unorm) and `planeIndex` (0 for a non-planar
// BGRA buffer) are assumed to be defined by the surrounding code.
var imageTexture: CVMetalTexture?
let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       textureCache,
                                                       imageBuffer,
                                                       nil,
                                                       pixelFormat,
                                                       width,
                                                       height,
                                                       planeIndex,
                                                       &imageTexture)

// The `MTLTexture` is in the `texture` variable now.
guard
    let unwrappedImageTexture = imageTexture,
    let texture = CVMetalTextureGetTexture(unwrappedImageTexture),
    result == kCVReturnSuccess
else {
    throw MetalCameraSessionError.failedToCreateTextureFromImage
}

And here you can find the final project on GitHub: MetalRenderCamera

Record overlay at the same time AVFoundation iOS

I am not sure if this is the thing you are looking for, but I guess you can use Brad Larson's GPUImage library. There is a class called GPUImageUIElement which lets you add overlays and views. Please check out the examples, especially the one called FilterShowcase, and scroll to something called UIElement.

Here is some sample code:

else if (filterType == GPUIMAGE_UIELEMENT)
{
    GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    blendFilter.mix = 1.0;

    NSDate *startTime = [NSDate date];

    UILabel *timeLabel = [[UILabel alloc] initWithFrame:CGRectMake(0.0, 0.0, 240.0f, 320.0f)];
    timeLabel.font = [UIFont systemFontOfSize:17.0f];
    timeLabel.text = @"Time: 0.0 s";
    timeLabel.textAlignment = NSTextAlignmentCenter;
    timeLabel.backgroundColor = [UIColor clearColor];
    timeLabel.textColor = [UIColor whiteColor];

    uiElementInput = [[GPUImageUIElement alloc] initWithView:timeLabel];

    // Camera frames go through `filter`, the label goes through `uiElementInput`,
    // and the alpha blend composites the two.
    [filter addTarget:blendFilter];
    [uiElementInput addTarget:blendFilter];

    [blendFilter addTarget:filterView];

    __unsafe_unretained GPUImageUIElement *weakUIElementInput = uiElementInput;

    // Re-render the label (and thus the overlay) after every processed frame.
    [filter setFrameProcessingCompletionBlock:^(GPUImageOutput *filter, CMTime frameTime){
        timeLabel.text = [NSString stringWithFormat:@"Time: %f s", -[startTime timeIntervalSinceNow]];
        [weakUIElementInput update];
    }];
}
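To actually record the blended output rather than just preview it in filterView, you can also add a GPUImageMovieWriter as a target of the blend filter. A sketch in Swift for brevity (it assumes the `videoCamera` and `blendFilter` instances from the showcase example; the file name is illustrative):

import GPUImage

// Record the blended (camera + overlay) output to a movie file.
let movieURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("overlay.m4v")
let movieWriter = GPUImageMovieWriter(movieURL: movieURL, size: CGSize(width: 480, height: 640))

blendFilter.addTarget(movieWriter)
videoCamera.audioEncodingTarget = movieWriter   // also capture audio
movieWriter.startRecording()

// ... later, when recording should stop:
movieWriter.finishRecording()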

How to take a UIView screenshot faster?

You can use the exact same view inside the magnifier view and just change its position so the relevant words are visible, instead of taking a fresh screenshot every time.
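One way to read that advice in Swift (all names here are mine; since a view can only have one superview, the sketch hosts an identically configured copy of the content view and moves it instead of re-rendering a screenshot):

import UIKit

final class MagnifierView: UIView {
    let contentCopy: UIView            // configured the same way as the original view

    init(frame: CGRect, contentCopy: UIView) {
        self.contentCopy = contentCopy
        super.init(frame: frame)
        clipsToBounds = true
        layer.cornerRadius = frame.width / 2
        addSubview(contentCopy)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    /// Centre the magnifier on `point` (in the copy's coordinates), optionally
    /// scaled up, by moving the copy rather than redrawing anything.
    func focus(on point: CGPoint, scale: CGFloat = 1.5) {
        contentCopy.transform = CGAffineTransform(scaleX: scale, y: scale)
        contentCopy.center = CGPoint(x: bounds.midX - (point.x - contentCopy.bounds.midX) * scale,
                                     y: bounds.midY - (point.y - contentCopy.bounds.midY) * scale)
    }
}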


