Swift 4 - AVFoundation Screen and Audio Recording Using AVAssetWriter on macOS - Video Frozen

Video freezes on camera switch with AVFoundation

Are you getting any error logs? If not, you need to add error logging to the code above and see what it says.
What version of AVCam are you using? They've recently updated the project to version 1.2, which is much more efficient and uses blocks.

From my experience, you shouldn't be creating and recreating a session; you can just leave it running. Maybe you need to structure your app a little differently. What exactly is your app about? Maybe we can help you. If your app is centred around the camera, it's easier to leave the session on; if you're just taking video modally, then AVCam may be overkill.

Your actual problem, to me, sounds like it's with AVCaptureDeviceInput. Download the original AVCam package and check whether you've changed any retain counts or removed any of the safety if-statements. If there's any other relevant code, please post it.

UPDATE: Can you change

    } else if (error) {
        NSLog(@"%@", [error localizedDescription]);
    }

to

    } if (error) {
        NSLog(@"%@", [error localizedDescription]);
    }

and tell me if there's an error?

Also, before you release the view controller that owns the session, make sure you stop the session and set the capture manager to nil.
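
In practice (a minimal sketch, assuming an AVCam-style captureManager property that owns the AVCaptureSession), that teardown might look something like this:

    // Before the owning view controller goes away
    // (e.g. in viewWillDisappear: or just before releasing it):
    [[[self captureManager] session] stopRunning];
    [self setCaptureManager:nil];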

UPDATE 2: Try this toggle code. It's what I have been using. AVCamMirroringMode is defined as follows:

    enum {
        AVCamMirroringOff  = 1,
        AVCamMirroringOn   = 2,
        AVCamMirroringAuto = 3
    };
    typedef NSInteger AVCamMirroringMode;

    - (BOOL) toggleCamera
    {
        BOOL success = NO;

        if ([self cameraCount] > 1) {
            NSError *error;
            AVCaptureDeviceInput *newVideoInput;
            AVCaptureDevicePosition position = [[videoInput device] position];

            BOOL mirror;
            if (position == AVCaptureDevicePositionBack) {
                newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self frontFacingCamera] error:&error];
                switch ([self mirroringMode]) {
                    case AVCamMirroringOff:
                        mirror = NO;
                        break;
                    case AVCamMirroringOn:
                        mirror = YES;
                        break;
                    case AVCamMirroringAuto:
                    default:
                        mirror = YES;
                        break;
                }
            }
            else if (position == AVCaptureDevicePositionFront) {
                newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:&error];
                switch ([self mirroringMode]) {
                    case AVCamMirroringOff:
                        mirror = NO;
                        break;
                    case AVCamMirroringOn:
                        mirror = YES;
                        break;
                    case AVCamMirroringAuto:
                    default:
                        mirror = NO;
                        break;
                }
            }
            else
                goto bail;

            if (newVideoInput != nil) {
                [[self session] beginConfiguration];
                [[self session] removeInput:[self videoInput]];
                if ([[self session] canAddInput:newVideoInput]) {
                    [[self session] addInput:newVideoInput];
                    AVCaptureConnection *connection = [AVCamUtilities connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self stillImageOutput] connections]];
                    if ([connection isVideoMirroringSupported]) {
                        [connection setVideoMirrored:mirror];
                    }
                    [self setVideoInput:newVideoInput];
                } else {
                    [[self session] addInput:[self videoInput]];
                }
                [[self session] commitConfiguration];
                success = YES;
                [newVideoInput release];

            } else if (error) {
                if ([[self delegate] respondsToSelector:@selector(captureManager:didFailWithError:)]) {
                    [[self delegate] captureManager:self didFailWithError:error];
                }
            }
        }

    bail:
        return success;
    }

Adding filters to video with AVFoundation (OSX) - how do I write the resulting image back to AVWriter?

I eventually found a solution by digging through a lot of half-complete samples and Apple's poor AVFoundation documentation.

The biggest source of confusion is that, while AVFoundation is "reasonably" consistent between iOS and OSX at a high level, the lower-level pieces behave differently, have different methods, and require different techniques. This solution is for OSX.

Setting up your AssetWriter

The first thing is to make sure that when you set up the asset writer, you add an adaptor so you can append frames from a CVPixelBuffer. This buffer will contain the modified frames.

    // Create the asset writer input and add it to the asset writer.
    AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];

    // Now create an adaptor that writes pixels too!
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                   sourcePixelBufferAttributes:nil];
    assetWriterVideoInput.expectsMediaDataInRealTime = NO;
    [assetWriter addInput:assetWriterVideoInput];
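
The snippet above assumes an assetWriter and a videoSettings dictionary already exist. As a rough sketch of what those might look like (the outputURL variable and the 640x360 H.264 settings here are illustrative assumptions, not part of the original answer):

    // Hypothetical writer setup; adjust the URL, file type and dimensions to your needs.
    NSError *writerError = nil;
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                          fileType:AVFileTypeQuickTimeMovie
                                                             error:&writerError];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:640], AVVideoWidthKey,
                                   [NSNumber numberWithInt:360], AVVideoHeightKey,
                                   nil];

    // Once all inputs are added, start the writer before entering the read/write loop.
    [assetWriter startWriting];
    [assetWriter startSessionAtSourceTime:kCMTimeZero];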

Reading and Writing

The challenge here is that I couldn't find directly comparable methods between iOS and OSX: iOS can render a CIContext directly into a pixel buffer, whereas OSX does NOT support that option, so you have to go through a CGImage instead. The context is also configured differently between iOS and OSX.
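
For comparison, the iOS-only shortcut being referred to looks roughly like this (a sketch only; outputImage and pxBuffer are the names used further down, and this call was not available on the OSX versions this answer targets):

    // iOS: render a filtered CIImage straight into an existing CVPixelBufferRef.
    CIContext *context = [CIContext contextWithOptions:nil];
    [context render:outputImage toCVPixelBuffer:pxBuffer];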

Note that you should also add QuartzCore.framework to your Xcode project.

Creating the context on OSX.

    // We don't want to always create a context, so we put it outside the loop.
    CIContext *context = [CIContext contextWithCGContext:[[NSGraphicsContext currentContext] graphicsPort]
                                                 options:nil];
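
One caveat, which is my own assumption rather than something from the original answer: [[NSGraphicsContext currentContext] graphicsPort] is only meaningful while a graphics context is current (for example inside -drawRect:). If this code runs off the main drawing path, one option is to back the CIContext with an offscreen bitmap context instead, roughly like this:

    // Hedged alternative: build the CIContext from an offscreen CGBitmapContext
    // so it does not depend on [NSGraphicsContext currentContext] being set.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(NULL, 640, 360, 8, 0, colorSpace,
                                                   kCGImageAlphaPremultipliedFirst);
    CIContext *context = [CIContext contextWithCGContext:cgContext options:nil];
    CGColorSpaceRelease(colorSpace);
    // Keep cgContext alive for as long as this CIContext is in use, then CGContextRelease() it.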

Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the SampleBuffer.

    while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
    {
        CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
        if (sampleBuffer)
        {
            CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

            // GRAB AN IMAGE FROM THE SAMPLE BUFFER
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                     [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                     [NSNumber numberWithInt:640], kCVPixelBufferWidthKey,
                                     [NSNumber numberWithInt:360], kCVPixelBufferHeightKey,
                                     nil];

            CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];

            //-----------------
            // FILTER IMAGE - APPLY ANY FILTERS IN HERE

            CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
            [filter setDefaults];
            [filter setValue:inputImage forKey:kCIInputImageKey];
            [filter setValue:@1.0f forKey:kCIInputIntensityKey];

            CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

            //-----------------
            // RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
            // 1. Firstly render the image
            CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];

            // 2. Grab the size
            CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));

            // 3. Convert the CGImage to a PixelBuffer
            CVPixelBufferRef pxBuffer = NULL;
            // pixelBufferFromCGImage is documented below.
            pxBuffer = [self pixelBufferFromCGImage:finalImage andSize:size];

            // 4. Write things back out.
            // Calculate the frame time
            CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
            // Note that if you actually had a sequence of images (an animation or transition perhaps),
            // your frameTime would represent the number of images / frames, not just 1 as I've done here.
            CMTime presentTime = CMTimeAdd(currentTime, frameTime);

            // Finally write out using the adaptor.
            [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

            // Release the per-frame objects we created above.
            CGImageRelease(finalImage);
            CVPixelBufferRelease(pxBuffer);

            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
        }
        else
        {
            // Find out why we couldn't get another sample buffer....
            if (assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = assetReader.error;
                // Do something with this error.
            }
            else
            {
                // Some kind of success....
                done = YES;
                // (On newer SDKs, finishWritingWithCompletionHandler: is the preferred call.)
                [assetWriter finishWriting];
            }
        }
    }

Creating the PixelBuffer

There MUST be an easier way; however, for now, this works, and it's the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.

    - (CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;

        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                              kCVPixelFormatType_32ARGB,
                                              (__bridge CFDictionaryRef)options,
                                              &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        // Use the buffer's actual bytes-per-row rather than assuming 4 * width,
        // since Core Video may pad rows for alignment.
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
                                                     CVPixelBufferGetBytesPerRow(pxbuffer),
                                                     rgbColorSpace,
                                                     kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
        CGContextDrawImage(context,
                           CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)),
                           image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        // The caller owns the returned buffer and should CVPixelBufferRelease() it when done.
        return pxbuffer;
    }

