How to Record and Save at 240 Frames Per Second

How to record and save at 240 frames per second?

I figured this out with a deeper read of https://stackoverflow.com/a/41109637/292947

In configureDevice() I was setting self.session.sessionPreset = .high when in fact I needed to set self.session.sessionPreset = .inputPriority, which is the Swift 4 equivalent of the AVCaptureSessionPresetInputPriority value suggested in the answer above.
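
As a minimal sketch of the change (assuming session is the AVCaptureSession; this is illustrative rather than the original configureDevice()):

// Swift 4+: let the device's activeFormat drive the session instead of a preset
session.sessionPreset = .inputPriority   // was: .high
// ...then lock the device and set activeFormat / frame durations for 240 fps as usual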

VLC shows 240 fps

Saving video at 120/240fps

You're calling configDevice() too early and your configuration is being replaced.

Call configDevice() after you've added the capture device's input:

// Configure the session with the input and the output devices
captureSession.addInput(captureDeviceInput)
configureDevice()
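
For context, here is a hedged sketch of what a configureDevice() along these lines might contain once it runs after addInput; the body below is an assumption for illustration, not the asker's actual implementation:

func configureDevice() {
    // captureDevice is assumed to be the camera already added to the session
    guard let device = captureDevice else { return }
    do {
        try device.lockForConfiguration()
        // Pick a format whose supported ranges reach the target frame rate
        for format in device.formats where format.videoSupportedFrameRateRanges.contains(where: { $0.maxFrameRate >= 240 }) {
            device.activeFormat = format
            device.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: 240)
            device.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: 240)
            break
        }
        device.unlockForConfiguration()
    } catch {
        print("lockForConfiguration failed: \(error)")
    }
}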

AVCaptureFileOutput: Not recording at 240 fps

I ended up answering my own question.

In addition to using a preset, I discovered that in order to set the camera configuration, I need to add the camera to the capture session, configure it, and then start the capture session immediately. Previously I was adding the camera to the capture session before I had configured it, which doesn't seem to commit the configuration.

Relevant iOS documentation: https://developer.apple.com/reference/avfoundation/avcapturedevice/1387810-lockforconfiguration?language=objc
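
A rough Swift sketch of that order, with placeholder names (session, cameraInput, camera) rather than the original code:

// 1. add the camera input first
let session = AVCaptureSession()
session.addInput(cameraInput)

// 2. then configure the device (activeFormat, min/max frame durations for 240 fps)
try? camera.lockForConfiguration()
// ... set camera.activeFormat and camera.activeVideoMin/MaxFrameDuration here ...
camera.unlockForConfiguration()

// 3. start the session right away so the configuration is committed
session.startRunning()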

What is the optimal fps to record an OpenCV video at?

Let's say your camera records at 25 FPS. If you capture only 15 FPS while the camera is recording at 25 FPS, the resulting video will play back approximately 1.6 times faster than real life.

You can find out the frame rate with get(CAP_PROP_FPS) or get(CV_CAP_PROP_FPS), but that value is only reliable when the source is a video file, not a live camera.

For cameras and webcams you have to estimate the FPS programmatically:

import cv2
import time

video = cv2.VideoCapture(0)   # open the camera / webcam

num_frames = 240              # number of frames to capture
print("Capturing {0} frames".format(num_frames))

start = time.time()           # start time

# Grab the sample frames
for i in range(num_frames):
    ret, frame = video.read()

end = time.time()             # end time

seconds = end - start         # time elapsed
print("Time taken: {0} seconds".format(seconds))

# Calculate frames per second
fps = num_frames / seconds
print("Estimated frames per second: {0}".format(fps))

So this program estimates the frame rate of your video source by grabbing the first 240 frames as a sample and measuring the elapsed time; dividing the number of frames by the elapsed seconds gives the estimated FPS.

Capture 120/240 fps using AVCaptureVideoDataOutput into frame buffer using low resolution

// Core Image does all image ops (crop / transform / ...) on the GPU

// --- create once ---
EAGLContext *glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:glCtx options:@{kCIContextWorkingColorSpace:[NSNull null]}];
// using RGB is about 3x faster
CGColorSpaceRef ciContextColorSpace = CGColorSpaceCreateDeviceRGB();
OSType cvPixelFormat = kCVPixelFormatType_32BGRA;

// create the compression session
VTCompressionSessionRef compressionSession;
NSDictionary *pixelBufferOptions = @{(__bridge NSString*) kCVPixelBufferPixelFormatTypeKey : @(cvPixelFormat),
                                     (__bridge NSString*) kCVPixelBufferWidthKey : @(outputResolution.width),
                                     (__bridge NSString*) kCVPixelBufferHeightKey : @(outputResolution.height),
                                     (__bridge NSString*) kCVPixelBufferOpenGLESCompatibilityKey : @YES,
                                     (__bridge NSString*) kCVPixelBufferIOSurfacePropertiesKey : @{}};

OSStatus ret = VTCompressionSessionCreate(kCFAllocatorDefault,
                                          outputResolution.width,
                                          outputResolution.height,
                                          kCMVideoCodecType_H264,
                                          NULL,
                                          (__bridge CFDictionaryRef)pixelBufferOptions,
                                          NULL,
                                          VTEncoderOutputCallback,
                                          (__bridge void*)self,
                                          &compressionSession);

CVPixelBufferRef finishPixelBuffer;
// This uses the VTCompressionSession pixel buffer pool; you could use AVAssetWriterInputPixelBufferAdaptor instead
CVReturn res = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, VTCompressionSessionGetPixelBufferPool(compressionSession), &finishPixelBuffer);
// -------------------

// ------ scale ------
// a new buffer arrives via:
// - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

CIImage *baseImg = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGFloat outHeight = 240;
CGFloat scale = 1 / (CVPixelBufferGetHeight(pixelBuffer) / outHeight);
CGAffineTransform transform = CGAffineTransformMakeScale(scale, scale);

// the resulting image is not modified after this point
CIImage *resultImg = [baseImg imageByApplyingTransform:transform];
// resultImg = [resultImg imageByCroppingToRect:...];

// CIContext applies the transform to the CIImage and draws into the finish buffer
[ciContext render:resultImg toCVPixelBuffer:finishPixelBuffer bounds:resultImg.extent colorSpace:ciContextColorSpace];
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// [videoInput appendSampleBuffer:CMSampleBufferCreateForImageBuffer(... finishPixelBuffer ...)]
VTCompressionSessionEncodeFrame(compressionSession, finishPixelBuffer, CMSampleBufferGetPresentationTimeStamp(sampleBuffer), CMSampleBufferGetDuration(sampleBuffer), NULL, sampleBuffer, NULL);
// -------------------

How do I control AVAssetWriter to write at the correct FPS

I'm reaching here, but I think this is where you're going wrong. Think of your video capture as a pipeline.

(1) Capture buffer -> (2) Do Something With buffer -> (3) Write buffer as frames in video.

It sounds like you've successfully completed (1) and (2): you're getting the buffers fast enough and you're processing them, so you can vend them as frames.

The problem is almost certainly in (3) writing the video frames.

https://developer.apple.com/reference/avfoundation/avmutablevideocomposition

Check out the frameDuration setting on your AVMutableVideoComposition; you'll need something like CMTime(value: 1, timescale: 60) // 60 FPS or CMTime(value: 1, timescale: 240) // 240 FPS to get what you're after (telling the composition to WRITE this many frames and encode at this rate).
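
A minimal Swift sketch of that setting, assuming you are building the video composition yourself:

let videoComposition = AVMutableVideoComposition()
// one frame every 1/240 of a second, i.e. 240 FPS
videoComposition.frameDuration = CMTime(value: 1, timescale: 240)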

Using AVAssetWriter, it's exactly the same principle, but you set the expected frame rate in the AVAssetWriterInput outputSettings by adding the AVVideoExpectedSourceFrameRateKey.

NSDictionary *videoCompressionSettings = @{AVVideoCodecKey  : AVVideoCodecH264,
                                           AVVideoWidthKey  : @(videoWidth),
                                           AVVideoHeightKey : @(videoHeight),
                                           // AVVideoExpectedSourceFrameRateKey is a compression property, so it goes in the nested dictionary
                                           AVVideoCompressionPropertiesKey : @{AVVideoAverageBitRateKey          : @(bitsPerSecond),
                                                                               AVVideoExpectedSourceFrameRateKey : @(60),
                                                                               AVVideoMaxKeyFrameIntervalKey     : @(1)}
                                           };
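
If you are working in Swift, a rough equivalent of those settings might look like this (videoWidth, videoHeight and bitsPerSecond are placeholders):

// Swift sketch of the same output settings (values are placeholders)
let videoCompressionSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: videoWidth,
    AVVideoHeightKey: videoHeight,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: bitsPerSecond,
        AVVideoExpectedSourceFrameRateKey: 60,
        AVVideoMaxKeyFrameIntervalKey: 1
    ]
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoCompressionSettings)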

To expand a little more: you can't strictly control or sync your camera capture exactly to the output / playback rate; the timing just doesn't work that way and isn't that exact, and of course the processing pipeline adds overhead. When you capture frames they are time stamped, as you've seen, but in the writing / compression phase only the frames needed to produce the output specified for the composition are used.

It goes both ways: you could capture only 30 FPS and write out at 240 FPS, and the video would display fine; you'd just have a lot of frames "missing" that get filled in by the algorithm. You can even vend only 1 frame per second and play back at 30 FPS; the two are separate from each other (how fast I capture vs. how many frames I present per second).

As for how to play it back at a different speed, you just need to tweak the playback rate, slowing it down as needed.

If you've correctly set the time base (frameDuration), it will always play back "normal": you're telling it "playback is X frames per second". Of course your eye may notice a difference (almost certainly between low FPS and high FPS), and the screen may not refresh that fast (above 60 FPS), but regardless the video will play at a "normal" 1x speed for its timebase. To slow the video down: if my timebase is 120 and I slow it to 0.5x, I now effectively see 60 FPS and one second of footage takes two seconds to play back.

You control the playback speed by setting the rate property on AVPlayer https://developer.apple.com/reference/avfoundation/avplayer
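
For example, a minimal Swift sketch (the file URL is a placeholder):

// play a recorded high-frame-rate movie at half speed
let url = URL(fileURLWithPath: "/path/to/slowmo.mov")   // placeholder path
let player = AVPlayer(url: url)
player.rate = 0.5   // a non-zero rate starts playback; 0.5 = half speed, 1.0 = normal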

iPhone capture session: Set custom frame-rate

Thanks, this answered my question! :)

For anyone still wondering, below is the code that I used:

// Instantiate the video device: wide angle camera, back position
let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                          for: .video,
                                          position: .back)

// Set the frame rate to 60, as expected by the model
try! videoDevice?.lockForConfiguration()

videoDevice?.activeFormat = (videoDevice?.formats[30])!
videoDevice?.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: 60)
videoDevice?.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: 60)

videoDevice?.unlockForConfiguration()

// Debug only
// print(videoDevice?.activeFormat)

However, make sure to add some error handling. :D

Thanks again.

How can I call captureOutput at 240fps?

With only logging in captureOutput, it runs at 240 fps.

When captureOutput also saves photos to the album, it drops to about 70–100 fps.

The code below gets 240 fps in the logs.

//
//  ViewController.swift
//  CustomCamera
//
//  Created by chunibyo on 2021/3/8.
//

import UIKit
import AVFoundation
import Vision
import VideoToolbox

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var captureButton: UIButton!
    let sessionQueue = DispatchQueue(label: "Session Queue")
    var status = false
    var zoomStatus = 1
    private var MyCaptureDevice: AVCaptureDevice?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.

        captureButton.layer.cornerRadius = captureButton.frame.width / 2
        captureButton.layer.masksToBounds = true
        captureButton.layer.zPosition = 10

        guard let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) else { return }

        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }

        let captureSession = AVCaptureSession()
        // captureSession.sessionPreset = .photo
        captureSession.addInput(input)

        // 1
        for vFormat in captureDevice.formats {
            // 2
            let ranges = vFormat.videoSupportedFrameRateRanges as [AVFrameRateRange]
            let frameRates = ranges[0]
            // 3
            if frameRates.maxFrameRate == 240 {
                // 4
                try? captureDevice.lockForConfiguration()
                captureDevice.activeFormat = vFormat as AVCaptureDevice.Format
                captureDevice.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: Int32(240))
                captureDevice.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: Int32(240))
                captureDevice.videoZoomFactor = captureDevice.minAvailableVideoZoomFactor
                captureDevice.unlockForConfiguration()
            }
        }

        captureSession.startRunning()

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = view.frame

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        dataOutput.alwaysDiscardsLateVideoFrames = true
        captureSession.addOutput(dataOutput)

        print(captureDevice.minAvailableVideoZoomFactor)
        print(captureDevice.maxAvailableVideoZoomFactor)

        MyCaptureDevice = captureDevice
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)))
        // if !status { return }
        // guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // guard let uiImage = UIImage(pixelBuffer: pixelBuffer) else { return }
        // UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        //
        // guard let captureDevice = self.MyCaptureDevice else { return }
        // if self.zoomStatus == 1 && captureDevice.videoZoomFactor >= CGFloat(Int32(captureDevice.maxAvailableVideoZoomFactor * 0.6)) {
        //     self.zoomStatus = -1
        // }
        // else if self.zoomStatus == -1 && captureDevice.videoZoomFactor <= (captureDevice.minAvailableVideoZoomFactor + 1.0) {
        //     self.zoomStatus = 1
        // }
        // UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        // try? captureDevice.lockForConfiguration()
        // captureDevice.videoZoomFactor += (0.1 * CGFloat(self.zoomStatus))
        // captureDevice.unlockForConfiguration()
    }

    @IBAction func captureControl(_ sender: UIButton) {
        DispatchQueue.main.async {
            if self.status {
                self.captureButton.backgroundColor = .white
                print("stop")
                self.status = !self.status
            } else {
                self.captureButton.backgroundColor = .red
                print("recording...")
                self.status = !self.status
            }
        }
    }
}

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        guard let _cgImage = cgImage else { return nil }
        self.init(cgImage: _cgImage)
    }
}


