How to Use AVCapturePhotoOutput

How to use AVCapturePhotoOutput

Updated to Swift 4
Hi, it's really easy to use AVCapturePhotoOutput.

You need to implement the AVCapturePhotoCaptureDelegate, which returns the CMSampleBuffer.

You can also get a preview image if you specify the previewPhotoFormat on the AVCapturePhotoSettings:

import AVFoundation
import UIKit

class CameraCaptureOutput: NSObject, AVCapturePhotoCaptureDelegate {

    let cameraOutput = AVCapturePhotoOutput()

    func capturePhoto() {
        // Request a capture that also returns a small (160x160) preview image.
        let settings = AVCapturePhotoSettings()
        let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
        let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
                             kCVPixelBufferWidthKey as String: 160,
                             kCVPixelBufferHeightKey as String: 160]
        settings.previewPhotoFormat = previewFormat
        self.cameraOutput.capturePhoto(with: settings, delegate: self)
    }

    // Called when the processed photo (and its preview) is ready.
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let error = error {
            print(error.localizedDescription)
        }

        if let sampleBuffer = photoSampleBuffer, let previewBuffer = previewPhotoSampleBuffer, let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
            print("image: \(UIImage(data: dataImage)?.size)") // Your Image
        }
    }
}
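
The previewPhotoSampleBuffer holds an uncompressed pixel buffer rather than JPEG data. Here is a minimal sketch (not from the original answer) of how you might turn it into a UIImage, assuming you also import CoreImage:

// Sketch: convert the preview sample buffer into a UIImage.
func previewImage(from previewBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(previewBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return UIImage(ciImage: ciImage)
}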

For more information visit https://developer.apple.com/reference/AVFoundation/AVCapturePhotoOutput

Note: You have to add the AVCapturePhotoOutput to the AVCaptureSession before taking the picture. So something like session.addOutput(output), and then output.capturePhoto(with: settings, delegate: self). Thanks @BigHeadCreations.
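
A minimal sketch of that wiring, assuming cameraOutput is the AVCapturePhotoOutput from the class above and that you have already created an AVCaptureDeviceInput for your camera:

let session = AVCaptureSession()
session.sessionPreset = .photo
// session.addInput(yourCameraInput)   // add your AVCaptureDeviceInput here (name is illustrative)
if session.canAddOutput(cameraOutput) {
    session.addOutput(cameraOutput)    // the output must be attached before capturing
}
session.startRunning()
cameraOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)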

AVCapturePhotoOutput - settings may not be reused

An AVCapturePhotoSettings object is unique and cannot be reused, so you need to get fresh settings for every capture, using a method like this:

func getSettings(camera: AVCaptureDevice, flashMode: CurrentFlashMode) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()

    if camera.hasFlash {
        switch flashMode {
        case .auto: settings.flashMode = .auto
        case .on: settings.flashMode = .on
        default: settings.flashMode = .off
        }
    }
    return settings
}

As you can see, lockForConfiguration() is not needed, because the flash mode is requested per capture through AVCapturePhotoSettings rather than set on the device itself.
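
For comparison, lockForConfiguration() is needed when you change a property on the AVCaptureDevice itself, such as the torch; a rough sketch, not part of the original answer:

// Device-level properties, unlike per-capture flash settings, require a configuration lock.
func turnTorchOn(_ device: AVCaptureDevice) {
    guard device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = .on
        device.unlockForConfiguration()
    } catch {
        print("Could not lock \(device.localizedName) for configuration: \(error)")
    }
}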

CurrentFlashMode is an enum created to keep things clear:

enum CurrentFlashMode {
    case off
    case on
    case auto
}

Then simply use it while capturing the photo:

@IBAction func captureButtonPressed(_ sender: UIButton) {
    let currentSettings = getSettings(camera: currentCamera, flashMode: currentFlashMode)
    photoOutput.capturePhoto(with: currentSettings, delegate: self)
}

Unable to use AVCapturePhotoOutput to capture photo Swift + Xcode

You are almost there.

For Output as AVCapturePhotoOutput

Check out the AVCapturePhotoOutput documentation for more help.

These are the steps to capture a photo.

  1. Create an AVCapturePhotoOutput object. Use its properties to
    determine supported capture settings and to enable certain features
    (for example, whether to capture Live Photos).
  2. Create and configure an AVCapturePhotoSettings object to choose
    features and settings for a specific capture (for example, whether
    to enable image stabilization or flash).
  3. Capture an image by passing your photo settings object to the
    capturePhoto(with:delegate:) method along with a delegate object
    implementing the AVCapturePhotoCaptureDelegate protocol. The photo
    capture output then calls your delegate to notify you of significant
    events during the capture process.

Add the code below to your clickCapture method, and don't forget to conform to and implement the delegate in your class.

let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
                     kCVPixelBufferWidthKey as String: 160,
                     kCVPixelBufferHeightKey as String: 160]
settings.previewPhotoFormat = previewFormat
self.cameraOutput.capturePhoto(with: settings, delegate: self)
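
Step 1 from the list above (creating the output and opting into features) usually happens once, during session setup. A rough sketch, assuming captureSession is an AVCaptureSession you have already configured with a camera input:

let cameraOutput = AVCapturePhotoOutput()
if captureSession.canAddOutput(cameraOutput) {
    captureSession.addOutput(cameraOutput)
}
// Opt into optional features where the hardware supports them.
cameraOutput.isHighResolutionCaptureEnabled = true
if cameraOutput.isLivePhotoCaptureSupported {
    cameraOutput.isLivePhotoCaptureEnabled = true
}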

For Output as AVCaptureStillImageOutput

If you intend to snap a photo from the video connection, you can follow the steps below.

Step 1: Get the connection

if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
    // ...
    // Code for photo capture goes here...
}

Step 2: Capture the photo

  • Call the captureStillImageAsynchronouslyFromConnection function on
    the stillImageOutput.
  • The sampleBuffer represents the data that is captured.

stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
    // ...
    // Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
})

Step 3: Process the Image Data

  • We will need to take a few steps to process the image data found in sampleBuffer in order to end up with a UIImage that we can insert into our captureImageView and easily use elsewhere in our app.

if sampleBuffer != nil {
    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
    let dataProvider = CGDataProviderCreateWithCFData(imageData)
    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
    let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
    // ...
    // Add the image to captureImageView here...
}

Step 4: Save the image

Based on your needs, either save the image to the photo gallery or show it in an image view.
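
A short sketch of this step, assuming image is the UIImage from Step 3 and captureImageView is an outlet in your view controller:

// Show the photo in an image view...
captureImageView.image = image

// ...or save it to the photo library (this requires the photo library usage description key in Info.plist).
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)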


For more details, check out the Create custom camera view guide under Snap a Photo.

iOS 10 - Objective-C : How to implement AVCapturePhotoOutput() to capture image and videos?

_avCaptureOutput = [[AVCapturePhotoOutput alloc] init];
_avSettings = [AVCapturePhotoSettings photoSettings];

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
// Add your AVCaptureDeviceInput (the camera) to the session here, then attach the photo output:
[captureSession addOutput:self.avCaptureOutput];
[captureSession startRunning];

[self.avCaptureOutput capturePhotoWithSettings:self.avSettings delegate:self];

self must implement the AVCapturePhotoCaptureDelegate protocol:

#pragma mark - AVCapturePhotoCaptureDelegate
- (void)captureOutput:(AVCapturePhotoOutput *)captureOutput didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error
{
    if (error) {
        NSLog(@"error : %@", error.localizedDescription);
    }

    if (photoSampleBuffer) {
        NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer];
        UIImage *image = [UIImage imageWithData:data];
    }
}

Now you have the image and can do whatever you want with it.


Note: Since iOS 11, -captureOutput:didFinishProcessingPhotoSampleBuffer:... is deprecated; you need to use -captureOutput:didFinishProcessingPhoto:error: instead:

- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(nullable NSError *)error
{
    NSData *imageData = [photo fileDataRepresentation];
    UIImage *image = [UIImage imageWithData:imageData];
    ...
}

AVCaptureStillImageOutput vs AVCapturePhotoOutput in Swift 3

AVCaptureStillImageOutput being deprecated means you can keep using it in iOS 10, but:

  • Apple makes no promises as to how long past iOS 10 it'll stay available.
  • As new hardware and software features get added in iOS 10 and beyond, you won't get access to all of them. For example, you can set up AVCaptureStillImageOutput for wide color, but it's a lot easier to do wide color with AVCapturePhotoOutput. And for RAW capture or Live Photos, AVCapturePhotoOutput is the only game in town (see the sketch after this list).
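
A hedged sketch of those two captures, assuming photoOutput is an AVCapturePhotoOutput already attached to a running session and self conforms to AVCapturePhotoCaptureDelegate:

// RAW capture is only available through AVCapturePhotoOutput.
// (With older SDKs availableRawPhotoPixelFormatTypes is [NSNumber]; convert with .uint32Value if needed.)
if let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first {
    let rawSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    photoOutput.capturePhoto(with: rawSettings, delegate: self)
}

// The same goes for Live Photos.
if photoOutput.isLivePhotoCaptureSupported {
    photoOutput.isLivePhotoCaptureEnabled = true
    let liveSettings = AVCapturePhotoSettings()
    liveSettings.livePhotoMovieFileURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("live.mov")
    photoOutput.capturePhoto(with: liveSettings, delegate: self)
}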

If you're happy proceeding despite the deprecation, your issue isn't that outputSettings is removed — it's still there.

Something to be aware of for beta 6 and beyond (though it turns out not to be an issue here): APIs that use NSDictionary without explicit key and value types come into Swift 3 as [AnyHashable: Any], and the Foundation or CoreFoundation types you might use in a dictionary are no longer implicitly bridged to Swift types. (Some of the other questions about beta 6 dictionary conversions might point you in the right direction there.)

However, I'm not getting any compilation errors for setting outputSettings. Whether in your full code or by reducing it to the essential parts for that line:

var stillImageOutput : AVCaptureStillImageOutput?
stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

...the only warnings I see are about the deprecation.

No active and enabled video connection when trying to use AVCapturePhotoOutput

I was eventually able to find a working example of how to use AVCapturePhotoOutput from Swift.

The following code was taken from Dan Clipca's GitHub project, AVCapturePhotoOutput:

//
// ViewController.swift
//

import AVFoundation
import Cocoa
import AppKit

class ViewController: NSViewController, AVCapturePhotoCaptureDelegate {
    // MARK: - Properties
    var previewLayer: AVCaptureVideoPreviewLayer?
    var captureSession: AVCaptureSession?
    var captureConnection: AVCaptureConnection?
    var cameraDevice: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var mouseLocation: NSPoint { NSEvent.mouseLocation }

    // MARK: - Lifecycle
    override func viewDidLoad() {
        super.viewDidLoad()

        // Do any additional setup after loading the view.
        prepareCamera()
        startSession()
    }

    // MARK: - UtilityFunctions
    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }

    @IBAction func button(_ sender: Any) {
        moveMouseToRandomScreenPoint()
        capturePhoto()
    }

    func startSession() {
        if let videoSession = captureSession {
            if !videoSession.isRunning {
                videoSession.startRunning()
            }
        }
    }

    func stopSession() {
        if let videoSession = captureSession {
            if videoSession.isRunning {
                videoSession.stopRunning()
            }
        }
    }

    internal func photoOutput(_ output: AVCapturePhotoOutput, willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        print("willBeginCaptureFor")
    }

    internal func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("didFinishProcessingPhoto")
        print(photo)
    }

    func capturePhoto() {
        print(captureConnection?.isActive)
        let photoSettings = AVCapturePhotoSettings()
        photoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }

    func prepareCamera() {
        photoOutput = AVCapturePhotoOutput()
        captureSession = AVCaptureSession()
        captureSession!.sessionPreset = AVCaptureSession.Preset.photo
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        previewLayer!.videoGravity = AVLayerVideoGravity.resizeAspectFill
        do {
            let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.front)
            let cameraDevice = deviceDiscoverySession.devices[0]
            let videoInput = try AVCaptureDeviceInput(device: cameraDevice)
            captureSession!.beginConfiguration()
            if captureSession!.canAddInput(videoInput) {
                print("Adding videoInput to captureSession")
                captureSession!.addInput(videoInput)
            } else {
                print("Unable to add videoInput to captureSession")
            }
            if captureSession!.canAddOutput(photoOutput!) {
                captureSession!.addOutput(photoOutput!)
                print("Adding videoOutput to captureSession")
            } else {
                print("Unable to add videoOutput to captureSession")
            }
            captureConnection = AVCaptureConnection(inputPorts: videoInput.ports, output: photoOutput!)
            captureSession!.commitConfiguration()
            if let previewLayer = previewLayer {
                if previewLayer.connection?.isVideoMirroringSupported == true {
                    previewLayer.connection?.automaticallyAdjustsVideoMirroring = false
                    previewLayer.connection?.isVideoMirrored = true
                }
                previewLayer.frame = view.bounds
                view.layer = previewLayer
                view.wantsLayer = true
            }
            captureSession!.startRunning()
        } catch {
            print(error.localizedDescription)
        }
    }
}

SwiftUI AVCapturePhotoOutput Does Not Work

The main problem is that you create a PhotoDelegate but do not store it. In iOS, the delegate object is usually stored as a weak reference to prevent a circular reference / retain cycle.

You can fix this by simply creating another property in your view, but I suggest you create a model class instead. If you're doing something unrelated to the view itself, that's a sign you're better off moving it somewhere else, such as an ObservableObject. You can also make the model your delegate, so you don't have to create a separate object or rely on a singleton (needing a singleton here would be another sign that something is off).

import AVFoundation
import SwiftUI
import UIKit

class CaptureModel: NSObject, ObservableObject {
    let captureSession = AVCaptureSession()
    var backCamera: AVCaptureDevice?
    var frontCamera: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var currentCamera: AVCaptureDevice?
    @Published var capturedImage: UIImage?

    override init() {
        super.init()
        setupCaptureSession()
        setupDevices()
        setupInputOutput()
    }

    func setupCaptureSession() {
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
    }//setupCaptureSession

    func setupDevices() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: .video, position: .unspecified)

        let devices = deviceDiscoverySession.devices
        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            } else if device.position == AVCaptureDevice.Position.front {
                frontCamera = device
            }//if else
        }//for in

        currentCamera = frontCamera
    }//setupDevices

    func setupInputOutput() {
        do {
            //you only get here if there is a camera ( ! ok )
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.beginConfiguration() // pair with commitConfiguration() below
            captureSession.addInput(captureDeviceInput)
            photoOutput = AVCapturePhotoOutput()
            photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: { (success, error) in
            })
            captureSession.addOutput(photoOutput!)
            captureSession.commitConfiguration()
        } catch {
            print("Error creating AVCaptureDeviceInput:", error)
        }
    }//setupInputOutput

    func startRunningCaptureSession() {
        let settings = AVCapturePhotoSettings()

        captureSession.startRunning()
        photoOutput?.capturePhoto(with: settings, delegate: self)
    }//startRunningCaptureSession

    func stopRunningCaptureSession() {
        captureSession.stopRunning()
    }//stopRunningCaptureSession
}

extension CaptureModel: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else {
            return
        }
        capturedImage = image
    }
}

struct ContentView: View {
    @StateObject var model = CaptureModel()

    var body: some View {
        VStack {
            Text("Take a Photo Automatically")
                .padding()

            ZStack {
                RoundedRectangle(cornerRadius: 0)
                    .stroke(Color.blue, lineWidth: 4)
                    .frame(width: 320, height: 240, alignment: .center)

                model.capturedImage.map { capturedImage in
                    Image(uiImage: capturedImage)
                }
            }

            Spacer()
        }
        .onAppear {
            if UIImagePickerController.isSourceTypeAvailable(.camera) {
                model.startRunningCaptureSession()
            } else {
                print("No Camera is Available")
            }
        }
        .onDisappear {
            model.stopRunningCaptureSession()
        }
    }
}//struct

