How to take UIImage of AVCaptureVideoPreviewLayer instead of AVCapturePhotoOutput capture

Basically, instead of using AVCaptureVideoPreviewLayer to grab frames, you should use AVCaptureVideoDataOutputSampleBufferDelegate.
Here is an example:

import Foundation
import UIKit
import AVFoundation

protocol CaptureManagerDelegate: AnyObject {
    func processCapturedImage(image: UIImage)
}

class CaptureManager: NSObject {
    internal static let shared = CaptureManager()
    weak var delegate: CaptureManagerDelegate?
    var session: AVCaptureSession?

    override init() {
        super.init()
        session = AVCaptureSession()

        // Set up input
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device) else {
            return
        }
        session?.addInput(input)

        // Set up output
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue.main)
        session?.addOutput(output)
    }

    func startSession() {
        session?.startRunning()
    }

    func stopSession() {
        session?.stopRunning()
    }

    func getImageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return nil
        }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        // Unlock on every exit path, including the early returns below
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
        guard let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
            return nil
        }
        guard let cgImage = context.makeImage() else {
            return nil
        }
        return UIImage(cgImage: cgImage, scale: 1, orientation: .right)
    }
}

extension CaptureManager: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let outputImage = getImageFromSampleBuffer(sampleBuffer: sampleBuffer) else {
            return
        }
        delegate?.processCapturedImage(image: outputImage)
    }
}

Update: To process the captured images, implement the processCapturedImage method of the CaptureManagerDelegate protocol in any class you like, for example:

import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the delegate before starting the session so the first frames are not missed
        CaptureManager.shared.delegate = self
        CaptureManager.shared.startSession()
    }
}

extension ViewController: CaptureManagerDelegate {
    func processCapturedImage(image: UIImage) {
        self.imageView.image = image
    }
}

AVCaptureVideoPreviewLayer is not visible in the screenshot

I was in the same position, and researched two separate solutions to this problem.

  1. Set up the ViewController as an AVCaptureVideoDataOutputSampleBufferDelegate and sample the video output to take the screenshot.

  2. Set up the ViewController as an AVCapturePhotoCaptureDelegate and capture the photo.

The mechanism for setting up the former is described, for example, in the question above: How to take UIImage of AVCaptureVideoPreviewLayer instead of AVCapturePhotoOutput capture

I implemented both to check if there was any difference in the quality of the image (there wasn't).

If all you need is the camera snapshot, then that's it. But it sounds like you need to draw an additional animation on top. For this, I created a container UIView of the same size as the snapshot, added a UIImageView with the snapshot to it, and then drew the animation on top. After that, you can render the container into a graphics context and grab the result with UIGraphicsGetImageFromCurrentImageContext.
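For illustration, here is a minimal sketch of that container approach; `snapshot` and `overlayView` are placeholder names for the captured image and the view holding your animation:

func composite(snapshot: UIImage, overlayView: UIView) -> UIImage? {
    // Container matching the snapshot's size, with the snapshot below the overlay
    let container = UIView(frame: CGRect(origin: .zero, size: snapshot.size))
    let imageView = UIImageView(image: snapshot)
    imageView.frame = container.bounds
    container.addSubview(imageView)
    container.addSubview(overlayView)

    UIGraphicsBeginImageContextWithOptions(container.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    container.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}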

As for which of solutions (1) and (2) to use, if you don't need to support different camera orientations in the app, it probably doesn't matter. However, if you need to switch between front and back camera and support different camera orientations, then you need to know the snapshot orientation to apply the animation in the right place, and getting that right turned out to be a total bear with method (1).

The solution I used:

  1. Make the UIViewController conform to AVCapturePhotoCaptureDelegate

  2. Add the photo output to the AVCaptureSession

    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    // ...

    // When configuring the session
    if self.session.canAddOutput(self.photoOutput) {
        self.session.addOutput(self.photoOutput)
        self.photoOutput.isHighResolutionCaptureEnabled = true
    }

  3. Capture the snapshot

    let settings = AVCapturePhotoSettings()
    let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
    let previewFormat = [
        kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
        kCVPixelBufferWidthKey as String: 160,
        kCVPixelBufferHeightKey as String: 160
    ]
    settings.previewPhotoFormat = previewFormat
    photoOutput.capturePhoto(with: settings, delegate: self)

  4. Rotate or flip the snapshot before doing the rest

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard error == nil else {
            // Handle the error
            return
        }

        if let dataImage = photo.fileDataRepresentation() {
            print(UIImage(data: dataImage)?.size as Any)

            let dataProvider = CGDataProvider(data: dataImage as CFData)
            let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
            // https://developer.apple.com/documentation/uikit/uiimageorientation?language=objc
            let orientation = UIApplication.shared.statusBarOrientation
            var imageOrientation = UIImage.Orientation.right
            switch orientation {
            case .portrait:
                imageOrientation = self.cameraPosition == .back ? .right : .leftMirrored
            case .landscapeRight:
                imageOrientation = self.cameraPosition == .back ? .up : .downMirrored
            case .portraitUpsideDown:
                imageOrientation = self.cameraPosition == .back ? .left : .rightMirrored
            case .landscapeLeft:
                imageOrientation = self.cameraPosition == .back ? .down : .upMirrored
            case .unknown:
                imageOrientation = self.cameraPosition == .back ? .right : .leftMirrored
            @unknown default:
                imageOrientation = self.cameraPosition == .back ? .right : .leftMirrored
            }
            let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: imageOrientation)

            // Do whatever you need to do with the image

        } else {
            // Handle the missing image data
        }
    }

If you need to know the size of the image in order to position the animations, you can use the AVCaptureVideoDataOutputSampleBufferDelegate strategy to detect the size of the buffer once, as sketched below.
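A minimal sketch, assuming a video data output whose sample buffer delegate is wired up as in the first answer; it records the buffer dimensions from the first frame and then ignores the rest:

var detectedBufferSize: CGSize?

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Only sample the very first buffer; the size stays constant for the session
    guard detectedBufferSize == nil,
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    detectedBufferSize = CGSize(width: CVPixelBufferGetWidth(pixelBuffer),
                                height: CVPixelBufferGetHeight(pixelBuffer))
}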

Overlay an image (UIImageView) onto a photo

There are at least two ways of doing this.

1) Capture the photo from the camera and composite the overlay image based on the draggable image view's position. This produces the best possible quality (a sketch of this approach follows the code below).

2) If you don't need full photo quality, you can use another solution: render the video preview layer and the draggable view's layer into a graphics context:

func takePhotoWithOverlay() -> UIImage? {
    // Assumed to exist elsewhere in your class:
    // overlayImageView: UIImageView holding the overlay image
    // videoPreviewLayer: AVCaptureVideoPreviewLayer

    guard let superview = overlayImageView.superview else { return nil }

    UIGraphicsBeginImageContextWithOptions(superview.bounds.size, false, 0.0)
    defer { UIGraphicsEndImageContext() }

    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    videoPreviewLayer.render(in: context)
    overlayImageView.layer.render(in: context)

    return UIGraphicsGetImageFromCurrentImageContext()
}
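For option 1, here is a minimal sketch of compositing at full photo resolution. It is an illustration rather than the original answer's code; `photo` stands for the captured UIImage, and `overlayImageView` is assumed to be a direct subview of the on-screen container that mirrors the photo's aspect ratio:

func compositeFullQuality(photo: UIImage, overlayImageView: UIImageView) -> UIImage? {
    guard let container = overlayImageView.superview,
          let overlayImage = overlayImageView.image else { return nil }

    // Scale factors from the on-screen container to the full-size photo
    let scaleX = photo.size.width / container.bounds.width
    let scaleY = photo.size.height / container.bounds.height

    UIGraphicsBeginImageContextWithOptions(photo.size, false, photo.scale)
    defer { UIGraphicsEndImageContext() }

    // Draw the full-resolution photo, then the overlay scaled to match
    photo.draw(in: CGRect(origin: .zero, size: photo.size))
    overlayImage.draw(in: CGRect(x: overlayImageView.frame.origin.x * scaleX,
                                 y: overlayImageView.frame.origin.y * scaleY,
                                 width: overlayImageView.frame.width * scaleX,
                                 height: overlayImageView.frame.height * scaleY))
    return UIGraphicsGetImageFromCurrentImageContext()
}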

How to use AVCapturePhotoOutput

Updated to Swift 4. It's really easy to use AVCapturePhotoOutput.

You need the AVCapturePhotoCaptureDelegate, which returns the CMSampleBuffer.

You can also get a preview image if you specify the previewPhotoFormat in the AVCapturePhotoSettings:

class CameraCaptureOutput: NSObject, AVCapturePhotoCaptureDelegate {

    let cameraOutput = AVCapturePhotoOutput()

    func capturePhoto() {
        let settings = AVCapturePhotoSettings()
        let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
        let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
                             kCVPixelBufferWidthKey as String: 160,
                             kCVPixelBufferHeightKey as String: 160]
        settings.previewPhotoFormat = previewFormat
        self.cameraOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let error = error {
            print(error.localizedDescription)
        }

        if let sampleBuffer = photoSampleBuffer, let previewBuffer = previewPhotoSampleBuffer, let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
            print("image: \(String(describing: UIImage(data: dataImage)?.size))") // Your image
        }
    }
}

For more information visit https://developer.apple.com/reference/AVFoundation/AVCapturePhotoOutput

Note: You have to add the AVCapturePhotoOutput to the AVCaptureSession before taking the picture. So: session.addOutput(output) first, and only then output.capturePhoto(with: settings, delegate: self). Thanks @BigHeadCreations.
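To make the ordering concrete, a small sketch; `output` is an instance of the CameraCaptureOutput class above, and `session` is a hypothetical AVCaptureSession configured elsewhere:

// Attach the output before capturing
if session.canAddOutput(output.cameraOutput) {
    session.addOutput(output.cameraOutput)
}
session.startRunning()
// ... later, e.g. when the user taps the shutter button:
output.capturePhoto()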

Cropping AVCaptureVideoPreviewLayer output to a square

Even if the preview layer is square, keep in mind that the generated still image keeps its original size.

From what I see, the problem is here:

UIGraphicsBeginImageContext(CGSizeMake(width, width));
[image drawInRect: CGRectMake(0, 0, width, width)];

The first line already makes your context square. You need to draw the image at its original aspect ratio and let the context clip it; the second line instead forces the original image into a square, which makes it look "squeezed".

You should find the image height that keeps the original aspect ratio while fitting your width. Then draw the image at that size (keeping the original ratio) in your square context. If you want to clip the center, adjust the Y position of the drawing.

Something similar to this:

- (void)processImage:(UIImage *)image {
    UIGraphicsBeginImageContext(CGSizeMake(width, width));
    CGFloat imageHeight = floorf(width / image.size.width * image.size.height);
    CGFloat offsetY = floorf((imageHeight - width) / 2.0f);
    [image drawInRect:CGRectMake(0, -offsetY, width, imageHeight)];
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [captureImageGrab setImage:smallImage];
}

That should do it.

SwiftUI AVCapturePhotoOutput Does Not Work

The main problem is that you create a PhotoDelegate but do not store it. In iOS, delegate objects are usually stored as weak references to prevent retain cycles, so your delegate is deallocated before the capture completes.

You can fix this by simply creating another property in your view, but I suggest you create a model class instead. Doing work unrelated to the view itself is a sign that it belongs somewhere else, such as an ObservableObject. You can also make the model the capture delegate, so you don't have to create a separate object or resort to a singleton (needing one is another sign that something is off).

class CaptureModel: NSObject, ObservableObject {
    let captureSession = AVCaptureSession()
    var backCamera: AVCaptureDevice?
    var frontCamera: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var currentCamera: AVCaptureDevice?
    @Published var capturedImage: UIImage?

    override init() {
        super.init()
        setupCaptureSession()
        setupDevices()
        setupInputOutput()
    }

    func setupCaptureSession() {
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
    }//setupCaptureSession

    func setupDevices() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: .video, position: .unspecified)

        let devices = deviceDiscoverySession.devices
        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            } else if device.position == AVCaptureDevice.Position.front {
                frontCamera = device
            }//if else
        }//for in

        currentCamera = frontCamera
    }//setupDevices

    func setupInputOutput() {
        do {
            // You only get here if there is a camera, so the force unwrap is acceptable
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.beginConfiguration()
            captureSession.addInput(captureDeviceInput)
            photoOutput = AVCapturePhotoOutput()
            photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
            captureSession.addOutput(photoOutput!)
            captureSession.commitConfiguration()
        } catch {
            print("Error creating AVCaptureDeviceInput:", error)
        }
    }//setupInputOutput

    func startRunningCaptureSession() {
        let settings = AVCapturePhotoSettings()

        captureSession.startRunning()
        photoOutput?.capturePhoto(with: settings, delegate: self)
    }//startRunningCaptureSession

    func stopRunningCaptureSession() {
        captureSession.stopRunning()
    }//stopRunningCaptureSession
}

extension CaptureModel: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else {
            return
        }
        capturedImage = image
    }
}

struct ContentView: View {
    @StateObject var model = CaptureModel()

    var body: some View {
        VStack {
            Text("Take a Photo Automatically")
                .padding()

            ZStack {
                RoundedRectangle(cornerRadius: 0)
                    .stroke(Color.blue, lineWidth: 4)
                    .frame(width: 320, height: 240, alignment: .center)

                model.capturedImage.map { capturedImage in
                    Image(uiImage: capturedImage)
                }
            }

            Spacer()
        }
        .onAppear {
            if UIImagePickerController.isSourceTypeAvailable(.camera) {
                model.startRunningCaptureSession()
            } else {
                print("No Camera is Available")
            }
        }
        .onDisappear {
            model.stopRunningCaptureSession()
        }
    }
}//struct

Errors with AVCaptureSession to UIImage in Swift 3

For the first error:

Implement the AVCaptureVideoDataOutputSampleBufferDelegate protocol, which gives you the AVCaptureOutput and a CMSampleBuffer for the current session.

class YourClass: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate { }

public func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Use this CMSampleBuffer to get an image
}

For the second error, replace this:

let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

with:

func ciimageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> CIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    return CIImage(cvImageBuffer: imageBuffer)
}

This will give you a CIImage.
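If you ultimately need a UIImage (as in the other answers here), a minimal sketch of the conversion; reusing a single CIContext (a hypothetical stored property) is cheaper than creating one per frame:

let ciContext = CIContext()

func uiimageFromCIImage(_ cameraImage: CIImage) -> UIImage? {
    // Render the CIImage into a CGImage, then wrap it in a UIImage
    guard let cgImage = ciContext.createCGImage(cameraImage, from: cameraImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}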


