Capturing a Still Image with AVFoundation

How to capture a still image with AVFoundation and display the image in another view controller using Swift

Don't assign the image directly, like this:

destination.capturedImage.image = self.imageDetail

Instead, declare another property in CaptureSessionDetailViewController that will hold your image, as shown below:

var capturedImageRef = UIImage()

Now you can assign the image to CaptureSessionDetailViewController from AddPhotoViewController in your segue method:

destination.capturedImageRef = self.imageDetail

Then, in viewDidLoad of CaptureSessionDetailViewController, you can assign that image to the image view:

capturedImage.image = capturedImageRef
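
Putting it together, the flow might look roughly like this (a minimal sketch; the segue identifier "showDetail" is assumed, while capturedImageRef and imageDetail are taken from the code above):

// In AddPhotoViewController
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    // "showDetail" is an assumed segue identifier; use your own.
    if segue.identifier == "showDetail",
        let destination = segue.destination as? CaptureSessionDetailViewController {
        // Pass the UIImage itself, not the (still nil) image view outlet.
        destination.capturedImageRef = self.imageDetail
    }
}

// In CaptureSessionDetailViewController
override func viewDidLoad() {
    super.viewDidLoad()
    // The outlet is loaded by now, so it is safe to assign the image.
    capturedImage.image = capturedImageRef
}

The key point is that outlets of the destination view controller are not yet loaded during the segue, which is why assigning to the image view directly fails.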

Capturing still image with AVFoundation

If you are targeting iOS 10 or above, captureStillImageAsynchronously(from:completionHandler:) is deprecated along with AVCaptureStillImageOutput.

As per the documentation:

The AVCaptureStillImageOutput class is deprecated in iOS 10.0 and does not support newer camera capture features such as RAW image output, Live Photos, or wide-gamut color. In iOS 10.0 and later, use the AVCapturePhotoOutput class instead. (The AVCaptureStillImageOutput class remains supported in macOS 10.12.)

As per your code, you are already using AVCapturePhotoOutput, so just follow the steps below to take a photo from the session. The same steps can be found in the Apple documentation.

  1. Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
  2. Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
  3. Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
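
For reference, steps 1 and 2 usually look something like the following (a rough sketch; sessionOutput and sessionOutputSetting are the property names from the question's code, while captureSession and the input setup are assumed to already exist):

// Step 1: create the photo output and attach it to the session.
let sessionOutput = AVCapturePhotoOutput()
if captureSession.canAddOutput(sessionOutput) {
    captureSession.addOutput(sessionOutput)
}

// Step 2: configure the settings for this particular capture
// (flash, stabilization, etc. can also be configured on the settings object).
let sessionOutputSetting = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecJPEG])

Note that Apple expects a fresh AVCapturePhotoSettings instance for each capture; reusing the same settings object for a second capture raises an exception.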

You are already doing steps 1 and 2, so add this line to your code:

@IBAction func takePhoto(_ sender: Any) {
    print("Taking Photo")
    // The view controller must conform to AVCapturePhotoCaptureDelegate (see below).
    sessionOutput.capturePhoto(with: sessionOutputSetting, delegate: self)
}

and implement the AVCapturePhotoCaptureDelegate method:

optional public func capture(_ captureOutput: AVCapturePhotoOutput,
                             didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                             previewPhotoSampleBuffer: CMSampleBuffer?,
                             resolvedSettings: AVCaptureResolvedPhotoSettings,
                             bracketSettings: AVCaptureBracketedStillImageSettings?,
                             error: Error?)

Note that this delegate gives you a lot of control over taking photos; check out the documentation for more methods. You also need to process the image data, which means converting the sample buffer to a UIImage:

if let sampleBuffer = photoSampleBuffer,
    let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer,
                                                                     previewPhotoSampleBuffer: previewPhotoSampleBuffer),
    let dataProvider = CGDataProvider(data: imageData as CFData),
    let cgImageRef = CGImage(jpegDataProviderSource: dataProvider,
                             decode: nil,
                             shouldInterpolate: true,
                             intent: .defaultIntent) {
    let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: .right)
    // ...
    // Add the image to captureImageView here...
}

Note that the image you get is rotated to the left, so you have to manually rotate it to the right to get a preview-like image.
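
To wire this up, declare conformance and place the conversion above inside the delegate callback; a minimal skeleton (YourViewController is a placeholder for your own class name) could look like this:

// Sketch only: YourViewController stands in for the question's view controller class.
extension YourViewController: AVCapturePhotoCaptureDelegate {
    func capture(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                 previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
        // Place the sample-buffer-to-UIImage conversion shown above here,
        // then assign the resulting image to captureImageView on the main queue.
    }
}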

More info can be found in my previous SO answer

How to return image from AV Foundation still capture and add it to graphics context for a screenshot?

You could add the captured still image as a UIImageView subview of the previewView, then make the call to renderInContext.

Assuming this code is all going on in the same class, you can make image a property:

class CameraViewController: UIViewController {
    var image: UIImage!
    ...

Then in captureStillImageAsynchronouslyFromConnection

instead of

let image: UIImage = UIImage(data: data)!

use

self.image = UIImage(data: data)!

Then in downloadButton use

var imageView = UIImageView(frame: previewView.frame)
imageView.image = self.image
previewView.addSubview(imageView)
previewView.layer.renderInContext(UIGraphicsGetCurrentContext()!)

You can then remove the imageView if you want

imageView.removeFromSuperview()
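
For context, the surrounding code in downloadButton typically wraps this in an image context, roughly like so (a sketch only, using the same older Swift syntax as the answer; it assumes you want the composed screenshot back as a UIImage):

let imageView = UIImageView(frame: previewView.frame)
imageView.image = self.image
previewView.addSubview(imageView)

// Render the preview (including the captured image) into an image context.
UIGraphicsBeginImageContextWithOptions(previewView.bounds.size, false, 0)
previewView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let screenshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// The temporary image view is no longer needed once the screenshot is taken.
imageView.removeFromSuperview()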

Capturing still image from AVFoundation that matches viewfinder border on AVCaptureVideoPreviewLayer in Swift

Steps to solve this:

First, get the full-size image. I also used an extension to the UIImage class called "correctlyOriented":

let correctImage = UIImage(data: imageData!)!.correctlyOriented()

All this does is un-rotate the iPhone image, so a portrait image (taken with the home button at the bottom of the iPhone) is oriented as expected. The extension is below:

extension UIImage {

    func correctlyOriented() -> UIImage {

        if imageOrientation == .up {
            return self
        }

        // We need to calculate the proper transformation to make the image upright.
        // We do it in 2 steps: rotate if Left/Right/Down, and then flip if Mirrored.
        var transform = CGAffineTransform.identity

        switch imageOrientation {
        case .down, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: size.height)
            transform = transform.rotated(by: CGFloat.pi)
        case .left, .leftMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.rotated(by: CGFloat.pi * 0.5)
        case .right, .rightMirrored:
            transform = transform.translatedBy(x: 0, y: size.height)
            transform = transform.rotated(by: -CGFloat.pi * 0.5)
        default:
            break
        }

        switch imageOrientation {
        case .upMirrored, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        case .leftMirrored, .rightMirrored:
            transform = transform.translatedBy(x: size.height, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        default:
            break
        }

        // Now we draw the underlying CGImage into a new context, applying the transform
        // calculated above.
        guard
            let cgImage = cgImage,
            let colorSpace = cgImage.colorSpace,
            let context = CGContext(data: nil,
                                    width: Int(size.width),
                                    height: Int(size.height),
                                    bitsPerComponent: cgImage.bitsPerComponent,
                                    bytesPerRow: 0,
                                    space: colorSpace,
                                    bitmapInfo: cgImage.bitmapInfo.rawValue) else {
            return self
        }

        context.concatenate(transform)

        switch imageOrientation {
        case .left, .leftMirrored, .right, .rightMirrored:
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.height, height: size.width))
        default:
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        }

        // And now we just create a new UIImage from the drawing context.
        guard let rotatedCGImage = context.makeImage() else {
            return self
        }

        return UIImage(cgImage: rotatedCGImage)
    }
}

Next, calculate the height factor:

let heightFactor = self.view.frame.height / correctImage.size.height

Create a new CGSize based on the height factor, and then resize the image (using a resize image function, not shown):

let newSize = CGSize(width: correctImage.size.width * heightFactor, height: correctImage.size.height * heightFactor)

let correctResizedImage = self.imageWithImage(image: correctImage, scaledToSize: newSize)
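
The imageWithImage(image:scaledToSize:) helper is not shown in the original answer; a minimal sketch of one could be:

// Hypothetical resize helper: draws the image into a context of the new size.
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    image.draw(in: CGRect(origin: .zero, size: newSize))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage ?? image
}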

Now, we have an image that is the same height as our device, but wider, due to the 4:3 aspect ratio of the iPhone camera vs the 16:9 aspect ratio of the iPhone screen. So, crop the image to be the same size as the device screen:

let screenCrop: CGRect = CGRect(x: (newSize.width - self.view.bounds.width) * 0.5,
                                y: 0,
                                width: self.view.bounds.width,
                                height: self.view.bounds.height)

var correctScreenCroppedImage = self.crop(image: correctResizedImage, to: screenCrop)

Lastly, we need to replicate the "crop" created by the green "viewfinder". So, we perform another crop to make the final image match:

let correctCrop: CGRect = CGRect(x: 0,
                                 y: (correctScreenCroppedImage!.size.height * 0.5) - (correctScreenCroppedImage!.size.width * 0.5),
                                 width: correctScreenCroppedImage!.size.width,
                                 height: correctScreenCroppedImage!.size.width)

var correctCroppedImage = self.crop(image: correctScreenCroppedImage!, to: correctCrop)
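
The crop(image:to:) helper is also not shown; one possible implementation (a sketch only, cropping the underlying CGImage) is:

// Hypothetical crop helper: crops in point coordinates, accounting for the image scale.
func crop(image: UIImage, to rect: CGRect) -> UIImage? {
    let scaledRect = CGRect(x: rect.origin.x * image.scale,
                            y: rect.origin.y * image.scale,
                            width: rect.size.width * image.scale,
                            height: rect.size.height * image.scale)
    guard let croppedCGImage = image.cgImage?.cropping(to: scaledRect) else {
        return nil
    }
    return UIImage(cgImage: croppedCGImage, scale: image.scale, orientation: image.imageOrientation)
}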

Credit for this answer goes to @damirstuhec.

How to capture picture with AVCaptureSession in Swift?

AVCaptureSession Sample

Note: this sample uses older Swift syntax and the AVCaptureStillImageOutput class, which is deprecated as of iOS 10; see the AVCapturePhotoOutput steps earlier in this article for the modern approach.

import UIKit
import AVFoundation

class ViewController: UIViewController {
    let captureSession = AVCaptureSession()
    let stillImageOutput = AVCaptureStillImageOutput()
    var error: NSError?

    override func viewDidLoad() {
        super.viewDidLoad()
        let devices = AVCaptureDevice.devices().filter{ $0.hasMediaType(AVMediaTypeVideo) && $0.position == AVCaptureDevicePosition.Back }
        if let captureDevice = devices.first as? AVCaptureDevice {

            captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
            captureSession.sessionPreset = AVCaptureSessionPresetPhoto
            captureSession.startRunning()
            stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if captureSession.canAddOutput(stillImageOutput) {
                captureSession.addOutput(stillImageOutput)
            }
            if let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
                previewLayer.bounds = view.bounds
                previewLayer.position = CGPointMake(view.bounds.midX, view.bounds.midY)
                previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                let cameraPreview = UIView(frame: CGRectMake(0.0, 0.0, view.bounds.size.width, view.bounds.size.height))
                cameraPreview.layer.addSublayer(previewLayer)
                cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "saveToCamera:"))
                view.addSubview(cameraPreview)
            }
        }
    }

    func saveToCamera(sender: UITapGestureRecognizer) {
        if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
            stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
                (imageDataSampleBuffer, error) -> Void in
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                UIImageWriteToSavedPhotosAlbum(UIImage(data: imageData), nil, nil, nil)
            }
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}

UIImagePickerController Sample

import UIKit

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
    let imagePicker = UIImagePickerController()
    @IBOutlet weak var imageViewer: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage!, editingInfo: [NSObject : AnyObject]!) {
        dismissViewControllerAnimated(true, completion: nil)
        imageViewer.image = image
    }

    @IBAction func presentImagePicker(sender: AnyObject) {
        if UIImagePickerController.isCameraDeviceAvailable(UIImagePickerControllerCameraDevice.Front) {
            imagePicker.delegate = self
            imagePicker.sourceType = UIImagePickerControllerSourceType.Camera
            presentViewController(imagePicker, animated: true, completion: nil)
        }
    }
}

How to deal with asynchronous completion handler to capture still images programmatically AND sequentially with AVFoundation

So, according to your description, you have two asynchronous methods and you want them to be called sequentially in a loop.

You can approach this problem as follows:

First define these asynchronous methods:

typedef void (^completion_t)(id result, NSError* error);

- (void)captureImage:(completion_t)completion {
// ...
}

- (void)uploadImage:(UIImage*)image params:(NSDictionary*)params completion:(completion_t)completion {
// ...
}

Then define a private helper method captureAndUpload, which sets up a continuation for captureImage that calls uploadImage. The continuation of uploadImage checks whether it should repeat and, if so, calls captureAndUpload again:

- (void)captureAndUpload {
    [self captureImage:^(UIImage* image, NSError *error) {
        if (image != nil) {
            [self uploadImage:image params:someParams
                   completion:^(id result, NSError *error) {
                       if (result != nil) {
                           if (repeat) {
                               [self captureAndUpload];
                           }
                           else {
                               // ..
                           }
                       }
                       else {
                           // ...
                       }
                   }];
        }
        else {
            // ...
        }
    }];
}

A more complete example that takes care of concurrency issues:

@interface Foo : NSObject

- (instancetype)init;

- (void) resume;
- (void) suspend;

@end

typedef void (^completion_t)(id result, NSError* error);

@implementation Foo {
    int _resumed;
    BOOL _running;
    dispatch_queue_t _sync_queue;
}

- (instancetype)init {
    self = [super init];
    if (self) {
        _resumed = 0;
        _running = NO;
        _sync_queue = dispatch_queue_create("imageuploader.sync_queue", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void) resume {
    dispatch_async(_sync_queue, ^{
        if (++_resumed == 1) {
            [self captureAndUpload];
        }
    });
}

- (void) suspend {
    dispatch_async(_sync_queue, ^{
        --_resumed;
    });
}

- (void)captureAndUpload {
    _running = YES;
    [self captureImage:^(UIImage* image, NSError *error) {
        if (image != nil) {
            [self uploadImage:image params:nil
                   completion:^(id result, NSError *error) {
                       if (result != nil) {
                           if (_resumed > 0) {
                               dispatch_async(_sync_queue, ^{
                                   [self captureAndUpload];
                               });
                           }
                           else {
                               dispatch_async(_sync_queue, ^{
                                   _running = NO;
                               });
                           }
                       }
                       else {
                           // ...
                       }
                   }];
        }
        else {
            // ...
        }
    }];
}

- (void)captureImage:(completion_t)completion {
    // ...
}

- (void)uploadImage:(UIImage*)image params:(NSDictionary*)params completion:(completion_t)completion {
    // ...
}

@end

This example is still not complete, though: it does not handle errors, there's no way to cancel the underlying network request, and it's not clear what the state of the "uploader" (an instance of Foo) currently is. That is, we might want to know when the last upload has been completed after suspending the uploader.
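
For readers working in Swift, the same recursive continuation pattern might be sketched roughly like this (the type and method names are illustrative, not from the original question):

import UIKit

final class ImageUploader {
    // Illustrative state; the real code would decide when to stop repeating.
    var shouldRepeat = true

    func captureImage(completion: @escaping (UIImage?, Error?) -> Void) {
        // ... capture asynchronously, then call completion ...
    }

    func uploadImage(_ image: UIImage, completion: @escaping (Any?, Error?) -> Void) {
        // ... upload asynchronously, then call completion ...
    }

    func captureAndUpload() {
        captureImage { image, error in
            guard let image = image else { return /* handle capture error */ }
            self.uploadImage(image) { result, error in
                guard result != nil else { return /* handle upload error */ }
                if self.shouldRepeat {
                    // Start the next capture only after the previous upload finished.
                    self.captureAndUpload()
                }
            }
        }
    }
}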


