How to capture depth data from camera in iOS 11 and Swift 4?
First, you need to use the dual camera; otherwise you won't get any depth data.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
And keep a reference to your queue
let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
You'll also probably want to synchronize the video and depth data
var outputSynchronizer: AVCaptureDataOutputSynchronizer?
Then you can synchronize the two outputs in your viewDidLoad() method like this
if sessionOutput?.isDepthDataDeliverySupported == true {
    sessionOutput?.isDepthDataDeliveryEnabled = true
    depthDataOutput?.connection(with: .depthData)?.isEnabled = true
    depthDataOutput?.isFilteringEnabled = true
    outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
    outputSynchronizer!.setDelegate(self, queue: dataOutputQueue)
}
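Once the synchronizer is set up, the matched video and depth frames arrive together in a single delegate callback. Here's a minimal sketch of that delegate, assuming a view controller that keeps the depthDataOutput reference from above (the class and property names are placeholders):

```swift
import AVFoundation

extension ViewController: AVCaptureDataOutputSynchronizerDelegate {
    func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
        // Pull the depth data that was matched to this video frame, if any.
        guard let depthDataOutput = depthDataOutput,
              let syncedDepth = synchronizedDataCollection
                  .synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
              !syncedDepth.depthDataWasDropped else { return }

        let depthData = syncedDepth.depthData
        // depthData.depthDataMap is a CVPixelBuffer you can render or sample.
    }
}
```

Note that frames can be dropped under load, which is why the depthDataWasDropped check matters before touching the buffer.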
I would recommend watching WWDC session 507 - they also provide a full sample app that does exactly what you want.
https://developer.apple.com/videos/play/wwdc2017/507/
Getting depth data from custom camera
This is the problem with my code:
DepthDataDelivery won't be supported unless the photo output is added to a session and the input to the session is properly configured to deliver depth.
Set the session preset first:
self.captureSession.sessionPreset = .photo
After adding dual camera input, add the photo output.
guard self.captureSession.canAddOutput(photoOutput!) else { return }
self.captureSession.addOutput(photoOutput!)
Now set depth delivery enabled:
self.photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported
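Putting the ordering together, here is a condensed configuration sketch; captureSession is assumed to be a property, and error handling is elided for brevity:

```swift
import AVFoundation

func configureSession() {
    captureSession.beginConfiguration()
    // 1. The preset must support depth (.photo does on dual-camera devices).
    captureSession.sessionPreset = .photo

    // 2. Add the dual-camera input.
    guard let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          captureSession.canAddInput(input) else { return }
    captureSession.addInput(input)

    // 3. Add the photo output before asking about depth support.
    let photoOutput = AVCapturePhotoOutput()
    guard captureSession.canAddOutput(photoOutput) else { return }
    captureSession.addOutput(photoOutput)

    // 4. Only now is isDepthDataDeliverySupported meaningful.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    captureSession.commitConfiguration()
}
```

The key point is step 4: isDepthDataDeliverySupported reports false until the output is attached to a session whose input can actually deliver depth.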
Can AVDepthData be returned with more than 8-bit data?
I haven't tried, so I'm not sure if you can get a "deeper" depth format out of the TrueDepth camera.
If you can, however, converting the depth data that comes out of the capture isn't the way to do it. depthData.converting(toDepthDataType:) is analogous to converting scalar types: if you have a value of type Float, the extra decimal places you gain by converting it to Double are all zeros, so your existing measurement hasn't gained any precision.
The way to specify a depth capture format is before capture. Set your capture device's activeDepthDataFormat to one of the supportedDepthDataFormats compatible with its current activeFormat. The values you find in supportedDepthDataFormats will tell you what types and precision of depth data your capture device is capable of recording.
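As a sketch, selecting a Float16 depth format up front might look like this; device is assumed to be your already-configured AVCaptureDevice, and the subtype check uses the standard CoreMedia call:

```swift
import AVFoundation

func selectFloat16DepthFormat(for device: AVCaptureDevice) throws {
    // Depth formats are tied to the device's current activeFormat.
    let depthFormats = device.activeFormat.supportedDepthDataFormats

    // Keep only the formats whose pixel format is 16-bit float depth.
    let float16Formats = depthFormats.filter {
        CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16
    }

    // Pick the highest-resolution candidate, if any.
    guard let best = float16Formats.max(by: {
        CMVideoFormatDescriptionGetDimensions($0.formatDescription).width <
        CMVideoFormatDescriptionGetDimensions($1.formatDescription).width
    }) else { return }

    try device.lockForConfiguration()
    device.activeDepthDataFormat = best
    device.unlockForConfiguration()
}
```

If float16Formats comes back empty, the device simply can't record depth at that precision, and no amount of post-capture conversion will add it.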
How to capture depth data as kCVPixelFormatType_DepthFloat16 on iOS?
AVDepthData has a converting(toDepthDataType:) method. Just call:
avDepth.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
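To make that concrete, here is a hedged sketch that converts and then samples the center pixel of the resulting buffer; avDepth is assumed to be an AVDepthData you already captured, and the Float16 type requires Swift 5.3 or later (on older Swift you would read the raw UInt16 bits and convert by hand):

```swift
import AVFoundation

func centerDepth(from avDepth: AVDepthData) -> Float {
    // Convert to 16-bit float depth if it isn't already in that format.
    let depth16 = avDepth.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
    let map = depth16.depthDataMap

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    let base = CVPixelBufferGetBaseAddress(map)!

    // Read the Float16 value at the center pixel.
    let row = base.advanced(by: (height / 2) * rowBytes)
    let value = row.assumingMemoryBound(to: Float16.self)[width / 2]
    return Float(value)
}
```

Remember the caveat from the previous answer: this conversion changes the storage type, not the precision of the underlying measurement.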