How to Perform Face Detection in Swift

Recognizing whose face it is in iOS Swift

If I understand correctly, you're looking for something like FaceID. Unfortunately, Apple only exposes this feature to developers as a means to authenticate the user, not to recognize a face per se. What you can do is take a picture of the face in the app, create a machine learning model with Apple's CoreML framework, and train the model to recognize faces. The problem with this is that you would virtually have to train the model with every face, which is not feasible. If you're keen, you can write your own face recognition algorithm and analyze the captured picture with your model. For this, you'd need a very large amount of data.
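For illustration, a recognition pass with such a trained model could look roughly like the sketch below. The FaceClassifier model name (and its auto-generated class) is hypothetical and stands in for whatever classifier you train, for example with Turi Create as in the first link under "Edits":

import UIKit
import Vision

// Minimal sketch, assuming a bundled classifier named FaceClassifier.mlmodel
// (hypothetical) whose labels are the people you trained it on.
func identifyFace(in cgImage: CGImage, completion: @escaping (String?) -> Void) {
    guard let model = try? VNCoreMLModel(for: FaceClassifier().model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // The top classification label is the "recognized" person, if any.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}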

Edits

Explore the following links. Maybe they can help.

https://gorillalogic.com/blog/how-to-build-a-face-recognition-app-in-ios-using-coreml-and-turi-create-part-1/

https://blog.usejournal.com/humanizing-your-ios-application-with-face-detection-api-kairos-60f64d4b68f7

Keep in mind, however, that these will not be secure like FaceID is. They can easily be spoofed.

How do I perform Face Detection in Swift

Xcode 9 • Swift 4

extension NSImage {
    var ciImage: CIImage? {
        guard let data = tiffRepresentation else { return nil }
        return CIImage(data: data)
    }
    var faces: [NSImage] {
        guard let ciImage = ciImage else { return [] }
        return (CIDetector(ofType: CIDetectorTypeFace, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])?
            .features(in: ciImage) as? [CIFaceFeature])?
            .map {
                let ciimage = ciImage.cropped(to: $0.bounds) // Swift 3: use cropping(to:)
                let imageRep = NSCIImageRep(ciImage: ciimage)
                let nsImage = NSImage(size: imageRep.size)
                nsImage.addRepresentation(imageRep)
                return nsImage
            } ?? []
    }
}

Testing

let image = NSImage(contentsOf: URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!)!
let faces = image.faces

Sample Image

Face Detection with Camera

There are two ways to detect faces: CIFaceDetector and AVCaptureMetadataOutput. Depending on your requirements, choose what is relevant for you.

CIFaceDetector has more features: it gives you the location of the eyes and mouth, a smile detector, and so on.

On the other hand, AVCaptureMetadataOutput is computed on the video frames, the detected faces are tracked, and there is no extra code for us to add. I find that, because of the tracking, faces are detected more reliably with this approach. The downside is that you will simply detect faces, not the position of the eyes or mouth.
Another advantage of this method is that orientation issues are smaller: you can set videoOrientation whenever the device orientation changes, and the orientation of the faces will be relative to that orientation.

In my case, my application uses YUV420 as the required format, so using CIDetector (which works with RGB) in real time was not viable. Using AVCaptureMetadataOutput saved a lot of effort and performed more reliably thanks to the continuous tracking.

Once I had the bounding box for the faces, I coded extra features, such as skin detection, and applied them to the still image.

Note: When you capture a still image, the face box information is added along with the metadata so there are no sync issues.

You can also use a combination of the two to get better results.

Explore and evaluate the pros and cons as per your application.
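As a rough illustration of the AVCaptureMetadataOutput route described above, here is a minimal sketch in Swift 4 syntax. The CameraViewController name and its session property are assumptions, not code from the original answer:

import UIKit
import AVFoundation

// Sketch: add a metadata output that reports tracked faces to a delegate.
class CameraViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    let session = AVCaptureSession() // assumed to already have a camera input

    func addFaceMetadataOutput() {
        let metadataOutput = AVCaptureMetadataOutput()
        guard session.canAddOutput(metadataOutput) else { return }
        session.addOutput(metadataOutput)

        // The supported types are only known after the output joins the session.
        if metadataOutput.availableMetadataObjectTypes.contains(.face) {
            metadataOutput.metadataObjectTypes = [.face]
        }
        metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
    }

    // Each AVMetadataFaceObject carries a tracked faceID and normalized bounds
    // (0...1, in image coordinates, not screen coordinates).
    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        let faces = metadataObjects.compactMap { $0 as? AVMetadataFaceObject }
        print("Detected \(faces.count) face(s)")
    }
}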


The face rectangle is relative to the image origin, so it may differ from what you need on screen.
Use:

for (AVMetadataFaceObject *faceFeatures in metadataObjects) {
    CGRect face = faceFeatures.bounds;
    CGRect facePreviewBounds = CGRectMake(face.origin.y * previewLayerRect.size.width,
                                          face.origin.x * previewLayerRect.size.height,
                                          face.size.width * previewLayerRect.size.height,
                                          face.size.height * previewLayerRect.size.width);

    /* Draw rectangle facePreviewBounds on screen */
}
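If you render the camera feed in an AVCaptureVideoPreviewLayer, an alternative is to let the layer do the coordinate conversion for you with transformedMetadataObject(for:). A Swift sketch, assuming previewLayer is that layer and metadataObjects is the array from the delegate callback:

// Sketch: convert metadata (image) coordinates to preview-layer coordinates.
for object in metadataObjects {
    guard let faceObject = object as? AVMetadataFaceObject,
          let converted = previewLayer.transformedMetadataObject(for: faceObject) else { continue }
    let facePreviewBounds = converted.bounds
    // Draw a rectangle at facePreviewBounds on screen.
}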

Real-time face detection with the camera in Swift 3

Swift 3

I found a solution using AVFoundation that creates a square face-tracking overlay in real time on iOS. I modified some code from here.

import UIKit
import AVFoundation

class DetailsView: UIView {
    func setup() {
        layer.borderColor = UIColor.red.withAlphaComponent(0.7).cgColor
        layer.borderWidth = 5.0
    }
}

class ViewController: UIViewController {

    let stillImageOutput = AVCaptureStillImageOutput()

    var session: AVCaptureSession?
    var stillOutput = AVCaptureStillImageOutput()
    var borderLayer: CAShapeLayer?

    let detailsView: DetailsView = {
        let detailsView = DetailsView()
        detailsView.setup()

        return detailsView
    }()

    lazy var previewLayer: AVCaptureVideoPreviewLayer? = {
        var previewLay = AVCaptureVideoPreviewLayer(session: self.session!)
        previewLay?.videoGravity = AVLayerVideoGravityResizeAspectFill

        return previewLay
    }()

    lazy var frontCamera: AVCaptureDevice? = {
        guard let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] else { return nil }

        return devices.filter { $0.position == .front }.first
    }()

    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyLow])

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer?.frame = view.frame
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard let previewLayer = previewLayer else { return }

        view.layer.addSublayer(previewLayer)
        view.addSubview(detailsView)
        view.bringSubview(toFront: detailsView)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        sessionPrepare()
        session?.startRunning()
    }

    // Capture a still image and save it to the photo library.
    func saveToCamera() {
        if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
            stillImageOutput.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
                if let sampleBuffer = sampleBuffer,
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer),
                    let cameraImage = UIImage(data: imageData) {
                    UIImageWriteToSavedPhotosAlbum(cameraImage, nil, nil, nil)
                }
            })
        }
    }
}

extension ViewController {

    func sessionPrepare() {
        session = AVCaptureSession()

        guard let session = session, let captureDevice = frontCamera else { return }

        session.sessionPreset = AVCaptureSessionPresetPhoto

        do {
            let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
            session.beginConfiguration()
            stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

            if session.canAddOutput(stillImageOutput) {
                session.addOutput(stillImageOutput)
            }

            if session.canAddInput(deviceInput) {
                session.addInput(deviceInput)
            }

            let output = AVCaptureVideoDataOutput()
            output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]

            output.alwaysDiscardsLateVideoFrames = true

            if session.canAddOutput(output) {
                session.addOutput(output)
            }

            session.commitConfiguration()

            let queue = DispatchQueue(label: "output.queue")
            output.setSampleBufferDelegate(self, queue: queue)

        } catch {
            print("error with creating AVCaptureDeviceInput")
        }
    }
}

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)
        let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments as! [String : Any]?)
        let options: [String : Any] = [CIDetectorImageOrientation: exifOrientation(orientation: UIDevice.current.orientation),
                                       CIDetectorSmile: true,
                                       CIDetectorEyeBlink: true]
        let allFeatures = faceDetector?.features(in: ciImage, options: options)

        let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer)
        let cleanAperture = CMVideoFormatDescriptionGetCleanAperture(formatDescription!, false)

        guard let features = allFeatures else { return }

        for feature in features {
            if let faceFeature = feature as? CIFaceFeature {
                let faceRect = calculateFaceRect(facePosition: faceFeature.mouthPosition, faceBounds: faceFeature.bounds, clearAperture: cleanAperture)
                update(with: faceRect)
            }
        }

        if features.count == 0 {
            DispatchQueue.main.async {
                self.detailsView.alpha = 0.0
            }
        }
    }

    func exifOrientation(orientation: UIDeviceOrientation) -> Int {
        switch orientation {
        case .portraitUpsideDown:
            return 8
        case .landscapeLeft:
            return 3
        case .landscapeRight:
            return 1
        default:
            return 6
        }
    }

    func videoBox(frameSize: CGSize, apertureSize: CGSize) -> CGRect {
        let apertureRatio = apertureSize.height / apertureSize.width
        let viewRatio = frameSize.width / frameSize.height

        var size = CGSize.zero

        if viewRatio > apertureRatio {
            size.width = frameSize.width
            size.height = apertureSize.width * (frameSize.width / apertureSize.height)
        } else {
            size.width = apertureSize.height * (frameSize.height / apertureSize.width)
            size.height = frameSize.height
        }

        var videoBox = CGRect(origin: .zero, size: size)

        if size.width < frameSize.width {
            videoBox.origin.x = (frameSize.width - size.width) / 2.0
        } else {
            videoBox.origin.x = (size.width - frameSize.width) / 2.0
        }

        if size.height < frameSize.height {
            videoBox.origin.y = (frameSize.height - size.height) / 2.0
        } else {
            videoBox.origin.y = (size.height - frameSize.height) / 2.0
        }

        return videoBox
    }

    func calculateFaceRect(facePosition: CGPoint, faceBounds: CGRect, clearAperture: CGRect) -> CGRect {
        let parentFrameSize = previewLayer!.frame.size
        let previewBox = videoBox(frameSize: parentFrameSize, apertureSize: clearAperture.size)

        var faceRect = faceBounds

        swap(&faceRect.size.width, &faceRect.size.height)
        swap(&faceRect.origin.x, &faceRect.origin.y)

        let widthScaleBy = previewBox.size.width / clearAperture.size.height
        let heightScaleBy = previewBox.size.height / clearAperture.size.width

        faceRect.size.width *= widthScaleBy
        faceRect.size.height *= heightScaleBy
        faceRect.origin.x *= widthScaleBy
        faceRect.origin.y *= heightScaleBy

        faceRect = faceRect.offsetBy(dx: 0.0, dy: previewBox.origin.y)
        let frame = CGRect(x: parentFrameSize.width - faceRect.origin.x - faceRect.size.width - previewBox.origin.x / 2.0,
                           y: faceRect.origin.y,
                           width: faceRect.width,
                           height: faceRect.height)

        return frame
    }
}

extension ViewController {
    func update(with faceRect: CGRect) {
        DispatchQueue.main.async {
            UIView.animate(withDuration: 0.2) {
                self.detailsView.alpha = 1.0
                self.detailsView.frame = faceRect
            }
        }
    }
}

Edit:

Swift 4

Apple's own Vision framework is available from iOS 11 (Swift 4) to detect faces in real time. Click the link below for the documentation and a sample app.

Face detection swift vision kit

Hopefully you were able to use VNDetectFaceRectanglesRequest and detect faces. There are lots of ways to draw the rectangles, but the simplest is to use a CAShapeLayer to draw a layer on top of your image for each detected face.

Say you have a VNDetectFaceRectanglesRequest like the one below:

let request = VNDetectFaceRectanglesRequest { [unowned self] request, error in
    if let error = error {
        // something is not working as expected
        print("Face detection failed: \(error)")
    } else {
        // we got some faces detected
        self.handleFaces(with: request)
    }
}
let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
do {
    try handler.perform([request])
} catch {
    // catch the exception, if any
    print(error)
}

You can implement a simple method called handleFaces that, for each detected face, uses the VNFaceObservation properties to draw a CAShapeLayer:

func handleFaces(with request: VNRequest) {
    imageView.layer.sublayers?.forEach { layer in
        layer.removeFromSuperlayer()
    }
    guard let observations = request.results as? [VNFaceObservation] else {
        return
    }
    observations.forEach { observation in
        // boundingBox is normalized (0...1) with a lower-left origin,
        // so convert it to the image view's coordinate space.
        let boundingBox = observation.boundingBox
        let size = CGSize(width: boundingBox.width * imageView.bounds.width,
                          height: boundingBox.height * imageView.bounds.height)
        let origin = CGPoint(x: boundingBox.minX * imageView.bounds.width,
                             y: (1 - observation.boundingBox.minY) * imageView.bounds.height - size.height)

        let layer = CAShapeLayer()
        layer.frame = CGRect(origin: origin, size: size)
        layer.borderColor = UIColor.red.cgColor
        layer.borderWidth = 2

        imageView.layer.addSublayer(layer)
    }
}

More info can be found here in Github repo iOS-11-by-Examples

Swift: CoreImage Face Detection will Detect Every round object as a Face

You can change CIDetectorAccuracyHigh to CIDetectorAccuracyLow instead; it would be more usable.
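For reference, a sketch of how those detector options are passed. The CIDetectorMinFeatureSize value is just an illustrative guess, not from the original answer:

import CoreImage

// Sketch: lower accuracy is faster/cruder; CIDetectorMinFeatureSize (0...1,
// fraction of the image) can additionally filter out small detections.
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyLow,
                                    CIDetectorMinFeatureSize: 0.15])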

IOS ML Kit face tracking does not work correctly

The problem was the imageOrientation. I had set the orientation to portrait-only in Xcode but was rotating the image based on UIDeviceOrientation, which is wrong; the fix was to keep the imageOrientation fixed at .up.

Edit:
Also, make sure you don't override the output image orientation like this:

private let videoDataOutput = AVCaptureVideoDataOutput()

guard let connection = self.videoDataOutput.connection(with: AVMediaType.video),
    connection.isVideoOrientationSupported else { return }
connection.videoOrientation = .portrait
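As a rough sketch of the fix described above, here is one way to force the .up orientation instead of deriving it from UIDeviceOrientation. The helper name and the assumption that you already have a CGImage from the capture output are mine:

import UIKit

// Sketch: wrap the captured CGImage with a fixed .up orientation so later
// processing does not depend on the current device orientation.
func fixedOrientationImage(from cgImage: CGImage) -> UIImage {
    return UIImage(cgImage: cgImage, scale: UIScreen.main.scale, orientation: .up)
}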

Detect if face is within a circle

I have also done a similar project for fun. Link here: https://github.com/sawin0/FaceDetection

For those who don't want to dive into the repo:

I have a quick suggestion for you: if you have the circle and the face as CGPath values, you can compare the circle's and the face's bounding boxes using CGRect's contains(_:).

Here is a code snippet

let circleBox = circleCGPath.boundingBox
let faceBox = faceRectanglePath.boundingBox

if circleBox.contains(faceBox) {
    print("face is inside the circle")
} else {
    print("face is outside the circle")
}

I hope this helps you and others too.

P.S. If there is any better way to do this then please feel free to share.
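One possible refinement, sketched below under the assumption that circleCGPath and faceRectanglePath are the same paths as above: since bounding boxes ignore the circle's curvature, you can test the face box's corners against the circle path itself with CGPath's contains(_:using:transform:) (iOS 11+):

// Stricter check: is every corner of the face's bounding box inside the circle path?
let faceBox = faceRectanglePath.boundingBox
let corners = [CGPoint(x: faceBox.minX, y: faceBox.minY),
               CGPoint(x: faceBox.maxX, y: faceBox.minY),
               CGPoint(x: faceBox.minX, y: faceBox.maxY),
               CGPoint(x: faceBox.maxX, y: faceBox.maxY)]
let faceIsInsideCircle = corners.allSatisfy { circleCGPath.contains($0) } // allSatisfy: Swift 4.2+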


