Face Recognition on the iPhone

Face detection

I would use the Haar cascades available in OpenCV to perform quick and accurate face detection.

http://opencv.willowgarage.com/wiki/FaceDetection

Face recognition

I would use a method such as Principal Component Analysis (PCA), a.k.a. eigenfaces.

http://www.cognotics.com/opencv/servo_2007_series/part_5/index.html

That link is a tutorial on getting it working with OpenCV - I think it's written for C, but I'm sure you can get the basic gist of it.

You could also look at implementing it yourself if you feel brave (it's not too bad); a rough sketch of the recognition step follows the links below.

http://www.face-rec.org/algorithms/PCA/jcn.pdf

http://blog.zabarauskas.com/eigenfaces-tutorial/
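To give a feel for the recognition step itself, here is a minimal Swift sketch of eigenface matching. It assumes the mean face and the eigenvectors have already been computed offline from the training set (the hard part, covered in the tutorials above), and every name in it is hypothetical rather than taken from any library:

import Accelerate

// A minimal sketch (not a drop-in implementation) of the recognition step
// of eigenfaces. The mean face, eigenvectors, and training weights are
// assumed to be precomputed offline; all names here are hypothetical.
struct EigenfaceRecognizer {
    let meanFace: [Double]        // flattened grayscale pixels
    let eigenfaces: [[Double]]    // k eigenvectors, same length as meanFace
    let trainingWeights: [(label: String, weights: [Double])]  // training images projected into eigenspace

    // Project a flattened grayscale image into the k-dimensional eigenspace.
    func project(_ image: [Double]) -> [Double] {
        let centered = zip(image, meanFace).map { $0 - $1 }
        return eigenfaces.map { vDSP.dot($0, centered) }
    }

    // Classify by nearest neighbour in eigenspace (squared Euclidean distance).
    func recognize(_ image: [Double]) -> String? {
        let w = project(image)
        return trainingWeights.min { a, b in
            vDSP.distanceSquared(a.weights, w) < vDSP.distanceSquared(b.weights, w)
        }?.label
    }
}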

Database

I actually did something similar, albeit on a PC rather than an iPhone, but it's still the same concept. I stored all my images in the database as BLOB data types, then loaded them into my program when necessary.

Edit

The database is a particularly tricky part of the system, as this is where the biggest bottleneck is. In my application, I would go through the following steps...

  1. Open the application and grab the training images from the database
  2. Generate the training set from these images
  3. Once steps 1 and 2 have completed, the system is very quick, as it only performs recognition against the training set.

Fortunately for me, my database server was on a LAN, so speed wasn't a problem; I can see why it's an issue for you, though, since a mobile device has a limited data connection (speed/bandwidth). You can compress the images, but this may hurt the recognition rate due to the reduced image quality, and you will also have to decode them on the device. There is also the question of how to expose the remote database to the application; I believe this is possible using PHP and JSON (among other technologies - see the sketch below).
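As an illustration of the PHP/JSON idea, here is a hedged Swift sketch of fetching training images from a remote endpoint. The URL, the JSON shape (a label plus a base64-encoded image field), and the type names are all assumptions about what your server-side layer might return:

import Foundation
import UIKit

// A hypothetical sketch of pulling training images over JSON. The endpoint
// URL, the field names, and the base64 encoding are assumptions; adapt them
// to whatever your PHP layer actually returns.
struct TrainingImageRecord: Decodable {
    let label: String
    let image: String   // base64-encoded JPEG/PNG bytes
}

func fetchTrainingImages(completion: @escaping ([(label: String, image: UIImage)]) -> Void) {
    let url = URL(string: "https://example.com/api/training-images")!  // placeholder
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let records = try? JSONDecoder().decode([TrainingImageRecord].self, from: data)
        else { completion([]); return }

        // Decode each base64 payload into a UIImage, skipping anything malformed.
        completion(records.compactMap { record -> (label: String, image: UIImage)? in
            guard let bytes = Data(base64Encoded: record.image),
                  let image = UIImage(data: bytes) else { return nil }
            return (record.label, image)
        })
    }.resume()
}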

Retrieving data from a remote database

Maybe you could do an initial synchronization with the database so that the images are cached on the phone? One way or another, I think you are probably going to need the images on the phone at some point regardless; something like the sketch below could work.
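A sketch of that synchronize-then-cache idea, reusing the hypothetical fetchTrainingImages from the previous example: images are written to the app's Caches directory on the first run and loaded from disk afterwards. The file-naming scheme is an assumption.

import Foundation
import UIKit

// A sketch of "synchronize once, then cache": training images are saved to
// the app's Caches directory the first time they are fetched, and read from
// disk on subsequent launches.
func cachedTrainingImages(completion: @escaping ([(label: String, image: UIImage)]) -> Void) {
    let cacheDir = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("training-images", isDirectory: true)

    // If the cache is already populated, load straight from disk.
    if let files = try? FileManager.default.contentsOfDirectory(at: cacheDir, includingPropertiesForKeys: nil),
       !files.isEmpty {
        completion(files.compactMap { url -> (label: String, image: UIImage)? in
            guard let image = UIImage(contentsOfFile: url.path) else { return nil }
            return (url.deletingPathExtension().lastPathComponent, image)
        })
        return
    }

    // Otherwise fetch once from the remote database and persist for next time.
    try? FileManager.default.createDirectory(at: cacheDir, withIntermediateDirectories: true)
    fetchTrainingImages { images in
        for (label, image) in images {
            let fileURL = cacheDir.appendingPathComponent("\(label).jpg")
            try? image.jpegData(compressionQuality: 0.9)?.write(to: fileURL)
        }
        completion(images)
    }
}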

Figuring out the best way to store the recognition data/images in the database was one of the biggest challenges I faced, so I would be interested to hear whether you find a good method.

How to develop a Face recognition iPhone app?

Core Image has a CIDetector that can detect faces in real time (via CIDetectorTypeFace); you can start with these examples to get an overview (a minimal sketch follows the list):

SquareCam

iOS Facial Recognition

Easy Face detection with Core Image

iOS Face Recognition
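For reference, here is a minimal sketch of the Core Image API those samples are built on. Note that CIDetector locates faces; it does not tell you whose face it is:

import CoreImage
import UIKit

// A minimal sketch of Core Image face detection. CIDetector, created with
// the CIDetectorTypeFace type, finds faces and facial features; it does not
// identify them.
func faceBounds(in image: UIImage) -> [CGRect] {
    guard let ciImage = CIImage(image: image),
          let detector = CIDetector(ofType: CIDetectorTypeFace,
                                    context: nil,
                                    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    else { return [] }
    // Each feature is a CIFaceFeature with a bounding box (and eye/mouth positions).
    return detector.features(in: ciImage).map { $0.bounds }
}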

Vision

Apply high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.

See the Vision framework in the Apple docs.
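A minimal sketch of face detection with Vision's VNDetectFaceRectanglesRequest (again, this is detection, not identification):

import Vision

// A minimal sketch of face detection with the Vision framework. Each
// observation carries a normalized bounding box (origin in the lower-left
// corner), not an identity.
func detectFaces(in image: CGImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion((request.results as? [VNFaceObservation]) ?? [])
    }
    // perform(_:) is synchronous, so run it off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }
}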

Facial recognition iOS

I built something similar some years ago.
I would suggest you look into perceptual hashing, as it's an easy and inexpensive way of matching images; a sketch follows.
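To illustrate the idea, here is a sketch of one simple perceptual hash, the "difference hash" (dHash) variant. Note this is a generic image-similarity measure rather than anything face-specific, and the helper names are my own:

import UIKit

// A sketch of a simple perceptual hash: the "difference hash" (dHash).
// Shrink the image to 9x8 grayscale, then set one bit per pixel pair
// depending on whether a pixel is brighter than its right-hand neighbour.
// Similar images produce hashes with a small Hamming distance.
func differenceHash(of image: UIImage) -> UInt64? {
    let width = 9, height = 8
    guard let cgImage = image.cgImage else { return nil }

    var pixels = [UInt8](repeating: 0, count: width * height)
    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width,
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    // 8 rows x 8 adjacent-pixel comparisons = a 64-bit hash.
    var hash: UInt64 = 0
    for y in 0..<height {
        for x in 0..<(width - 1) {
            hash <<= 1
            if pixels[y * width + x] > pixels[y * width + x + 1] { hash |= 1 }
        }
    }
    return hash
}

// Hamming distance between two hashes; a small distance means "similar".
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    return (a ^ b).nonzeroBitCount
}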

Recognizing whose face it is in iOS Swift

If I understand correctly, what you're looking for is Face ID. Unfortunately, Apple only exposes this feature to developers as a means of authenticating the user, not of recognizing a face per se. What you can do is take a picture of the face in the app, create a machine learning model with Apple's Core ML framework, and train that model to recognize faces. The problem is that you'd have to train the model on virtually every face, which is not practical. If you're keen, you can write your own face recognition algorithm and analyze the captured picture with your model, but for that you'd need a really large amount of data.
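If you do train such a model, running it over an image is straightforward with Vision. In this hedged sketch, FaceClassifier is a placeholder for the class Xcode would generate from your trained .mlmodel (e.g. one built with Turi Create, as in the first link below); it is not a real Apple API:

import Vision
import CoreML

// A hedged sketch of running a hypothetical Core ML face classifier through
// Vision. "FaceClassifier" stands in for the class generated from whatever
// .mlmodel you train.
func classifyFace(in image: CGImage) throws -> String? {
    let coreMLModel = try FaceClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: visionModel)
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    // Return the top-ranked label, if the model produced one.
    return (request.results as? [VNClassificationObservation])?.first?.identifier
}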

Edits

Explore the following links. Maybe they can help.

https://gorillalogic.com/blog/how-to-build-a-face-recognition-app-in-ios-using-coreml-and-turi-create-part-1/

https://blog.usejournal.com/humanizing-your-ios-application-with-face-detection-api-kairos-60f64d4b68f7

Keep in mind, however, that these will not be as secure as Face ID; they can easily be spoofed.

iOS, How to do face tracking using the rear camera?

Here's how I adapted the sample to make it work on my iPad Pro.


1) Download the sample project from here: Tracking the User’s Face in Real Time.


2) Change the function which loads the front-facing camera to use the back-facing one. Rename it to configureBackCamera and call it from setupAVCaptureSession:

fileprivate func configureBackCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)

    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }

            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()

                return (device, highestResolution.resolution)
            }
        }
    }

    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}

3) Change the implementation of the method highestResolution420Format. The problem is that, with the back-facing camera in use, you have access to formats of much higher resolution than with the front-facing camera, which can hurt tracking performance. You'll need to adapt this to your use case, but here's an example of limiting the resolution to 1080p.

fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
    var highestResolutionFormat: AVCaptureDevice.Format? = nil
    var highestResolutionDimensions = CMVideoDimensions(width: 0, height: 0)

    for format in device.formats {
        let deviceFormat = format as AVCaptureDevice.Format

        let deviceFormatDescription = deviceFormat.formatDescription
        if CMFormatDescriptionGetMediaSubType(deviceFormatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange {
            let candidateDimensions = CMVideoFormatDescriptionGetDimensions(deviceFormatDescription)
            if candidateDimensions.height > 1080 {
                continue
            }
            if (highestResolutionFormat == nil) || (candidateDimensions.width > highestResolutionDimensions.width) {
                highestResolutionFormat = deviceFormat
                highestResolutionDimensions = candidateDimensions
            }
        }
    }

    if highestResolutionFormat != nil {
        let resolution = CGSize(width: CGFloat(highestResolutionDimensions.width), height: CGFloat(highestResolutionDimensions.height))
        return (highestResolutionFormat!, resolution)
    }

    return nil
}

4) Now the tracking will work, but the face positions will not be correct. The reason is that the UI presentation is wrong: the original sample was designed for the front-facing camera with a mirrored display, while the back-facing camera doesn't need mirroring.

To account for this, simply change the updateLayerGeometry() method. Specifically, you need to change this:

// Scale and mirror the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)

into this:

// Scale the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: -scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)

After this, the tracking should work and the results should be correct.


