Multi-Face Detection in RealityKit

Is it possible to track a face and render it in a RealityKit ARView?


If you want to use RealityKit's rendering technology, you should use its own anchors.

So, for a RealityKit face-tracking experience, you just need:

AnchorEntity(AnchoringComponent.Target.face)

And you don't even need the session(_:didAdd:) and session(_:didUpdate:) instance methods if you're using a Reality Composer scene.
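Here's a minimal programmatic sketch (no Reality Composer), assuming arView is connected in a storyboard and the device has a TrueDepth camera; the sphere is just placeholder content:

import ARKit
import RealityKit

class FaceViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Face tracking runs on the front-facing TrueDepth camera
        arView.session.run(ARFaceTrackingConfiguration())

        // RealityKit's own face anchor; no ARSessionDelegate methods required
        let faceAnchor = AnchorEntity(AnchoringComponent.Target.face)

        // Placeholder content: a small sphere hovering in front of the face
        let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                 materials: [SimpleMaterial(color: .green, isMetallic: false)])
        sphere.position.z = 0.1
        faceAnchor.addChild(sphere)
        arView.scene.addAnchor(faceAnchor)
    }
}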

If you prepare a scene in Reality Composer, the .face anchor type is available for you from the start. Here's what the non-editable, hidden Swift code inside a .reality file looks like:

public static func loadFace() throws -> Facial.Face {

    guard let realityFileURL = Foundation.Bundle(for: Facial.Face.self).url(forResource: "Facial",
                                                                            withExtension: "reality")
    else {
        throw Facial.LoadRealityFileError.fileNotFound("Facial.reality")
    }

    let realityFileSceneURL = realityFileURL.appendingPathComponent("face", isDirectory: false)
    let anchorEntity = try Facial.Face.loadAnchor(contentsOf: realityFileSceneURL)
    return createFace(from: anchorEntity)
}
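For reference, this is how you'd typically load that generated scene at runtime (assuming, as above, a project named Facial containing a scene named face):

do {
    // loadFace() is the auto-generated loader shown above
    let faceScene = try Facial.loadFace()
    arView.scene.addAnchor(faceScene)
} catch {
    print("Unable to load the face scene: \(error)")
}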

If you need more detailed info about anchors, please read this post.

P.S.

But, at the moment, there's one unpleasant problem: if you're using a scene built in Reality Composer, you can use only one type of anchor at a time (horizontal, vertical, image, face, or object). Hence, if you need to use ARWorldTrackingConfiguration along with ARFaceTrackingConfiguration, don't use Reality Composer scenes. I'm sure this situation will be fixed in the near future.
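If you really do need both kinds of tracking at once, one possible session-level workaround (on supported devices, iOS 13+) is to feed face data from the front camera into a world-tracking session. This is just a sketch and sidesteps Reality Composer entirely:

let config = ARWorldTrackingConfiguration()

// Requires a device that supports simultaneous front and rear camera capture
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    config.userFaceTrackingEnabled = true
}
arView.session.run(config)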

Implement a crosshair-like behaviour in RealityKit

Try the following solution:

import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var sphere: ModelEntity?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Cast a ray from the center of the screen, where the crosshair sits
        let touch = arView.center
        let results: [CollisionCastHit] = arView.hitTest(touch)

        if let result: CollisionCastHit = results.first {
            if result.entity.name == "Cube" && sphere?.isAnchored == true {
                print("BOOM!")
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        // Crosshair: a small sphere pinned 15 cm in front of the camera
        let mesh01 = MeshResource.generateSphere(radius: 0.01)
        sphere = ModelEntity(mesh: mesh01)
        sphere?.transform.translation.z = -0.15
        let cameraAnchor = AnchorEntity(.camera)
        sphere?.setParent(cameraAnchor)
        arView.scene.addAnchor(cameraAnchor)

        // Model for collision: a box anchored to a detected plane
        let mesh02 = MeshResource.generateBox(size: 0.3)
        let box = ModelEntity(mesh: mesh02, materials: [SimpleMaterial()])
        box.generateCollisionShapes(recursive: true)
        box.name = "Cube"
        let planeAnchor = AnchorEntity(.plane(.any,
                                              classification: .any,
                                              minimumBounds: [0.2, 0.2]))
        box.setParent(planeAnchor)
        arView.scene.addAnchor(planeAnchor)
    }
}
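The trick here is AnchorEntity(.camera): because the sphere is parented to the camera, it stays at a fixed offset in front of the lens and therefore always appears at the center of the screen, while arView.hitTest(arView.center) casts from that same screen point to find whatever entity the crosshair is currently over.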

Which goal should be chosen for ARCoachingOverlayView when detecting faces or images?

When using ARCoachingOverlayView with ARFaceTrackingConfiguration, you should choose the tracking case:

ARCoachingOverlayView.Goal.tracking

And when using ARCoachingOverlayView with ARImageTrackingConfiguration, you should choose the anyPlane case:

ARCoachingOverlayView.Goal.anyPlane
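In both cases the wiring is the same. Here's a minimal sketch (addCoachingOverlay is a hypothetical helper, and arView is assumed to be connected in a storyboard):

import ARKit
import RealityKit

func addCoachingOverlay(to arView: ARView, goal: ARCoachingOverlayView.Goal) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = arView.session
    coachingOverlay.goal = goal
    coachingOverlay.activatesAutomatically = true
    coachingOverlay.frame = arView.bounds
    coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    arView.addSubview(coachingOverlay)
}

// Face tracking:  addCoachingOverlay(to: arView, goal: .tracking)
// Image tracking: addCoachingOverlay(to: arView, goal: .anyPlane)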

