Flip ARFaceAnchor from Left-Handed to Right-Handed Coordinate System

First, let's look at the default matrix values.


Identity matrix

Here's an identity 4x4 matrix with its default values.

If you need additional info on elements of 4x4 matrices read this post.

 //  0  1  2  3
 ┌             ┐
 |  1  0  0  0 |  x
 |  0  1  0  0 |  y
 |  0  0  1  0 |  z
 |  0  0  0  1 |
 └             ┘
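As a quick sanity check, here's the identity transform expressed with Swift's simd types, the matrix type ARKit uses for anchor transforms (a minimal sketch):

```swift
import simd

// The identity transform: position at the origin, no rotation, unit scale.
let identity = matrix_identity_float4x4

// Column 3 holds the translation component (x, y, z, 1).
let translation = identity.columns.3

// Multiplying any point by the identity leaves it unchanged.
let point = SIMD4<Float>(1, 2, 3, 1)
let transformed = identity * point
```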


Position Mirroring

To flip an entity's X and Z axes, reorienting them from the face-tracking environment to the world-tracking environment, use the following 4x4 transform matrix.

 //  0  1  2  3
 ┌             ┐
 | -1  0  0  0 |  x
 |  0  1  0  0 |  y
 |  0  0 -1  0 |  z
 |  0  0  0  1 |
 └             ┘
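In code, this mirror can be built as a diagonal simd_float4x4 and left-multiplied onto a face anchor's transform. A sketch; the `mirrored(_:)` helper is my own, not an ARKit API:

```swift
import simd

// X/Z mirror from the matrix above: negates the X and Z axes.
let mirror = simd_float4x4(diagonal: SIMD4<Float>(-1, 1, -1, 1))

// Hypothetical helper: reorients a face-tracking transform into
// world-tracking space by applying the mirror.
func mirrored(_ faceTransform: simd_float4x4) -> simd_float4x4 {
    return mirror * faceTransform
}

// A transform positioned at (1, 0.5, 2) ends up at (-1, 0.5, -2).
var t = matrix_identity_float4x4
t.columns.3 = SIMD4<Float>(1, 0.5, 2, 1)
let result = mirrored(t)
```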


Rotation Mirroring

In a 4x4 transform matrix, rotation lives in the upper-left 3x3 submatrix as a combination of scale and skew terms. To choose the rotation direction, clockwise (CW) or counter-clockwise (CCW), use + or - for the cos expressions.

Here's a positive Z rotation expression (CCW).


Positive +cos(a):

 ┌                    ┐
 |  cos  -sin   0   0 |
 |  sin   cos   0   0 |
 |   0     0    1   0 |
 |   0     0    0   1 |
 └                    ┘


And here's a negative Z rotation expression (CW).


Negative -cos(a):

 ┌                    ┐
 | -cos  -sin   0   0 |
 |  sin  -cos   0   0 |
 |   0     0    1   0 |
 |   0     0    0   1 |
 └                    ┘
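The positive (CCW) layout above can be expressed directly in Swift's simd types; negating the angle flips the sign of the sin terms and gives a CW rotation instead. A sketch; `zRotation(_:)` is my helper, not an ARKit API:

```swift
import Foundation
import simd

// CCW rotation about Z, matching the +cos layout above.
func zRotation(_ angle: Float) -> simd_float4x4 {
    let c = Float(cos(Double(angle)))
    let s = Float(sin(Double(angle)))
    return simd_float4x4(rows: [
        SIMD4<Float>( c, -s, 0, 0),
        SIMD4<Float>( s,  c, 0, 0),
        SIMD4<Float>( 0,  0, 1, 0),
        SIMD4<Float>( 0,  0, 0, 1)
    ])
}

// Rotating +X by 90 degrees CCW yields +Y; a negative angle rotates CW.
let rotated = zRotation(.pi / 2) * SIMD4<Float>(1, 0, 0, 1)
```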

Why does ARFaceAnchor have negative Z position?

The reason seems fairly obvious:

When an ARSession is running and the ARCamera begins tracking the environment, it places the world origin axes in front of your face at (x: 0, y: 0, z: 0). You can check this using:

 sceneView.debugOptions = [.showWorldOrigin]

So your face must be located on the positive part of the Z-axis of the world coordinates.


Thus, the ARFaceAnchor will be placed in the positive Z-axis direction as well.


And when you compare ARFaceTrackingConfiguration with ARWorldTrackingConfiguration, there are two things to consider:

  • The rear camera moves towards objects along the negative Z-axis (the positive X-axis is on the right).

  • The front camera moves towards faces along the positive Z-axis (the positive X-axis is on the left).

Hence, when you are "looking" through the TrueDepth camera, the 4x4 matrix is mirrored.
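The two bullet points boil down to the X/Z flip from the "Position Mirroring" section; applying it twice returns to the original basis (a sketch using simd):

```swift
import simd

// The X/Z flip that relates front-camera (face) space to world space.
let flipXZ = simd_float4x4(diagonal: SIMD4<Float>(-1, 1, -1, 1))

// +X in the mirrored face space points the opposite way in world space.
let faceSpaceX = SIMD4<Float>(1, 0, 0, 0)
let worldX = flipXZ * faceSpaceX

// The flip is its own inverse: applying it twice is the identity.
let roundTrip = flipXZ * flipXZ
```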

How to set ARSCNView to non-mirroring?

The selfie camera's matrix is mirrored correctly.

ARFaceTrackingConfiguration uses the selfie camera, which is oriented 180 degrees away from the rear camera. That orientation places the user's face in the positive Z direction, with the negative X-axis to the user's right. Thus, when combining a scene that uses both ARWorldTrackingConfiguration and ARFaceTrackingConfiguration, we get a fully consistent 3D environment.


Displaying an ARAnchor in ARSCNView

Just to follow up on @sj-r and @Rickster's comments.

The example code that @Rickster was talking about in regard to the coordinateOrigin.scn is found here: Creating Face Based Experiences

And here is a little snippet I have used before to visualize Axis:

class BMOriginVisualizer: SCNNode {

    //----------------------
    // MARK: - Initialization
    //----------------------

    /// Creates an axis node to visualize ARAnchors
    ///
    /// - Parameter scale: CGFloat
    init(scale: CGFloat = 1) {

        super.init()

        //1. Create The X Axis
        let xNode = SCNNode()
        let xNodeGeometry = SCNBox(width: 1, height: 0.01, length: 0.01, chamferRadius: 0)
        xNode.geometry = xNodeGeometry
        xNodeGeometry.firstMaterial?.diffuse.contents = UIColor.red
        xNode.position = SCNVector3(0.5, 0, 0)
        self.addChildNode(xNode)

        //2. Create The Y Axis
        let yNode = SCNNode()
        let yNodeGeometry = SCNBox(width: 0.01, height: 1, length: 0.01, chamferRadius: 0)
        yNode.geometry = yNodeGeometry
        yNodeGeometry.firstMaterial?.diffuse.contents = UIColor.green
        yNode.position = SCNVector3(0, 0.5, 0)
        self.addChildNode(yNode)

        //3. Create The Z Axis
        let zNode = SCNNode()
        let zNodeGeometry = SCNBox(width: 0.01, height: 0.01, length: 1, chamferRadius: 0)
        zNode.geometry = zNodeGeometry
        zNodeGeometry.firstMaterial?.diffuse.contents = UIColor.blue
        zNode.position = SCNVector3(0, 0, 0.5)
        self.addChildNode(zNode)

        //4. Scale our axes
        self.scale = SCNVector3(scale, scale, scale)
    }

    required init?(coder aDecoder: NSCoder) { fatalError("Visualizer coder not implemented") }
}

Which can be initialised like so:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    let anchorVisualizer = BMOriginVisualizer(scale: 0.5)
    node.addChildNode(anchorVisualizer)

}

Hopefully this will prove useful as an expansion to the answer provided by @sj-r.

Is Apple's documentation on ARCamera.transform orientation backwards?

Looks like they've fixed their documentation for UIDeviceOrientationLandscapeRight:

The device is in landscape mode, with the device held upright and the Home button on the right side.

Swift: Obtain and save the updated SCNNode over time using projectPoint in scenekit

If I understand your question correctly, you want to save the vertex positions each time they are updated, keeping track of all previous updates as well as the most recent one. To do this, simply append each new array of vertex positions to a global array of saved data.

var savedPositions = [CGPoint]()
var beginSaving = false

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard anchor == currentFaceAnchor,
          let faceAnchor = anchor as? ARFaceAnchor,
          let contentNode = selectedContentController.contentNode,
          contentNode.parent == node
    else { return }
    // Project each face-mesh vertex from the anchor's geometry into screen space.
    for vertex in faceAnchor.geometry.vertices {
        let projectedPoint = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        if beginSaving {
            savedPositions.append(CGPoint(x: CGFloat(projectedPoint.x), y: CGFloat(projectedPoint.y)))
        }
    }
    selectedContentController.session = sceneView?.session
    selectedContentController.sceneView = sceneView
    selectedContentController.renderer(renderer, didUpdate: contentNode, for: anchor)
}

@IBAction private func startPressed() {
    beginSaving = true
}

@IBAction private func stopPressed() {
    beginSaving = false
    // ... do whatever you need with your array of saved positions
}

