ARKit Body Tracking Using Xamarin and C# Inaccurate

So I got it working. It turns out I was overcomplicating how to determine the X, Y, Z position of each joint node. All I needed to do was this:

private SCNVector3 GetJointPosition(ARBodyAnchor bodyAnchor, string jointName)
{
    NMatrix4 jointTransform = bodyAnchor.Skeleton.GetModelTransform((NSString)jointName);
    return new SCNVector3(jointTransform.Column3);
}
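For context on why reading Column3 is enough: ARKit stores transforms column-major, so the fourth column of a rigid transform is exactly its translation. Here is a standalone sketch (plain C#, no ARKit; all names are illustrative) showing that transforming the origin gives back the fourth column:

using System;

class TransformDemo
{
    static void Main()
    {
        // Columns of a column-major transform that translates by (1, 2, 3).
        float[][] cols =
        {
            new float[] { 1, 0, 0, 0 }, // Column0: X basis
            new float[] { 0, 1, 0, 0 }, // Column1: Y basis
            new float[] { 0, 0, 1, 0 }, // Column2: Z basis
            new float[] { 1, 2, 3, 1 }, // Column3: translation
        };

        // Transform the homogeneous origin (0, 0, 0, 1):
        // result = sum over columns of cols[c] * origin[c].
        float[] origin = { 0, 0, 0, 1 };
        var result = new float[4];
        for (int c = 0; c < 4; c++)
            for (int r = 0; r < 4; r++)
                result[r] += cols[c][r] * origin[c];

        // Only Column3 contributes, so the transformed origin IS Column3.
        Console.WriteLine($"{result[0]}, {result[1]}, {result[2]}"); // 1, 2, 3
    }
}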

Here is the full listing:

using ARKit;
using Foundation;
using OpenTK;
using SceneKit;
using System;
using System.Collections.Generic;
using UIKit;

namespace XamarinArkitSample
{
    public partial class BodyDetectionViewController : UIViewController
    {
        private readonly ARSCNView sceneView;

        public BodyDetectionViewController()
        {
            this.sceneView = new ARSCNView
            {
                AutoenablesDefaultLighting = true,
                Delegate = new SceneViewDelegate()
            };

            this.View.AddSubview(this.sceneView);
        }

        public override void ViewDidLoad()
        {
            base.ViewDidLoad();

            this.sceneView.Frame = this.View.Frame;
        }

        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            var bodyTrackingConfiguration = new ARBodyTrackingConfiguration()
            {
                WorldAlignment = ARWorldAlignment.Gravity
            };

            this.sceneView.Session.Run(bodyTrackingConfiguration);
        }

        public override void ViewDidDisappear(bool animated)
        {
            base.ViewDidDisappear(animated);
            this.sceneView.Session.Pause();
        }

        public override void DidReceiveMemoryWarning()
        {
            base.DidReceiveMemoryWarning();
        }

        public class SceneViewDelegate : ARSCNViewDelegate
        {
            private readonly Dictionary<string, JointNode> joints = new Dictionary<string, JointNode>();
            private readonly float jointRadius = 0.04f;
            private readonly UIColor jointColour = UIColor.Yellow;

            public override void DidAddNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
            {
                if (!(anchor is ARBodyAnchor bodyAnchor))
                    return;

                // Create one sphere per joint, positioned relative to the body anchor.
                foreach (var jointName in ARSkeletonDefinition.DefaultBody3DSkeletonDefinition.JointNames)
                {
                    if (!joints.ContainsKey(jointName))
                    {
                        JointNode jointNode = MakeJoint(jointRadius, jointColour);
                        jointNode.Position = GetJointPosition(bodyAnchor, jointName);

                        node.AddChildNode(jointNode);
                        joints.Add(jointName, jointNode);
                    }
                }
            }

            public override void DidUpdateNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
            {
                if (!(anchor is ARBodyAnchor bodyAnchor))
                    return;

                // Move each sphere to the joint's latest position.
                foreach (var jointName in ARSkeletonDefinition.DefaultBody3DSkeletonDefinition.JointNames)
                {
                    if (joints.TryGetValue(jointName, out JointNode jointNode))
                    {
                        jointNode.Update(GetJointPosition(bodyAnchor, jointName));
                    }
                }
            }

            private SCNVector3 GetJointPosition(ARBodyAnchor bodyAnchor, string jointName)
            {
                // The model transform is relative to the body anchor (the hip joint);
                // its fourth column holds the joint's translation.
                NMatrix4 jointTransform = bodyAnchor.Skeleton.GetModelTransform((NSString)jointName);
                return new SCNVector3(jointTransform.Column3);
            }

            private JointNode MakeJoint(float jointRadius, UIColor jointColour)
            {
                var jointNode = new JointNode();

                var material = new SCNMaterial();
                material.Diffuse.Contents = jointColour;

                var jointGeometry = SCNSphere.Create(jointRadius);
                jointGeometry.FirstMaterial = material;
                jointNode.Geometry = jointGeometry;

                return jointNode;
            }
        }

        public class JointNode : SCNNode
        {
            public void Update(SCNVector3 position)
            {
                this.Position = position;
            }
        }
    }

    public static class Extensions
    {
        // Converts ARKit's column-major NMatrix4 to SceneKit's SCNMatrix4.
        public static SCNMatrix4 ToSCNMatrix4(this NMatrix4 self)
        {
            var row0 = new SCNVector4(self.M11, self.M12, self.M13, self.M14);
            var row1 = new SCNVector4(self.M21, self.M22, self.M23, self.M24);
            var row2 = new SCNVector4(self.M31, self.M32, self.M33, self.M34);
            var row3 = new SCNVector4(self.M41, self.M42, self.M43, self.M44);
            return new SCNMatrix4(row0, row1, row2, row3);
        }
    }
}

With a bit of tweaking, it looks like this:

Sample Image

And here is a video of it working:

https://www.youtube.com/watch?v=VxM1RMlYdAo
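One caveat worth spelling out: GetModelTransform returns each joint's transform relative to the body anchor (the hip), which is why positioning the spheres as children of the anchor's node works directly. If you ever need a joint in world coordinates, you would first apply the anchor's own transform. A hedged sketch (not from the original answer; the multiply is written out by hand against the M-row-column fields so it does not rely on any particular NMatrix4 operator):

private SCNVector3 GetJointWorldPosition(ARBodyAnchor bodyAnchor, string jointName)
{
    NMatrix4 joint = bodyAnchor.Skeleton.GetModelTransform((NSString)jointName);
    NMatrix4 anchor = bodyAnchor.Transform;

    // Joint position relative to the anchor (fourth column), w = 1.
    float x = joint.M14, y = joint.M24, z = joint.M34;

    // world = anchorTransform * (x, y, z, 1)
    return new SCNVector3(
        anchor.M11 * x + anchor.M12 * y + anchor.M13 * z + anchor.M14,
        anchor.M21 * x + anchor.M22 * y + anchor.M23 * z + anchor.M24,
        anchor.M31 * x + anchor.M32 * y + anchor.M33 * z + anchor.M34);
}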

Apple ARKit Inaccurate on iPhone X

Yes, the model shifts over longer distances in ARKit.

ARKit works by mapping the environment and placing virtual coordinates on top of it. When you start an ARKit app, it first searches for a region with enough feature points and creates an anchor there in the real world. As you move around, anchors are added for different real-world objects and places. ARKit then tries to match places it has already seen against these anchors and positions the virtual world (the 3D coordinates) accordingly.
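You can actually observe this from code: the session delegate reports when tracking degrades because too few feature points are visible. A hedged Xamarin/C# sketch (the delegate wiring is assumed, not part of the original answer):

// Assign an instance of this to sceneView.Session.Delegate.
public class TrackingStateDelegate : ARSessionDelegate
{
    public override void CameraDidChangeTrackingState(ARSession session, ARCamera camera)
    {
        // Limited tracking with InsufficientFeatures is exactly the
        // "not enough feature points" situation described above.
        if (camera.TrackingState == ARTrackingState.Limited &&
            camera.TrackingStateReason == ARTrackingStateReason.InsufficientFeatures)
        {
            Console.WriteLine("Tracking limited: not enough feature points");
        }
    }
}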

If enough feature points are not found, the model shifts from its place because ARKit gets confused between the real and virtual positioning. And when an anchor is added in that situation, the origin of the virtual world ends up shifted relative to that anchor.

Say that when the AR session started, the origin was at one corner of a table and you placed a model at the center of the table. Now you move to the other end of the table, and the model shifts to the edge of the table because ARKit did not find enough feature points. Then it suddenly finds a new anchor while the model is at the edge. What happens now is that there are two anchors, one for each end of the table. If you move your camera to the first end, it matches the first anchor and the model is placed at the center of the table. If you move your camera to the other end, it matches the second anchor and shifts the model to the edge of the table.

And the chance of this happening increases with distance.
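If the drift becomes too severe, one common mitigation is to restart the session with tracking reset and existing anchors removed, so ARKit rebuilds its map from the current view. An illustrative sketch (not part of the original answer; `sceneView` is assumed to be your ARSCNView):

// Re-run the session from scratch when drift is unacceptable.
var configuration = new ARWorldTrackingConfiguration();
sceneView.Session.Run(
    configuration,
    ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);

The trade-off is that any content placed relative to the old anchors will need to be re-placed after the reset.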


