ARKit: Place Object on a Plane Doesn't Work Properly

ARKit placing object on a plane

ARHitTestResult.worldTransform is of type matrix_float4x4, i.e. a 4x4 matrix. Its .columns are numbered from 0, so the vector (hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y, hitResult.worldTransform.columns.3.z) is the three entries at the top of the final column of the 4x4 matrix.

You can safely assume that the bottom row of the matrix is (0, 0, 0, 1) and that positional vectors are of the form (x, y, z, 1). So then look at what the matrix does when applied to a vector:

a b c d       x       a*x + b*y + c*z + d
e f g h       y       e*x + f*y + g*z + h
i j k l   *   z   =   i*x + j*y + k*z + l
0 0 0 1       1       1

The (d, h, l) don't get multiplied and are just added on as if they were a separate vector. It's the same as:

a b c       x       d
e f g   *   y   +   h
i j k       z       l

So the top-left 3x3 part of the matrix does something to (x, y, z) but leaves the origin fixed. E.g. if (x, y, z) is (0, 0, 0) at the start, then it'll definitely still be (0, 0, 0) at the end. So the 3x3 matrix might rotate, scale, or do a bunch of other things, but it can't be a translation.

(d, h, l), though, is clearly just a translation, because it's just something you add on at the end. And the translation is what you want: it tells you where the hit point on the plane sits in the world coordinate space. So you can just pull it straight out.
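
A minimal sketch of pulling it out in Swift (assuming hitResult is an ARHitTestResult):

let translation = hitResult.worldTransform.columns.3
let worldPosition = SCNVector3(translation.x, translation.y, translation.z)
// worldPosition is the point where the hit ray met the plane, in world coordinates.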

How to keep ARKit SCNNode in place

There are two kinds of "moving around" that could be happening here.


One is that ARKit is continuously refining its estimate of how the device's position in the real world maps to the abstract coordinate space you're placing virtual content in. For example, suppose you put a virtual object at (0, 0, -0.5), and then move your device to the left by exactly 10 cm. The virtual object will appear to be anchored in physical space only if ARKit tracks the move precisely. But visual-inertial odometry isn't an exact science, so it's possible that ARKit thinks you moved to the left by 10.5 cm — in that case, your virtual object will appear to "slip" to the right by 5 mm, even though its position in the ARKit/SceneKit coordinate space remains constant.

You can't really do much about this, other than hope Apple makes devices with better sensors, better cameras, or better CPUs/GPUs and improves the science of world tracking. (In the fullness of time, that's probably a safe bet, though that probably doesn't help with your current project.)


Since you're also dealing with plane detection, there's another wrinkle. ARKit is continuously refining its estimates of where a detected plane is. So, even though the real-world position of the plane isn't changing, its position in ARKit/SceneKit coordinate space is.

This kind of movement is generally a good thing: if you want your virtual object to appear anchored to the real-world surface, you want to be sure of where that surface is. You'll see some movement as plane detection gets more sure of the surface's position, but after a short time, plane-anchored virtual objects should "slip" less as you move the camera around than objects that are just floating in world space.


In your code, though, you're not taking advantage of plane detection to make your custom content (from "PlayerModel.scn") stick to the plane anchor:

wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
wrapperNode.addChildNode(Body)
scnView.scene.rootNode.addChildNode(wrapperNode)

This code uses the initial position of the plane anchor to position wrapperNode in world space (because you're making it a child of the root node). If you instead make wrapperNode a child of the plane anchor's node (the one you received in renderer(_:didAdd:for:)), it'll stay attached to the plane as ARKit refines its estimate of the plane's position. You'll get a little bit more movement initially, but as plane detection "settles", your virtual object will "slip" less.

(When you make the node a child of the plane, you don't need to set its position; a position of zero means it's right where the plane is. If anything, you need to set its position only relative to the plane, i.e. how far above/below/along it.)
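
A minimal sketch of that approach (assuming wrapperNode holds the "PlayerModel.scn" content and is reachable from your delegate):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    wrapperNode.position = SCNVector3Zero   // zero = right where the plane is
    node.addChildNode(wrapperNode)          // parented to the plane's node, not the root
}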

ARKit: only detect the floor to place objects

The hit testing methods return multiple results, sorted by distance from the camera. If you're hit testing against existing planes with infinite extent, you should see at least two results in the situation you describe: first the table/desk/etc, then the floor.

If you specifically want the floor, there are a couple of ways to find it (sketched after this list):

  • If you already know which ARPlaneAnchor is the floor from earlier in your session, search the array of hit test results for one whose anchor matches.

  • Assume the floor is always the plane farthest from the camera (the last in the array). Probably a safe assumption in most cases, but watch out for balconies, genkan, etc.
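
A minimal sketch of both approaches (assuming sceneView is your ARSCNView, tapPoint is the touch location, and floorAnchor is an ARPlaneAnchor you've already identified as the floor):

let results = sceneView.hitTest(tapPoint, types: .existingPlane)

// Option 1: match the known floor anchor by identifier.
let floorHit = results.first { $0.anchor?.identifier == floorAnchor.identifier }

// Option 2: assume the farthest plane (the last result) is the floor.
let farthestHit = results.last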

Unable to differentiate between plane detected by ARKit and a digital object to be placed using HitTest

Looking at your question, you are already halfway there.

The way to handle this in its entirety is to make use of the following hit-test functions within your UITapGestureRecognizer function:

(1) An ARSCNHitTest which:

Searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.

(2) An SCNHitTest which:

Looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry’s surface.

Using your UITapGestureRecognizer as an example therefore, you can differentiate between an ARPlaneAnchor (detectedPlane) and any SCNNode within your scene like so:

@objc func handleTap(_ gesture: UITapGestureRecognizer) {

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. Perform An ARSCNHitTest To See If We Have Hit An ARPlaneAnchor
    if let planeHitTest = augmentedRealityView.hitTest(currentTouchLocation, types: .existingPlane).first,
       let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor {

        print("User Has Tapped On An Existing Plane = \(planeAnchor.identifier)")
        return
    }

    //3. Perform An SCNHitTest To See If We Have Hit An SCNNode
    if let nodeHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {

        let nodeTapped = nodeHitTest.node

        print("An SCNNode Has Been Tapped = \(nodeTapped)")
        return
    }
}

If you make use of the name property for any of your SCNNodes, this will also help you further, e.g.:

if let name = nodeTapped.name {
    print("An SCNNode Named \(name) Has Been Tapped")
}

Additionally, if you ONLY want to detect objects you have added (e.g. SCNNodes), then you can simply remove part two of the gestureRecognizer function.

Hope it helps...

Change Y-position of the Anchor to keep it grounded

Whether you're using RealityKit or ARKit, you definitely need to use the plane detection feature. If your app detects a plane in RealityKit, it will automatically track a plane target. And I should say that a detected plane doesn't move (you're using World Tracking, aren't you?), but it may be extended.

AnchorEntity(.plane([.any], classification: [.any], minimumBounds: [0.5, 0.5]))
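
For instance, a minimal sketch of placing a model on that anchor (assuming arView is your ARView and "model" names a .usdz file in your bundle; both names are placeholders):

let anchor = AnchorEntity(.plane([.any], classification: [.any], minimumBounds: [0.5, 0.5]))
if let model = try? Entity.loadModel(named: "model") {
    anchor.addChild(model)   // the entity tracks the detected plane automatically
}
arView.scene.addAnchor(anchor)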

In case you're using the plane detection feature in ARKit/SceneKit, you must additionally implement the session(_:didAdd:) or renderer(_:didAdd:for:) delegate method (because ARKit can't do the job automatically):

let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]
sceneView.session.run(config)

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    guard let planeAnchor = anchors.first as? ARPlaneAnchor else { return }
    // Attach or reposition your content relative to planeAnchor here.
}
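
For reference, a minimal renderer(_:didAdd:for:) sketch that visualizes a detected plane (the geometry and color here are placeholders):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default
    node.addChildNode(planeNode)
}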

For manual object placement, use raycasting.
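
A minimal raycasting sketch (iOS 13+, assuming sceneView is an ARSCNView and screenPoint is a tap location):

if let query = sceneView.raycastQuery(from: screenPoint,
                                      allowing: .existingPlaneGeometry,
                                      alignment: .horizontal),
   let result = sceneView.session.raycast(query).first {
    let position = SCNVector3(result.worldTransform.columns.3.x,
                              result.worldTransform.columns.3.y,
                              result.worldTransform.columns.3.z)
    // Place your node at position.
}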

Also, you have to correctly position the model's pivot point in a 3D authoring app.


