ARKit – Anchoring without Plane Detection and Hit-Testing
ARKit 4.0 solution
In the latest version of ARKit there are GPS location anchors called ARGeoAnchor (built on top of the CoreLocation framework) that use a familiar initializer:

```swift
@nonobjc convenience init(coordinate: CLLocationCoordinate2D,
                          altitude: CLLocationDistance?)
```
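For context, here is a minimal sketch of how such a geo anchor might be created and added to a session (assuming an `arView` with a running ARSession; the coordinate values are hypothetical):

```swift
import ARKit
import CoreLocation

// Geo tracking is only available on supported devices and in supported regions,
// so check availability before running the configuration
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }
    arView.session.run(ARGeoTrackingConfiguration())

    // Anchor content to real-world GPS coordinates (hypothetical values)
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate, altitude: 11.0)
    arView.session.add(anchor: geoAnchor)
}
```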
ARKit 3.0 solution
Of course, you can place a model in an AR scene without plane detection and hit-testing/ray-casting, although this might cause some AR anomalies: your model might end up below the scene's grid, or at the wrong location, which leads to incorrect parallax.
There's one thing your scene must always have: an anchor for your model. Without that anchor your model may float in the scene, which leads to a poor user experience. With plane detection you automatically get an ARPlaneAnchor tethered to an invisible plane.
So, let's see which approaches you can implement to get a robust AR experience, even if you do not use plane detection and hit-testing:
- Use a pre-tracked ARWorldMap (hard to capture for large spaces like buildings)
- Use special markers that are easily distinguishable in the tracked environment (QR codes, road signs, etc.)
- Use the CoreLocation framework, the Google Maps SDK, and iBeacon navigation
- Use gravity-and-heading compass alignment
- Use an AI algorithm that detects surrounding objects
You can use these approaches separately or in combination.
But there is one unpleasant thing you must know about: the working distance for models in ARKit / SceneKit is up to 1000 meters. If your model goes beyond that limit, you'll get flickering artifacts.
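As an illustration of the gravity-and-heading approach listed above, here is a minimal sketch (assuming a `sceneView: ARSCNView`) that anchors content a fixed distance north of the session origin without any plane detection:

```swift
import ARKit

// Align the session to gravity and compass heading:
// +X points east, +Y points up, and -Z points true north
let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading
sceneView.session.run(configuration)

// Place an anchor two meters north of the session origin, no planes required
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
sceneView.session.add(anchor: ARAnchor(transform: translation))
```

You can then attach your node for that anchor in `renderer(_:didAdd:for:)`.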
How to place an object without tapping on the screen
Although I suggest you place the object when and where the user taps on the screen, what you're asking can be achieved as follows (this example uses Kotlin with ARCore's Sceneform).
Before you begin placing an object, you need to create a ModelRenderable. Declare a nullable one globally:

```kotlin
private var modelRenderable: ModelRenderable? = null
```
```kotlin
// Create the football renderable
ModelRenderable.builder()
    // Get the context of the ArFragment and pass the name of your .sfb file
    .setSource(fragment.context, Uri.parse("FootBall.sfb"))
    .build()
    // I accepted the CompletableFuture asynchronously since I created my model on
    // creation of the activity. You could simply use .thenAccept too.
    // Save the returned ModelRenderable to the global variable of the same name
    .thenAcceptAsync { modelRenderable -> this@MainActivity.modelRenderable = modelRenderable }
```
The major chunk of the work has to be done in the frame's onUpdate method, so attach a listener for frame updates:

```kotlin
// You can do this anywhere; I do it on activity creation, after inflating the fragment
fragment.arSceneView.scene.addOnUpdateListener(this@MainActivity)
```
Now handle adding an object in the listener:
```kotlin
override fun onUpdate(frameTime: FrameTime?) {
    // Get the frame from the scene for shorthand
    val frame = fragment.arSceneView.arFrame ?: return
    // Get the trackables to ensure planes are detected
    for (plane in frame.getUpdatedTrackables(Plane::class.java)) {
        // If a plane has been detected and is being tracked by ARCore
        if (plane.trackingState == TrackingState.TRACKING) {
            // Hide the plane-discovery helper animation
            fragment.planeDiscoveryController.hide()
            // Place the first object only if no previous anchors were added
            if (!frame.updatedAnchors.iterator().hasNext()) {
                // Perform a hit test at the center of the screen to place an object without tapping
                val hitTest = frame.hitTest(frame.screenCenter().x, frame.screenCenter().y)
                // Iterate through all hits
                for (hitResult in hitTest) {
                    // Create an anchor at the plane hit
                    val modelAnchor = plane.createAnchor(hitResult.hitPose)
                    // Attach a node to this anchor with the scene as the parent
                    val anchorNode = AnchorNode(modelAnchor)
                    anchorNode.setParent(fragment.arSceneView.scene)
                    // Create a new TransformableNode that will carry our object
                    val transformableNode = TransformableNode(fragment.transformationSystem)
                    transformableNode.setParent(anchorNode)
                    transformableNode.renderable = this@MainActivity.modelRenderable
                    // Raise the world position slightly so the object renders on the
                    // table top, not somewhere inside it
                    transformableNode.worldPosition = Vector3(
                        modelAnchor.pose.tx(),
                        modelAnchor.pose.compose(Pose.makeTranslation(0f, 0.05f, 0f)).ty(),
                        modelAnchor.pose.tz())
                }
            }
        }
    }
}
```
I used one extension method:

```kotlin
// A method to find the screen center; used while placing objects in the scene.
// Note: findViewById resolves because this extension is declared inside the Activity.
private fun Frame.screenCenter(): Vector3 {
    val vw = findViewById<View>(android.R.id.content)
    return Vector3(vw.width / 2f, vw.height / 2f, 0f)
}
```
hitTest(_:options:) doesn't recognize nodes behind ARKit planes
I believe what you need to do is change your SCNHitTestSearchMode parameter, which determines the possible values for the searchMode option used with hit-testing methods:

```swift
static let searchMode: SCNHitTestOption
```

whereby the value for this key is an NSNumber object containing the raw integer value of an SCNHitTestSearchMode constant.
From the Apple docs, there are three possible options you can use here:

- case all: The hit test should return all possible results, sorted from nearest to farthest.
- case any: The hit test should return only the first object found, regardless of distance.
- case closest: The hit test should return only the closest object found.

Based on your question, therefore, you would likely need to utilise the all case.
As such, your hitTest function would probably need to look something like this (remembering that self.augmentedRealityView refers to an ARSCNView):
```swift
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // 1. Get the current touch location
    guard let currentTouchLocation = touches.first?.location(in: self.augmentedRealityView) else { return }

    // 2. Perform an SCNHitTest, setting the searchMode to 1 (.all), which returns
    //    a list of results sorted from nearest to farthest
    if #available(iOS 11.0, *) {
        let hitTestResults = self.augmentedRealityView.hitTest(currentTouchLocation,
                                                               options: [SCNHitTestOption.searchMode: 1])
        // 3. Loop through the results and get the nodes
        for result in hitTestResults {
            print(result.node)
        }
    }
}
```
ARKit only detect floor to place objects
The hit testing methods return multiple results, sorted by distance from the camera. If you're hit testing against existing planes with infinite extent, you should see at least two results in the situation you describe: first the table/desk/etc, then the floor.
If you specifically want the floor, there are a couple of ways to find it:
- If you already know which ARPlaneAnchor is the floor from earlier in your session, search the array of hit-test results for one whose anchor matches.
- Assume the floor is always the plane farthest from the camera (the last in the array). Probably a safe assumption in most cases, but watch out for balconies, genkan, etc.
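The two strategies above can be sketched together (assuming a `sceneView: ARSCNView` and an optionally known floor anchor; the helper name is illustrative):

```swift
import ARKit

func floorHit(at location: CGPoint, in sceneView: ARSCNView,
              knownFloor: ARPlaneAnchor?) -> ARHitTestResult? {
    // Results come back sorted from nearest to farthest from the camera
    let results = sceneView.hitTest(location, types: .existingPlaneUsingExtent)

    // 1. Prefer a result whose anchor is the floor we identified earlier
    if let floor = knownFloor,
       let match = results.first(where: { $0.anchor?.identifier == floor.identifier }) {
        return match
    }
    // 2. Otherwise assume the floor is the farthest plane (the last result)
    return results.last
}
```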
Placing objects automatically when a ground plane is detected with Vuforia
The Vuforia PlaneFinderBehaviour (see the Vuforia documentation) has the event OnAutomaticHitTest, which fires every frame in which a ground plane is detected. So you can use it to automatically spawn an object.

You have to add your method to the On Automatic Hit Test list of the Plane Finder, instead of the On Interactive Hit Test list.
ARKit hitTest(_:options:) to select placed 3d-objects not working
Is there a better way to select placed 3D models in ARKit?

No, you are right: use SCNSceneRenderer.hitTest(_:options:) when searching for SceneKit content and ARSCNView.hitTest(_:types:) when searching for real objects recognised by ARKit.
What seems wrong here is that categoryBitMask is, well, a bit mask. 5 has a binary representation of 101. SceneKit then compares every bit with the bits on your objects, and if any of them match, it includes the object in the results.

So when every other object has the default category of 1, it is included in the result, because 101 and 001 have a matching bit.
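To make the bit comparison concrete, this is essentially what SceneKit evaluates (the values mirror the ones above):

```swift
let hitTestMask = 5       // binary 101
let defaultCategory = 1   // binary 001

// A node is included when any bit of its category overlaps the mask
let isIncluded = (hitTestMask & defaultCategory) != 0   // true: bit 0 matches
```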
What you can use is the OptionSet protocol:

```swift
struct BodyType: OptionSet {
    let rawValue: Int

    static let `default` = BodyType(rawValue: 1)
    static let userInteraction = BodyType(rawValue: 4)
    static let model: BodyType = [.default, .userInteraction]
}
```
Your model gets the model option, but in your hit-testing you only use .userInteraction.
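Putting it together, a hit test restricted to the userInteraction bit might look like this (a sketch; `sceneView`, `location`, and `modelNode` are assumed to exist):

```swift
// Tag the model with the combined mask so it matches userInteraction hit tests
modelNode.categoryBitMask = BodyType.model.rawValue

let results = sceneView.hitTest(location, options: [
    SCNHitTestOption.categoryBitMask: BodyType.userInteraction.rawValue,
    SCNHitTestOption.searchMode: SCNHitTestSearchMode.all.rawValue
])
// Only nodes whose mask shares a bit with userInteraction are returned
```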
SCNKit: Hit test doesn't hit node's dynamically modified geometry
When you perform a hit-test search, SceneKit looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry’s surface.
The problem in your case is that when you modify the buffer's contents (MTLBuffer) at render time, SceneKit does not know about it, and therefore cannot update the SCNGeometry object used for hit-testing.
So the only way I can see to solve this issue is to recreate your SCNGeometry object.
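A minimal sketch of that rebuild (assuming `updatedVertices: [SCNVector3]` holds the deformed mesh data; the original index buffers and materials are reused):

```swift
import SceneKit

// Recreate the geometry from the modified vertex data so hit-testing sees it
let vertexSource = SCNGeometrySource(vertices: updatedVertices)
let elements = node.geometry!.elements          // reuse the original index buffers
let rebuilt = SCNGeometry(sources: [vertexSource], elements: elements)
rebuilt.materials = node.geometry!.materials
node.geometry = rebuilt
```

Note this replaces only the vertex source; if your shader also modified normals, rebuild the normal source the same way.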