What Is the Real Benefit of Using Raycast in ARKit and RealityKit

What is the real benefit of using Raycast in ARKit and RealityKit?

Simple ray-casting, much like hit-testing, helps you locate a 3D point on a real-world surface by projecting an imaginary ray from a screen point onto a detected plane. Apple's documentation (2019) gave the following definition of ray-casting:

Ray-casting is the preferred method for finding positions on surfaces in the real-world environment, but the hit-testing functions remain present for compatibility. With tracked ray-casting, ARKit and RealityKit continue to refine the results to increase the position accuracy of virtual content you place with a ray-cast.

When the user wants to place virtual content onto a detected surface, it's a good idea to give them a visual hint. Many AR apps draw a focus circle or square that gives the user visual confirmation of the shape and alignment of the surfaces that ARKit is aware of. So, to find out where to put a focus circle or square in the real world, you may use an ARRaycastQuery to ask ARKit where any surfaces exist, as shown in the sketch below.
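
As an illustration of that idea, here's a minimal sketch of a focus indicator driven by a tracked ray-cast. The FocusViewController, focusEntity and focusAnchor names are hypothetical placeholders; makeRaycastQuery(from:allowing:alignment:) and the session's trackedRaycast(_:updateHandler:) are the APIs doing the actual work:

import UIKit
import ARKit
import RealityKit

class FocusViewController: UIViewController {

    @IBOutlet var arView: ARView!

    // Hypothetical stand-in for a focus circle/square model
    let focusEntity = ModelEntity(mesh: .generatePlane(width: 0.15, depth: 0.15),
                                  materials: [SimpleMaterial(color: .white, isMetallic: false)])
    let focusAnchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))
    var trackedRaycast: ARTrackedRaycast?

    override func viewDidLoad() {
        super.viewDidLoad()
        focusAnchor.addChild(focusEntity)
        arView.scene.anchors.append(focusAnchor)

        // Ask ARKit where a horizontal surface exists under the screen center...
        guard let query = arView.makeRaycastQuery(from: arView.center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .horizontal) else { return }

        // ...and let a tracked ray-cast keep refining the answer over time
        trackedRaycast = arView.session.trackedRaycast(query) { [weak self] results in
            guard let result = results.first else { return }
            self?.focusAnchor.setTransformMatrix(result.worldTransform, relativeTo: nil)
        }
    }
}

Because the ray-cast is tracked, ARKit keeps refining the result, so the indicator sticks to the surface as plane estimation improves.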


UIKit implementation

Here's an example showing how to use the session's raycast(_:) instance method:

import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    let model = try! Entity.loadModel(named: "usdzModel")

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.raycasting()
    }

    fileprivate func raycasting() {
        // Build a query that shoots a ray from the screen center
        // onto any estimated horizontal plane
        guard let query = arView.makeRaycastQuery(from: arView.center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .horizontal)
        else { return }

        // The session performs the ray-cast; the nearest result comes first
        guard let result = arView.session.raycast(query).first
        else { return }

        // Anchor the model at the real-world transform the ray-cast found
        let raycastAnchor = AnchorEntity(world: result.worldTransform)
        raycastAnchor.addChild(model)
        arView.scene.anchors.append(raycastAnchor)
    }
}

If you want to know how to use convex ray-casting in RealityKit, read this post.


If you want to know how to use hit-testing in RealityKit, read this post.


SwiftUI implementation

Here's sample code showing how to implement the raycasting logic in SwiftUI:

import SwiftUI
import RealityKit
import ARKit

struct ContentView: View {

    @State private var arView = ARView(frame: .zero)
    var model = try! Entity.loadModel(named: "robot")

    var body: some View {
        ARViewContainer(arView: $arView)
            .onTapGesture(count: 1) { self.raycasting() }
            .ignoresSafeArea()
    }

    fileprivate func raycasting() {
        guard let query = arView.makeRaycastQuery(from: arView.center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .horizontal)
        else { return }

        guard let result = arView.session.raycast(query).first
        else { return }

        let raycastAnchor = AnchorEntity(world: result.worldTransform)
        raycastAnchor.addChild(model)
        arView.scene.anchors.append(raycastAnchor)
    }
}

...and then the UIViewRepresentable wrapper:

struct ARViewContainer: UIViewRepresentable {

    @Binding var arView: ARView

    func makeUIView(context: Context) -> ARView { return arView }
    func updateUIView(_ uiView: ARView, context: Context) { }
}

P.S.

If you're building either of these two app variations from scratch (i.e. not using Xcode's AR template), don't forget to add the Privacy - Camera Usage Description key (NSCameraUsageDescription) in the Info tab.

How to use Raycast methods in RealityKit?

Simple Ray-Casting

If you want to find out how to position a model made in Reality Composer into a RealityKit scene (with a detected horizontal plane) using the ray-casting method, use the following code:

import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    let scene = try! Experience.loadScene()

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {

        scene.steelBox!.name = "Parcel"

        let tapLocation: CGPoint = sender.location(in: arView)
        let estimatedPlane: ARRaycastQuery.Target = .estimatedPlane
        let alignment: ARRaycastQuery.TargetAlignment = .horizontal

        let result: [ARRaycastResult] = arView.raycast(from: tapLocation,
                                                       allowing: estimatedPlane,
                                                       alignment: alignment)

        guard let rayCast: ARRaycastResult = result.first
        else { return }

        let anchor = AnchorEntity(world: rayCast.worldTransform)
        anchor.addChild(scene)
        arView.scene.anchors.append(anchor)

        print(rayCast)
    }
}

Pay attention to the ARRaycastQuery class. It comes from ARKit, not from RealityKit.
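
To make that distinction visible, here's a hedged sketch that builds an ARRaycastQuery by hand (instead of asking ARView to make one) and feeds it to ARKit's session; the function name and the origin and direction values are arbitrary, for illustration only:

import ARKit
import RealityKit

func castDownward(in arView: ARView) -> ARRaycastResult? {
    // Origin and direction are in world space; these values are illustrative only:
    // start one metre up and half a metre in front of the world origin, aiming straight down.
    let query = ARRaycastQuery(origin: SIMD3<Float>(0, 1, -0.5),
                               direction: SIMD3<Float>(0, -1, 0),
                               allowing: .estimatedPlane,
                               alignment: .horizontal)

    // ARRaycastQuery is declared in ARKit; ARKit's session (owned by ARView) executes it
    return arView.session.raycast(query).first
}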

Convex-Ray-Casting

Convex ray-casting methods like raycast(from:to:query:mask:relativeTo:) sweep a convex shape along a straight line and stop at the very first intersection with any collision shape in the scene. The Scene's raycast() method performs hit-tests against all entities that have collision shapes; entities without a collision shape are ignored.

You can use the following code to perform a convex ray-cast from a start position to an end position:

import RealityKit

let startPosition: SIMD3<Float> = [0, 0, 0]
let endPosition: SIMD3<Float> = [5, 5, 5]
let query: CollisionCastQueryType = .all
let mask: CollisionGroup = .all

let raycasts: [CollisionCastHit] = arView.scene.raycast(from: startPosition,
                                                        to: endPosition,
                                                        query: query,
                                                        mask: mask,
                                                        relativeTo: nil)

guard let rayCast: CollisionCastHit = raycasts.first
else { return }

print(rayCast.distance) /* The distance from the ray origin to the hit */
print(rayCast.entity.name) /* The entity's name that was hit */

The CollisionCastHit structure is the hit result of a collision cast; it belongs to RealityKit's Scene, not to ARKit.

P.S.

When you use the raycast(from:to:query:mask:relativeTo:) method to measure the distance from the camera to an entity, the orientation of the ARCamera doesn't matter; only its position in world coordinates matters.
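
A minimal sketch of that idea, assuming modelEntity is an entity that already has a collision shape and is placed somewhere in the scene (the function name is just for this example):

import RealityKit

func distanceFromCamera(to modelEntity: Entity, in arView: ARView) -> Float? {
    // Only the camera's position is used here; its orientation is irrelevant,
    // because the ray is aimed explicitly at the entity's position.
    let cameraPosition = arView.cameraTransform.translation
    let entityPosition = modelEntity.position(relativeTo: nil)

    let hits: [CollisionCastHit] = arView.scene.raycast(from: cameraPosition,
                                                        to: entityPosition,
                                                        query: .nearest,
                                                        mask: .all,
                                                        relativeTo: nil)

    // distance is measured from the ray origin (the camera) to the hit point
    return hits.first?.distance
}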

RealityKit and ARKit: find an anchor (or entity) from raycast - always nil

I finally managed to find a solution:

My ModelEntity (anchored) had to have a collision shape!

So I simply added entity.generateCollisionShapes(recursive: true).

This is how I generate a simple box:


let box: MeshResource = .generateBox(width: width, height: height, depth: length)
var material = SimpleMaterial()
material.tintColor = color
let entity = ModelEntity(mesh: box, materials: [material])
entity.generateCollisionShapes(recursive: true)   // Very important to activate collision and hit-testing!
return entity

After that, we must tell the arView to listen to gestures:

arView.installGestures(.all, for: entity)

and finally:


@IBAction func onTap(_ sender: UITapGestureRecognizer) {
    let tapLocation = sender.location(in: arView)

    if let hitEntity = arView.entity(at: tapLocation) {
        // touched!
        print("touched")
        print(hitEntity.name)
    }
}

RealityKit and Vision – How to call RayCast API

The issue was the image orientation. In my case, using the iPad's back camera in portrait orientation, I needed to use .downMirrored (instead of .up).

let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .downMirrored, options: [:])

Once the orientation is correct, the point values coming from image recognition can be used directly in a raycast.
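
For context, here's a hedged sketch of the surrounding Vision call; VNDetectRectanglesRequest and the function name stand in for whatever recognition request you actually use, and mapping the normalized observation into the ARView's coordinate space before ray-casting depends on your own view and device setup:

import ARKit
import Vision

func detectRectangles(in frame: ARFrame) {
    // Orientation fixed as described above: iPad back camera, portrait
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .downMirrored,
                                        options: [:])

    let request = VNDetectRectanglesRequest { request, _ in
        guard let observation = request.results?.first as? VNRectangleObservation
        else { return }

        // boundingBox is normalized (0...1); convert it to a point in the
        // ARView's coordinate space before calling raycast(from:allowing:alignment:)
        print(observation.boundingBox)
    }

    try? handler.perform([request])
}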

How to put an anchor on a reconstructed mesh to get accuracy

It works with any LiDAR-reconstructed mesh, not only with planes:

@objc func tapped(_ sender: UITapGestureRecognizer) {

    let tapLocation = sender.location(in: arView)

    if let result: ARRaycastResult = arView.raycast(from: tapLocation,
                                                    allowing: .estimatedPlane,
                                                    alignment: .any).first {

        let resultAnchor = AnchorEntity(world: result.worldTransform)

        // sphereObject(_:_:) is a custom helper that returns a small sphere ModelEntity
        resultAnchor.addChild(self.sphereObject(0.05, .systemRed))

        // addAnchor(_:removeAfter:) is a convenience extension (e.g. from Apple's
        // scene-reconstruction sample), not part of the standard Scene API
        arView.scene.addAnchor(resultAnchor, removeAfter: 10.0)
    }
}
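
For the ray-cast to hit a LiDAR-reconstructed mesh, scene reconstruction must be running. A minimal configuration sketch (the runSceneReconstruction name is just for this example):

import ARKit
import RealityKit

func runSceneReconstruction(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()

    // LiDAR devices only; other devices simply skip this option
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    config.planeDetection = [.horizontal, .vertical]
    arView.session.run(config)
}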

How to convert an ARHitTestResult to an ARRaycastResult?

You needn't convert the deprecated ARHitTestResult class into ARRaycastResult, because these classes are not interchangeable. Instead, use the following approach when raycasting in your scene:

@IBAction func tapped(_ sender: UITapGestureRecognizer) {

    let tapLocation: CGPoint = sender.location(in: arView)
    let estimatedPlane: ARRaycastQuery.Target = .estimatedPlane
    let alignment: ARRaycastQuery.TargetAlignment = .any

    let result = arView.raycast(from: tapLocation,
                                allowing: estimatedPlane,
                                alignment: alignment)

    guard let raycast: ARRaycastResult = result.first
    else { return }

    let anchor = AnchorEntity(world: raycast.worldTransform)
    anchor.addChild(model)
    arView.scene.anchors.append(anchor)

    print(raycast.worldTransform.columns.3)
}

If you need more examples of raycasting, look at this SO post.

RealityKit and ARKit – What is the AR project looking for when the app starts?

In the default Experience.rcproject, the cube has an AnchoringComponent targeting a horizontal plane. So the cube will not appear until the ARSession finds a horizontal plane in your scene (for example, the floor or a table). Once it finds one, the cube appears.

If you instead want to create an anchor and set it as the target when catching a tap event, you could perform a raycast. Using the raycast result, you can grab the worldTransform and set the cube's AnchoringComponent to that transform:

Something like this:

boxAnchor.anchoring = AnchoringComponent(.world(transform: raycastResult.worldTransform))
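
Putting it together, a sketch of a tap handler that re-targets the cube's anchoring; it assumes arView and boxAnchor (the anchor loaded from Experience.rcproject) are properties of your view controller, and handleTap is a name chosen for this example:

@objc func handleTap(_ sender: UITapGestureRecognizer) {
    let tapLocation = sender.location(in: arView)

    guard let raycastResult = arView.raycast(from: tapLocation,
                                             allowing: .estimatedPlane,
                                             alignment: .horizontal).first
    else { return }

    // Re-target the anchoring from "first detected horizontal plane"
    // to the exact world transform the ray-cast returned
    boxAnchor.anchoring = AnchoringComponent(.world(transform: raycastResult.worldTransform))
}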


