RealityKit – Add force to Entity at specific point
I think you need the following approach.
First of all, call the generateCollisionShapes(recursive:) instance method, which activates collisions for the given entities:
func generateCollisionShapes(recursive: Bool)
Secondly, use the ray(through:) method, which returns an optional tuple:
@MainActor func ray(through screenPoint: CGPoint) -> (origin: SIMD3<Float>,
                                                      direction: SIMD3<Float>)?
Thirdly, use the arView.scene.raycast(...) method, which returns a [CollisionCastHit] collection:
func raycast(origin: SIMD3<Float>,
             direction: SIMD3<Float>,
             length: Float = 100,
             query: CollisionCastQueryType = .all,
             mask: CollisionGroup = .all,
             relativeTo referenceEntity: Entity? = nil) -> [CollisionCastHit]
An instance of CollisionCastHit gives you four properties:
CollisionCastHit().position
CollisionCastHit().entity
CollisionCastHit().distance
CollisionCastHit().normal
And finally, since you already know the force vector, assign it as the body's linear velocity:
entity.physicsMotion?.linearVelocity = SIMD3<Float>()
Optionally, you can also set an angular velocity of the body around the center of its mass:
entity.physicsMotion?.angularVelocity = SIMD3<Float>()
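Putting these steps together, a minimal sketch of a tap handler might look like this. The gesture wiring, the force magnitude, and the assumption that the hit entity already has collision and physics-motion components set up are mine:

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    @objc func handleTap(_ sender: UITapGestureRecognizer) {
        let screenPoint = sender.location(in: arView)

        // 1. Unproject the screen point into a world-space ray
        guard let ray = arView.ray(through: screenPoint) else { return }

        // 2. Cast the ray against entities with collision shapes
        let hits = arView.scene.raycast(origin: ray.origin,
                                        direction: ray.direction,
                                        length: 100,
                                        query: .nearest,
                                        mask: .all,
                                        relativeTo: nil)
        guard let hit = hits.first,
              let model = hit.entity as? ModelEntity else { return }

        // 3. Push the body along the ray's direction
        //    (5.0 is an arbitrary magnitude chosen for illustration)
        model.physicsMotion?.linearVelocity = ray.direction * 5.0
    }
}
```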
ARKit – Tap node with raycastQuery instead of hitTest, which is deprecated
About Hit-Testing
Official documentation says that only ARKit's hitTest(_:types:) instance method is deprecated in iOS 14. However, in iOS 15 you can still use it. ARKit's hit-testing method is supposed to be replaced with raycasting methods.
Deprecated hit-testing:
let results: [ARHitTestResult] = sceneView.hitTest(sceneView.center,
                                                   types: .existingPlaneUsingGeometry)
Raycasting equivalent (avoiding the force unwrap):
guard let raycastQuery: ARRaycastQuery = sceneView.raycastQuery(from: sceneView.center,
                                                                allowing: .estimatedPlane,
                                                                alignment: .any)
else { return }
let results: [ARRaycastResult] = sceneView.session.raycast(raycastQuery)
If you prefer a raycasting method for hitting a node (entity), use the RealityKit module instead of SceneKit:
let arView = ARView(frame: .zero)
let query: CollisionCastQueryType = .nearest
let mask: CollisionGroup = .default
let raycasts: [CollisionCastHit] = arView.scene.raycast(from: [0, 0, 0],
                                                        to: [5, 6, 7],
                                                        query: query,
                                                        mask: mask,
                                                        relativeTo: nil)
guard let raycast: CollisionCastHit = raycasts.first else { return }
print(raycast.entity.name)
P.S.
There is no need to look for a replacement for SceneKit's hitTest(_:options:) instance method, which returns [SCNHitTestResult], because it works fine and it isn't deprecated.
RealityKit – Grab children from USDZ file by name
Use the findEntity(named:) instance method to recursively get any descendant entity by its name:
func findEntity(named name: String) -> Entity?
For an .rcproject scene:
let boxAnchor = try! Experience.loadBox()
arView.scene.anchors.append(boxAnchor)
print(boxAnchor)
let entity = boxAnchor.findEntity(named: "Steel Box")
entity?.scale *= 7
For a .usdz model:
let model = try! ModelEntity.load(named: "model.usdz")
let anchor = AnchorEntity(world: [0,0,-1])
anchor.addChild(model)
arView.scene.anchors.append(anchor)
print(model)
let entity = model.findEntity(named: "teapot")
entity?.scale /= 33
P. S.
Unfortunately, not all entities in .usdz or .rcproject files have names by default. So, you must give names to the required entities before using this method.
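If you're not sure which descendants are unnamed, one option is to print the hierarchy and assign names in code before searching. A minimal sketch, where the model file name and the choice of child to rename are assumptions:

```swift
import RealityKit

// Recursively print the hierarchy so you can see which entities are unnamed
func printHierarchy(of entity: Entity, indent: String = "") {
    let name = entity.name.isEmpty ? "(unnamed)" : entity.name
    print(indent + name)
    for child in entity.children {
        printHierarchy(of: child, indent: indent + "  ")
    }
}

let model = try! ModelEntity.load(named: "model.usdz")  // hypothetical file
printHierarchy(of: model)

// Give an unnamed child a name, so findEntity(named:) can locate it later
model.children.first?.name = "teapot"
let entity = model.findEntity(named: "teapot")
```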
RealityKit / ARKit: find an anchor (or entity) from raycast – always nil
I finally managed to find a solution:
my ModelEntity (anchored) had to have a collision shape!
So I simply added entity.generateCollisionShapes(recursive: true).
This is how I generate a simple box:
let box: MeshResource = .generateBox(width: width, height: height, depth: length)
var material = SimpleMaterial()
material.tintColor = color
let entity = ModelEntity(mesh: box, materials: [material])
entity.generateCollisionShapes(recursive: true) // Very important to activate collision and hit-testing!
return entity
and after that we must tell the arView to listen to gestures:
arView.installGestures(.all, for: entity)
and finally:
@IBAction func onTap(_ sender: UITapGestureRecognizer) {
    let tapLocation = sender.location(in: arView)
    if let hitEntity = arView.entity(at: tapLocation) {
        print("touched")
        print(hitEntity.name)
        return
    }
}
Swift: can't detect hit test on SCNNode? See if vector3 is contained in a node?
When you're working with ARSCNView, there are two kinds of hit testing you can do, and they use entirely separate code paths.
- Use the ARKit method hitTest(_:types:) if you want to hit test against the real world (or at least, against ARKit's estimate of where real-world features are). This returns ARHitTestResult objects, which tell you about real-world features like detected planes. In other words, use this method if you want to find a real object that anyone can see and touch without a device — like the table you're pointing your device at.
- Use the SceneKit method hitTest(_:options:) if you want to hit test against SceneKit content; that is, to search for virtual 3D objects you've placed in the AR scene. This returns SCNHitTestResult objects, which tell you about things like nodes and geometry. Use this method if you want to find SceneKit nodes, the model (geometry) in a node, or the specific point on the geometry at a tap location.
In both cases the 3D position found by a hit test is the same, because ARSCNView makes sure that the virtual "world coordinates" space matches real-world space.
It looks like you're using the former but expecting the latter. When you do a SceneKit hit test, you get results if and only if there's a node under the hit test point — you don't need any kind of bounding box test because it's already being done for you.
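For the SceneKit path, a minimal sketch of a tap handler might look like the following. The gesture wiring and outlet names are assumptions:

```swift
import ARKit
import SceneKit
import UIKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    @objc func handleTap(_ sender: UITapGestureRecognizer) {
        let tapLocation = sender.location(in: sceneView)

        // SceneKit hit test: searches only the virtual nodes in the scene graph
        let results: [SCNHitTestResult] = sceneView.hitTest(tapLocation, options: nil)

        if let hit = results.first {
            // The node under the finger, plus the exact point hit on its geometry
            print(hit.node.name ?? "unnamed node")
            print(hit.worldCoordinates)
        }
    }
}
```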
What is the real benefit of using Raycast in ARKit and RealityKit?
Simple ray-casting, in the same way as hit-testing, helps locate a 3D point on a real-world surface by projecting an imaginary ray from a screen point onto a detected plane. Apple's documentation (2019) gave the following definition of ray-casting:
Ray-casting is the preferred method for finding positions on surfaces in the real-world environment, but the hit-testing functions remain present for compatibility. With
tracked ray-casting
, ARKit and RealityKit continue to refine the results to increase the position accuracy of virtual content you place with a ray-cast.
When the user wants to place virtual content onto a detected surface, it's a good idea to give them a visual hint. Many AR apps draw a focus circle or square that gives the user visual confirmation of the shape and alignment of the surfaces that ARKit is aware of. So, to find out where to put a focus circle or square in the real world, you can use an ARRaycastQuery to ask ARKit where any surfaces exist in the real world.
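For the tracked variant mentioned in the quote above, a sketch using session.trackedRaycast(_:updateHandler:) might look like this. The anchor-placement logic and the decision of when to start and stop tracking are assumptions:

```swift
import UIKit
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var trackedRaycast: ARTrackedRaycast?
    let placementAnchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))

    func startTrackedRaycast() {
        guard let query = arView.makeRaycastQuery(from: arView.center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .horizontal)
        else { return }

        // ARKit keeps calling the handler as its surface estimate improves,
        // so the placement of virtual content is continuously refined
        trackedRaycast = arView.session.trackedRaycast(query) { results in
            guard let result = results.first else { return }
            self.placementAnchor.transform = Transform(matrix: result.worldTransform)
        }
    }

    func stopTrackedRaycast() {
        trackedRaycast?.stopTracking()
        trackedRaycast = nil
    }
}
```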
UIKit implementation
Here's an example showing how to implement the session.raycast(_:) instance method:
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    let model = try! Entity.loadModel(named: "usdzModel")

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.raycasting()
    }

    fileprivate func raycasting() {
        guard let query = arView.makeRaycastQuery(from: arView.center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .horizontal)
        else { return }

        guard let result = arView.session.raycast(query).first
        else { return }

        let raycastAnchor = AnchorEntity(world: result.worldTransform)
        raycastAnchor.addChild(model)
        arView.scene.anchors.append(raycastAnchor)
    }
}
If you want to know how to use convex ray-casting in RealityKit, read this post.
If you want to know how to use hit-testing in RealityKit, read this post.
SwiftUI implementation
Here's sample code showing how to implement raycasting logic in SwiftUI:
import SwiftUI
import RealityKit

struct ContentView: View {

    @State private var arView = ARView(frame: .zero)
    var model = try! Entity.loadModel(named: "robot")

    var body: some View {
        ARViewContainer(arView: $arView)
            .onTapGesture(count: 1) { self.raycasting() }
            .ignoresSafeArea()
    }

    fileprivate func raycasting() {
        guard let query = arView.makeRaycastQuery(from: arView.center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .horizontal)
        else { return }

        guard let result = arView.session.raycast(query).first
        else { return }

        let raycastAnchor = AnchorEntity(world: result.worldTransform)
        raycastAnchor.addChild(model)
        arView.scene.anchors.append(raycastAnchor)
    }
}
and then...
struct ARViewContainer: UIViewRepresentable {

    @Binding var arView: ARView

    func makeUIView(context: Context) -> ARView { return arView }
    func updateUIView(_ uiView: ARView, context: Context) { }
}
P.S.
If you're building either of these two app variations from scratch (i.e. not using Xcode's AR template), don't forget to add the Privacy - Camera Usage Description key in the Info tab.