How to Access the Model Component of Reality Composer in RealityKit

RealityKit – How to access the property in a Scene programmatically?

Of course, you need to look for the required ModelEntity in the depths of the model's hierarchy.

Use this SwiftUI solution:


import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {

        let arView = ARView(frame: .zero)
        let pictureScene = try! Experience.loadPicture()
        pictureScene.children[0].scale *= 4

        print(pictureScene)

        let edgingModel = pictureScene.masterpiece?.children[0] as! ModelEntity

        edgingModel.model?.materials = [SimpleMaterial(color: .brown,
                                                  isMetallic: true)]

        var mat = SimpleMaterial()

        // The old approach for assigning a texture in iOS 14.5
        // (baseColor was deprecated in iOS 15.0)
        mat.baseColor = try! .texture(.load(named: "MonaLisa", in: nil))

        let imageModel = pictureScene.masterpiece?.children[0]
                                                  .children[0] as! ModelEntity
        imageModel.model?.materials = [mat]

        arView.scene.anchors.append(pictureScene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
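
If you're targeting iOS 15.0 and higher, here's a rough sketch of the newer replacement API (same "MonaLisa" texture as above; the white tint value is illustrative):

var material = SimpleMaterial()
let texture = try! TextureResource.load(named: "MonaLisa")
material.color = .init(tint: .white, texture: .init(texture))
imageModel.model?.materials = [material]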


RealityKit and Reality Composer – Image Recognition

I had the same issue using the iOS 13 beta. Updating to the iOS 13.1 beta did the trick. I can only guess it is something related to RealityKit on iOS.
Please note that updating to the iOS 13.1 beta also requires Xcode 11 beta 7 to support it.
Hope it helps.

How do I load my own Reality Composer scene into RealityKit?

Hierarchy in RealityKit / Reality Composer

I think this is more a "theoretical" question than a practical one. First, I should say that manually editing an Experience file containing scenes with anchors and entities isn't a good idea.

In RealityKit and Reality Composer there's a quite definite hierarchy in case you created a single object in the default scene:

Scene –> AnchorEntity –> ModelEntity
                              |
                           Physics
                              |
                          Animation
                              |
                            Audio

If you place two 3D models in a scene, they share the same anchor:

Scene –> AnchorEntity – – – – –> – – – – – – – –>
              |                        |
        ModelEntity01            ModelEntity02
              |                        |
           Physics                  Physics
              |                        |
          Animation                Animation
              |                        |
            Audio                    Audio
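
You can check this shared anchor in code – a quick sketch (the loadScene() method name and the child indices are illustrative, they depend on your Reality Composer project):

let scene = try! Experience.loadScene()   // illustrative generated loader
let sharedAnchor = scene.children[0]      // the common AnchorEntity
print(sharedAnchor.children.count)        // both ModelEntities hang off it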

AnchorEntity in RealityKit defines which properties of the World Tracking config are running in the current ARSession: horizontal/vertical plane detection, and/or image detection, and/or body detection, etc.

Let's look at those parameters:

AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [1, 1]))

AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [0.5, 0.5]))

AnchorEntity(.image(group: "Group", name: "model"))
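
For example, a minimal sketch of pinning a model to such an anchor (model here stands for any ModelEntity you've created or retrieved):

let anchor = AnchorEntity(.plane(.horizontal, classification: .floor,
                                 minimumBounds: [1, 1]))
anchor.addChild(model)                    // "model" is an assumed ModelEntity
arView.scene.anchors.append(anchor)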

Here you can read about the Entity-Component-System paradigm.
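
To give a taste of ECS in code, here's a minimal sketch of a custom component (the SpinComponent name and its property are purely illustrative):

struct SpinComponent: Component {
    var speed: Float = 1.0                    // illustrative parameter
}

SpinComponent.registerComponent()             // register the type once

let box = ModelEntity(mesh: .generateBox(size: 0.1))
box.components.set(SpinComponent(speed: 2.0)) // attach component to entity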


Combining two scenes coming from Reality Composer

For this post I've prepared two scenes in Reality Composer – the first scene (ConeAndBox) with horizontal plane detection and the second scene (Sphere) with vertical plane detection. If you combine these scenes in RealityKit into one bigger scene, you'll get two types of plane detection – horizontal and vertical.


The cone and the box are pinned to one anchor in this scene.


In RealityKit I can combine these scenes into one scene.

// Plane Detection with a Horizontal anchor
let coneAndBoxAnchor = try! Experience.loadConeAndBox()
coneAndBoxAnchor.children[0].anchor?.scale = [7, 7, 7]
coneAndBoxAnchor.goldenCone!.position.y = -0.1 //.children[0].children[0].children[0]
arView.scene.anchors.append(coneAndBoxAnchor)

coneAndBoxAnchor.name = "mySCENE"
coneAndBoxAnchor.children[0].name = "myANCHOR"
coneAndBoxAnchor.children[0].children[0].name = "myENTITIES"

print(coneAndBoxAnchor)

// Plane Detection with a Vertical anchor
let sphereAnchor = try! Experience.loadSphere()
sphereAnchor.steelSphere!.scale = [7, 7, 7]
arView.scene.anchors.append(sphereAnchor)

print(sphereAnchor)


In Xcode's console you can see the ConeAndBox scene hierarchy with the names given in RealityKit.


And you can see the Sphere scene hierarchy with no names given.


It's important to note that our combined scene now contains two scenes in an array. Use the following command to print this array:

print(arView.scene.anchors)

It prints:

[ 'mySCENE' : ConeAndBox, '' : Sphere ]


You can reassign the type of tracking via the AnchoringComponent (instead of plane detection you can assign image detection):

coneAndBoxAnchor.children[0].anchor!.anchoring = AnchoringComponent(.image(group: "AR Resources",
                                                                            name: "planets"))


Retrieving entities and connecting them to a new AnchorEntity

To decompose/reassemble the hierarchical structure of your scene, you need to retrieve all entities and pin them to a single anchor. Take into consideration that tracking one anchor is a less intensive task than tracking several ones. And one anchor is much more stable – in terms of the relative positions of scene models – than, for instance, 20 anchors.

let coneEntity = coneAndBoxAnchor.goldenCone!
coneEntity.position.x = -0.2

let boxEntity = coneAndBoxAnchor.plasticBox!
boxEntity.position.x = 0.01

let sphereEntity = sphereAnchor.steelSphere!
sphereEntity.position.x = 0.2

let anchor = AnchorEntity(.image(group: "AR Resources", name: "planets"))
anchor.addChild(coneEntity)
anchor.addChild(boxEntity)
anchor.addChild(sphereEntity)

arView.scene.anchors.append(anchor)


Useful links

Now you have a deeper understanding of how to construct scenes and retrieve entities from those scenes. If you need other examples, look at THIS POST and THIS POST.


P.S.

Additional code showing how to load scenes from the ExperienceX.rcproject file:

import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Reality Composer generated the "loadGround()" method automatically
        let groundArrowAnchor = try! ExperienceX.loadGround()
        groundArrowAnchor.arrowFloor!.scale = [2, 2, 2]
        arView.scene.anchors.append(groundArrowAnchor)

        print(groundArrowAnchor)
    }
}


Is there a way to programmatically change the material of an Entity that was created in Reality Composer?

The model entity is stored deeper in RealityKit's hierarchy and, as you said, it's an Entity, not a ModelEntity. So use downcasting to access its mesh and materials:

import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let boxScene = try! Experience.loadBox()
        print(boxScene)

        let modelEntity = boxScene.steelBox?.children[0] as! ModelEntity
        let material = SimpleMaterial(color: .green, isMetallic: false)
        modelEntity.model?.materials = [material]

        let anchor = AnchorEntity()
        anchor.scale = [5, 5, 5]
        modelEntity.setParent(anchor)
        arView.scene.anchors.append(anchor)
    }
}

How to recolor all model's parts when using raycasting?

Separate-parts-model approach

You can easily retrieve all three models. But you have to specify the whole long hierarchical path:

let scene = try! Experience.loadFanfare()

// Fanfare – .children[0].children[0]
let fanfare = scene.children[0] ..... children[0].children[0] as! ModelEntity
fanfare.model?.materials[0] = UnlitMaterial(color: .darkGray)

// Flag – .children[1].children[0]
let flag = scene.children[0] ..... children[1].children[0] as! ModelEntity
flag.model?.materials[0] = UnlitMaterial(color: .darkGray)

// Star – .children[2].children[0]
let star = scene.children[0] ..... children[2].children[0] as! ModelEntity
star.model?.materials[0] = UnlitMaterial(color: .darkGray)

I don't see much difference between retrieving model entities from .rcproject, .reality or .usdz files. According to the printed diagram, all three model entities are located at the same level of the hierarchy – they are offspring of the same entity. So the condition in the if statement can be set to its simplest form – if a ray hits a collision shape of fanfare or (||) flag or (||) star, then all three models must be recolored, as the sketch below shows.
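
Here's a minimal sketch of that condition, assuming fanfare, flag and star were retrieved as ModelEntities (as above), have collision shapes, and a UITapGestureRecognizer is attached to arView:

@objc func tapped(_ recognizer: UITapGestureRecognizer) {
    let point = recognizer.location(in: arView)

    // entity(at:) hit-tests against collision shapes, so call
    // generateCollisionShapes(recursive:) on the models beforehand
    if let hit = arView.entity(at: point),
       hit === fanfare || hit === flag || hit === star {
        // recolor all three parts, not just the one the ray hit
        for model in [fanfare, flag, star] {
            model.model?.materials = [UnlitMaterial(color: .darkGray)]
        }
    }
}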

Mono-model approach

The best solution for interacting with 3D models through raycasting is the mono-model approach. A mono-model is a solid 3D object that has no separate parts – all parts are combined into a whole model. Textures for mono-models are always mapped in UV editors. A mono-model can be made in 3D authoring apps like Maya or Blender.
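
With a mono-model, the check above collapses to a single comparison (a sketch; monoModel is an assumed ModelEntity loaded from your file):

if let hit = arView.entity(at: point) as? ModelEntity, hit === monoModel {
    hit.model?.materials = [UnlitMaterial(color: .darkGray)]
}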


P.S.

All seasoned AR developers know that a Wow! AR experience isn't about code but rather about 3D content. You understand that there is no "miracle pill" for an easy solution if your 3D model consists of many parts. A competently made AR model is 75% of the success when working with code.


