How to Detect 2D Images Using ARKit and RealityKit

In an ARKit/RealityKit project, use the following code for the session(_:didUpdate:) delegate instance method:

import UIKit
import ARKit
import RealityKit

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    // Model Entity that will be tethered to the detected image
    let model = ModelEntity(mesh: .generateSphere(radius: 0.05))

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

        guard let imageAnchor = anchors.first as? ARImageAnchor,
              let _ = imageAnchor.referenceImage.name
        else { return }

        let anchor = AnchorEntity(anchor: imageAnchor)

        // Add Model Entity to anchor
        anchor.addChild(model)

        arView.scene.anchors.append(anchor)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        arView.session.delegate = self
        resetTrackingConfig()
    }

    func resetTrackingConfig() {

        guard let refImg = ARReferenceImage.referenceImages(inGroupNamed: "Sub",
                                                            bundle: nil)
        else { return }

        let config = ARWorldTrackingConfiguration()
        config.detectionImages = refImg
        config.maximumNumberOfTrackedImages = 1

        let options: ARSession.RunOptions = [.removeExistingAnchors,
                                             .resetTracking]

        arView.session.run(config, options: options)
    }
}

And take into consideration: the folder for reference images (in .png or .jpg format) inside your asset catalog must have the .arresourcegroup extension.
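Alternatively, reference images can be created programmatically instead of through an .arresourcegroup folder. A minimal sketch, where "photo" is a hypothetical asset name in your app bundle:

```swift
import ARKit
import UIKit

func makeReferenceImages() -> Set<ARReferenceImage> {
    guard let cgImage = UIImage(named: "photo")?.cgImage else { return [] }

    // physicalWidth is the real-world width of the printed image, in meters
    let refImage = ARReferenceImage(cgImage,
                                    orientation: .up,
                                    physicalWidth: 0.15)
    refImage.name = "photo"
    return [refImage]
}
```

The resulting set can then be assigned to config.detectionImages before running the session, just as in resetTrackingConfig() above.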

RealityKit – Image recognition and working with many scenes

Foreword

RealityKit's AnchorEntity(.image), coming from Reality Composer, matches ARKit's ARImageTrackingConfiguration. When an iOS device recognises a reference image, it creates an image anchor (ARImageAnchor, which conforms to the ARTrackable protocol) that tethers a corresponding 3D model. And, as you understand, you must show just one reference image at a time (in your particular case the AR app can't operate normally when you show it two or more images simultaneously).
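A minimal sketch of what tethering a model via AnchorEntity(.image) might look like, assuming an AR Resource Group named "AR Resources" containing a reference image named "photo" (both names are hypothetical):

```swift
import RealityKit

func addImageTetheredModel(to arView: ARView) {
    // Anchor that activates when the reference image is recognized
    let imageAnchor = AnchorEntity(.image(group: "AR Resources",
                                          name: "photo"))

    // Tether a simple box model to the recognized image
    let box = ModelEntity(mesh: .generateBox(size: 0.1))
    imageAnchor.addChild(box)

    arView.scene.anchors.append(imageAnchor)
}
```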


A code snippet showing what the if-condition logic might look like:

import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        let id02Scene = try! Experience.loadID2()
        print(id02Scene)    // prints scene hierarchy

        let anchor = id02Scene.children[0]
        print(anchor.components[AnchoringComponent.self] as Any)

        if anchor.components[AnchoringComponent.self] == AnchoringComponent(
                .image(group: "Experience.reality",
                        name: "assets/MainID_4b51de84.jpeg")) {

            arView.scene.anchors.removeAll()
            print("LOAD SCENE")
            arView.scene.anchors.append(id02Scene)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

ID2 scene hierarchy printed in the console:

[Image: ID2 scene hierarchy]

P.S.

You should implement a SwiftUI Coordinator class and, inside the Coordinator, use ARSessionDelegate's session(_:didUpdate:) instance method to update anchor properties at 60 fps.
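A minimal sketch of how the Coordinator wiring might look (the ARView property and class names here are assumptions, not the original code):

```swift
import SwiftUI
import RealityKit
import ARKit

struct ARContainer: UIViewRepresentable {

    let arView = ARView(frame: .zero)

    func makeCoordinator() -> Coordinator {
        Coordinator(view: arView)
    }

    func makeUIView(context: Context) -> ARView {
        // route session callbacks into the Coordinator
        arView.session.delegate = context.coordinator
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    class Coordinator: NSObject, ARSessionDelegate {
        let view: ARView
        init(view: ARView) { self.view = view }

        // Called every rendered frame (up to 60 fps)
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // update anchors' properties here
        }
    }
}
```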

Also, you may use the following logic: if the anchor of scene 1 is active, or the anchor of scene 3 is active, just delete all anchors from the collection and load scene 2.

var arView = ARView(frame: .zero)

let id01Scene = try! Experience.loadID1()
let id02Scene = try! Experience.loadID2()
let id03Scene = try! Experience.loadID3()

func makeUIView(context: Context) -> ARView {
    arView.session.delegate = context.coordinator

    arView.scene.anchors.append(id01Scene)
    arView.scene.anchors.append(id02Scene)
    arView.scene.anchors.append(id03Scene)
    return arView
}

...

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if arView.scene.anchors[0].isActive || arView.scene.anchors[2].isActive {
        arView.scene.anchors.removeAll()
        arView.scene.anchors.append(id02Scene)
        print("Load Scene Two")
    }
}

ARKit – Image recognition with similar images but different colors

Recognizing similar images with different colour schemes in ARKit or RealityKit is a bad idea from the very beginning.

  • At first, please take into consideration that a collection of ARReferenceImage objects is a Set. Swift's Set is an unordered collection of UNIQUE values. If Apple engineers had wanted it to be an Array, they would have made it one. But it's a SET in every sense of the word – in the images' names and visually.

     func referenceImages(inGroupNamed name: String,
                          bundle: Bundle?) -> Set<ARReferenceImage>?


  • Secondly, when implementing the ARTrackable protocol (remember, ARImageAnchor conforms to ARTrackable), you shouldn't track similar images or repetitive structures, as Apple suggests.

     @available(iOS 11.3, *)
     open class ARImageAnchor : ARAnchor, ARTrackable {

         open var referenceImage: ARReferenceImage { get }

         @available(iOS 13.0, *)
         open var estimatedScaleFactor: CGFloat { get }
     }

    Watch ARKit WWDC 2018 video (Time 37:40) for details.


  • Thirdly, the iPhone's Neural Engine perceives ARKit's and RealityKit's reference images in the black-and-white spectrum. I think this is done for two main reasons: first, luma contrast is more important than chroma contrast; and second, image recognition shouldn't depend on the colour of the environment light – whether it's yellowish or blueish, the recognition result should be unaltered.

    Can you guess what ARKit sees when looking at three similar images with different colour schemes?

[Image: the three images as ARKit perceives them, in grayscale]

The differences between green and cyan images are subtle.

RealityKit's equivalent of SceneKit's SCNShape

In RealityKit 2.0 you can generate a mesh using MeshDescriptor. There is no support for two-dimensional paths at the moment, as implemented in SceneKit's SCNShape.

var descriptor = MeshDescriptor(name: "anything")
descriptor.positions = MeshBuffer(vertices)    // vertices: [SIMD3<Float>]
descriptor.primitives = .triangles(indices)    // indices: [UInt32]
let mesh: MeshResource = try! .generate(from: [descriptor])
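A complete minimal sketch, assuming RealityKit 2.0: generating a single-triangle mesh and wrapping it in a ModelEntity.

```swift
import RealityKit

var descriptor = MeshDescriptor(name: "triangle")

// Three vertices in the XY plane (meters)
descriptor.positions = MeshBuffer([SIMD3<Float>( 0.0,  0.5, 0.0),
                                   SIMD3<Float>(-0.5, -0.5, 0.0),
                                   SIMD3<Float>( 0.5, -0.5, 0.0)])

// One counter-clockwise triangle
descriptor.primitives = .triangles([0, 1, 2])

let mesh = try! MeshResource.generate(from: [descriptor])
let entity = ModelEntity(mesh: mesh,
                         materials: [SimpleMaterial(color: .green,
                                                    isMetallic: false)])
```

The resulting entity can then be parented to any AnchorEntity, just like the Reality Composer scenes above.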

Apple Vision image recognition

As of ARKit 1.5 (which shipped with iOS 11.3 in the spring of 2018), this feature is implemented directly in ARKit.

ARKit fully supports image recognition.
Upon recognition of an image, its 3D coordinates can be retrieved as an anchor, and therefore content can be placed onto it.


