Scanning Real-World Object and Generating 3D Mesh from It

RealityKit 2.0 | Object Capture API

Object Capture API, announced at WWDC 2021, provides you with the long-awaited photogrammetry tools. The output is a USDZ model with a hi-res texture.

ARKit | Mesh Reconstruction

Using an iOS device with a LiDAR scanner and ARKit 3.5/4.0/5.0, you can easily reconstruct a topological map of the surrounding environment. The Scene Reconstruction feature starts working immediately after you run the current ARSession.
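
For instance, here's a minimal sketch of how Scene Reconstruction can be switched on. The arView property and the function name are illustrative, not part of any Apple sample:

import ARKit
import RealityKit

func runSceneReconstruction(for arView: ARView) {

    // Scene Reconstruction needs a LiDAR-equipped device
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
    else { return }

    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .meshWithClassification    // or just .mesh
    config.planeDetection = [.horizontal, .vertical]

    // Let the reconstructed mesh occlude virtual content and take part in physics
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.environment.sceneUnderstanding.options.insert(.physics)

    // Visualize the polygonal mesh while debugging
    arView.debugOptions.insert(.showSceneUnderstanding)

    arView.session.run(config)
}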

Apple's LiDAR works within a 5-meter range. The scanner helps improve the quality of the ZDepth channel and powers features such as People/Real-World Object Occlusion, motion tracking, immediate physics contact bodies, and raycasting.
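
Raycasting, for example, becomes much more reliable with LiDAR-supplied depth data. Here's a minimal sketch, assuming an existing ARView named arView and a tap location in screen coordinates (both names are illustrative):

import ARKit
import RealityKit
import UIKit

func placeSphere(at screenPoint: CGPoint, in arView: ARView) {

    // The LiDAR scanner makes .estimatedPlane hits fast and accurate
    guard let result = arView.raycast(from: screenPoint,
                                  allowing: .estimatedPlane,
                                 alignment: .any).first
    else { return }

    // Put a small red sphere at the hit point
    let anchor = AnchorEntity(world: result.worldTransform)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05),
                        materials: [SimpleMaterial(color: .red, isMetallic: false)])
    anchor.addChild(sphere)
    arView.scene.anchors.append(anchor)
}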

Other awesome peculiarities of the LiDAR scanner are:

  • you can use your device in a poorly lit room
  • you can track pure white walls with no features at all
  • you can detect planes almost instantaneously

Bear in mind that the quality of an object scanned with LiDAR isn't as good as you might expect: small details are not captured, because the resolution of Apple's LiDAR isn't high enough.

LiDAR and RealityKit – Capture a Real World Texture for a Scanned Model

Scene Reconstruction

It's a pity, but I am still unable to capture a model's texture in real time during the LiDAR scanning process. Neither at WWDC20 nor at WWDC22 did Apple announce a native API for that, so texture capturing is currently only possible with third-party APIs (don't ask me which ones :-) ).

However, there's good news – a new methodology has emerged at last. It allows developers to create textured models from a series of shots.

Photogrammetry

Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. The output is a USDZ model with a UV-mapped hi-res texture. To implement the Object Capture API you need macOS 12 and Xcode 13.
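
Note that not every Mac running macOS 12 meets the Object Capture hardware requirements, so it's worth checking support before creating a session. A tiny sketch:

import RealityKit

// Quick capability check before building a PhotogrammetrySession
if PhotogrammetrySession.isSupported {
    print("Object Capture is available on this machine")
} else {
    print("This Mac doesn't meet the Object Capture hardware requirements")
}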

To create a USDZ model from a series of shots, submit all captured images to RealityKit's PhotogrammetrySession.

Here's a code snippet that sheds some light on this process:

import RealityKit
import Combine

// Folder with the captured images and the destination of the result
let pathToImages = URL(fileURLWithPath: "/path/to/my/images/")

let url = URL(fileURLWithPath: "model.usdz")

// Ask the session to produce a medium-detail USDZ model file
let request = PhotogrammetrySession.Request.modelFile(url: url,
                                                   detail: .medium)

// Tune the reconstruction parameters
var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOverlap = .normal
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal
configuration.isObjectMaskingEnabled = false

// The initializer throws, so bind it optionally inside the guard
guard let session = try? PhotogrammetrySession(input: pathToImages,
                                       configuration: configuration)
else { return }

var subscriptions = Set<AnyCancellable>()

// Subscribe to the session's output messages (progress, results, errors)
session.output.receive(on: DispatchQueue.global())
              .sink(receiveCompletion: { _ in
                  // handle errors here
              }, receiveValue: { _ in
                  // handle output messages here
              })
              .store(in: &subscriptions)

// Start processing; this call can throw as well
try? session.process(requests: [request])

You can reconstruct USD and OBJ models with their corresponding UV-mapped textures.
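
If you prefer Swift concurrency to Combine, the session also exposes its messages through the outputs async sequence. Here's a rough sketch of consuming it, assuming the session and request from the snippet above (the print statements are placeholders):

import RealityKit

func monitor(_ session: PhotogrammetrySession) {
    Task {
        do {
            // Iterate over the session's messages as they arrive
            for try await output in session.outputs {
                switch output {
                case .requestProgress(_, let fractionComplete):
                    print("Progress: \(fractionComplete)")
                case .requestComplete(_, .modelFile(let url)):
                    print("USDZ saved at \(url)")
                case .requestError(_, let error):
                    print("Request failed: \(error)")
                case .processingComplete:
                    print("All requests are finished")
                default:
                    break
                }
            }
        } catch {
            print("Fatal session error: \(error)")
        }
    }
}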

Is it possible to use Reality Composer for detecting 3D assets in the real world?

Of course you can use the iOS/iPadOS version of Reality Composer to create an .arobject file and then recognize the real-world object based on that data with AnchorEntity(.object). Look at these two images to find out how you can do that; a short code sketch follows them.

Sample Image

Take into consideration that you can't scan cylindrical or moving real-world objects!

Sample Image
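
And here's a sketch of how the scanned object can then be used as an anchor in RealityKit, assuming an existing ARView named arView. The group and object names ("AR Resources", "Sneaker") are hypothetical; use the names from your own asset catalog:

import RealityKit

// Attach content to a real-world object scanned with Reality Composer
let objectAnchor = AnchorEntity(.object(group: "AR Resources", name: "Sneaker"))

let marker = ModelEntity(mesh: .generateBox(size: 0.05),
                    materials: [SimpleMaterial(color: .green, isMetallic: false)])

objectAnchor.addChild(marker)
arView.scene.anchors.append(objectAnchor)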

Place a 3D object on a real object in Augmented Reality

First, you have to generate a CAD 3D model that simulates your real object. Then, using the Model Target Generator (MTG) application provided by Vuforia, create a dataset specific to your model. After generating it, import the dataset into Unity and add it to a Model Target in your app. Now you can map your 3D model onto the real-world object.


