Support ARKit in Lower-End Devices

Can you support ARKit on lower-end devices?

All of ARKit supports only A9 and up, including AROrientationTrackingConfiguration. As they say at the top of the ARKit docs...

Important
ARKit requires an iOS device with an A9 or later processor.

To make your app available only on devices supporting ARKit, use the arkit key in the UIRequiredDeviceCapabilities section of your app's Info.plist. If augmented reality is a secondary feature of your app, use the isSupported property to determine whether the current device supports the session configuration you want to use.

So you can indeed make an iOS 11 app that's available only on a subset of iOS 11 devices. Requiring something in UIRequiredDeviceCapabilities makes the App Store not offer your app on devices you don't support, and makes iTunes refuse to install the app.
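
For reference, the Info.plist fragment that declares this requirement looks like the following (shown as property-list source; the surrounding plist boilerplate is omitted):

<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arkit</string>
</array>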

(Yeah, it looks like Apple took a bit to get their story straight here. Back at WWDC it looked like they'd support 3DOF tracking on lesser devices, but now you need A9 even for that.)


So what's the point of AROrientationTrackingConfiguration now?

For some kinds of app, it might make sense to fall back from world tracking (6DOF) to orientation-only tracking (3DOF) when current conditions don't allow 6DOF. That doesn't make much sense for apps where you put virtual objects on tables, but if you're just putting space invaders in the air for your player to shoot at, or using AR to overlay constellations on the sky, losing 6DOF doesn't wreck the experience so much.
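
A minimal sketch of that fallback, assuming a SceneKit-based app (the startSession name and sceneView parameter are just for illustration):

import ARKit

func startSession(in sceneView: ARSCNView) {
    if ARWorldTrackingConfiguration.isSupported {
        // Full 6DOF world tracking on A9-and-later devices.
        sceneView.session.run(ARWorldTrackingConfiguration())
    } else if AROrientationTrackingConfiguration.isSupported {
        // Fall back to 3DOF orientation-only tracking.
        sceneView.session.run(AROrientationTrackingConfiguration())
    } else {
        // No ARKit support at all; hide or disable AR features.
    }
}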

ARKit Unable to run the session, configuration is not supported on this device

Here is Apple's documentation on ARKit and device support:

ARKit requires an iOS device with an A9 or later processor. To make
your app available only on devices supporting ARKit, use the arkit key
in the UIRequiredDeviceCapabilities section of your app's Info.plist.
If augmented reality is a secondary feature of your app, use the
isSupported property to determine whether the current device supports
the session configuration you want to use.

You can check ARKit support programmatically at runtime using the isSupported property of ARConfiguration.

ARConfiguration.isSupported

if ARConfiguration.isSupported {
    // ARKit is supported. You can work with ARKit.
} else {
    // ARKit is not supported. You cannot work with ARKit.
}

The following iOS devices (running iOS 11) support ARKit:

  • iPhone X
  • iPhone 8 and 8 Plus
  • iPhone 6S and 6S Plus
  • iPhone 7 and 7 Plus
  • iPhone SE
  • iPad Pro (9.7, 10.5 or 12.9)
  • iPad (2017, 2018)

ARKit runs on devices with Apple A9, A10, and A11 Bionic (and later) chips. For the chip used in each iPhone and iPad model, see Apple's model specifications.

ARKit with multiple users

Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.

For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application gains new augmented reality capabilities: multiuser and persistent AR experiences.

AR Multiuser experiences. You can now create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) share and see the same virtual 3D content (use Pixar's USDZ file format for 3D models in Xcode 10+ and iOS 12+).

session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)   // e.g. "Can't get the current world map"
        return
    }
    // Start a new session that relocalizes against the retrieved map.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration)
}
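
To actually send the map to a nearby device, archive it to Data first. A minimal sketch, using the retrieved worldMap and a hypothetical multipeerSession helper built on MultipeerConnectivity:

// Archive the retrieved ARWorldMap so it can be sent over the local network.
do {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    multipeerSession.sendToAllPeers(data)   // hypothetical helper, not part of ARKit
} catch {
    print("Can't archive the world map: \(error)")
}
// The receiving device unarchives the Data back into an ARWorldMap with
// NSKeyedUnarchiver.unarchivedObject(ofClass:from:) and assigns it to
// configuration.initialWorldMap, exactly as in the snippet above.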

AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore the map on the app's next launch in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
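
A minimal sketch of that persistence, assuming an illustrative mapURL file location in the app's Documents directory:

import ARKit

// Illustrative file location for the archived map.
let mapURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldMap")

// Save: archive the current world map and write it to disk.
func save(_ worldMap: ARWorldMap) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    try data.write(to: mapURL, options: .atomic)
}

// Restore: read the archive and start a new session in the same environment.
func restore(into session: ARSession) throws {
    let data = try Data(contentsOf: mapURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}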

iOS ARKit: Large size object always appears to move with the change in the position of the device camera

I found the root cause of the issue. It was related to the model I was using for AR. When I replaced the model with one provided at this link: https://developer.apple.com/augmented-reality/quick-look/, I no longer faced any issues. So if anyone faces this type of issue in the future, I would recommend using one of the models provided by Apple to check whether the issue persists with it or not.

How to improve People Occlusion in ARKit 3.0

Updated: July 06, 2022.


New Depth API

You can improve the quality of the People Occlusion and Object Occlusion features in ARKit 3.5 through 6.0 thanks to the new Depth API with a high-quality ZDepth channel that can be rendered at 60 fps. However, for this you need an iPhone 12 Pro or an iPad Pro with a LiDAR scanner. In ARKit 3.0 you can't improve the People Occlusion feature unless you use Metal or MetalKit (and that's not easy).
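
A minimal sketch of reading the Depth API's per-frame depth data (the DepthReader class name is just for illustration; .sceneDepth requires a LiDAR-equipped device):

import ARKit

class DepthReader: NSObject, ARSessionDelegate {

    let session = ARSession()

    func run() {
        let configuration = ARWorldTrackingConfiguration()
        // The Depth API (.sceneDepth) is only supported on LiDAR-equipped devices.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // 32-bit depth map (a CVPixelBuffer), updated every frame at 60 fps.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        _ = depthMap   // use it for custom occlusion / compositing
    }
}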

Tip: Consider that RealityKit and AR QuickLook frameworks support People Occlusion as well.


Why does this issue happen when you use People Occlusion

It's due to the nature of depth data. We all know that a rendered final image of a 3D scene can contain 5 main channels for digital compositing – Red, Green, Blue, Alpha, and ZDepth.

There are, of course, other useful render passes (also known as AOVs) for compositing: Normals, MotionVectors, PointPosition, UVs, Disparity, etc. But here we're interested only in two main render sets – RGBA and ZDepth.


The ZDepth channel has three serious drawbacks in ARKit 3.0.


Problem 1. Aliasing and Anti-aliasing of ZDepth.

Rendering a ZDepth channel in any high-end software (like Nuke, Fusion, Maya, or Houdini) by default results in jagged, or so-called aliased, edges. Game engines are no exception – SceneKit, RealityKit, Unity, Unreal, and Stingray have this issue too.

Of course, you could say that before rendering we must turn on a feature called anti-aliasing. And yes, it works fine for almost all channels, but not for ZDepth. The problem with ZDepth is that the borderline pixels of every foreground object (especially a transparent one) are "transitioned" into the background object when anti-aliased. In other words, FG and BG pixels are mixed at the margin of the FG object.

Frankly speaking, today there's only one working solution in the professional compositing industry for fixing depth issues – Nuke compositors use deep channels instead of ZDepth. But no game engine supports them, because deep data is dauntingly huge. So deep-channel compositing is neither for game engines nor for ARKit / RealityKit. Alas!


Problem 2. Resolution of ZDepth.

A regular ZDepth channel must be rendered in 32-bit, even if the RGB and Alpha channels are only 8-bit. 32-bit depth data is a heavy burden for the CPU and GPU. ARKit often merges several layers in the viewport, for example compositing a real-world foreground character over a virtual model over a real-world background character. Don't you think that's too much for your device, even if these layers are composited at viewport resolution instead of the real screen resolution? However, rendering the ZDepth channel in 16-bit or 8-bit compresses the depth of your real scene, lowering the quality of the compositing.

To lessen the burden on the CPU and GPU and to save battery life, Apple's engineers decided to use a scaled-down ZDepth image at the capture stage, scale the rendered ZDepth image back up to viewport resolution, stencil it using the Alpha channel (a.k.a. segmentation), and then fix the ZDepth channel's edges using a Dilate compositing operation. This is what leads to the nasty artifacts we can see in your picture (a sort of "trail").

Please have a look at the presentation slides (PDF) of Apple's "Bringing People into AR" session.


Problem 3. Frame rate of ZDepth.

The third problem stems from frame rate. ARKit and RealityKit work at 60 fps. Scaling down the ZDepth image resolution doesn't lessen the processing enough, so the next logical step for ARKit 3.0's engineers was to lower the ZDepth frame rate to 15 fps. However, the latest versions of ARKit and RealityKit render the ZDepth channel at 60 fps, which considerably improves the quality of People Occlusion and Object Occlusion. But in ARKit 3.0 the lower rate produced artifacts (a kind of "dropped frame" in the ZDepth channel that results in the "trail" effect).

You can't change the quality of the resulting composited image when you use this type property:

static var personSegmentationWithDepth: ARConfiguration.FrameSemantics { get }

because it's a get-only property, and there are no settings for ZDepth quality in ARKit 3.0.
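
The frame semantics can only be switched on or off. Here's a minimal sketch of enabling it (the session constant is assumed to exist elsewhere in your code):

let configuration = ARWorldTrackingConfiguration()

// People Occlusion with depth requires an A12 processor or later.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
session.run(configuration)   // `session` is an existing ARSession, for illustration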

And, of course, if you want to increase the frame rate of the ZDepth channel in ARKit 3.0, you have to implement a frame interpolation technique like the ones used in digital compositing (where the in-between frames are computer-generated).

But this frame interpolation technique is CPU-intensive, because we'd need to generate 45 additional 32-bit ZDepth frames every second (45 interpolated + 15 real = 60 frames per second).

I believe that someone might improve the ZDepth compositing features in ARKit 3.0 using Metal, but it's a real challenge for developers. Look at Apple's sample code for effecting People Occlusion in custom renderers.


ARKit 6.0 and LiDAR scanner support

ARKit 3.5 through 6.0 supports the LiDAR (Light Detection And Ranging) scanner. The LiDAR scanner improves the quality of the People Occlusion feature, because the quality of the ZDepth channel is higher, even if you're not physically moving while tracking the surrounding environment. The LiDAR system can also help you map walls, ceilings, floors, and furniture to quickly get a virtual mesh for real-world surfaces to dynamically interact with, or simply to place 3D objects on them (even partially occluded virtual objects). Devices with LiDAR can achieve matchless accuracy when retrieving the locations of real-world surfaces. By considering the mesh, ray-casts can intersect with nonplanar surfaces, or surfaces with no features at all, such as white or barely lit walls.

To activate the sceneReconstruction option, use the following code:

let arView = ARView(frame: .zero)
arView.automaticallyConfigureSession = false

let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .meshWithClassification

arView.debugOptions.insert([.showSceneUnderstanding, .showAnchorGeometry])
arView.environment.sceneUnderstanding.options.insert([.occlusion,
                                                      .collision,
                                                      .physics])
arView.session.run(config)
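
As an illustration of the mesh-aware ray-casting mentioned above, here's a sketch (the placeObject function and the screen-space point are hypothetical, e.g. coming from a tap gesture):

import ARKit
import RealityKit

// Hypothetical helper: anchor content where a screen-space point hits the scene.
func placeObject(at point: CGPoint, in arView: ARView) {
    // With scene reconstruction enabled, .estimatedPlane ray-casts can hit
    // nonplanar or featureless surfaces backed by the LiDAR mesh.
    if let result = arView.raycast(from: point,
                                   allowing: .estimatedPlane,
                                   alignment: .any).first {
        let anchor = AnchorEntity(world: result.worldTransform)
        arView.scene.addAnchor(anchor)
    }
}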

But before using the sceneReconstruction instance property in your code, you need to check whether the device has a LiDAR scanner or not. You can do it in the AppDelegate.swift file:

import ARKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {

        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification)
        else {
            fatalError("Scene reconstruction requires a device with a LiDAR Scanner.")
        }
        return true
    }
}


RealityKit 2.0

When using a RealityKit 2.0 app on an iPhone Pro or iPad Pro with LiDAR, you have several occlusion options (the same options are available in ARKit 6.0): an improved People Occlusion, Object Occlusion (furniture or walls, for instance), and Face Occlusion. To turn on occlusion in RealityKit 2.0, use the following code:

arView.environment.sceneUnderstanding.options.insert(.occlusion)

