RealityKit vs SceneKit vs Metal - High-Quality Rendering

Metal Ray Tracing – SceneKit or RealityKit

It took me some time to prepare example apps for you and anyone else interested in this topic. The apps are written in pure Swift with Metal and currently target iOS. (I personally recommend a fast device.)

Here you can find the first "simple" default Apple RayTracer:

https://github.com/philvanza/SceneKit-RayTracing

And here is the same app with an additional, more complex RayTracer that supports a BSDF (Bidirectional Scattering Distribution Function), which allows you to ray trace, for example, transparent glass:

https://github.com/philvanza/SceneKit-RayTracing-Advanced

Basically, you need the following to achieve ray tracing:

  • Extract the geometry data from the SceneKit node you want to RayTrace
  • Build a triangle acceleration structure (Metal specific)
  • Prepare the buffers to execute on the GPU
  • Send everything to the RayTracer
  • Accumulate the output image over time

The longer you let the RayTracer run, the better the result becomes.
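To make the first two steps more concrete, here is a minimal sketch of extracting the triangle data from an SCNNode and building a Metal Performance Shaders triangle acceleration structure, as used by an MPSRayIntersector-based ray tracer like the one in Apple's sample. The function name buildAccelerationStructure is mine, and the sketch assumes a .triangles geometry element with float vertex positions; index buffers for instancing, normals, materials and all error handling are omitted. The repositories linked above contain the complete version.

import SceneKit
import MetalPerformanceShaders

// Sketch: flatten an SCNNode's indexed triangle mesh and hand it to MPS.
func buildAccelerationStructure(for node: SCNNode,
                                device: MTLDevice) -> MPSTriangleAccelerationStructure? {
    guard let geometry = node.geometry,
          let vertexSource = geometry.sources(for: .vertex).first,
          let element = geometry.elements.first else { return nil }

    // Read the vertex positions out of the geometry source.
    let positions = vertexSource.data.withUnsafeBytes { raw -> [SIMD3<Float>] in
        (0..<vertexSource.vectorCount).map { i in
            let offset = vertexSource.dataOffset + i * vertexSource.dataStride
            return SIMD3<Float>(raw.load(fromByteOffset: offset,     as: Float.self),
                                raw.load(fromByteOffset: offset + 4, as: Float.self),
                                raw.load(fromByteOffset: offset + 8, as: Float.self))
        }
    }

    // Read the triangle indices (16- or 32-bit, assuming a .triangles element).
    let indices: [UInt32] = element.data.withUnsafeBytes { raw in
        (0..<element.primitiveCount * 3).map { i in
            element.bytesPerIndex == 2
                ? UInt32(raw.load(fromByteOffset: i * 2, as: UInt16.self))
                : raw.load(fromByteOffset: i * 4, as: UInt32.self)
        }
    }

    // De-index: every three consecutive vertices form one triangle.
    let triangles = indices.map { positions[Int($0)] }
    guard let vertexBuffer = device.makeBuffer(bytes: triangles,
                                               length: triangles.count * MemoryLayout<SIMD3<Float>>.stride,
                                               options: .storageModeShared) else { return nil }

    // Metal-specific part: the acceleration structure the intersector traverses.
    let accelerationStructure = MPSTriangleAccelerationStructure(device: device)
    accelerationStructure.vertexBuffer = vertexBuffer
    accelerationStructure.vertexStride = MemoryLayout<SIMD3<Float>>.stride
    accelerationStructure.triangleCount = triangles.count / 3
    accelerationStructure.rebuild()
    return accelerationStructure
}

From there, the remaining steps are to bind this structure together with ray and intersection buffers for the GPU, dispatch the ray-tracing kernels, and blend each new frame into an accumulation texture so the noise averages out over time.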

The full pipeline would take too much code to include here, so I prepared these repositories instead. Feel free to leave me a comment with whatever you think about the RayTracers.

Have fun!

Example image from the simple RayTracer

Example image from the advanced RayTracer

Using RealityKit and SceneKit together

SceneKit and RealityKit are incompatible because they are completely dissimilar: different scene hierarchies, different renderers and physics engines, and different component content. What's stopping you from using SceneKit + ARKit (the ARSCNView class)?

ARKit 6.0 has a built-in Depth API (the same API is available in RealityKit) that uses the LiDAR scanner to determine distances in the surrounding environment more accurately, which lets plane detection, raycasting and object occlusion work more efficiently.

For that, use the sceneReconstruction instance property and ARMeshAnchor objects.

import ARKit
import SceneKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.scene = SCNScene()
        sceneView.delegate = self

        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        config.planeDetection = .horizontal
        sceneView.session.run(config)
    }
}

Delegate's method:

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        let meshGeo = meshAnchor.geometry

        // logic ...
        // (e.g. turn `meshGeo` into an SCNGeometry, or prepare a model of your own)

        node.addChildNode(someModel)    // `someModel` is a placeholder for your own SCNNode
    }
}
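For the "logic" part, one common option is to wrap the ARMeshAnchor's geometry buffers in an SCNGeometry so SceneKit can render the reconstructed mesh. Below is a rough sketch of that conversion; the helper name scnGeometry(from:) is mine, and it assumes a triangle-based mesh and skips normals and classification.

import ARKit
import SceneKit

// Sketch: build an SCNGeometry from an ARMeshAnchor's vertex and face buffers.
func scnGeometry(from meshAnchor: ARMeshAnchor) -> SCNGeometry {
    let meshGeo = meshAnchor.geometry

    // Vertex positions come as an MTLBuffer; SCNGeometrySource can use it directly.
    let vertices = meshGeo.vertices
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer,
                                         vertexFormat: vertices.format,
                                         semantic: .vertex,
                                         vertexCount: vertices.count,
                                         dataOffset: vertices.offset,
                                         dataStride: vertices.stride)

    // Face indices also live in an MTLBuffer; copy them into Data for the element.
    let faces = meshGeo.faces
    let indexData = Data(bytes: faces.buffer.contents(),
                         count: faces.count * faces.indexCountPerPrimitive * faces.bytesPerIndex)
    let element = SCNGeometryElement(data: indexData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)

    return SCNGeometry(sources: [vertexSource], elements: [element])
}

Inside the delegate method you could then assign the result to the node ARKit gives you, for example node.geometry = scnGeometry(from: meshAnchor), since that node is already placed at the anchor's transform.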

P. S.

This post will be helpful for you.
