ARKit - Place an SCNPlane Between 2 Vector Points on a Plane in Swift 3

SceneKit shape between 4 points

The answer, or at least my solution, is to use SCNGeometry triangles. I wanted a flat square (rather than a cube) to act as a wall in augmented reality, so I simply built 2 triangles from the 4 nodes that mapped the 4 corners of the wall.

The extension to build a triangle:

extension SCNGeometry {

    // Builds a single-triangle geometry from three vertices
    class func triangleFrom(vector1: SCNVector3, vector2: SCNVector3, vector3: SCNVector3) -> SCNGeometry {

        let indices: [Int32] = [0, 1, 2]

        let source = SCNGeometrySource(vertices: [vector1, vector2, vector3])
        let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)

        return SCNGeometry(sources: [source], elements: [element])
    }
}

The points for the 2 triangles are [p1, p2, p4] and [p1, p3, p4], and they are built using the following:

// Clone the two bottom points and raise the copies by 1.5 m to form the top corners of the wall
let thirdPoint = firstPoint.clone()
thirdPoint.position = SCNVector3Make(thirdPoint.position.x, thirdPoint.position.y + Float(1.5), thirdPoint.position.z)
sceneView.scene.rootNode.addChildNode(thirdPoint)

let fourthPoint = secondPoint.clone()
fourthPoint.position = SCNVector3Make(fourthPoint.position.x, fourthPoint.position.y + Float(1.5), fourthPoint.position.z)
sceneView.scene.rootNode.addChildNode(fourthPoint)

let triangle = SCNGeometry.triangleFrom(vector1: firstPoint.position, vector2: secondPoint.position, vector3: fourthPoint.position)
let triangleNode = SCNNode(geometry: triangle)
triangleNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
triangleNode.geometry?.firstMaterial?.isDoubleSided = true
sceneView.scene.rootNode.addChildNode(triangleNode)

let triangle2 = SCNGeometry.triangleFrom(vector1: firstPoint.position, vector2: thirdPoint.position, vector3: fourthPoint.position)
let triangle2Node = SCNNode(geometry: triangle2)
triangle2Node.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
triangle2Node.geometry?.firstMaterial?.isDoubleSided = true
sceneView.scene.rootNode.addChildNode(triangle2Node)

This is all based on creating 2 initial nodes by selecting the bottom 2 points of a wall in ARKit.

Hope that makes sense to anybody else searching for a similar answer.

EDIT: Adding a Material

Here's a slightly different extension, plus the code to add a material to it; the end result of a wall remains the same:

extension SCNGeometry {

    class func Quad() -> SCNGeometry {

        // The four corners of the quad in local space
        let verticesPosition = [
            SCNVector3(x: -0.242548823, y: -0.188490361, z: -0.0887458622),
            SCNVector3(x: -0.129298389, y: -0.188490361, z: -0.0820985138),
            SCNVector3(x: -0.129298389, y: 0.2, z: -0.0820985138),
            SCNVector3(x: -0.242548823, y: 0.2, z: -0.0887458622)
        ]

        // One texture coordinate per vertex
        let textureCoordinates = [
            CGPoint(x: 1, y: 1),
            CGPoint(x: 0, y: 1),
            CGPoint(x: 0, y: 0),
            CGPoint(x: 1, y: 0)
        ]

        // Two triangles making up the quad
        let indices: [CInt] = [
            0, 2, 3,
            0, 1, 2
        ]

        let vertexSource = SCNGeometrySource(vertices: verticesPosition)
        let textureSource = SCNGeometrySource(textureCoordinates: textureCoordinates)
        let indexData = NSData(bytes: indices, length: MemoryLayout<CInt>.size * indices.count)

        let element = SCNGeometryElement(data: indexData as Data,
                                         primitiveType: .triangles,
                                         primitiveCount: 2,
                                         bytesPerIndex: MemoryLayout<CInt>.size)

        return SCNGeometry(sources: [vertexSource, textureSource], elements: [element])

    }

}

Then simply call it in viewDidLoad() and apply a material:

let scene = SCNScene()

let quad = SCNGeometry.Quad()

let (min, max) = quad.boundingBox

let width = CGFloat(max.x - min.x)
let height = CGFloat(max.y - min.y)

quad.firstMaterial?.diffuse.contents = UIImage(named: "wallpaper.jpg")
quad.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(width), Float(height), 1)
quad.firstMaterial?.diffuse.wrapS = SCNWrapMode.repeat
quad.firstMaterial?.diffuse.wrapT = SCNWrapMode.repeat

let node = SCNNode()
node.geometry = quad
scene.rootNode.addChildNode(node)

sceneView.scene = scene

SceneKit – Drawing a line between two points

There are lots of ways to do this.

As noted, your custom geometry approach has some disadvantages. You should be able to correct the problem of it being invisible from one side by setting its material's isDoubleSided property to true. You may still have issues with it being two-dimensional, though.

You could also modify your custom geometry to include more triangles, so you get a tube shape with three or more sides instead of a flat rectangle. Or just put two points in your geometry source and use the .line primitive type (SCNGeometryPrimitiveType.line) to have SceneKit draw a line segment between them. (Though you won't get as much flexibility in rendering styles with line drawing as with shaded polygons.)
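
For instance, here is a minimal sketch of that line-element approach, in the same style as the triangle extension above (the extension and the firstPoint/secondPoint names are illustrative, not part of the original answer):

extension SCNGeometry {

    // Builds a single line segment between two points using the .line primitive type
    class func lineFrom(vector1: SCNVector3, vector2: SCNVector3) -> SCNGeometry {
        let indices: [Int32] = [0, 1]
        let source = SCNGeometrySource(vertices: [vector1, vector2])
        let element = SCNGeometryElement(indices: indices, primitiveType: .line)
        return SCNGeometry(sources: [source], elements: [element])
    }
}

// Usage (assuming two existing nodes):
// let lineNode = SCNNode(geometry: SCNGeometry.lineFrom(vector1: firstPoint.position, vector2: secondPoint.position))
// sceneView.scene.rootNode.addChildNode(lineNode)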

You can also use the SCNCylinder approach you mentioned (or any of the other built-in primitive shapes). Remember that geometries are defined in their own local (a.k.a. model) coordinate space, which SceneKit interprets relative to the coordinate space defined by a node. In other words, you can define a cylinder (or box or capsule or plane or whatever) that's 1.0 units wide in all dimensions, then use the rotation/scale/position or transform of the SCNNode containing that geometry to make it long, thin, and stretching between the two points you want. (Also note that since your line is going to be pretty thin, you can reduce the segment counts of whichever built-in geometry you're using, because that much detail won't be visible.)
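
As a rough sketch of that SCNCylinder approach (the helper name and the 2 mm radius are arbitrary choices, and it relies on look(at:up:localFront:), available on iOS 11+):

func cylinderNode(from start: SCNVector3, to end: SCNVector3, radius: CGFloat = 0.002) -> SCNNode {

    // The distance between the two points becomes the cylinder's height
    let vector = SCNVector3(end.x - start.x, end.y - start.y, end.z - start.z)
    let distance = CGFloat((vector.x * vector.x + vector.y * vector.y + vector.z * vector.z).squareRoot())

    // A thin cylinder needs very little radial detail
    let cylinder = SCNCylinder(radius: radius, height: distance)
    cylinder.radialSegmentCount = 8

    let node = SCNNode(geometry: cylinder)

    // Center the node between the two points
    node.position = SCNVector3((start.x + end.x) / 2, (start.y + end.y) / 2, (start.z + end.z) / 2)

    // Aim the cylinder's local y-axis (its height axis) at the end point.
    // The up vector is arbitrary; pick one that isn't parallel to the line.
    node.look(at: end, up: SCNVector3(0, 0, 1), localFront: SCNVector3(0, 1, 0))

    return node
}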

Yet another option is the SCNShape class that lets you create an extruded 3D object from a 2D Bézier path. Working out the right transform to get a plane connecting two arbitrary points sounds like some fun math, but once you do it you could easily connect your points with any shape of line you choose.
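
Creating the extruded shape itself is the easy part; what follows is only a sketch of that step (the path dimensions are placeholder values, and the math to orient the node between your two points is left out):

// A 2 cm square cross-section, extruded 1 m along the local z-axis
let path = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 0.02, height: 0.02))
let shape = SCNShape(path: path, extrusionDepth: 1.0)
shape.firstMaterial?.diffuse.contents = UIColor.blue

// Scale/rotate/position this node so the extrusion spans your two points
let shapeNode = SCNNode(geometry: shape)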

How to calculate the normal vector of an SCNPlane?

From the documentation we learn that

The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default.

The normal of an SCNPlane is always (0, 0, 1) in its local space, and that cannot change.

When the plane is attached to a node, the orientation of that node determines what the normal is in any other coordinate space. You can use simdConvertVector(_:to:) to convert between coordinate spaces:

// normal expressed in world space
let normal = simd_normalize(node.simdConvertVector(simd_float3(0, 0, 1), to: nil))
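
As a hedged usage example (sceneView, pointOfView and the sign convention here are assumptions on top of the answer above), that world-space normal can be compared against the camera's forward direction to check whether the plane is facing the camera:

if let pointOfView = sceneView.pointOfView {
    // The camera looks down its own -z axis; express that direction in world space
    let cameraForward = simd_normalize(pointOfView.simdConvertVector(simd_float3(0, 0, -1), to: nil))
    // A negative dot product means the plane's front (+z) side faces the camera
    let isFacingCamera = simd_dot(normal, cameraForward) < 0
    print("Plane is facing the camera:", isFacingCamera)
}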

ARKit detecting intersection between planes

I think you were on the right track using the hitTestWithSegment(from:to:options:) function to detect an intersection between the ARImageAnchor and the ARPlaneAnchor.

Rather than trying to explain each step of my attempt at an answer, I have provided code which is fully commented, so it should be fairly self-explanatory.

My example works fairly well (although it's certainly not perfect) and will definitely need some tweaking.

For example, you will need to determine the distance from the ARReferenceImage to the ARPlaneAnchor more accurately, etc.

I can get the model (a Pokemon) to be placed at the correct level and fairly close to the front of the ARReferenceImage, although it will need tweaking.

Having said this, I think this will be a fairly good base for you to start refining the code and getting more accurate results.

Note, however, that I have only allowed one ARPlaneAnchor to be detected (for simplicity's sake) and have assumed that you will be detecting a plane in front of your image marker.

I haven't taken rotation or anything like that into account. And of course, based on your proposed scenario, it also assumes your image is on a desk or some other flat surface.

Anyway, here is my answer (hopefully it is fairly self-explanatory):

import UIKit
import ARKit

//-----------------------
//MARK: ARSCNViewDelegate
//-----------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If We Have Detected Our ImageTarget Then Create A Plane To Visualize It
        if let currentImageAnchor = anchor as? ARImageAnchor {

            createReferenceImagePlaneForNode(currentImageAnchor, node: node)
            allowTracking = true

        }

        //2. If We Have Detected A Horizontal Plane Then Create One
        if let currentPlaneAnchor = anchor as? ARPlaneAnchor {

            if planeNode == nil && !createdModel { createReferencePlaneForNode(currentPlaneAnchor, node: node) }
        }

    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {

        //1. Check To See Whether An ARPlaneAnchor Has Been Updated
        guard let anchor = anchor as? ARPlaneAnchor,
            //2. Check It Is Our PlaneNode
            let existingPlane = planeNode,
            //3. Get The Geometry Of The PlaneNode
            let planeGeometry = existingPlane.geometry as? SCNPlane else { return }

        //4. Adjust Its Size & Position
        planeGeometry.width = CGFloat(anchor.extent.x)
        planeGeometry.height = CGFloat(anchor.extent.z)

        planeNode?.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

        //1. Detect The Intersection Of The ARPlaneAnchor & ARImageAnchor
        if allowTracking { detectIntersectionOfImageTarget() }

    }

}

//---------------------------------------
//MARK: Model Generation & Identification
//---------------------------------------

extension ViewController {

    /// Detects If We Have Intersected A Valid Image Target
    func detectIntersectionOfImageTarget() {

        //If We Haven't Created Our Model Then Check To See If We Have Detected An Existing Plane
        if !createdModel {

            //a. Perform A HitTest On The Center Of The Screen For Any Existing Planes
            guard let planeHitTest = self.augmentedRealityView.hitTest(screenCenter, types: .existingPlaneUsingExtent).first,
                let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor else { return }

            //b. Get The Transform Of The ARPlaneAnchor
            let x = planeAnchor.transform.columns.3.x
            let y = planeAnchor.transform.columns.3.y
            let z = planeAnchor.transform.columns.3.z

            //c. Create The Anchor's Vector
            let anchorVector = SCNVector3(x, y, z)

            //d. Perform Another HitTest From The ImageAnchor Vector To The Anchor's Vector
            if let _ = self.augmentedRealityView.scene.rootNode.hitTestWithSegment(from: imageAnchorVector, to: anchorVector, options: nil).first?.node {

                //e. If We Haven't Created The Model Then Place It As Soon As An Intersection Occurs
                if createdModel == false {

                    //f. Load The Model
                    loadModelAtVector(SCNVector3(imageAnchorVector.x, y, imageAnchorVector.z))

                    createdModel = true

                    planeNode?.removeFromParentNode()

                }
            }
        }
    }

}

class ViewController: UIViewController {

    //1. Reference To Our ImageTarget Bundle
    let AR_BUNDLE = "AR Resources"

    //2. Vector To Store The Position Of Our Detected Image
    var imageAnchorVector: SCNVector3!

    //3. Variables To Allow Tracking & To Determine Whether Our Model Has Been Placed
    var allowTracking = false
    var createdModel = false

    //4. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //5. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()

    //6. Create Our Session
    let augmentedRealitySession = ARSession()

    //7. ARReferenceImages
    lazy var staticReferenceImages: Set<ARReferenceImage> = {

        let images = ARReferenceImage.referenceImages(inGroupNamed: AR_BUNDLE, bundle: nil)
        return images!

    }()

    //8. Screen Center Reference
    var screenCenter: CGPoint!

    //9. PlaneNode
    var planeNode: SCNNode?

    //--------------------
    //MARK: View LifeCycle
    //--------------------

    override func viewDidLoad() {

        super.viewDidLoad()

        //1. Get Reference To The Center Of The Screen For RayCasting
        DispatchQueue.main.async { self.screenCenter = CGPoint(x: self.view.bounds.width/2, y: self.view.bounds.height/2) }

        //2. Setup Our ARSession
        setupARSessionWithStaticImages()

    }

    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }

    //---------------------------------
    //MARK: ARImageAnchor Visualization
    //---------------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARImageAnchor
    ///
    /// - Parameters:
    ///   - imageAnchor: ARImageAnchor
    ///   - node: SCNNode
    func createReferenceImagePlaneForNode(_ imageAnchor: ARImageAnchor, node: SCNNode) {

        //1. Get The Target's Width & Height
        let width = imageAnchor.referenceImage.physicalSize.width
        let height = imageAnchor.referenceImage.physicalSize.height

        //2. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.5
        planeNode.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)

        //5. Store The Vector Of The ARImageAnchor
        imageAnchorVector = SCNVector3(imageAnchor.transform.columns.3.x, imageAnchor.transform.columns.3.y, imageAnchor.transform.columns.3.z)

        //6. Fade The Visualization Plane Out Over Time
        let fadeOutAction = SCNAction.fadeOut(duration: 5)
        planeNode.runAction(fadeOutAction)

    }

    //-------------------------
    //MARK: Plane Visualization
    //-------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARPlaneAnchor
    ///
    /// - Parameters:
    ///   - anchor: ARPlaneAnchor
    ///   - node: SCNNode
    func createReferencePlaneForNode(_ anchor: ARPlaneAnchor, node: SCNNode) {

        //1. Get The Anchor's Width & Height
        let width = CGFloat(anchor.extent.x)
        let height = CGFloat(anchor.extent.z)

        //2. Create A Plane Geometry To Cover The ARPlaneAnchor
        planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode?.opacity = 0.5
        planeNode?.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode?.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode!)

    }

    //-------------------
    //MARK: Model Loading
    //-------------------

    /// Loads Our Model Based On The Resulting Vector Of Our ARAnchor
    ///
    /// - Parameter worldVector: SCNVector3
    func loadModelAtVector(_ worldVector: SCNVector3) {

        let modelPath = "ARModels.scnassets/Scatterbug.scn"

        //1. Get The Reference To Our SCNScene & Get The Model Root Node
        guard let model = SCNScene(named: modelPath),
            let pokemonModel = model.rootNode.childNode(withName: "RootNode", recursively: false) else { return }

        //2. Scale The Scatterbug & Set Its Position
        pokemonModel.scale = SCNVector3(0.003, 0.003, 0.003)
        pokemonModel.position = worldVector

        //3. Add It To Our SCNView (once is enough)
        augmentedRealityView.scene.rootNode.addChildNode(pokemonModel)

    }

    //---------------
    //MARK: ARSession
    //---------------

    /// Sets Up The ARSession With Static ARReferenceImages
    func setupARSessionWithStaticImages() {

        //1. Set Our Configuration
        configuration.detectionImages = staticReferenceImages
        configuration.planeDetection = .horizontal

        //2. Run The Configuration
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])

        //3. Set The Session & Delegate
        augmentedRealityView?.session = augmentedRealitySession
        self.augmentedRealityView?.delegate = self

    }

}

Hope it points you in the right direction...

Add plane nodes to ARKit scene vertically and horizontally

Solved! Here's how to make the plane node "parallel" to the camera at all times:

let yourNode = SCNNode()

let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = [.X, .Y, .Z]
yourNode.constraints = [billboardConstraint]

Or

// Get the current camera transform and offset it 10 cm in front of the camera
guard let currentFrame = sceneView.session.currentFrame else { return nil }
let camera = currentFrame.camera
let transform = camera.transform
var translationMatrix = matrix_identity_float4x4
translationMatrix.columns.3.z = -0.1
let modifiedMatrix = simd_mul(transform, translationMatrix)

// Apply it to the plane's node so the plane sits just in front of, and parallel to, the camera
let node = SCNNode(geometry: plane)
node.simdTransform = modifiedMatrix
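
Note that in either case the node still has to be added to the scene (and, for the billboard version, given a geometry first). Assuming sceneView is your ARSCNView:

sceneView.scene.rootNode.addChildNode(node)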

How to color an SCNPlane with 2 different materials?

Per the SCNPlane docs:

The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.

That implies a plane has only one material: isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.

If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
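
A minimal sketch of the back-to-back arrangement the docs suggest (the sizes and colors are placeholders):

let frontPlane = SCNPlane(width: 0.5, height: 0.5)
frontPlane.firstMaterial?.diffuse.contents = UIColor.red

let backPlane = SCNPlane(width: 0.5, height: 0.5)
backPlane.firstMaterial?.diffuse.contents = UIColor.blue

let frontNode = SCNNode(geometry: frontPlane)
let backNode = SCNNode(geometry: backPlane)

// Flip the back plane so its visible (+z) side points the opposite way
backNode.eulerAngles.y = .pi

// Parent both under one container node so they move as a single object
let containerNode = SCNNode()
containerNode.addChildNode(frontNode)
containerNode.addChildNode(backNode)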


