SCNScene: Calculate Projected Size of an Object

  1. Use the node's boundingBox property (getBoundingBoxMin(_:max:) in Objective-C) to find the corners of the space your 3D element occupies in the SceneKit scene.

  2. Use projectPoint(_:) to convert those points from 3D scene coordinates to 2D view coordinates.

  3. Use convertPoint(fromView:) to convert from (UIKit) view coordinates to your SpriteKit scene's coordinate system.

From the points in step 3 you can construct a rectangle or other shape for your HUD elements to avoid.

Get Size of image in SCNNode / ARKit Swift

In order to do this, I believe you first need to get the size in pixels of the UIImage by multiplying the size values by the value in the scale property to get the pixel dimensions of the image.

As such, an example would be something like so:

guard let image = UIImage(named: "launchScreen") else { return }
let pixelWidth = image.size.width * image.scale
let pixelHeight = image.size.height * image.scale
print(pixelWidth, pixelHeight)

The size of my image when made in Adobe Illustrator was 3072 x 4099, and when I logged the results in the console the dimensions were also the same.

Now the tricky part here is converting the pixels to a size we can use in ARKit, remembering that different devices have different PPI (pixels per inch) densities.

In my example I am just going to use the PPI of an iPhone7Plus which is 401.

//1. Get The PPI Of The iPhone7Plus
let iphone7PlusPixelsPerInch: CGFloat = 401

//2. To Get The Image Size In Inches We Need To Divide By The PPI
let inchWidth = pixelWidth / iphone7PlusPixelsPerInch
let inchHeight = pixelHeight / iphone7PlusPixelsPerInch

//3. Calculate The Size In Metres (There Are 2.54cm In An Inch)
let widthInMetres = (inchWidth * 2.54) / 100
let heightInMeters = (inchHeight * 2.54) / 100
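The same conversion can be factored into a small helper. This is a sketch only; the pixelsToMeters name is mine and not part of the original answer:

```swift
/// Converts a pixel dimension to metres for a given screen density.
/// 1 inch = 2.54 cm = 0.0254 m.
func pixelsToMeters(_ pixels: Double, pixelsPerInch: Double) -> Double {
    return (pixels / pixelsPerInch) * 2.54 / 100
}

// The 3072 x 4099 px image from the example, at the iPhone 7 Plus PPI of 401:
let widthInM = pixelsToMeters(3072, pixelsPerInch: 401)   // ≈ 0.195 m
let heightInM = pixelsToMeters(4099, pixelsPerInch: 401)  // ≈ 0.260 m
```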

Now that we have the size of our image in metres, it is simple to create an SCNNode of that size, e.g.:

//1. Generate An SCNPlane With The Same Size As Our Image
let realScaleNode = SCNNode(geometry: SCNPlane(width: widthInMetres, height: heightInMeters))
realScaleNode.geometry?.firstMaterial?.diffuse.contents = image
realScaleNode.position = SCNVector3(0, 0, -1)

//2. Add It To Our Hierarchy
self.augmentedRealityView.scene.rootNode.addChildNode(realScaleNode)

Hope it helps...

P.S. This may be useful for helping you get the PPI of the screen: marchv/UIScreenExtension

iOS SceneKit to UIView projection issue

The problem here is that the CGRect you use to calculate the mid points is based on the projected coordinates of the bounding box. Only the two corner points of the bounding box are transformed by the model-view-projection matrix; to get correct view-space coordinates for the mid points, you need to perform the same transformation on them.

Hopefully the code makes it a bit clearer:

//world coordinates
let v1w = sm.node.convertPosition(sm.node.boundingBox.min, to: self.sceneView.scene?.rootNode)
let v2w = sm.node.convertPosition(sm.node.boundingBox.max, to: self.sceneView.scene?.rootNode)

//calc center of BB in world coordinates
let center = SCNVector3Make((v1w.x + v2w.x) / 2,
                            (v1w.y + v2w.y) / 2,
                            (v1w.z + v2w.z) / 2)

//calc each mid point
let mp1w = SCNVector3Make(v1w.x, center.y, center.z)
let mp2w = SCNVector3Make(center.x, v2w.y, center.z)
let mp3w = SCNVector3Make(v2w.x, center.y, center.z)
let mp4w = SCNVector3Make(center.x, v1w.y, center.z)

//projected coordinates
let mp1p = self.sceneView.projectPoint(mp1w)
let mp2p = self.sceneView.projectPoint(mp2w)
let mp3p = self.sceneView.projectPoint(mp3w)
let mp4p = self.sceneView.projectPoint(mp4w)

var frameOld = sm.marker.frame

switch sm.position
{
case .Top:
    frameOld.origin.y = CGFloat(mp1p.y) - frameOld.size.height/2
    frameOld.origin.x = CGFloat(mp1p.x) - frameOld.size.width/2
    sm.marker.isHidden = (mp1p.z < 0 || mp1p.z > 1)
case .Bottom:
    frameOld.origin.y = CGFloat(mp2p.y) - frameOld.size.height/2
    frameOld.origin.x = CGFloat(mp2p.x) - frameOld.size.width/2
    sm.marker.isHidden = (mp2p.z < 0 || mp2p.z > 1)
case .Left:
    frameOld.origin.y = CGFloat(mp3p.y) - frameOld.size.height/2
    frameOld.origin.x = CGFloat(mp3p.x) - frameOld.size.width/2
    sm.marker.isHidden = (mp3p.z < 0 || mp3p.z > 1)
case .Right:
    frameOld.origin.y = CGFloat(mp4p.y) - frameOld.size.height/2
    frameOld.origin.x = CGFloat(mp4p.x) - frameOld.size.width/2
    sm.marker.isHidden = (mp4p.z < 0 || mp4p.z > 1)
}

It's a cool little sample project!
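Stripped of SceneKit types, the centre and mid-point construction above is plain component-wise arithmetic. A minimal sketch using a hypothetical Vec3 struct (not from the original code):

```swift
struct Vec3: Equatable { var x, y, z: Float }

/// Component-wise mid point of two vectors, as used for the bounding-box
/// centre ((v1w + v2w) / 2) and for each face mid point.
func midpoint(_ a: Vec3, _ b: Vec3) -> Vec3 {
    Vec3(x: (a.x + b.x) / 2, y: (a.y + b.y) / 2, z: (a.z + b.z) / 2)
}

// Centre of a bounding box spanning (-1, -1, -1) to (1, 3, 5):
let c = midpoint(Vec3(x: -1, y: -1, z: -1), Vec3(x: 1, y: 3, z: 5))
// c == Vec3(x: 0, y: 1, z: 2)
```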

Points line up on box edges

Update on z-clipping issue

The projectPoint(_:) method returns an SCNVector3: the x and y coordinates are, as we know, the screen coordinates. The z coordinate tells us the location of the point relative to the near and far clipping planes (z = 0 is the near clipping plane, z = 1 the far clipping plane). If you set a negative value for your near clipping plane, objects behind the camera would be rendered. We don't have a negative near clipping plane, but we also don't have any logic to say what happens if those projected point locations fall outside the zNear–zFar range.

I've updated the code above to include this zNear and zFar check and toggle the UIView visibility accordingly.

tl;dr

The markers visible when the camera was rotated 180° are behind the camera, but they were still projected onto the view plane. And as we weren't checking whether they were behind the camera, they were still displayed.
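The behind-the-camera check reduces to testing whether the projected depth falls inside [0, 1]. A tiny sketch (the helper name is mine, not from the original code):

```swift
/// A projected point is in front of the camera and inside the clipping
/// volume only when its depth lies between the near plane (z = 0) and
/// the far plane (z = 1); points behind the camera project to z < 0.
func isProjectedDepthVisible(_ z: Float) -> Bool {
    return z >= 0 && z <= 1
}

isProjectedDepthVisible(0.5)   // true:  between the clipping planes
isProjectedDepthVisible(-0.2)  // false: behind the camera
isProjectedDepthVisible(1.2)   // false: beyond the far plane
```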

How do I position an SCNCamera such that an object just fits in view?

Ok - posting it here - hope that's ok.

I used strafe just for a couple of big maps, but eventually took them out. I wanted all of the maps to fit - it looks very similar to yours. So yeah, I worked backwards. I put the camera where I wanted it, then fiddled with the map and panel size, so I was only dealing with one thing at a time.

I had panels for triangles, quads, and hex shapes. It was a tower defense game, so the attackers could move various ways depending on the type of panel.

class Camera
{
    var data = Data.sharedInstance
    var util = Util.sharedInstance
    var gameDefaults = Defaults()

    var cameraEye = SCNNode()
    var cameraFocus = SCNNode()

    var centerX: Int = 100
    var strafeDelta: Float = 0.8
    var zoomLevel: Int = 35
    var zoomLevelMax: Int = 35 // Max number of zoom levels

    //********************************************************************
    init()
    {
        cameraEye.name = "Camera Eye"
        cameraFocus.name = "Camera Focus"

        cameraFocus.isHidden = true
        cameraFocus.position = SCNVector3(x: 0, y: 0, z: 0)

        cameraEye.camera = SCNCamera()
        cameraEye.constraints = []
        cameraEye.position = SCNVector3(x: 0, y: 15, z: 0.1)

        let vConstraint = SCNLookAtConstraint(target: cameraFocus)
        vConstraint.isGimbalLockEnabled = true
        cameraEye.constraints = [vConstraint]
    }
    //********************************************************************
    func reset()
    {
        centerX = 100
        cameraFocus.position = SCNVector3(x: 0, y: 0, z: 0)
        cameraEye.constraints = []
        cameraEye.position = SCNVector3(x: 0, y: 32, z: 0.1)

        let vConstraint = SCNLookAtConstraint(target: cameraFocus)
        vConstraint.isGimbalLockEnabled = true
        cameraEye.constraints = [vConstraint]
    }
    //********************************************************************
    func strafeRight()
    {
        if(centerX + 1 < 112)
        {
            centerX += 1
            cameraEye.position.x += strafeDelta
            cameraFocus.position.x += strafeDelta
        }
    }
    //********************************************************************
    func strafeLeft()
    {
        if(centerX - 1 > 90)
        {
            centerX -= 1
            cameraEye.position.x -= strafeDelta
            cameraFocus.position.x -= strafeDelta
        }
    }
    //********************************************************************
}

I used GKGraph like this:

var graphNodes: [GKPanelNode] = [] // All active graph nodes with connections
var myGraph = GKGraph() // declaring the Graph

Then your typical panel load, probably similar to yours:

func getQuadPanelNode(vPanelType: panelTypes) -> SCNNode
{
    let plane = SCNBox(width: 1.4, height: 0.001, length: 1.4, chamferRadius: 0)

    plane.materials = []
    plane.materials = setQuadPanelTextures(vPanelType: vPanelType)
    plane.firstMaterial?.isDoubleSided = false
    return SCNNode(geometry: plane)
}

Then load the panels from the map I created.

func loadPanels()
{
    removePanelNodes()
    for vPanel in mapsDetail.getDetail(vMap: data.mapSelected)
    {
        let vPanel = Panel.init(vName: "Panel:" + vPanel.name, vPanelType: vPanel.type, vPosition: vPanel.pos, vRotation: vPanel.up)
        gridPanels[vPanel.panelName] = vPanel

        if(vPanel.type == .entry) { entryPanelName = vPanel.panelName }
        if(vPanel.type == .exit) { exitPanelName = vPanel.panelName }
    }
}

ARKit ImageDetection - get reference image when tapping 3D object

Since your ARReferenceImage is stored within the Assets.xcassets catalogue, you can simply load your image using the following initialization method of UIImage:

init?(named name: String)

For your information:

If this is the first time the image is being loaded, the method looks for an image with the specified name in the application's main bundle. For PNG images, you may omit the filename extension. For all other file formats, always include the filename extension.

In my example I have an ARReferenceImage named TargetCard:

Sample Image

So to load it as a UIImage and then apply it to an SCNNode, or display it in screen space, you could do something like so:

//1. Load The Image Onto An SCNPlane Geometry
if let image = UIImage(named: "TargetCard"){
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: 1, height: 1)
    planeGeometry.firstMaterial?.diffuse.contents = image
    planeNode.geometry = planeGeometry
    planeNode.position = SCNVector3(0, 0, -1.5)
    self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
}

//2. Load The Image Into A UIImageView
if let image = UIImage(named: "TargetCard"){
    let imageView = UIImageView(frame: CGRect(x: 10, y: 10, width: 300, height: 150))
    imageView.image = image
    imageView.contentMode = .scaleAspectFill
    self.view.addSubview(imageView)
}

In your context:

Each SCNNode has a name property:

var name: String? { get set }

As such, I suggest that when you create content for your ARImageAnchor, you give it the name of your ARReferenceImage, e.g.:

//---------------------------
// MARK: - ARSCNViewDelegate
//---------------------------

extension ViewController: ARSCNViewDelegate{

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have Detected An ARImageAnchor & Check It's The One We Want
        guard let validImageAnchor = anchor as? ARImageAnchor,
              let targetName = validImageAnchor.referenceImage.name else { return }

        //2. Create An SCNNode With An SCNPlane Geometry
        let nodeToAdd = SCNNode()
        let planeGeometry = SCNPlane(width: 1, height: 1)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        nodeToAdd.geometry = planeGeometry

        //3. Set Its Name To That Of Our ARReferenceImage
        nodeToAdd.name = targetName

        //4. Add It To The Hierarchy
        node.addChildNode(nodeToAdd)
    }
}

Then it is easy to get a reference to the Image later e.g:

/// Checks To See If We Have Hit A Named SCNNode
///
/// - Parameter gesture: UITapGestureRecognizer
@objc func handleTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. Perform An SCNHitTest To See If We Have Tapped A Valid SCNNode & See If It Is Named
    guard let hitTestForNode = self.augmentedRealityView.hitTest(currentTouchLocation, options: nil).first?.node,
          let nodeName = hitTestForNode.name else { return }

    //3. Load The Reference Image
    self.loadReferenceImage(nodeName, inAR: true)
}

/// Loads A Matching Image For The Identified ARReferenceImage Name
///
/// - Parameters:
///   - fileName: String
///   - inAR: Bool
func loadReferenceImage(_ fileName: String, inAR: Bool){

    if inAR{

        //1. Load The Image Onto An SCNPlane Geometry
        if let image = UIImage(named: fileName){
            let planeNode = SCNNode()
            let planeGeometry = SCNPlane(width: 1, height: 1)
            planeGeometry.firstMaterial?.diffuse.contents = image
            planeNode.geometry = planeGeometry
            planeNode.position = SCNVector3(0, 0, -1.5)
            self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
        }

    }else{

        //2. Load The Image Into A UIImageView
        if let image = UIImage(named: fileName){
            let imageView = UIImageView(frame: CGRect(x: 10, y: 10, width: 300, height: 150))
            imageView.image = image
            imageView.contentMode = .scaleAspectFill
            self.view.addSubview(imageView)
        }
    }
}

Important:

One thing I have just discovered is that if we load the ARReferenceImage directly, e.g.:

let image = UIImage(named: "TargetCard")

Then the image is displayed in grayscale, which is probably not what you want!

As such, what you probably need to do is copy the ARReferenceImage into the Assets catalogue and give it a prefix, e.g. ColourTargetCard...

Then you would need to change the function slightly by naming your nodes using that prefix, e.g.:

nodeToAdd.name = "Colour\(targetName)"
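The prefixing is plain string composition; a small helper (the hypothetical colourAssetName(for:), not from the original answer) makes the convention explicit:

```swift
/// Maps an ARReferenceImage name to the name of its full-colour copy
/// in the asset catalogue, following the "Colour" prefix convention.
func colourAssetName(for referenceImageName: String) -> String {
    return "Colour" + referenceImageName
}

colourAssetName(for: "TargetCard")  // "ColourTargetCard"
```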

Sample Image

Hope it helps...


