ARKit - Apply CIFilter to Specific Vertices of ARFaceAnchor

How to keep ARKit SCNNode in place

There are two kinds of "moving around" that could be happening here.


One is that ARKit is continuously refining its estimate of how the device's position in the real world maps to the abstract coordinate space you're placing virtual content in. For example, suppose you put a virtual object at (0, 0, -0.5), and then move your device to the left by exactly 10 cm. The virtual object will appear to be anchored in physical space only if ARKit tracks the move precisely. But visual-inertial odometry isn't an exact science, so it's possible that ARKit thinks you moved to the left by 10.5 cm — in that case, your virtual object will appear to "slip" to the right by 5 mm, even though its position in the ARKit/SceneKit coordinate space remains constant.

You can't really do much about this, other than hope Apple makes devices with better sensors, better cameras, or better CPUs/GPUs and improves the science of world tracking. (In the fullness of time, that's probably a safe bet, though that probably doesn't help with your current project.)


Since you're also dealing with plane detection, there's another wrinkle. ARKit is continuously refining its estimates of where a detected plane is. So, even though the real-world position of the plane isn't changing, its position in ARKit/SceneKit coordinate space is.

This kind of movement is generally a good thing — if you want your virtual object to appear anchored to the real-world surface, you want to be sure of where that surface is. You'll see some movement as plane detection becomes more certain of the surface's position, but after a short time, you should see less "slip" for plane-anchored virtual objects than for those that are just floating in world space as you move the camera around.


In your code, though, you're not taking advantage of plane detection to make your custom content (from "PlayerModel.scn") stick to the plane anchor:

wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
wrapperNode.addChildNode(Body)
scnView.scene.rootNode.addChildNode(wrapperNode)

This code uses the initial position of the plane anchor to position wrapperNode in world space (because you're making it a child of the root node). If you instead make wrapperNode a child of the plane anchor's node (the one you received in renderer(_:didAdd:for:)), it'll stay attached to the plane as ARKit refines its estimate of the plane's position. You'll get a little bit more movement initially, but as plane detection "settles", your virtual object will "slip" less.

(When you make the node a child of the plane, you don't need to set its position — a position of zero means it's right where the plane is. If anything, you need to set its position only relative to the plane — i.e. how far above/below/along it.)
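For illustration, here's a minimal sketch of that approach in the ARSCNViewDelegate method (the wrapperNode name and "PlayerModel.scn" come from the question; the rest assumes a standard plane-detection setup):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Only attach content to plane anchors.
    guard anchor is ARPlaneAnchor else { return }

    let wrapperNode = SCNNode()
    if let playerScene = SCNScene(named: "PlayerModel.scn") {
        for child in playerScene.rootNode.childNodes {
            wrapperNode.addChildNode(child)
        }
    }

    // No position needed: (0, 0, 0) is exactly where the plane is.
    // ARKit keeps `node` aligned with the plane as it refines its estimate.
    node.addChildNode(wrapperNode)
}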

Custom SceneKit Geometry in Swift on iOS not working but equivalent Objective C code does

Well, the two pieces of code don't translate exactly to one another. The int in C is not the same as Int in Swift; C's int is actually called CInt in Swift:

/// The C 'int' type.
typealias CInt = Int32

If you change both occurrences to use CInt instead, the error message that you previously got goes away (at least for me, in an OS X playground). However, it still doesn't render anything for me.

I don't think sizeofValue is used to return the size of an array. It looks to me like it's returning the size of the pointer:

let indexes: CInt[] = [0, 1, 2]
sizeofValue(indexes) // is 8
sizeof(CInt) // is 4
sizeof(CInt) * countElements(indexes) // is 12

// compare to other CInt[]
let empty: CInt[] = []
let large: CInt[] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

sizeofValue(indexes) // is 8 (your array of indices again)
sizeofValue(empty) // is 8
sizeofValue(large) // is 8

So, for me the following code works (I've put the arguments on different lines to make it easier to point out my changes):

let src = SCNGeometrySource(vertices: &verts, count: 3)
let indexes: CInt[] = [0, 1, 2] // Changed to CInt

let dat = NSData(
    bytes: indexes,
    length: sizeof(CInt) * countElements(indexes) // Changed to sizeof(CInt) * count
)
let ele = SCNGeometryElement(
    data: dat,
    primitiveType: .Triangles,
    primitiveCount: 1,
    bytesPerIndex: sizeof(CInt) // Changed to CInt
)
let geo = SCNGeometry(sources: [src], elements: [ele])

let nd = SCNNode(geometry: geo)
scene.rootNode.addChildNode(nd)

With this result:

[Screenshot: the custom triangle geometry rendered in the scene]
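Note that this answer uses Swift 1-era syntax (CInt[], sizeof, countElements), which no longer compiles. As a rough modern equivalent, here's a minimal sketch of the same triangle in current Swift, assuming scene already exists:

let verts: [SCNVector3] = [
    SCNVector3(0, 0, 0),
    SCNVector3(1, 0, 0),
    SCNVector3(0, 1, 0),
]
let indexes: [Int32] = [0, 1, 2]

let src = SCNGeometrySource(vertices: verts)
// This convenience initializer infers the index size and
// primitive count from the index array itself.
let ele = SCNGeometryElement(indices: indexes, primitiveType: .triangles)
let geo = SCNGeometry(sources: [src], elements: [ele])
scene.rootNode.addChildNode(SCNNode(geometry: geo))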

Scaling a SCNNode in ARKit

Here is how I scale my nodes:

/// Scales an SCNNode
///
/// - Parameter gesture: UIPinchGestureRecognizer
@objc func scaleObject(gesture: UIPinchGestureRecognizer) {

    let location = gesture.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(location)
    guard let nodeToScale = hitTestResults.first?.node else {
        return
    }

    if gesture.state == .changed {

        let pinchScaleX: CGFloat = gesture.scale * CGFloat(nodeToScale.scale.x)
        let pinchScaleY: CGFloat = gesture.scale * CGFloat(nodeToScale.scale.y)
        let pinchScaleZ: CGFloat = gesture.scale * CGFloat(nodeToScale.scale.z)
        nodeToScale.scale = SCNVector3Make(Float(pinchScaleX), Float(pinchScaleY), Float(pinchScaleZ))
        gesture.scale = 1

    }
    if gesture.state == .ended { }

}

In my example, the node to scale is whatever node the hit test returns at the gesture's location, although you can set this however you like.
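To hook this up, attach a UIPinchGestureRecognizer to the scene view. A minimal sketch, assuming this method lives in the view controller that owns sceneView:

// In viewDidLoad, for example:
let pinch = UIPinchGestureRecognizer(target: self, action: #selector(scaleObject(gesture:)))
sceneView.addGestureRecognizer(pinch)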

ARKit place models in the real world randomly

To do this, you'll need to use the session(_:didUpdate:) delegate method:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // At this point you can be sure the camera is properly
    // oriented in world coordinates.
    guard let cameraTransform = session.currentFrame?.camera.transform else { return }
    let cameraPosition = SCNVector3(
        cameraTransform.columns.3.x,
        cameraTransform.columns.3.y,
        cameraTransform.columns.3.z
    )
    // Now you have cameraPosition with x, y, z coordinates, and you
    // can calculate the distance between two such points.

    // Make a random point for the hit test.
    let randomPoint = CGPoint(
        x: CGFloat(arc4random()) / CGFloat(UInt32.max),
        y: CGFloat(arc4random()) / CGFloat(UInt32.max)
    )
    guard let testResult = frame.hitTest(randomPoint, types: .featurePoint).first else { return }
    // Convert the 4x4 matrix into an x, y, z point.
    let objectPoint = SCNVector3(
        testResult.worldTransform.columns.3.x,
        testResult.worldTransform.columns.3.y,
        testResult.worldTransform.columns.3.z
    )
    // Do whatever you need with this object point.
}

This allows you to place an object whenever the camera position updates. From Apple's documentation:

Implement this method if you provide your own display for rendering an AR experience. The provided ARFrame object contains the latest image captured from the device camera, which you can render as a scene background, as well as information about camera parameters and anchor transforms you can use for rendering virtual content on top of the camera image.

What's really important here is that you're choosing a random point for the hitTest method, and that point will always be in front of the camera.

Don't forget that the CGPoint you pass to the hitTest method uses a normalized coordinate system from 0 to 1.0:

A point in normalized image coordinate space. (The point (0,0) represents the top left corner of the image, and the point (1,1) represents the bottom right corner.)

If you want to place an object every 10 meters, you can save the camera position (in the session(_:didUpdate:) method) and check whether the x and z coordinates have changed by a large enough distance to place a new object, as sketched below.
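Here's a minimal sketch of that check; lastPlacementPosition and placeObject(at:) are hypothetical names for illustration, not from the question:

var lastPlacementPosition: SCNVector3?

func placeObjectIfNeeded(cameraPosition: SCNVector3, objectPoint: SCNVector3) {
    // Measure horizontal (x/z) distance from the last placement.
    if let last = lastPlacementPosition {
        let dx = cameraPosition.x - last.x
        let dz = cameraPosition.z - last.z
        guard (dx * dx + dz * dz).squareRoot() >= 10 else { return }
    }
    placeObject(at: objectPoint) // Hypothetical helper that adds your node to the scene.
    lastPlacementPosition = cameraPosition
}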

NOTE:

I'm assuming that you're using a world-tracking session:

let configuration = ARWorldTrackingSessionConfiguration()
session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

