Color keying video with GPUImage on a SCNPlane in ARKit

Try clearing the background and setting the scale mode with:

backgroundColor = .clear 
scaleMode = .aspectFit
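
For context, here is a minimal sketch of where that tip applies, assuming the video is hosted in an SKScene that is then used as the plane's material contents (the player, scene size, and names below are placeholders):

import SpriteKit
import SceneKit
import AVFoundation

let player = AVPlayer(url: videoURL) // your video URL
let skVideoNode = SKVideoNode(avPlayer: player)

let videoScene = SKScene(size: CGSize(width: 1024, height: 576))
videoScene.backgroundColor = .clear // let keyed-out areas show through
videoScene.scaleMode = .aspectFit   // avoid stretching the video
skVideoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
skVideoNode.size = videoScene.size
videoScene.addChild(skVideoNode)

// The SKScene then becomes the material contents of the plane, e.g.:
// plane.firstMaterial?.diffuse.contents = videoScene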

ChromaKey video in ARKit

Figured this out. I was setting the color to key out incorrectly (and in the wrong place, facepalm), and there seems to be a bug that prevents the video from playing unless you delay it a bit. That bug was supposedly fixed, but that does not seem to be the case.

Here is my corrected and cleaned-up code if anyone is interested (edited to include the tip from @mnuages):

// Get the video URL and create an AVPlayer
let filePath = Bundle.main.path(forResource: "VIDEO_FILE_NAME", ofType: "VIDEO_FILE_EXTENSION")
let videoURL = URL(fileURLWithPath: filePath!)
let player = AVPlayer(url: videoURL)

// Create a SceneKit node to hold the video plane.
let videoNode = SCNNode()

// Set geometry of the SceneKit node to be a plane, and rotate it to be flat with the image
videoNode.geometry = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                              height: imageAnchor.referenceImage.physicalSize.height)
videoNode.eulerAngles = SCNVector3(-Float.pi/2, 0, 0)

// Alpha transparency: use the chroma key material as the plane's only material,
// with the video AVPlayer as its diffuse contents.
let chromaKeyMaterial = ChromaKeyMaterial()
chromaKeyMaterial.diffuse.contents = player
chromaKeyMaterial.isDoubleSided = true
videoNode.geometry!.materials = [chromaKeyMaterial]

// The video does not start without delaying the player.
// Playing it without the delay just results in: [SceneKit] Error: Cannot get pixel buffer (CVPixelBufferRef)
DispatchQueue.main.asyncAfter(deadline: .now() + 0.001) {
    player.seek(to: CMTimeMakeWithSeconds(1, 1000))
    player.play()
}
// Loop video
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player.currentItem, queue: .main) { _ in
    player.seek(to: kCMTimeZero)
    player.play()
}

// Add the videoNode to the ARAnchor's node
node.addChildNode(videoNode)

// Add ARAnchor node to the root node of the scene
self.imageDetectView.scene.rootNode.addChildNode(node)
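
For completeness, the snippet above assumes it lives inside an ARSCNViewDelegate callback that provides the image anchor and its node. A rough sketch of that surrounding context (the "AR Resources" group name is an assumption):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // ... the code above goes here, using `imageAnchor` and `node` ...
}

// Image detection has to be enabled when running the session, e.g.:
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.detectionImages = referenceImages
}
imageDetectView.session.run(configuration)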

And here is the chroma key material:

import SceneKit
import UIKit

public class ChromaKeyMaterial: SCNMaterial {

    public var backgroundColor: UIColor {
        didSet { didSetBackgroundColor() }
    }

    public var thresholdSensitivity: Float {
        didSet { didSetThresholdSensitivity() }
    }

    public var smoothing: Float {
        didSet { didSetSmoothing() }
    }

    public init(backgroundColor: UIColor = .green, thresholdSensitivity: Float = 0.50, smoothing: Float = 0.001) {

        self.backgroundColor = backgroundColor
        self.thresholdSensitivity = thresholdSensitivity
        self.smoothing = smoothing

        super.init()

        didSetBackgroundColor()
        didSetThresholdSensitivity()
        didSetSmoothing()

        // The chroma key shader is based on GPUImage:
        // https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageChromaKeyFilter.m

        let surfaceShader =
        """
        uniform vec3 c_colorToReplace;
        uniform float c_thresholdSensitivity;
        uniform float c_smoothing;

        #pragma transparent
        #pragma body

        vec3 textureColor = _surface.diffuse.rgb;

        float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
        float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
        float maskCb = 0.5647 * (c_colorToReplace.b - maskY);

        float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
        float Cr = 0.7132 * (textureColor.r - Y);
        float Cb = 0.5647 * (textureColor.b - Y);

        float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));

        float a = blendValue;
        _surface.transparent.a = a;
        """

        shaderModifiers = [
            .surface: surfaceShader,
        ]
    }

    required public init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Pass the color to be keyed out to the shader as an RGB vector.
    private func didSetBackgroundColor() {
        var red: CGFloat = 0, green: CGFloat = 0, blue: CGFloat = 0, alpha: CGFloat = 0
        backgroundColor.getRed(&red, green: &green, blue: &blue, alpha: &alpha)
        let vector = SCNVector3(x: Float(red), y: Float(green), z: Float(blue))
        setValue(vector, forKey: "c_colorToReplace")
    }

    private func didSetSmoothing() {
        setValue(smoothing, forKey: "c_smoothing")
    }

    private func didSetThresholdSensitivity() {
        setValue(thresholdSensitivity, forKey: "c_thresholdSensitivity")
    }
}
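
A quick usage sketch (the plane size, key color, and tolerance values here are just placeholders to tweak):

let chromaKeyMaterial = ChromaKeyMaterial(backgroundColor: .green,
                                          thresholdSensitivity: 0.15,
                                          smoothing: 0.05)
chromaKeyMaterial.isDoubleSided = true
chromaKeyMaterial.diffuse.contents = player // the AVPlayer from the snippet above

let plane = SCNPlane(width: 0.2, height: 0.1) // meters; match your reference image
plane.materials = [chromaKeyMaterial]

// The keyed color and tolerances can also be changed later at runtime:
chromaKeyMaterial.thresholdSensitivity = 0.3
chromaKeyMaterial.backgroundColor = .magenta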

Using AVCaptureDevice as SCNScene background content

Edit

This bug seems to be fixed in iOS 11.2.


Original answer

This appears to be a bug in SceneKit.

If that works for you, a workaround would be to use an ARSCNView. It gives you access to all of the SceneKit APIs and automatically draws the camera feed as the scene's background.
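
A minimal sketch of that workaround, assuming a plain UIViewController (the view and empty scene here are placeholders):

import ARKit

let sceneView = ARSCNView(frame: view.bounds)
sceneView.scene = SCNScene()
view.addSubview(sceneView)

// The camera feed is drawn as the scene background automatically;
// no AVCaptureDevice or background contents setup is needed.
sceneView.session.run(ARWorldTrackingConfiguration())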

How do you play a video with an alpha channel using AVFoundation?

I've come up with two ways of making this possible. Both use surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer Documentation.

Here's an example project I've created.


1. Masking

  1. You would need to create another video that represents a transparency mask. In that video, black = fully opaque, white = fully transparent (or any other way you would like to represent transparency; you would just need to tinker with the surface shader).

  2. Create an SKScene with this video just like you do in the code you provided and put it into material.transparent.contents (the same material that you put the diffuse video contents into); see the sketch further below for one way to set up these scenes.

    let spriteKitOpaqueScene = SKScene(...)
    let spriteKitMaskScene = SKScene(...)
    ... // creating SKVideoNodes and AVPlayers for each video etc

    let material = SCNMaterial()
    material.diffuse.contents = spriteKitOpaqueScene
    material.transparent.contents = spriteKitMaskScene

    let background = SCNPlane(...)
    background.materials = [material]
  3. Add a surface shader modifier to the material. It is going to "convert" the black color from the mask video (well, actually the red channel, since we only need one color component) into alpha.

    let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
    material.shaderModifiers = [ .surface: surfaceShader ]

That's it! Now the white color on the masking video is going to be transparent on the plane.
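
For reference, here is a sketch of one way the elided scene setup above might look (the file names, sizes, and helper function are hypothetical):

    // Hypothetical helper; adjust names, sizes, and orientation to your assets.
    func makeVideoScene(player: AVPlayer, size: CGSize) -> SKScene {
        let scene = SKScene(size: size)
        scene.scaleMode = .aspectFit
        scene.backgroundColor = .clear
        let videoNode = SKVideoNode(avPlayer: player)
        videoNode.position = CGPoint(x: size.width / 2, y: size.height / 2)
        videoNode.size = size
        videoNode.yScale = -1 // flip if the video renders upside down on the material
        scene.addChild(videoNode)
        return scene
    }

    let opaquePlayer = AVPlayer(url: Bundle.main.url(forResource: "content", withExtension: "mp4")!)
    let maskPlayer = AVPlayer(url: Bundle.main.url(forResource: "content-mask", withExtension: "mp4")!)

    let spriteKitOpaqueScene = makeVideoScene(player: opaquePlayer, size: CGSize(width: 1280, height: 720))
    let spriteKitMaskScene = makeVideoScene(player: maskPlayer, size: CGSize(width: 1280, height: 720))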

However, you would have to take extra care to synchronize these two videos, since the AVPlayers will probably drift out of sync. Sadly, I didn't have time to address that in my example project yet (I will get back to it when I have time). Look into this question for a possible solution; one possible approach is also sketched below.
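
One such approach (an assumption on my part, not necessarily the linked solution) is to preroll both players and then start them against the same host clock time:

    // Both players must opt out of automatic stalling for setRate(_:time:atHostTime:).
    opaquePlayer.automaticallyWaitsToMinimizeStalling = false
    maskPlayer.automaticallyWaitsToMinimizeStalling = false

    opaquePlayer.preroll(atRate: 1) { _ in
        maskPlayer.preroll(atRate: 1) { _ in
            // Start both players half a second from now, against the shared host clock.
            let start = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                                  CMTimeMakeWithSeconds(0.5, 1000))
            opaquePlayer.setRate(1, time: kCMTimeInvalid, atHostTime: start)
            maskPlayer.setRate(1, time: kCMTimeInvalid, atHostTime: start)
        }
    }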

Pros:

  • No artifacts (if synchronized)
  • Precise

Cons:

  • Requires two videos instead of one
  • Requires synchronization of the AVPlayers

2. Chroma keying

  1. You would need a video with a vibrant background color representing the parts that should be transparent. Usually green or magenta is used.

  2. Create an SKScene for this video like you normally would and put it into material.diffuse.contents.

  3. Add a chroma key surface shader modifier, which will cut out the color of your choice and make those areas transparent. I've borrowed this shader from GPUImage, and I don't really know how it actually works, but it seems to be explained in this answer.

    let surfaceShader =
    """
    uniform vec3 c_colorToReplace = vec3(0, 1, 0);
    uniform float c_thresholdSensitivity = 0.05;
    uniform float c_smoothing = 0.0;

    #pragma transparent
    #pragma body

    vec3 textureColor = _surface.diffuse.rgb;

    float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
    float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
    float maskCb = 0.5647 * (c_colorToReplace.b - maskY);

    float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
    float Cr = 0.7132 * (textureColor.r - Y);
    float Cb = 0.5647 * (textureColor.b - Y);

    float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));

    float a = blendValue;
    _surface.transparent.a = a;
    """

    shaderModifiers = [ .surface: surfaceShader ]

    To set the uniforms, use the setValue(_:forKey:) method.

    let vector = SCNVector3(x: 0, y: 1, z: 0) // represents float RGB components
    setValue(vector, forKey: "c_colorToReplace")
    setValue(0.3 as Float, forKey: "c_smoothing")
    setValue(0.1 as Float, forKey: "c_thresholdSensitivity")

    The as Float part is important; otherwise Swift is going to treat the value as a Double and the shader will not be able to use it.

    But to get precise masking from this, you would have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up with a little green rim around the shape, but maybe I just didn't use the right values.

Pros:

  • Only one video required
  • Simple setup

Cons:

  • Possible artifacts (a green rim around the border)

