Extracting 3D Coordinates Given 2D Image Points, Depth Map and Camera Calibration Matrices

Nicolas Burrus has created a great tutorial for depth sensors like the Kinect.

http://nicolas.burrus.name/index.php/Research/KinectCalibration

I'll copy & paste the most important parts:

Mapping depth pixels with color pixels

The first step is to undistort the RGB and depth images using the
estimated distortion coefficients. Then, using the depth camera
intrinsics, each pixel (x_d, y_d) of the depth camera can be projected
into metric 3D space using the following formula:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.
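
A minimal sketch of this back-projection in Python, assuming a NumPy depth map in meters (the function name depth_pixel_to_3d is just for illustration):

import numpy as np

def depth_pixel_to_3d(x_d, y_d, depth, fx_d, fy_d, cx_d, cy_d):
    """Back-project one depth pixel to a metric 3D point in the depth camera frame."""
    z = depth[y_d, x_d]              # depth value (meters) at that pixel
    x = (x_d - cx_d) * z / fx_d      # P3D.x
    y = (y_d - cy_d) * z / fy_d      # P3D.y
    return np.array([x, y, z])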

If you are further interested in the stereo mapping between the two sensors (Kinect values are listed below):

We can then reproject each 3D point onto the color image and get its
color:

P3D' = R.P3D + T 
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb

with R and T the rotation and translation parameters estimated during
the stereo calibration.
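
A sketch of that reprojection, assuming R is a 3x3 NumPy rotation matrix, T a 3-vector, and p3d a point produced by the previous step (the function name is illustrative):

import numpy as np

def project_to_rgb(p3d, R, T, fx_rgb, fy_rgb, cx_rgb, cy_rgb):
    """Transform a 3D point into the color camera frame and project it."""
    p = R @ p3d + T                      # P3D' = R.P3D + T
    u = p[0] * fx_rgb / p[2] + cx_rgb    # P2D_rgb.x
    v = p[1] * fy_rgb / p[2] + cy_rgb    # P2D_rgb.y
    return u, v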

The parameters I could estimate for my Kinect are:

Color

fx_rgb 5.2921508098293293e+02 
fy_rgb 5.2556393630057437e+02 
cx_rgb 3.2894272028759258e+02
cy_rgb 2.6748068171871557e+02
k1_rgb 2.6451622333009589e-01
k2_rgb -8.3990749424620825e-01
p1_rgb -1.9922302173693159e-03
p2_rgb 1.4371995932897616e-03
k3_rgb 9.1192465078713847e-01

Depth

fx_d 5.9421434211923247e+02 
fy_d 5.9104053696870778e+02 
cx_d 3.3930780975300314e+02
cy_d 2.4273913761751615e+02
k1_d -2.6386489753128833e-01
k2_d 9.9966832163729757e-01
p1_d -7.6275862143610667e-04
p2_d 5.0350940090814270e-03
k3_d -1.3053628089976321e+00

Relative transform between the sensors (in meters)

R [ 9.9984628826577793e-01, 1.2635359098409581e-03, -1.7487233004436643e-02, 
-1.4779096108364480e-03, 9.9992385683542895e-01, -1.2251380107679535e-02,
1.7470421412464927e-02, 1.2275341476520762e-02, 9.9977202419716948e-01 ]

T [ 1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02 ]
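
Packed into Python variables, these values can be fed straight into the two sketches above:

import numpy as np

# Depth camera intrinsics (from the table above)
fx_d, fy_d = 5.9421434211923247e+02, 5.9104053696870778e+02
cx_d, cy_d = 3.3930780975300314e+02, 2.4273913761751615e+02

# Color camera intrinsics
fx_rgb, fy_rgb = 5.2921508098293293e+02, 5.2556393630057437e+02
cx_rgb, cy_rgb = 3.2894272028759258e+02, 2.6748068171871557e+02

# Stereo extrinsics: rotation R and translation T (meters)
R = np.array([[ 9.9984628826577793e-01,  1.2635359098409581e-03, -1.7487233004436643e-02],
              [-1.4779096108364480e-03,  9.9992385683542895e-01, -1.2251380107679535e-02],
              [ 1.7470421412464927e-02,  1.2275341476520762e-02,  9.9977202419716948e-01]])
T = np.array([ 1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02])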

Reconstructing position from depth using OpenCV

Your question seems related to the answer "Extracting 3D coordinates given 2D image points, depth map and camera calibration matrices" above.

Here, you have a direct correspondence between 2D and 3D (strictly, 2.5D) values. By undistorting both the depth map and the 2D image, you can use the focal length and the measured depth to do the inverse mapping from 2D pixels to 3D world coordinates.

Get 3D coordinates from 2D image pixel if extrinsic and intrinsic parameters are known

If you have the extrinsic parameters then you have everything you need: you can build a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix; the homography is a 3x3 matrix H defined as

                   H = K*[r1, r2, t],       //eqn 8.1, Hartley and Zisserman

with K the camera intrinsic matrix, r1 and r2 the first two columns of the rotation matrix R, and t the translation vector.

Then normalize H by dividing every entry by t3, the third component of t.

What happens to column r3; don't we use it? No, because it is redundant: it is the cross product of the first two columns of the pose. (This homography maps points lying on the world plane z = 0, which is why r3 can be dropped.)

Now that you have the homography, project the points. Your 2D points are (x, y); append a homogeneous coordinate of 1 so they become 3-vectors, then project them as follows:

p          = [x; y; 1];                    // homogeneous 2D point (column vector)
projection = H * p;                        // project
projnorm   = projection / projection(3);   // normalize by the homogeneous coordinate
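
For reference, the same two steps (build H, then project) as a small Python/NumPy sketch; K, R and t are assumed to be already-known calibration values, and the function names are illustrative:

import numpy as np

def homography_from_pose(K, R, t):
    """H = K * [r1, r2, t]   (eqn 8.1, Hartley and Zisserman)."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return H / H[2, 2]               # normalize (H[2, 2] equals t3 here)

def project_point(H, x, y):
    p = np.array([x, y, 1.0])        # homogeneous 2D point
    q = H @ p
    return q / q[2]                  # normalize by the homogeneous coordinate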

Hope this helps.

How do I derive the world coordinate of a point, given its 2D image plane location and depth value?

The camera matrix is:

[[fx 0  cx]
 [0  fy cy]
 [0  0  1 ]]

where fx and fy are the focal lengths in pixels. Since fx and fy are in pixels, you can just use similar triangles to get the x and y coordinates:

x / z = (x_pixel - cx) / fx
y / z = (y_pixel - cy) / fy

or

x = (x_pixel - cx)/fx * z
y = (y_pixel - cy)/fy * z

You may have to multiply by -1 depending on how your coordinate systems are defined.
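
A minimal sketch of this back-projection, assuming K is the 3x3 camera matrix above as a NumPy array and z is the measured depth at the pixel (the name backproject is illustrative):

import numpy as np

def backproject(K, x_pixel, y_pixel, z):
    """Lift a pixel with known depth z to camera coordinates via K^-1."""
    ray = np.linalg.inv(K) @ np.array([x_pixel, y_pixel, 1.0])
    return ray * z                   # = ((x_pixel-cx)/fx*z, (y_pixel-cy)/fy*z, z)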


