Get 3D Coordinates from 2D Image Pixel If Extrinsic and Intrinsic Parameters Are Known

If you have the extrinsic parameters then you have everything you need: you can build a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix, the homography is a 3x3 matrix, and H is defined as

                   H = K*[r1, r2, t],       //eqn 8.1, Hartley and Zisserman

with K being the camera intrinsic matrix, r1 and r2 being the first two columns of the rotation matrix, R; t is the translation vector.

Then normalize by dividing everything by t3 (the third component of the translation vector t).

What happens to column r3, don't we use it? No, because it is redundant: it is the cross-product of the first two columns of the rotation matrix, and in the projection it multiplies the Z coordinate, which is zero for points on the plane.

Now that you have the homography, project the points. Your 2D points are (x, y). Append a 1 so they become homogeneous, and project them as follows:

p          = [x y 1]';                   //homogeneous point (column vector)
projection = H * p;                      //project
projnorm   = projection / projection(3); //normalize so the third component is 1

Hope this helps.
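As a rough illustration of the above, here is a minimal numpy sketch; the intrinsics K, the pose R, t and the pixel are assumed placeholder values. Since H = K*[r1, r2, t] maps points on the Z = 0 world plane into the image, the sketch uses the inverse of H to go from an image pixel back to a 3D point on that plane.

import numpy as np

# Placeholder calibration (assumed values; replace with your own)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])      # intrinsic matrix
R = np.eye(3)                              # world-to-camera rotation
t = np.array([0.0, 0.0, 5.0])              # world-to-camera translation

# H = K*[r1, r2, t], mapping the Z = 0 world plane to the image
H = K @ np.column_stack((R[:, 0], R[:, 1], t))
H = H / H[2, 2]                            # normalize (H[2, 2] is t3 for a standard K)

# Recover the (X, Y) plane coordinates of an image pixel with the inverse mapping
u, v = 400.0, 260.0                        # example pixel (made up)
q = np.linalg.inv(H) @ np.array([u, v, 1.0])
X, Y = q[0] / q[2], q[1] / q[2]            # the 3D point is (X, Y, 0) in world coordinates
print(X, Y)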

Reconstruct 3D Coordinates in Camera Coordinate System from 2D Pixels with Side Condition

You cannot reverse the projection in general: depth and scale information is lost when 3D points are projected onto a 2D image. However, if (as you indicate) all your 3D points lie on the Z = 0 plane, then getting them back from their projections is straightforward: compute the inverse Ki = K^-1 of the camera matrix and apply it to the image points in homogeneous coordinates.

P_camera = Ki * [u, v, 1]'

where [u, v] are the image coordinates, and the apostrophe denotes transposition.
The 3D points you want lie on the rays from the camera centre to the P_camera's. Express both in world coordinates:

P_world = [R|t]_camera_to_world * [P_camera, 1]'

C_world = [R|t]_camera_to_world * [0, 0, 0, 1]'

where [R|t]_camera_to_world is the 4x4 camera-to-world rigid transform (R and t written in homogeneous form).
Now, the set of points on each ray is expressed as

P = C_world + lambda * (P_world - C_world);

where lambda is a scalar (the coordinate along the ray). You can now impose the condition that P(3) = 0 to find the value of lambda that places your points on the Z = 0 plane.
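The whole procedure fits in a short numpy sketch. All numbers below (intrinsics, camera-to-world pose, pixel) are assumed placeholders; the only real content is the back-projection, the change of frame, and solving P(3) = 0 for lambda.

import numpy as np

# Placeholder calibration (assumed values)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
Ki = np.linalg.inv(K)

# 4x4 camera-to-world transform [R|t; 0 1] (assumed pose: camera 2 m above the plane, looking down)
T_cam_to_world = np.eye(4)
T_cam_to_world[:3, :3] = np.diag([1.0, -1.0, -1.0])
T_cam_to_world[:3, 3] = [0.0, 0.0, 2.0]

u, v = 400.0, 260.0                                   # example pixel
P_camera = Ki @ np.array([u, v, 1.0])                 # point on the viewing ray, camera frame

P_world = T_cam_to_world @ np.append(P_camera, 1.0)          # same point, world frame
C_world = T_cam_to_world @ np.array([0.0, 0.0, 0.0, 1.0])    # camera centre, world frame
P_world, C_world = P_world[:3], C_world[:3]

# Ray: P = C_world + lambda * (P_world - C_world); impose P[2] == 0 (the Z = 0 plane)
d = P_world - C_world
lam = -C_world[2] / d[2]
P = C_world + lam * d
print(P)                                              # Z component is 0 up to rounding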

Extracting 3D coordinates given 2D image points, depth map and camera calibration matrices

Nicolas Burrus has created a great tutorial for depth sensors like the Kinect.

http://nicolas.burrus.name/index.php/Research/KinectCalibration

I'll copy & paste the most important parts:

Mapping depth pixels with color pixels

The first step is to undistort rgb and depth images using the
estimated distortion coefficients. Then, using the depth camera
intrinsics, each pixel (x_d,y_d) of the depth camera can be projected
to metric 3D space using the following formula:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.
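A minimal numpy rendering of the quoted formula, assuming the depth map is a 2D array of metric depths indexed as depth[y, x]; the intrinsics are rounded from the Kinect values listed further down.

import numpy as np

# Depth-camera intrinsics, rounded from the calibration below (use your own)
fx_d, fy_d = 594.21, 591.04
cx_d, cy_d = 339.31, 242.74

def depth_pixel_to_3d(x_d, y_d, depth):
    # Project a depth pixel (x_d, y_d) into metric 3D space (depth camera frame)
    z = depth[y_d, x_d]                  # metric depth at that pixel
    x = (x_d - cx_d) * z / fx_d
    y = (y_d - cy_d) * z / fy_d
    return np.array([x, y, z])

# Usage with a fake depth map (1.5 m everywhere), just for illustration
depth = np.full((480, 640), 1.5)
print(depth_pixel_to_3d(320, 240, depth))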

If you are further interested in stereo mapping (the values below are for the Kinect):

We can then reproject each 3D point on the color image and get its
color:

P3D' = R.P3D + T 
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb

with R and T the rotation and translation parameters estimated during
the stereo calibration.
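The reprojection onto the color image can be sketched the same way; R, T and the RGB intrinsics below are rounded from the values quoted further down and are only placeholders.

import numpy as np

# Stereo calibration, rounded from the values below (R is close to identity for the Kinect)
R = np.eye(3)                                    # depth-to-color rotation (approximation)
T = np.array([0.02, -0.0007, -0.011])            # depth-to-color translation (metres)
fx_rgb, fy_rgb = 529.22, 525.56
cx_rgb, cy_rgb = 328.94, 267.48

def project_to_rgb(P3D):
    # Map a 3D point from the depth camera frame to a pixel in the color image
    Pp = R @ P3D + T
    u = Pp[0] * fx_rgb / Pp[2] + cx_rgb
    v = Pp[1] * fy_rgb / Pp[2] + cy_rgb
    return u, v

print(project_to_rgb(np.array([0.1, 0.05, 1.5])))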

The parameters I could estimate for my Kinect are:

Color

fx_rgb 5.2921508098293293e+02 
fy_rgb 5.2556393630057437e+02 
cx_rgb 3.2894272028759258e+02
cy_rgb 2.6748068171871557e+02
k1_rgb 2.6451622333009589e-01
k2_rgb -8.3990749424620825e-01
p1_rgb -1.9922302173693159e-03
p2_rgb 1.4371995932897616e-03
k3_rgb 9.1192465078713847e-01

Depth

fx_d 5.9421434211923247e+02 
fy_d 5.9104053696870778e+02 
cx_d 3.3930780975300314e+02
cy_d 2.4273913761751615e+02
k1_d -2.6386489753128833e-01
k2_d 9.9966832163729757e-01
p1_d -7.6275862143610667e-04
p2_d 5.0350940090814270e-03
k3_d -1.3053628089976321e+00

Relative transform between the sensors (in meters)

R [ 9.9984628826577793e-01, 1.2635359098409581e-03, -1.7487233004436643e-02, 
-1.4779096108364480e-03, 9.9992385683542895e-01, -1.2251380107679535e-02,
1.7470421412464927e-02, 1.2275341476520762e-02, 9.9977202419716948e-01 ]

T [ 1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02 ]

Deprojection from 2D to 3D is wrong regardless of method

It's just UE4 being odd about the rotation matrix. I modeled the problem in Blender 2.79 and everything works perfectly if you rotate the camera with ZYX (ypr) and add a 180 degree offset to the roll (so 180-reported_angle) in the code.

The upshot: know your engine and how it does or does not conform to published standards!
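For concreteness, this is one way the fix described above could look in numpy; the assumption that the engine reports yaw/pitch/roll in degrees and the example angles are illustrative only, not taken from the original post.

import numpy as np

def rot_x(a):                       # rotation about X by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):                       # rotation about Y
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):                       # rotation about Z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# ZYX (yaw-pitch-roll) composition, with the roll flipped by 180 degrees as described above
yaw, pitch, reported_roll = np.radians([30.0, 10.0, 5.0])   # example angles (assumed)
roll = np.radians(180.0) - reported_roll
R_cam = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
print(R_cam)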

Convert 2D point in image (with perspective) to 3D world coordinate

I found a solution here: https://dsp.stackexchange.com/a/46591/46122

I also successfully solved it by using the method mentioned here: Computing x,y coordinate (3D) from image point


