How to correctly use cv::triangulatePoints()

1. The method used is linear least squares. More sophisticated algorithms exist, but this one is the most common, since some of the alternatives fail in degenerate cases (e.g. when points lie on a plane or at infinity).

The method can be found in Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman (p. 312).
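To see what the linear least-squares method does, here is a minimal numpy-only sketch of the homogeneous DLT triangulation described by Hartley and Zisserman. All names are illustrative, and OpenCV's internal implementation may differ in details:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear least-squares (DLT) triangulation of one point pair.

    P1, P2: 3x4 projection matrices; x1, x2: (x, y) image points.
    Returns the 3D point in homogeneous coordinates (a 4-vector).
    """
    # Each view contributes two rows of the form x*p3 - p1 and y*p3 - p2,
    # where p1, p2, p3 are the rows of that view's projection matrix.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution of A*X = 0 (with |X| = 1) is the right
    # singular vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```

Dividing the returned 4-vector by its last component gives the Euclidean 3D point.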

2. The usage:

cv::Mat pnts3D(1,N,CV_64FC4);
cv::Mat cam0pnts(1,N,CV_64FC2);
cv::Mat cam1pnts(1,N,CV_64FC2);

Fill the two-channel point matrices with the points from the images.

cam0 and cam1 are 3x4 camera projection matrices (intrinsic and extrinsic parameters). You can construct them by multiplying A*RT, where A is the intrinsic parameter matrix and RT the 3x4 rotation-translation pose matrix.
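The A*RT construction can be sketched in numpy as follows; the intrinsics and pose here are made-up values for illustration only:

```python
import numpy as np

# Hypothetical intrinsic matrix A (focal lengths and principal point)
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical pose: identity rotation, translation of 1 unit along x
R = np.eye(3)
t = np.array([[1.0], [0.0], [0.0]])

# 3x4 rotation-translation matrix RT = [R|t]
RT = np.hstack([R, t])

# 3x4 camera projection matrix P = A * RT
P = A @ RT
```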

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

NOTE: pnts3D needs to be a 4-channel 1xN cv::Mat when defined; otherwise an exception is thrown. The result, however, is a cv::Mat(4, N, CV_64FC1) matrix. Really confusing, but it was the only way I could avoid the exception.


UPDATE: As of version 3.0 or possibly earlier, this is no longer true, and pnts3D can also be of type Mat(4,N,CV_64FC1) or may be left completely empty (as usual, it is created inside the function).
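Since the result is a 4xN matrix of homogeneous coordinates, each column still has to be divided by its fourth component to obtain Euclidean 3D points. A minimal numpy sketch (pnts4D stands in for the function's output):

```python
import numpy as np

def homogeneous_to_euclidean(pnts4D):
    """Convert a 4xN homogeneous point matrix to a 3xN Euclidean one."""
    return pnts4D[:3] / pnts4D[3]

# Example: two homogeneous points, with scale factors w = 2 and w = 0.5
pnts4D = np.array([[2.0, 0.5],
                   [4.0, 1.0],
                   [20.0, 5.0],
                   [2.0, 0.5]])
pnts3D = homogeneous_to_euclidean(pnts4D)  # both columns become (1, 2, 10)
```

OpenCV also ships cv::convertPointsFromHomogeneous for this purpose (note that it expects points along rows rather than columns).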

How to properly define arguments to use in triangulatePoints (opencv)?

Finally this works:

cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);

int size0 = m_history.getHistorySize();

for (int i = 0; i < size0; i++)
{
    cv::Point pt1 = m_history.getOriginalPoint(0, i);
    cv::Point pt2 = m_history.getOriginalPoint(1, i);

    pointsMat1.at<double>(0, 0) = pt1.x;
    pointsMat1.at<double>(1, 0) = pt1.y;
    pointsMat2.at<double>(0, 0) = pt2.x;
    pointsMat2.at<double>(1, 0) = pt2.y;

    cv::Mat pnts3D(4, 1, CV_64F);

    cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);
}

Convert OpenCV triangulatePoints output for use in perspectiveTransform using Python

This helped me to figure it out:

http://answers.opencv.org/question/252/cv2perspectivetransform-with-python/

Here is how I implemented it:

Tpoints1 = cv2.triangulatePoints(P_1, P_2, normRshp1, normRshp2)
# Convert from homogeneous coordinates: divide by the fourth row,
# keep x, y, z, and transpose the 3xN result into an N-3 array
Tpt1 = (Tpoints1[:3] / Tpoints1[3]).T
# Wrap in an extra dimension to get shape (1, N, 3),
# which is what perspectiveTransform expects
Tpt1 = np.array([Tpt1])
# Extend the 3x4 camera matrix to 4x4 for input into perspectiveTransform;
# we'll just add zeros and a one in the last row.
# From the TestTriangulation function at
# https://github.com/MasteringOpenCV/code/blob/master/Chapter4_StructureFromMotion/FindCameraMatrices.cpp
P_24x4 = np.resize(P_2, (4, 4))
P_24x4[3, 0] = 0
P_24x4[3, 1] = 0
P_24x4[3, 2] = 0
P_24x4[3, 3] = 1
Tpoints1_projected = cv2.perspectiveTransform(Tpt1, P_24x4)
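To make clear what cv2.perspectiveTransform does with the padded 4x4 matrix, here is a numpy-only sketch of the same projective transform; the function name is illustrative, not part of any library:

```python
import numpy as np

def perspective_transform_3d(pts, M):
    """Apply a 4x4 projective transform M to an (N, 3) array of 3D points.

    Each point is lifted to homogeneous coordinates, multiplied by M,
    and divided by the resulting fourth component.
    """
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4)
    out = pts_h @ M.T                                     # (N, 4)
    return out[:, :3] / out[:, 3:4]
```

With the [0, 0, 0, 1] bottom row added above, the fourth component stays 1, so the transform reduces to applying the rotation-translation part of P_2.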

