Machine Vision


Inverse stereo vision (based on CalTech camera calibration toolbox)

Hello everyone,

 

I have a stereo vision application that I am building in LabVIEW, and part of this process has been implementing algorithms from the open-source CalTech camera calibration toolbox in LabVIEW.

 

Specifically, I am trying to invert the stereo triangulation operation found in the CalTech toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/); given 3D world coordinates, I need to find the corresponding 2D image coordinates.

 

The problem is that the current algorithm (2D image --> 3D world) employs a significant amount of inner-product algebra, and I've been unable to decompose these steps when going in the inverse direction (3D world --> 2D image). My question to anyone who has experience with this: is this an ill-posed problem (i.e., does inverting a dot product lead to an infinite number of solutions)? If so, is there a workaround?

 

Below is the MATLAB code from the toolbox that I'm referring to.

 

Thank you in advance for your time,

Alvin Chen

 

 

% --- Known inputs from calibration:
% om:  rotation vector (extrinsic parameter)
% R:   rotation matrix, R = rodrigues(om)
% T:   3x1 translation vector (extrinsic parameter)
% xt:  normalized left image coordinates (3xN, homogeneous [x; y; 1])
% xtt: normalized right image coordinates (3xN, homogeneous [x; y; 1])
% N:   number of point correspondences

 

% --- Stereo triangulation (midpoint of the two rays): find the depths Zt, Ztt
%     along the left and right rays that bring the two rays closest together.
u = R * xt;                        % left rays expressed in the right camera frame

n_xt2  = dot(xt, xt);              % per-column squared norms of the left rays
n_xtt2 = dot(xtt, xtt);            % per-column squared norms of the right rays

T_vect = repmat(T, [1 N]);         % replicate the baseline for all N points

DD = n_xt2 .* n_xtt2 - dot(u, xtt).^2;

dot_uT   = dot(u, T_vect);
dot_xttT = dot(xtt, T_vect);
dot_xttu = dot(u, xtt);

NN1 = dot_xttu .* dot_xttT - n_xtt2 .* dot_uT;
NN2 = n_xt2 .* dot_xttT - dot_uT .* dot_xttu;

Zt  = NN1 ./ DD;                   % depth along each left ray
Ztt = NN2 ./ DD;                   % depth along each right ray

X1 = xt .* repmat(Zt, [3 1]);                      % closest point on the left ray
X2 = R' * (xtt .* repmat(Ztt, [3 1]) - T_vect);    % closest point on the right ray,
                                                   % mapped back to the left frame

% --- 3D coordinates in the left camera reference frame (midpoint of the rays):
XL = 1/2 * (X1 + X2);

% --- 3D coordinates in the right camera reference frame:
XR = R * XL + T_vect;

Message 1 of 4

I would like to know if you have solved your problem.

Message 2 of 4

Hi Alvin6688,

 

Here at Labmetro-UFSC (Brazil) we work on an algorithm for inverse triangulation.

 

Basically, for a user-defined X,Y, several different Zs (3D points [X, Y, Z(1...n)]) are projected onto the image planes with the classical projection matrix: two image planes for stereo vision, but any number of cameras can be used. The projection matrices and distortion coefficients can be determined with the CalTech camera calibration toolbox.
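
 

To make the projection step concrete, here is a minimal MATLAB sketch of the forward (3D --> 2D) direction, reusing the R, T and normalized-coordinate conventions of the triangulation code above. The example points are arbitrary and lens distortion is ignored; the toolbox itself includes routines that additionally apply the intrinsic parameters and the distortion model.

% --- Forward projection sketch (3D --> 2D), illustration only:
XL = [0.1 -0.2  0.05;
      0.0  0.3 -0.10;
      1.5  2.0  1.20];                      % example 3D points in the left camera frame
N  = size(XL, 2);
XR = R * XL + repmat(T, [1 N]);             % same points in the right camera frame

xt_proj  = XL ./ repmat(XL(3,:), [3 1]);    % normalized left coordinates  [x; y; 1]
xtt_proj = XR ./ repmat(XR(3,:), [3 1]);    % normalized right coordinates [x; y; 1]

Pixel coordinates then follow by applying each camera's intrinsic matrix (and the distortion model), so the 3D --> 2D direction is a direct, well-posed projection; none of the dot products in the triangulation code need to be inverted.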

 

The information extracted from the images is analysed to find, simultaneously, the Z for this X and Y and the homologous points. The process is repeated for every X,Y defined by the user, resulting in a regular, organized mesh in world coordinates.

 

The information extracted and compared from the images can be, for example, phase values from phase maps, or spatial and temporal correlation with band-limited pattern projection.
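
 

As a rough sketch of the idea, assuming phase maps are the quantity being compared and taking the left camera frame as the world frame: sweep candidate depths for a given (X, Y), project each candidate into both images, and keep the depth whose samples agree best. The names KK_left, KK_right, phase_L, phase_R, Z_min and Z_max are placeholders, and lens distortion is again ignored.

% --- Depth search sketch for one user-defined (X, Y), illustration only:
Z_candidates = linspace(Z_min, Z_max, 200);     % assumed depth search range
cost = zeros(size(Z_candidates));

for k = 1:numel(Z_candidates)
    PL = [X; Y; Z_candidates(k)];               % candidate 3D point (left camera frame)
    PR = R * PL + T;                            % same point in the right camera frame
    pL = KK_left  * (PL / PL(3));               % homogeneous pixel coordinates (no distortion)
    pR = KK_right * (PR / PR(3));
    phiL = interp2(phase_L, pL(1), pL(2));      % sample the phase maps at the projections
    phiR = interp2(phase_R, pR(1), pR(2));
    cost(k) = abs(phiL - phiR);                 % mismatch between the homologous samples
end

[~, k_best] = min(cost);
Z_best = Z_candidates(k_best);                  % depth whose two projections agree best

This is only the basic loop; the article linked below describes the full method.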

 

More details can be found in this article:

http://www.sciencedirect.com/science/article/pii/S0143816612000759

 

Best regards

 

Message 3 of 4