Machine Vision


Move the origin of a vision system in LabVIEW

Hi

 

I'm currently working with a 3D vision system (Kinect), and I want to merge the coordinates from my manipulator arm with the Kinect's pixel coordinates.

 

Can this be done with LabVIEW only?

 

The basic idea is shown in this picture: simply move the Kinect's origin down to the manipulator base, or the other way around (manipulator -> Kinect origin).


I know I have to translate and rotate the image to where I need it to be, but I'm uncertain how to do this with real metric values.

I've read up on the details but struggle a bit with the big picture.

 

The idea seems fairly simple: just move the camera origin, for instance 1 m down (-z) and 2 m away (x), at an angle of X rad from where the camera is placed.

 

How can I do what I suggested? Please help 🙂

P.S. I don't have much programming background other than LabVIEW...

 

Message 1 of 3

Hello,

 

The math part is simple. One possible solution is to use a homogeneous transformation matrix (consisting of a rotation matrix and a translation vector). The order of the sequential transformations is important and defines the values of the transformation matrix.
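To make this concrete, here is a minimal sketch of such a matrix in Python/NumPy (the math carries over directly to LabVIEW array functions; the rotation axis and the example values are my assumptions, since the original post only says "X rad angle"):

```python
import numpy as np

def homogeneous_transform(theta, tx, ty, tz):
    """4x4 homogeneous transform: rotation about the y-axis by theta,
    followed by translation (tx, ty, tz). Which axis the camera is
    tilted about is an assumption here."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [  c, 0.0,   s,  tx],
        [0.0, 1.0, 0.0,  ty],
        [ -s, 0.0,   c,  tz],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Example with the numbers from the original post: 2 m in x, 1 m down (-z),
# and a placeholder tilt of 0.3 rad.
T_cam_to_base = homogeneous_transform(0.3, 2.0, 0.0, -1.0)

# A 3D point measured by the Kinect, written as [x, y, z, 1]:
p_cam = np.array([0.5, 0.2, 1.5, 1.0])

# The same point expressed in the manipulator base frame:
p_base = T_cam_to_base @ p_cam
print(p_base[:3])
```

Chaining works by matrix multiplication, and that is exactly where the order matters: `T2 @ T1` applies `T1` first.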

 

Take a look at one of the comments at the following link (the 4th comment) for a bit more information:

 

https://decibel.ni.com/content/blogs/kl3m3n/2013/09/20/iterative-closest-point-for-3d-alignment-in-l...

 

There is also more in-depth information available on this - books, Google, papers...

 

First, you need to calibrate your system (obtain the rotation angles and translation values that relate one coordinate system to the other).

How are you planning to do this?
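One common way to obtain these values (a sketch, not necessarily what you will end up doing): move a known point, e.g. the gripper tip, to at least three non-collinear positions, record its coordinates in both the robot frame and the Kinect frame, and fit the rigid transform with the SVD-based Kabsch method:

```python
import numpy as np

def fit_rigid_transform(P_cam, P_robot):
    """Best-fit R, t such that P_robot ~= R @ p_cam + t for each point,
    using the Kabsch/SVD method. Inputs are (N, 3) arrays of
    corresponding points, N >= 3 and non-collinear."""
    c_cam, c_rob = P_cam.mean(axis=0), P_robot.mean(axis=0)
    H = (P_cam - c_cam).T @ (P_robot - c_rob)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_rob - R @ c_cam
    return R, t
```

The resulting R and t are exactly the rotation matrix and translation vector that go into the homogeneous transformation matrix above.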

 

Best regards,

K

 

 


https://decibel.ni.com/content/blogs/kl3m3n



"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
Message 2 of 3

Thanks for the fast reply 🙂 

 

I'll do it by moving the Kinect origin down to the manipulator origin, as that seems the easiest for now, just as the picture in the original post shows.

 

 

First I need a perspective calibration, done by placing a chessboard on the same surface as the manipulator (since the camera is not perpendicular to the manipulator surface), using a ready-made program (e.g. RGBDemo: http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration).

 

By doing this I'll get the intrinsic parameters.
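For comparison, the same chessboard step can be sketched with OpenCV's calibration functions (the 9x6 inner-corner pattern, 25 mm square size, and the calib_*.png filenames below are placeholder assumptions):

```python
import glob
import cv2
import numpy as np

# Chessboard with 9x6 inner corners and 25 mm squares (placeholder values).
pattern = (9, 6)
square = 0.025  # metres

# 3D corner positions on the flat board, z = 0.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K and dist are the intrinsics; rvecs/tvecs give the board pose per view,
# i.e. the extrinsics of the board relative to the camera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```

Note that the per-view rvecs/tvecs already relate the board to the camera, which touches on the extrinsics question below.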

 

I'm not sure how I'll relate the coordinate systems, but I guess there's a way to use fixed calibration points for the end effector (gripper) together with the chessboard?

Or does that happen automatically with the chessboard calibration?

 

After that I can use the homogeneous transformation matrix you mentioned in your forum link to move the origin down (does it find the extrinsic parameters itself?), and then use ICP from PCL (http://pointclouds.org/documentation/tutorials/interactive_icp.php#interactive-icp) to tune the precision.

 

ICP is more about combining two pictures (point clouds) than about relating one picture to a real-life system with polar coordinates, though. So I guess I have to find a way to calibrate using fixed positions from the real-life system.
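The PCL tutorial is C++; purely as an illustration of the refinement step, here is roughly what ICP looks like with the Open3D Python library (Open3D is not mentioned in this thread, and the filenames and the 2 cm correspondence distance are assumptions):

```python
import numpy as np
import open3d as o3d

# source: a Kinect point cloud; target: a reference cloud already expressed
# in the manipulator frame (placeholder filenames).
source = o3d.io.read_point_cloud("kinect_scan.pcd")
target = o3d.io.read_point_cloud("reference.pcd")

# Initial guess from the chessboard / fixed-point calibration (4x4 matrix).
T_init = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.02,  # 2 cm search radius, tune to the data
    init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # refined camera-to-manipulator transform
```

Either way, ICP only refines an initial estimate, so the fixed-position calibration still has to supply T_init first.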

 

 

Best regards

Kadmir 

 

Message 3 of 3