08-04-2016 03:35 AM
Hi,
we have a 3D-vision system that generates a 3D image of an object. We would like to transform the 3D pixel values (x, y, z)
to real-world coordinates, so that a robot can move to these positions.
We have an array of corresponding 3D points (vision coordinates and robot coordinates) to define the transformation. They are given in different coordinate systems (differing in rotation, translation and scale), and they also contain noise, so there is no "perfect" fit between them. What would be the best way to find a transformation between them?
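One standard way to state this fit (assuming the two frames differ by a rotation R, a uniform scale s and a translation t, i.e. a similarity transform, which is what "differing in rotation, translation and scale" describes) is the least-squares problem

\[
\min_{s,\,R,\,t}\ \sum_{i=1}^{n} \bigl\| s\,R\,x_i + t - y_i \bigr\|^2
\quad\text{with}\quad R^\top R = I,\ \det R = 1,
\]

where x_i are the vision coordinates and y_i the corresponding robot coordinates; the noise in the correspondences is what makes a least-squares criterion, rather than an exact solve, appropriate.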
08-04-2016 07:02 AM
Hi Thomas,
I haven't worked on 3D-vision systems, but I have worked on 2D-vision systems,
so I thought I might share some information on these:
- You can find the stereo vision concepts here:
http://zone.ni.com/reference/en-XX/help/372916T-01/TOC28.htm
- You need to calibrate both cameras to obtain a disparity image,
using one of the methods specified in http://zone.ni.com/reference/en-XX/help/372916T-01/nivisionconcepts/stereo_image_correspondence/
- Use the IMAQ Convert Pixel To 3D Coordinates VI to convert pixel values to real-world 3D coordinates.
You can find all the VIs here: http://zone.ni.com/reference/en-XX/help/370281AA-01/imaqvision/stereo_pal/
08-04-2016 07:19 AM
Hi Uday,
Thanks for your reply. Unfortunately, I am not working with stereo vision to create a 3D image; we are using laser-based 3D triangulation. I had already found the VIs that you referred to, but they are specific to stereo vision.
I am just looking for the math to convert a 3D vision coordinate to a 3D world coordinate, based on an array of corresponding 3D points in camera and world coordinates.
It would be nice if I could supply as many calibration points as I want, with more points giving a better fit (more points for the regression).
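A minimal NumPy sketch of such a regression, assuming the two frames differ by a rotation, a uniform scale and a translation, and using Umeyama's closed-form least-squares solution (the function name below is illustrative, not an NI/LabVIEW API):

import numpy as np

def fit_similarity_transform(vision_pts, robot_pts):
    # Least-squares fit of robot ~= s * R @ vision + t over corresponding
    # 3D points (Umeyama 1991). Inputs are (N, 3) arrays, N >= 3, points
    # not all collinear. Returns scale s, rotation matrix R, translation t.
    src = np.asarray(vision_pts, dtype=float)
    dst = np.asarray(robot_pts, dtype=float)

    # Centroids and centered coordinates of both point sets
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Variance of the source set and cross-covariance matrix
    var_src = (src_c ** 2).sum() / len(src)
    cov = dst_c.T @ src_c / len(src)

    # SVD of the covariance; flip one axis if needed so det(R) = +1
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt                           # best-fit rotation
    s = np.trace(np.diag(D) @ S) / var_src   # best-fit uniform scale
    t = mu_dst - s * R @ mu_src              # best-fit translation
    return s, R, t

# Usage with the calibration pairs, then mapping a new vision coordinate:
# s, R, t = fit_similarity_transform(vision_points, robot_points)
# robot_xyz = s * R @ vision_xyz + t

Using more calibration point pairs averages out more of the measurement noise; outliers in the pairs would still need to be screened separately (for example by checking residuals after the fit), since a plain least-squares fit has no outlier rejection.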