02-10-2015 08:16 AM
Hi,
I am not sure whether I can use Point Set Registration: Coherent Point Drift (CPD) to correct the error of my 3D positioning using an overhead camera.
I have already done the intrinsic and extrinsic camera calibration, but I still have errors between the measured position of my tracked object and its true location. (I track a board with 4 blobs, which should give me the x, y, z of the tracking board.)
I would like to find the mapping between my measurements and the true positions, which I could then apply to new points to decrease the positioning error.
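If the residual error varies smoothly over the workspace, a simpler first step than CPD might be to fit an affine correction (rotation/scale/shear plus offset) from measured to true positions by linear least squares. A minimal sketch of that idea; the point coordinates below are made-up placeholder data, not from the original setup:

```python
import numpy as np

# Hypothetical calibration data: measured positions from the camera and
# the corresponding ground-truth positions, as N x 3 arrays.
measured = np.array([[0.0, 0.0, 1.00],
                     [1.0, 0.1, 1.00],
                     [0.1, 1.0, 1.10],
                     [1.1, 1.1, 1.20],
                     [0.5, 0.5, 1.05]])
true_pos = np.array([[0.0, 0.0, 1.0],
                     [1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 1.0],
                     [0.5, 0.5, 1.0]])

# Fit an affine map  true ~= measured @ A + b  by least squares.
# Augmenting with a column of ones estimates b jointly with A.
M = np.hstack([measured, np.ones((len(measured), 1))])   # N x 4
params, *_ = np.linalg.lstsq(M, true_pos, rcond=None)    # 4 x 3
A, b = params[:3], params[3]

def correct(points):
    """Apply the fitted affine correction to new measurements."""
    return points @ A + b

# Residual after correction; small if an affine map explains the error well.
print(np.abs(correct(measured) - true_pos).max())
```

If the error is not affine (e.g. residual lens distortion), this fit will leave a systematic residual, which would justify moving to a nonrigid method like CPD.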
I have been playing around in MATLAB with the CPD code here: https://sites.google.com/site/myronenko/research/cpd
I can find the transform, but I don't know how to use it for new data points. The transformation matrix seems to be a set of weights for each data point it was given. I added new points to the Y set and got new positions for those points that seem to be OK, but I really have no idea how the method handles unseen points or how the transformation can be applied to new data.
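For what it's worth, my understanding of Myronenko's nonrigid CPD is that the transform is T(Y) = Y + G(Y, Y) W, where G is a Gaussian kernel with G_ij = exp(-||y_i - y_j||^2 / (2 beta^2)) and W is the learned weight matrix (one row per point in Y) — which would explain why the "matrix" looks like per-point weights. An unseen point can then be warped by evaluating the kernel between it and the original Y set. A sketch of that math; the names W and beta, and whether the MATLAB toolbox normalizes the data internally before fitting (in which case new points must be normalized the same way), are assumptions to verify against the toolbox:

```python
import numpy as np

def gaussian_kernel(A, B, beta):
    """CPD nonrigid kernel: G_ij = exp(-||a_i - b_j||^2 / (2*beta^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * beta ** 2))

def transform_new_points(Y_new, Y_train, W, beta):
    """Warp unseen points with a fitted nonrigid CPD result.

    Y_train: the original moving point set used in the registration.
    W:       the learned weight matrix (same row count as Y_train).
    beta:    the kernel width used during the fit.
    """
    return Y_new + gaussian_kernel(Y_new, Y_train, beta) @ W

# Example with synthetic values: warping the training points themselves
# reproduces the registered set T(Y) = Y + G(Y, Y) W.
Y = np.random.default_rng(0).normal(size=(5, 3))
W = np.random.default_rng(1).normal(size=(5, 3)) * 0.1
T = transform_new_points(Y, Y, W, beta=2.0)
print(T.shape)
```

Points far from every training point get a kernel value near zero and are left almost unchanged, so the correction only extrapolates well inside the region covered by the calibration data.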
Thanks a lot for your help 🙂
02-19-2015 02:49 AM
Hi zeinab.t,
I am not quite sure what you exactly want to know.
From your explanations, I think the root issue is an incorrect calibration of the vision system. To go a bit deeper:
Calibration is most frequently used in stereo vision systems to calibrate two cameras so that a correct 3D image is obtained during acquisition. You wrote that you are using only one camera. So how do you calibrate the camera? Which functions do you use? How is your vision application set up? A small sketch may be helpful. What software do you use?
About the tool you linked, I cannot say anything, since I do not know how it works in the background.
I think we should keep our eyes on the root cause of the problem, so that you do not need any transformation steps that could introduce additional errors into the measurements.
Best regards,
Melanie