coordinate measurement (using NI 1776C smart camera)

 
I am using an NI 1776C smart camera to track a point, i.e. to find the coordinates of a point marked on plain paper. The paper is then shifted (by some means) from its original position, and I have to measure the distance (displacement) between the two positions of the marked point (before and after the paper is moved).

I also have to acquire continuous online data (in pixels or mm) so that it is possible to control the position of the marked point with a PID controller.

I am using the LabVIEW platform.
Is it possible to acquire data from the camera using an NI 9219/NI 9223?
I kindly request you to give me some idea of how to proceed.
 
 

Hello,

 

First, if you want to measure the distance with only one camera, you need to keep a constant distance between the camera and the measured surface (the paper with the marked point). If either one moves toward or away from the other, the measurements will no longer be valid, since the magnification changes.
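Just to put a number on how sensitive this is, here is a rough Python sketch using the pinhole-camera approximation; the focal length, pixel size, and working distances are made-up values for illustration, not the actual 1776C optics:

```python
# Pinhole-camera approximation: the object-plane size of one pixel scales
# linearly with the working distance, so any change in distance changes the scale.
focal_length_mm = 8.0     # assumed lens focal length (illustrative)
pixel_pitch_mm = 0.0053   # assumed sensor pixel size, 5.3 um (illustrative)

def mm_per_pixel(working_distance_mm):
    """Approximate size of one pixel projected onto the object plane."""
    return pixel_pitch_mm * working_distance_mm / focal_length_mm

scale_nominal = mm_per_pixel(300.0)   # paper at an assumed 300 mm
scale_shifted = mm_per_pixel(310.0)   # paper 10 mm closer to / farther from the lens
error_percent = 100.0 * (scale_shifted - scale_nominal) / scale_nominal
print(f"A 10 mm distance change alters the scale by {error_percent:.1f}%")
```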

 

To pass the pixel position to the controller, you will need to calibrate the system, i.e. find the relation between the pixel shift and the controller's motion. This should be straightforward: after setting up your system, move the point with the controller and note the pixel shift (in both axes). You could also do some statistical analysis of the repeatability of your motion.
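In LabVIEW you would do this with the analysis VIs, but the idea is just a linear fit. Here is a minimal NumPy sketch; the commanded moves, measured pixel shifts, and repeat measurements are hypothetical numbers used only to show the calculation:

```python
import numpy as np

# Hypothetical calibration data: commanded controller moves along one axis (mm)
# and the pixel shifts the camera measured for each move.
commanded_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
measured_px  = np.array([12.1, 24.3, 36.0, 48.5, 60.2])

# Least-squares estimate of the scale (pixels per millimetre), forced through the origin.
px_per_mm = np.sum(measured_px * commanded_mm) / np.sum(commanded_mm ** 2)

# Repeatability: repeat the same 1 mm move several times and look at the spread.
repeat_px = np.array([24.3, 24.1, 24.6, 24.2, 24.4])   # hypothetical repeats
repeat_mm = repeat_px / px_per_mm
print(f"scale: {px_per_mm:.2f} px/mm")
print(f"1 mm move repeatability: {np.std(repeat_mm, ddof=1) * 1000:.1f} um (1 sigma)")
```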

Before this, it is also good to cancel out the lens distortion, because it is not linear and will affect your measurements as the point moves through the camera's field of view. You can use the NI calibration training interface for this. Try to cover the entire field of view of your camera with the calibration grid.
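The NI calibration training interface takes care of this for you; purely as an illustration of what happens under the hood, here is a rough OpenCV sketch (outside LabVIEW) that estimates the distortion coefficients from images of a checkerboard grid. The file names and grid geometry are assumptions:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)     # assumed number of inner checkerboard corners
square_mm = 5.0      # assumed checkerboard square size

# Ideal grid coordinates in the plane of the calibration target (z = 0).
grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(grid)
        img_points.append(corners)
        img_size = gray.shape[::-1]

# Estimate the camera matrix and distortion coefficients, then undistort new frames.
ret, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical live frame
undistorted = cv2.undistort(frame, K, dist)
```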

 

Regarding the method for obtaining the marked point's coordinates, you could simply (auto-)threshold your image. Try to have as much contrast as possible between the background and the point (if the paper is white and the point is black, you should have no problem; if your point is, for example, red, use the green channel, etc.). In the first iteration, you could manually mark the area where the point is positioned, or make some other initial estimate. In every subsequent iteration, you can just threshold the image inside a specified ROI determined by the last known position of the point. This ROI should be adaptive, so that in every iteration it "moves" with the point. I have attached a simple example of getting the point's coordinates (its center of mass, assuming constant density) that can be used as a starting point.
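The attached example is a LabVIEW VI; purely as a text sketch of the same idea (threshold inside an ROI, take the centre of mass, re-centre the ROI), here is some illustrative NumPy code. The threshold rule and margin are assumptions, not what the attached VI does:

```python
import numpy as np

def track_point(image, roi, margin=20):
    """Threshold inside the ROI and return the dark mark's centroid
    (in full-image coordinates) plus an ROI re-centred on it.

    image : 2-D uint8 array (grayscale frame)
    roi   : (x0, y0, x1, y1) search window from the previous iteration
    """
    x0, y0, x1, y1 = roi
    window = image[y0:y1, x0:x1]

    # Crude automatic threshold: a dark mark on bright paper (assumption).
    mask = window < window.mean() * 0.5

    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, roi                       # point lost; keep the old ROI

    # Centre of mass of the thresholded pixels (constant density assumed).
    cx = xs.mean() + x0
    cy = ys.mean() + y0

    # Adaptive ROI: re-centre the search window on the new position.
    h, w = image.shape
    new_roi = (max(int(cx - margin), 0), max(int(cy - margin), 0),
               min(int(cx + margin), w), min(int(cy + margin), h))
    return (cx, cy), new_roi
```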

 

Alternatively, you could also look at the pattern matching and geometric matching methods. If you have LabVIEW 2013, you could also try the new tracking library; you can get the example code (used in real time) from one of the posts here: https://decibel.ni.com/content/blogs/kl3m3n
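Pattern matching essentially correlates a template of the mark against each frame. The LabVIEW examples at the link above are the real reference; the snippet below is only an OpenCV illustration of the idea, with hypothetical file names:

```python
import cv2

# Hypothetical files: a crop of the marked point taken from the first frame,
# and the current frame to search in.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation; the best-scoring location gives the point's position.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

th, tw = template.shape
center = (max_loc[0] + tw / 2.0, max_loc[1] + th / 2.0)
print(f"match score {max_val:.2f} at pixel {center}")
```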

 

Hope this helps.

 

Best regards,

K

 

 

   





"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
0 Kudos
Message 2 of 2
(4,021 Views)