Machine Vision


edge detection at high frame rates

Hi,

 

I am currently looking into buying a USB 3.0 camera to perform on-line metrology. In particular, I want to track the distance between two edges (black/white contrast) with reasonable speed. There are many different cameras on the market with various frame rates, and I was wondering whether it makes sense to buy, for instance, a 9 MP camera with a maximum frame rate of 20 Hz. Can LabVIEW process the images at this speed?

 

 

Also, it is difficult to judge which makes more sense for my particular application:

- acquire high-resolution images (say >9 MP) at a low frame rate (5 Hz or so)

- acquire low-resolution images (e.g. 4 MP) at very high frame rates (100 Hz), detect the edges, and then average the resulting distances.

 

How do I judge which of these two solutions will give me the best resolution for my length measurement?

 

Thanks for your feedback!

 

Ture

 

 

Message 1 of 2

I would recommend benchmarking your algorithm on synthetic images saved on disk to see what kind of performance you're getting on your specific hardware.

Vision Assistant features a performance meter that you can use to get benchmarks of the algorithm running on your PC.

If you need to deploy your solution on a real-time target, you will need to generate the code and run it on the target to benchmark it.
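
Not a LabVIEW/NI Vision example, but if you want a rough sense of the numbers before committing to a camera, you can time a comparable edge-detection step on a synthetic image with something like the Python/OpenCV sketch below. The image size, filter choices, and frame count are arbitrary assumptions for illustration; the NI Vision algorithm on your hardware will perform differently, which is why the Vision Assistant performance meter is the real test.

import time
import numpy as np
import cv2

HEIGHT, WIDTH = 2048, 2048   # assumed ~4 MP mono frame; adjust to your camera
N_FRAMES = 100               # number of timed iterations

# Synthetic test image: dark left half, bright right half, plus a little noise
frame = np.full((HEIGHT, WIDTH), 30, dtype=np.uint8)
frame[:, WIDTH // 2:] = 220
noise = np.random.randint(0, 10, size=(HEIGHT, WIDTH), dtype=np.uint8)
frame = cv2.add(frame, noise)

start = time.perf_counter()
for _ in range(N_FRAMES):
    # Smooth, then find the column of strongest horizontal gradient in each row
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    grad = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
    edge_cols = np.argmax(np.abs(grad), axis=1)
elapsed = time.perf_counter() - start

print(f"{N_FRAMES / elapsed:.1f} frames/s "
      f"({elapsed / N_FRAMES * 1000:.1f} ms per frame) on this machine")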

 

> How do I judge which of these two solutions will give me the best resolution for my length measurement?

 

For a given field of view, the higher-resolution camera will give you the better measurement resolution.

Averaging results from lower-resolution images will not increase the resolution: averaging reduces noise, but it does not change the spatial sampling of each image.

 

To determine the camera resolution you need, start from the resolution you want for your measurement, relative to your entire field of view.

Say you want your distance measurement at a resolution of 0.1 mm; then you want at least 2 pixels to represent those 0.1 mm in the resulting image. Multiply that pixel density (2 pixels per 0.1 mm) by the size of your object, and you get the number of pixels your camera needs to make the measurement at the resolution you want. The algorithm returns results with sub-pixel accuracy, but it is better to satisfy the 2-pixels-per-smallest-feature rule in the first place.
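
As a quick illustration of that arithmetic (the field-of-view and resolution values here are made-up example assumptions, not numbers from your application):

# Back-of-the-envelope pixel budget.
# Rule of thumb: the smallest feature you want to resolve should span >= 2 pixels.
field_of_view_mm = 50.0      # assumed object/field-of-view size along the measurement axis
wanted_resolution_mm = 0.1   # smallest distance change you want to resolve
pixels_per_feature = 2       # at least 2 pixels per smallest feature

pixels_needed = field_of_view_mm / wanted_resolution_mm * pixels_per_feature
print(f"Need at least {pixels_needed:.0f} pixels along the measurement axis")
# -> 1000 pixels across a 50 mm field of view for 0.1 mm resolution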

Resolution will also be affected by the quality of the lens you choose.

 

This article explains imaging resolution very well:

http://www.edmundoptics.com/technical-resources-center/imaging/resolution/

 

Hope this helps.

 

Christophe

Message 2 of 2