09-28-2020 11:45 AM
Hello to all.
I would like to know the smallest defect that my application can reliably inspect.
My application has a 3.2 MP camera, and I would like to know approximately what defect size it can detect. For example, if the material has a defect of at least two millimeters, the application can inspect it; below two millimeters it cannot. I would like to determine that threshold.
How can I estimate that roughly?
Thanks a lot.
09-28-2020 12:15 PM
How is this related to LabVIEW?
09-28-2020 12:20 PM
My application has been built in LabVIEW and its tools.
Thanks
09-28-2020 01:06 PM
@Alvaro.S wrote:
My application has been built in LabVIEW and its tools.
Thanks
Sounds like that doesn't even matter yet. You probably need to perform some "back of the envelope" calculations and answer that yourself before you even begin to program it. Your question has nothing to do with programming - yet.
09-28-2020 01:10 PM
Thanks a lot for your reply.
I have already programmed it.
Could you give me some pointers or a website where I can find information about doing those calculations?
Thanks.
09-28-2020 01:27 PM
You'd probably have to look at some camera specifications.
Probably also some optics calculations relating to field of view, distance, focus, and how much physical space a single pixel represents in the image.
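As a very rough sketch of that calculation (plain arithmetic you can reproduce in LabVIEW or anywhere else; every value below is an assumed example, so substitute the numbers from your own sensor and lens datasheets):

```python
# Back-of-the-envelope field-of-view and pixel-footprint estimate.
# ALL values are assumed examples -- replace them with your own camera/lens data.

sensor_width_mm  = 6.4     # assumed sensor width (roughly a 1/2" sensor)
sensor_height_mm = 4.8     # assumed sensor height
focal_length_mm  = 25.0    # assumed lens focal length
working_dist_mm  = 500.0   # assumed distance from the lens to the part

pixels_x, pixels_y = 2048, 1536   # ~3.2 MP, assumed pixel layout

# Pinhole / thin-lens approximation of the field of view at the part:
fov_x_mm = sensor_width_mm  * working_dist_mm / focal_length_mm
fov_y_mm = sensor_height_mm * working_dist_mm / focal_length_mm

# How much of the part a single pixel covers:
mm_per_pixel_x = fov_x_mm / pixels_x
mm_per_pixel_y = fov_y_mm / pixels_y

print(f"Field of view: {fov_x_mm:.1f} x {fov_y_mm:.1f} mm")
print(f"Pixel footprint: {mm_per_pixel_x:.4f} x {mm_per_pixel_y:.4f} mm/pixel")
```

With these example numbers a pixel covers roughly 0.06 mm of the part; that is the starting point for the defect-size question, not the answer to it.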
09-29-2020 03:07 AM - edited 09-29-2020 03:08 AM
It has been a while since I worked with visual inspection development.
Between your XX MP picture and the object of interest you usually have... yes, an optical system 🙂
And you need to qualify the optical system, including your camera and the distance to the object, so that you can relate a pixel to a distance/area with some known uncertainty...
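In practice that qualification can be as simple as imaging an object of known size and measuring how many pixels it spans. A minimal sketch with assumed example numbers (not a full calibration procedure):

```python
# Empirical check of the pixel-to-millimeter scale: image a reference object
# of known size and measure it in pixels. ALL numbers are assumed examples.

target_width_mm      = 50.0   # assumed: real width of the reference object
measured_width_px    = 812.0  # assumed: its width as measured in the image
measurement_error_px = 2.0    # assumed: +/- uncertainty of that pixel measurement

mm_per_pixel = target_width_mm / measured_width_px

# Propagate the pixel-measurement uncertainty into the scale factor:
scale_uncertainty_mm = mm_per_pixel * (measurement_error_px / measured_width_px)

print(f"Scale: {mm_per_pixel:.4f} mm/pixel (+/- {scale_uncertainty_mm:.5f})")
```

Repeating this at a few positions across the image also tells you how much the scale varies over the field of view.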
09-29-2020 03:49 AM - edited 09-29-2020 03:52 AM
Very simply put, the size of the subject (width and height) divided by the number of pixels (width and height) gives you the size of a pixel on the subject.
That should give some indication of the resolution per pixel.
However, depending on the problem, you might be able to use sub-pixel accuracy. Or you might need at least 50 (or 10 or 500) pixels to distinguish a defect...
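Putting those two pieces together: once you have a millimeters-per-pixel figure, multiply it by however many pixels your algorithm needs on a defect. A sketch with assumed example numbers (the required pixel count depends entirely on your algorithm, contrast, and lighting):

```python
# Turn the pixel footprint into a rough minimum detectable defect size.
# Both values are assumed examples.

mm_per_pixel      = 0.0625   # assumed: from the field-of-view or calibration estimate
pixels_per_defect = 10       # assumed: pixels the algorithm needs to flag a defect reliably

min_defect_mm = mm_per_pixel * pixels_per_defect
print(f"Smallest reliably detectable defect: ~{min_defect_mm:.2f} mm")
```

If that comes out well below the two millimeters you mentioned, you have margin; if it is close, you do not.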
Especially at close distances, the lens will distort at the edges. This is simple perspective: the distance from the lens to the corners is greater than from the lens to the center... So the edges might be blurred. Telecentric lenses don't have this at all, but they are quite expensive and their field of view is limited by the lens opening...
Also, B&W cameras are often sharper than RGB cameras, and have a better SNR. This is because for B&W each pixel has one sensor, while for RGB each pixel has 3 (or 4) sensors in the same surface area. Less light per sensor means more amplification, and more noise...
The trick is to make a system that stays well within the requirements. Because if you get near the edge, you might fall off.
EDIT: And as usual, lighting is neglected completely. Good lighting is the most important aspect of a vision system! But OT.
09-29-2020 05:49 AM
And for very fine imaging, don't forget about the effect of vibration or mechanical movements on the precision of what you're doing.
We design systems for sub-atomic resolution. Obviously not optical, but a lot of the same outside interferences apply.
09-29-2020 07:02 AM
wiebe@CARYA wrote:
EDIT: And as usual, lighting is neglected completely. Good lighting is the most important aspect of a vision system! But OT.
Amen!