Machine Vision

What works well for calibrating pixels into real world measurements?

I am trying to convert my images from pixels into real-world measurements and I was wondering if anyone had good advice on how to do this. I am using an IR camera and an IR light source. I was thinking of simply making a piece of paper with two vertical lines a known distance apart and making sure it appears in the same image. I am not sure how well IMAQ Convert Pixel to Real World works. Should the lines be narrow or broad? Is there a better way to do this? Thank you in advance for any advice.
Message 1 of 3
Hey KBaker,
The IMAQ Convert Pixel to Real World VI works well for what you are trying to accomplish. This VI transforms pixel coordinates into real-world coordinates according to the calibration information contained in the image. Calibration information is attached to an image by IMAQ Learn Calibration Template, IMAQ Set Simple Calibration, or IMAQ Set Calibration Info. Once you have set up the calibration, you can apply that calibration information to every image from then on, so the sheet of paper with the lines does not need to appear in your measurement images.

You can use the two lines you mentioned, or any other feature whose real-world dimensions you know, as the basis for your calibration. In a simple calibration you supply a grid descriptor that states the pixel-to-real-world ratio; this creates a mapping for the whole image, which lets you calculate the real-world distance between any two points. A good example program that does this can be found in the NI Example Finder under Examples\Vision\2. Functions\Calibration\Perspective Calibration Example.vi.
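
If it helps to see the idea outside of LabVIEW, here is a minimal sketch in plain Python/NumPy of what a simple point-to-point calibration boils down to. This is not the NI Vision API (the IMAQ VIs store and apply this information for you); the pixel coordinates and the 50 mm spacing are made-up values for illustration:

```python
import numpy as np

# Hypothetical reference: two marks on the calibration sheet, 50.0 mm apart,
# located at these pixel coordinates in the calibration image.
p1_px = np.array([212.0, 480.0])
p2_px = np.array([612.0, 480.0])
known_distance_mm = 50.0

# Pixel-to-real-world ratio (the role the grid descriptor plays in IMAQ terms).
mm_per_pixel = known_distance_mm / np.linalg.norm(p2_px - p1_px)

def distance_mm(a_px, b_px):
    """Real-world distance between two pixel coordinates, assuming the same
    scale in X and Y and negligible lens distortion."""
    return np.linalg.norm(np.asarray(b_px, float) - np.asarray(a_px, float)) * mm_per_pixel

# Later images reuse the stored ratio, so the calibration sheet no longer
# needs to appear in them.
print(distance_mm((100, 100), (500, 100)))  # 400 px * 0.125 mm/px = 50.0 mm
```

Once the calibration is attached to the image, IMAQ Convert Pixel to Real World performs the equivalent conversion for you, including corrections a single ratio cannot capture.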

I hope this helps. Please let me know if you have any further questions regarding this calibration or converting to real world measurements. Thanks and have a great day.

Regards,
DJ
Applications Engineer
National Instruments
Message 2 of 3
I'm still new to LabVIEW, but I can make recommendations about real-world calibration and about IR cameras:

1. If you want to calibrate an entire 2D field, a target with an array of dots is preferable to a simple 1D (point-to-point) calibration. Use non-linear calibration when possible because optical distortions are nonlinear; this is especially true of lenses with short focal lengths. (See the sketch after this list for one way to do a dot-grid calibration.)

2. If by IR you mean thermal IR (?), then consider using materials of different emissivity to create your calibration target.

3. Assuming a calibration routine can handle either light-on-dark or dark-on-light image polarity, your calibration target can be lit from the back (silhouette) or lit from the front (reflected light). Backlighting is generally preferred. For highest accuracy you'll want a NIST-traceable target and a collimated light source.
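
To make point 1 concrete, here is a rough sketch of a nonlinear 2D calibration against a dot-grid target. It uses OpenCV rather than NI Vision (IMAQ Learn Calibration Template plays the analogous role in LabVIEW), and the grid size, dot spacing, and file names are assumptions for the example:

```python
import cv2
import numpy as np

grid_size = (7, 6)        # dot centers per row and per column (symmetric grid)
dot_spacing_mm = 10.0     # assumed center-to-center spacing on the target

# Real-world coordinates of the dot centers, all on the z = 0 plane.
objp = np.zeros((grid_size[0] * grid_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:grid_size[0], 0:grid_size[1]].T.reshape(-1, 2) * dot_spacing_mm

obj_points, img_points = [], []
image_size = None
for fname in ["target_01.png", "target_02.png", "target_03.png"]:  # hypothetical files
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    image_size = img.shape[::-1]  # (width, height)
    found, centers = cv2.findCirclesGrid(img, grid_size,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

# Solving for the camera matrix and distortion coefficients is what corrects
# the nonlinear lens distortion mentioned above.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error (px):", rms)
```

With the distortion model in hand you can undistort images (cv2.undistort) before measuring, which is analogous to the correction NI Vision applies from a learned calibration template.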

The time and effort required for calibration generally depends on the accuracy you require. If the object you need to measure appears in the same area of the image, and if you're only making a point-to-point measurement in one direction, then a simple pixels-to-inches or pixels-to-millimeters conversion may be sufficient. If the object isn't registered (located) in the same region of the image, or if it can appear slightly rotated, then consider using a proper 2D calibration with a dot target.
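
On that last point, if your scene is a flat plane viewed at an angle (the situation the Perspective Calibration Example mentioned above addresses), a planar mapping from a few known reference points already handles translation, rotation, and perspective, though not lens distortion. A rough sketch using OpenCV, with made-up reference coordinates:

```python
import cv2
import numpy as np

# Pixel coordinates of four fiducials on the calibration sheet (hypothetical).
pixel_pts = np.array([[102, 87], [598, 95], [610, 472], [95, 460]], dtype=np.float32)
# Their known real-world positions on the sheet, in millimetres.
world_pts = np.array([[0, 0], [100, 0], [100, 80], [0, 80]], dtype=np.float32)

# Homography mapping any pixel on the sheet's plane to real-world coordinates.
H, _ = cv2.findHomography(pixel_pts, world_pts)

def pixel_to_world(pt_px):
    """Convert a single pixel coordinate to real-world mm on the target plane."""
    p = np.array([[pt_px]], dtype=np.float32)  # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

a = pixel_to_world((102, 87))
b = pixel_to_world((598, 95))
print("distance:", np.linalg.norm(b - a), "mm")  # ~100 mm
```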
Message 3 of 3