Machine Vision

Problems in Calibrating Distortion

Solved!

I am using VDM 2011 SP1 and LabVIEW 2011 SP1 to write a program that calibrates a webcam. However, the internal parameters I obtained from my program were quite different from the correct values I got from the Camera Calibration Toolbox (written in MATLAB), from a calibration program written in OpenCV, and even from the NI Vision Calibration Training Interface. So I want to know how to get an accurate calibration result using VDM.

 

As we know, multiple grid images are needed to calibrate the distortion of a webcam. So I first snapped several grayscale images containing the calibration grid. Then I used the Local Threshold VI and the Particle Filter VI to find all the dots in the images, and passed the results to the Calibration Target to Points VI. I rechecked this step by returning the size of the reference-points array and the images after local thresholding, so I am fairly sure the program found all the dots needed. Finally, I added the reference points from the first (n-1) images to the Learn Camera Model VI. But after adding the last image and finishing the model-learning process, the internal parameters I got were quite ridiculous. cx and cy were quite stable (cx = 488, cy = 290) but quite far (roughly 70 pixels in cy) from the results (cx = 484, cy = 359) I got from the other three methods. fx and fy seemed to go crazy: they ranged anywhere from 1000 to 5000 depending on the set of images, whereas the other three methods agreed on fx = 921, fy = 920.
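For reference, here is what those internal parameters mean under the pinhole camera model, as a small plain-Python sketch (independent of VDM, ignoring lens distortion; the numbers are the values the other three tools agreed on):

```python
# Minimal pinhole-model sketch: the internal parameters fx, fy (focal lengths
# in pixels) and cx, cy (principal point) map a 3-D point in camera
# coordinates to pixel coordinates.
def project(point_3d, fx, fy, cx, cy):
    """Project (X, Y, Z) in camera coordinates to (u, v) in pixels,
    ignoring lens distortion."""
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)

# Using the intrinsics reported by the MATLAB/OpenCV tools above:
u, v = project((0.1, 0.2, 1.0), fx=921, fy=920, cx=484, cy=359)
print(u, v)  # roughly (576.1, 543.0)
```

This is why wildly varying fx/fy matter: every pixel-to-real-world conversion downstream scales with them.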

 

I have tested the same sets of images in both the NI Vision Calibration Training Interface and the program I wrote, and the results showed there was no reason to doubt the correctness of the values obtained from the other three methods.

 

Is there anything wrong in my calibration process? Would the internal parameters work with the Convert Pixel to Real World VI? Are there any code samples for a program like this?

 

Thanks!

 

BayernFans

Message 1 of 11

Additionally, the Internal Parameters output always showed Insufficient Data. I used more than 5 images, and the angle range of the grid was more than 20 degrees. So why does it still show Insufficient Data? Will it affect the internal parameters?

Message 2 of 11

Was it showing insufficient data in the Calibration Training Interface or in your LabVIEW implementation?

Typically, the VI should not return the internal parameters if there is insufficient data, so it seems a little odd to me that you would get both.

You mentioned that you're using VDM 2011 SP1. In VDM 2012, we improved the computation of the internal parameters, which might give you better results. I would recommend checking out the latest version and seeing whether your results improve.

One last point: although our algorithm features an outlier-rejection method, it currently does not have an automatic way of locating the grid in the image. What I would recommend, and what might give better results when you're using the Calibration Training Interface, is to draw a rotated rectangle and adjust it around the grid FOR EACH IMAGE in the step where you select the threshold parameters: start with the first image, adjust the threshold parameters and draw an ROI around the grid, then increase the image number by 1, verify the threshold parameters and readjust the ROI, and so on for all the images.

If you're still having issues, please attach your set of calibration grids and we'll look into it more closely.

 

Hope this helps. Please let me know if the suggestions worked.

 

Best regards,

 

-Christophe

Message 3 of 11

Hi Christophe,

 

The Calibration Training Interface didn't show any Insufficient Data warning. And the internal parameter results were indeed more accurate in the Calibration Training Interface 2012 than in 2011 SP1, even without defining any ROI. But although the Calibration Training Interface is accurate and friendly, to keep the system compact I have to implement this part of the program myself using the VIs in VDM.

 

I may have found where the program goes wrong, but I have not yet found the proper solution. You said it was weird to get internal parameters from insufficient data, so I debugged the program. By tracking every step of the calibration process, I found that after every call to the Learn Camera Model VI, Insufficient Data returned TRUE, which meant that EVERY IMAGE I ADDED TO BE LEARNED WAS INSUFFICIENT. And yet, after the program processed the last image with 'Add Points and Learn?' set to TRUE, the Learn Camera Model VI did return internal parameters (weird and inaccurate ones). Then I tried CHANGING THE ORDER OF THE IMAGES added to the Learn Camera Model VI: with the same set of images, I got different internal parameters when the order was different. I doubt that all the images were actually being used to calibrate the distortion.

 

To help you understand my spaghetti code more easily, I would like to explain the program briefly, in the hope that you can spot the mistake. First, I saved all the needed images to disk. Then I used a FOR LOOP to reload each image into the program, find all the reference points in it, and add those reference points to the Learn Camera Model VI. On the last iteration of the FOR LOOP, 'Add Points and Learn?' was set to TRUE, and the internal parameter results came out.
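In pseudocode, the loop looks roughly like this (VI names as used above; this is an outline of the wiring, not exact VDM syntax):

```
FOR i = 0 to N-1:
    image  = load image i from disk            # a DIFFERENT image each iteration
    binary = Local Threshold VI (image)
    binary = Particle Filter VI (binary)       # keep only the grid dots
    points = Calibration Target to Points VI (binary)
    Learn Camera Model VI (image, points,
                           Add Points and Learn? = (i == N-1))
END FOR
```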

 

Can this method work correctly?

 

Thanks again!

 

BayernFans

 

P.S.: I have attached part of the calibration grids, the internal parameter results from debugging, and a segment of the code.

Message 4 of 11

Hi Christophe,

 

The problem may lie in the Learn Camera Model VI. I found that only the last image affects the internal parameter results, so the results from a whole set of calibration grids were the same as the results from the last image alone. (So this VI really can return results when the input data is insufficient.) It means that the reference points from the first n-1 images don't take effect in the Learn Camera Model VI, even though I added them to the VI with the 'Add Points and Learn?' terminal kept FALSE.

 

Obviously, there are only two possible causes: either I am using the Learn Camera Model VI incorrectly, or there is a bug inside the VI.

 

Cordially,

 

BayernFans

Message 5 of 11
Solution
Accepted by BayernFans

Hello BayernFans,

 

You are using a different image for every iteration while learning the camera model. The template image supplied to Learn Camera Model must be the same for every iteration so that the VI can accumulate all the grid points supplied during each iteration and use them to learn the calibration model. Since you are using a different image for each iteration, only a single set of grid points is used to learn the camera model, which is why it reports insufficient data.

 

The pseudo example code would look like
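(The original attached snippet is not reproduced in this thread; the sketch below reconstructs the corrected loop from the description above, using the VI names from this thread, as an outline rather than exact VDM syntax.)

```
template = load image 0 from disk      # ONE image, reused every iteration
FOR i = 0 to N-1:
    image  = load image i from disk
    binary = Local Threshold VI (image)
    binary = Particle Filter VI (binary)
    points = Calibration Target to Points VI (binary)
    Learn Camera Model VI (template, points,        # same template each time
                           Add Points and Learn? = (i == N-1))
END FOR
# internal parameters are valid after the last iteration
```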

 

Thanks,

Antony

Message 6 of 11

Hi Antony,

 

Thanks for your reply. It's really helpful! The problem was solved when I used a single image to store all the calibration information from all the grids. The internal parameter results are now perfectly correct. I hope this post can help more people.

 

BayernFans

Message 7 of 11

Hello BayernFans.

 

What exactly did you do when you "used a single image to store all the calibration information from all the grids"?

 

 

- Morten

Message 8 of 11

Can anyone reply? I am stuck on this.

I am using the Vision Assistant calibration. When we use multiple images for calibration, should we combine them all into one image or use them as multiple images?

Message 9 of 11

Please use multiple images.

Message 10 of 11