Machine Vision


Lens Distortion Non-linear Distortion and Perspective Distortion

Hello everyone, I'm a member of the vision team on the Intelligent Ground Vehicle team at Cal State Northridge. Our vehicle is an autonomous robot that uses a real-time video feed and laser data as its sensors.

We need a fully undistorted video feed for image analysis, with the lowest possible processing time.

 

Our current vision system suffers from two kinds of distortion that we have to fix: non-linear distortion due to the use of a wide-angle lens (about 144°), and perspective distortion of the projected image, since the camera is mounted on top of our robot at an angle to the ground (about 45°).

I have been using LabVIEW for about 6 months, and have been trying to correct this distortion using Vision Assistant to generate VIs.

 

I'll list my attempts first, and then ask some questions.

First attempt: a picture of a dot grid was taken with our camera, and that picture was corrected by Vision Assistant's nonlinear image calibration. A VI was then generated by Vision Assistant, and we changed the input from a single image to the video feed.

This method worked, but not accurately.

 

Second attempt: we tried to remap the image pixels using a correction function generated through Excel. This method did not work accurately either, and I'm not even sure how to reproduce it, since it was done in the past.

 

So my questions are:

1) What is the best approach to correcting the distortion from our camera, and how do we implement it?

2) What are we doing wrong/right in general?

3) In general, how can I improve any vision VI to achieve faster processing times?

 

 

Everyone's help is very much appreciated,

 

Omar

Cal State Northridge

The Intelligent Ground Vehicle team

 

abojabreal@gmail.com 

 

Message 1 of 7

Generating an undistorted image is nice and looks pretty, but there isn't much benefit to doing it. It is time consuming and unnecessary. What you really need is to be able to select any pixel in your distorted image and know its real-world coordinates. This way, you can find objects and details in the distorted image, then figure out where they would be in the undistorted image.

 

The NI method uses linear splines to interpolate between the grid points.  This is essentially taking each square and stretching it to connect the four points in the new position.  The sides of the square stay straight.  To get a more continuous mapping than the NI routines generate, you can use a cubic spline grid or another interpolation method.  Use the NI method to determine the correct points for the grid.  Remember the NI calibration is undefined outside the grid.  Use the points to generate the interpolation coefficients.  If you use every other grid point as a base point for a spline, you should have plenty of points to do a best fit.  You might be able to extend slightly beyond the grid as well.  Unless you have severe distortion, this method works pretty well.
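
If you want to prototype this idea outside LabVIEW first, here is a minimal Python sketch using best-fit bicubic splines over the grid points. The grid data below is synthetic (a made-up barrel distortion), standing in for your detected dot centers:

import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Synthetic dot grid: real-world points every 10 mm, imaged through a
# fake barrel distortion, standing in for the detected dot centers.
gx, gy = np.meshgrid(np.arange(11) * 10.0, np.arange(9) * 10.0)
wx, wy = gx.ravel(), gy.ravel()              # real-world mm
cx, cy = 50.0, 40.0                          # distortion center
r2 = (wx - cx) ** 2 + (wy - cy) ** 2
px = cx + (wx - cx) * (1 + 2e-5 * r2)        # "pixel" coordinates
py = cy + (wy - cy) * (1 + 2e-5 * r2)

# Best-fit bicubic splines mapping pixel coords -> real-world coords,
# i.e. a smoother replacement for the piecewise-linear grid mapping.
sx = SmoothBivariateSpline(px, py, wx, kx=3, ky=3)
sy = SmoothBivariateSpline(px, py, wy, kx=3, ky=3)

# Any pixel of interest can now be mapped to real-world coordinates.
print(sx.ev(60.0, 45.0), sy.ev(60.0, 45.0))

The two splines give you the pixel-to-real-world mapping directly, which is all you need if you only convert selected pixels rather than correcting whole frames.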

 

Remember grid calibration assumes a single flat surface has been distorted by the camera.  I will assume you are mapping the floor as your surface.  As soon as you have 3D objects on top of your flat surface, everything changes.

 

Bruce

Bruce Ammons
Ammons Engineering
Message 2 of 7

One of the ways the non-linear distortion correction can be applied is by giving it a set of point pairs: pixel positions and their corresponding real-world (or corrected pixel) positions. You can generate these points with the dot pattern, enter them manually, or generate them through equations that define the distortion (if available). The last two of those three approaches should give you the most accuracy.
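
As a rough sketch of the point-pair idea (in Python/OpenCV rather than LabVIEW, with made-up correspondences), you can interpolate a full remap table from the pairs once, offline, and then apply it per frame:

import numpy as np
import cv2
from scipy.interpolate import griddata

# Point pairs (made-up): where each feature sits in the distorted frame
# (src) and where it should land in the corrected image (dst), e.g.
# measured from a dot pattern or entered manually.
src = np.float32([[ 12,  15], [310,  10], [635,  18],
                  [  8, 240], [320, 238], [630, 242],
                  [ 14, 465], [315, 470], [628, 460]])
dst = np.float32([[  0,   0], [320,   0], [640,   0],
                  [  0, 240], [320, 240], [640, 240],
                  [  0, 480], [320, 480], [640, 480]])

h, w = 480, 640
# cv2.remap wants, for every *output* pixel, which *input* pixel to
# sample, so interpolate the inverse mapping over the point pairs.
grid_y, grid_x = np.mgrid[0:h, 0:w].astype(np.float32)
map_x = griddata(dst, src[:, 0], (grid_x, grid_y),
                 method='cubic', fill_value=-1).astype(np.float32)
map_y = griddata(dst, src[:, 1], (grid_x, grid_y),
                 method='cubic', fill_value=-1).astype(np.float32)

# Build the maps once offline; per-frame cost is then just the remap.
frame = np.zeros((h, w, 3), np.uint8)  # stand-in for a camera frame
corrected = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)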

 

I would probably apply a second perspective distortion correction on top of the non-linear correction.
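
For the perspective part, a planar homography computed from four known ground points is the standard approach. A sketch with made-up corner correspondences:

import numpy as np
import cv2

# Four made-up correspondences: pixel corners of a known rectangle on
# the ground -> its corners in a top-down (bird's-eye) view.
src = np.float32([[180, 300], [460, 300], [620, 470], [20, 470]])
dst = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])

H = cv2.getPerspectiveTransform(src, dst)   # planar homography

lens_corrected = np.zeros((480, 640, 3), np.uint8)  # stand-in frame
top_down = cv2.warpPerspective(lens_corrected, H, (400, 300))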

 

In terms of speed, it depends on what you are doing to the image.  Sometimes you can get away with applying the distortion correction to the measurements as opposed to the entire image.  If you have to correct the entire image, there's not really an "efficient" way to do it.
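
For example, rather than warping each frame, you can push only the detected points through the same homography (the detections below are made-up):

import numpy as np
import cv2

# Same made-up homography as in the perspective sketch above.
src = np.float32([[180, 300], [460, 300], [620, 470], [20, 470]])
dst = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
H = cv2.getPerspectiveTransform(src, dst)

# Transform only the measurement points instead of warping the whole
# frame; OpenCV expects shape (N, 1, 2) here.
detections = np.float32([[[320, 400]], [[500, 350]]])
ground = cv2.perspectiveTransform(detections, H)
print(ground)  # positions of just those pixels in the top-down frame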

 

Check out this example (in Robotics) that does some perspective correction on a video feed and then does some image processing: https://decibel.ni.com/content/community/zone/labviewrobotics/blog/2011/06/10/robots-reading-signs

 

 

I'm not sure what kind of camera you have, but you could consider a wide-angle linear lens.  This would eliminate the lens distortion.  We used one in this example: https://decibel.ni.com/content/docs/DOC-13051

 

 

 

 

Message 3 of 7

I'd like to correct the information provided by Bruce. It was correct up until Vision Development Module 2010. You need to update, Bruce 😉

In Vision Development Module 2011, we improved the accuracy and precision of our grid calibration by computing a lens distortion model (Division or Polynomial model) that takes into account radial and tangential non-linear distortion of the lens. The new algorithm also computes the internal parameters of the lens (focal length and optical center), which you can use to determine the relationship of the camera to the object under inspection.
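
For reference, a polynomial model with radial and tangential terms is analogous to the Brown model used in OpenCV. A Python sketch with made-up intrinsics and coefficients (a real calibration, e.g. several grid images fed to cv2.calibrateCamera, would estimate these):

import numpy as np
import cv2

# Polynomial model: radial terms k1, k2, k3 plus tangential p1, p2,
# and intrinsics (focal lengths and optical center). All numbers here
# are made-up placeholders.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.32, 0.11, 0.001, -0.0005, 0.0])  # k1 k2 p1 p2 k3

frame = np.zeros((480, 640, 3), np.uint8)            # stand-in frame
undistorted = cv2.undistort(frame, K, dist)

# Faster per frame: build the remap once offline, then only remap.
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K,
                                         (640, 480), cv2.CV_32FC1)
undistorted2 = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)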

The microplane algorithm that Bruce described is still available, as it can be used to solve applications in which the object you're trying to calibrate is non-planar and can be calibrated by wrapping the grid around the object.

 

A calibration is valid for a specific perspective plane, and will need to be recomputed if the camera moves or the object is in a different plane.

Typically, the calibration process is an offline process. For performance reasons, you don't necessarily want to correct the entire image (although VDM provides that function). As Bruce mentioned, most algorithms, like edge detection, pattern matching, etc., return their data both in pixels and in real-world units, taking into account the calibration information that you learned. For the few algorithms that only provide pixel results, you can use VIs to convert pixels to real-world coordinates. This operation is fast.
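
As an illustration of how cheap per-point conversion is compared with a full-image correction (again in Python/OpenCV, reusing the made-up calibration values from the sketch above):

import numpy as np
import cv2

# Made-up calibration values from the earlier sketch.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.32, 0.11, 0.001, -0.0005, 0.0])

# A handful of pixel-only results: converting just these through the
# lens model is far cheaper than undistorting the whole frame.
pixels = np.float32([[[100, 80]], [[550, 430]]])
ideal = cv2.undistortPoints(pixels, K, dist)  # distortion-free, normalized
print(ideal)  # ground coordinates then follow from your plane mapping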

 

What type of features are you trying to measure in the image? Can it be solved in 2D, or do you need 3D information? How many cameras do you have on the robot?

 

-Christophe

Message 4 of 7

Hello again. First of all, I'd like to thank all of you, Bruce Ammons, roboticsME, and ChristopherC, for the quick response and help.

 

I understand what all of you have suggested, and I will get back with a sample VI to continue the discussion and show whether or not I was successful.

 

 

More info about the camera and lens:

 

Camera box:  http://www.aegis-elec.com/products/foculus-FO124TB-TC.html

 

Lens :  http://computarganz.com/product_view.cfm?product_id=510

 


 

Processing-time-wise: we need the algorithm to be as fast as possible, so it can run concurrently with the processing of data from a laser range finder. Regardless of what we do with the image afterwards, I am only interested in making the distortion correction as fast as it can be.

 

Thank you again; I will reply in a couple of days with results.

 

Omar

Vision Group

The Intelligent Ground Vehicle Team

Cal State Northridge

Message 5 of 7

Christophe,

 

I just installed LV 2011 a couple of days ago.  I haven't found all the new features yet.  The new calibration routines sound great, though.  I will try them out soon.

 

Bruce

Bruce Ammons
Ammons Engineering
Message 6 of 7

We created an example that compares the different calibration models. You can find it here:

<LabVIEW>\examples\Vision\2. Functions\Calibration\Calibration Models Comparison.vi

Message 7 of 7