Machine Vision


Advanced Golden Matching

Solved!

Hi Experts! 🙂

 

I have a big problem and little time to solve it alone, so I need some help. My problem is the following: I have an image with a nonlinear distortion (it may have been produced by a scanner), and I would like to compare the scanned image with the original one using golden template matching.

The result image is always full of bright and dark defects. The golden template position is, of course, determined by pattern matching.
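For illustration only (this is a pure-NumPy sketch of the idea, not NI's implementation): golden template comparison essentially boils down to a signed subtraction followed by two thresholds, which is why any residual misalignment or distortion between the images shows up as spurious defects along every edge:

```python
import numpy as np

def golden_template_compare(inspect_img, template, bright_thresh=40, dark_thresh=40):
    """Subtract the golden template from the inspection image and
    threshold the signed difference into bright and dark defect masks."""
    diff = inspect_img.astype(np.int16) - template.astype(np.int16)
    bright_defects = diff > bright_thresh   # inspection brighter than template
    dark_defects = diff < -dark_thresh      # inspection darker than template
    return bright_defects, dark_defects

# Even a one-pixel shift between template and inspection image
# produces fake defects along the edge:
template = np.zeros((8, 8), np.uint8)
template[:, 4:] = 255                       # vertical edge at column 4
shifted = np.roll(template, 1, axis=1)      # simulate misalignment/distortion
bright, dark = golden_template_compare(shifted, template)
```

With a perfectly aligned pair both masks would be empty; here the shift alone yields a column of bright and a column of dark "defects".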

 

Is there any way to correct this distortion before the matching phase? Or can you suggest another way to reach my goal?

Thank you very much!

 

---
+++ In God we believe, in Trance we Trust +++
[Hungary]
Message 1 of 8

Hi Durnek!

 

Sorry for the late reply. I would like to ask you for some more details. Could you please send the pictures you are comparing and the result image? Additionally, could you provide the code you are using, or explain it in more detail? Are you using LabVIEW or another tool?

 

Thanks!

 

Looking forward to hearing from you.

Zenon Kuder
Applications Engineering
National Instruments
Message 2 of 8
Solution
Accepted by topic author D60

Hi,

 

Unfortunately, unlike the Geometric Matching algorithm, the Golden Template Comparison algorithm doesn't compensate for distortion. To use it on distorted images, or on images that have been acquired under different conditions, you need to calibrate both images and correct them before running the comparison.

 

Christophe

Message 3 of 8

Maybe this is the solution! Is there a procedure to "learn" the distortion and apply it to the golden template?

---
+++ In God we believe, in Trance we Trust +++
[Hungary]
Message 4 of 8

Yes. The Vision Development Module contains calibration functions you can use for this purpose, including functions to learn non-linear distortion. They're located in the Vision Utilities>>Calibration palette.

You can associate a set of points in the image with real-world coordinates and use that to calibrate the image (IMAQ Learn Calibration Template). Once the image is calibrated, many algorithms return data in both pixel and real-world coordinates (for example, the Pattern Matching and Edge Detection VIs). In some cases, where the operation or algorithm works on points only, you don't need to correct the image; it is enough to transform the coordinates of the points of interest. But to get accurate real-world information that involves, for example, area, or in your case to use golden template comparison, you need to correct the image before applying the algorithm. Use IMAQ Correct Calibrated Image for that.
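Outside of LabVIEW, the same learn-then-correct idea can be sketched in a few lines of NumPy (the function names here are illustrative, not the IMAQ API): fit a mapping from corrected coordinates to distorted coordinates using the known point correspondences, then build the corrected image by sampling the distorted one through that mapping:

```python
import numpy as np

def fit_poly_warp(src_pts, dst_pts):
    """Least-squares fit of a second-order 2-D polynomial mapping
    src (corrected) coordinates -> dst (distorted) coordinates."""
    x, y = src_pts[:, 0], src_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    coeffs, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return coeffs                           # shape (6, 2)

def correct_image(distorted, coeffs, shape):
    """Build the corrected image by sampling the distorted image at the
    polynomial-mapped source of every output pixel (nearest neighbour)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    A = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    mapped = A @ coeffs                     # where each output pixel comes from
    mx = np.clip(np.rint(mapped[:, 0]), 0, distorted.shape[1] - 1).astype(int)
    my = np.clip(np.rint(mapped[:, 1]), 0, distorted.shape[0] - 1).astype(int)
    return distorted[my, mx].reshape(h, w)
```

The second-order polynomial is one of the simplest models that can absorb mild nonlinear (e.g. scanner) distortion; a real calibration would also interpolate between pixels rather than use nearest neighbour.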

Calibration examples are located here:

National Instruments\LabVIEW\examples\Vision\2. Functions\Calibration\Nonlinear Calibration Example.llb\Nonlinear Calibration Example.vi

 

Hope this helps.

 

Christophe

Message 5 of 8

As I understand it, calibration can be used for my purpose:

The reason the golden comparison reports many fake defects is that my inspection image has a nonlinear distortion. (I want to compare printed text to the original.)

So first I have to calibrate, or rather apply the learned calibration to my template image, so that it has the same distortion.

After this step, the golden matching won't report fake defects.

Am I right?

----

To learn the distortion, I have to print a calibration sheet and learn from it?

---
+++ In God we believe, in Trance we Trust +++
[Hungary]
Message 6 of 8

If the two images were not taken with the same camera, or if the camera settings changed (you mentioned one is distorted and the other is not), then I would say both images need to be calibrated and corrected before you pass them to the algorithm.

The Vision Development Module provides three ways of learning a calibration:

1) Simple Calibration, where you just specify a correspondence between pixels and real-world distances, i.e. 10 pixels represent x millimeters. This simple method does not take distortion into account, so it would not work in your case. It would work if, say, you acquire images with a telecentric lens and know there is no distortion.

 

2) If you still have the setup used to acquire your images, then yes: printing a calibration grid, acquiring an image of it under the same conditions as your original image, and using the grid of points to calibrate the image is the method that will give you the most accurate results.

To learn the calibration with this method, use IMAQ Learn Calibration Template and provide the image of the grid of points (which should be distorted the same way as your image) as the input image.

Provide the Grid Descriptor, which defines the real-world distance between the points, and the threshold parameters used to threshold the grid.

In addition, the VI lets you define a Calibration Axis.

Once the grid image has been calibrated with that VI, use IMAQ Set Calibration Info to apply that calibration to other images acquired under the same conditions.

 

3) If you can't reproduce the acquisition conditions under which the distorted image was acquired, you can still calibrate that nonlinear image if you can locate multiple points in it and associate them with real-world coordinates. Do that by providing "Reference Points" to IMAQ Learn Calibration Template (in this case, don't connect the Grid Descriptor).
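As a rough NumPy illustration of this reference-points idea (again, not the IMAQ API): with at least three non-collinear point correspondences you can already fit an affine pixel-to-world mapping by least squares; a nonlinear calibration conceptually does the same with more points and a richer model:

```python
import numpy as np

def fit_affine(image_pts, world_pts):
    """Fit x' = a*x + b*y + c, y' = d*x + e*y + f from paired points.
    Needs at least 3 non-collinear correspondences; extra points are
    absorbed in the least-squares sense."""
    A = np.column_stack([image_pts, np.ones(len(image_pts))])
    M, *_ = np.linalg.lstsq(A, world_pts, rcond=None)
    return M                                # (3, 2): columns give x' and y'

# Reference points 10 px apart in the image, 2.5 mm apart in the world:
img_pts = np.array([[0, 0], [10, 0], [0, 10]], float)
wld_pts = np.array([[0.0, 0.0], [2.5, 0.0], [0.0, 2.5]])
M = fit_affine(img_pts, wld_pts)
world = np.array([20.0, 20.0, 1.0]) @ M     # map pixel (20, 20) to world
```

An affine fit captures scale, rotation, and shear only; for the scanner-style nonlinear distortion discussed above you would need more reference points and a higher-order model, as in the grid-based method.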

 

I hope this clarifies things.

 

Christophe

 

 

Message 7 of 8

Thank you very much! This information is very useful, but I have to go now.

I hope we can continue our discussion soon!

 

Thank you again!

---
+++ In God we believe, in Trance we Trust +++
[Hungary]
Message 8 of 8