10-27-2008 01:08 PM
I suspect that I won't get a very useful answer from NI to a direct question about the algorithm used in the nonlinear image calibration set of VIs (as I have experienced when asking about the algorithm used in Template Matching...), so I'll present my question indirectly.
I repeated (unknowingly) the experiment made by Zealous a while back, that is, displaying the Error Map of my calibration attempts. I am attaching a typical example shown with two different color palettes, one binary and one gradient.
Besides their aesthetic value, they reveal something annoying about the algorithm: it is local (that's fine) and discontinuous (that's not optimal). In my particular case, I am using something close to the standard dot grid template recommended for this algorithm, and each square visible in the images represents, as far as I can tell, a kind of Voronoi cell around each dot. Since the dots lie on a (slightly deformed) square grid, the correction is applied such that each dot is mapped exactly to its real-world counterpart, and as one moves radially away from it the mapping becomes increasingly erroneous (note that the typical error seems to be ~1% of the grid pitch, which is not too bad).
With a smooth mapping procedure, one would expect the error to be continuous across the boundary between two cells, but that is obviously not the case here, since adjacent squares have markedly different colors. Notice also what happens in the border cells: there, the correction is better towards the center of the grid than it is around the node itself (which sits more or less in the center of those rectangular border cells)! This is obviously wrong...
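To make the point concrete, here is a tiny 1-D caricature in Python. I have no idea what NI actually does internally, and a real per-cell correction is certainly smarter than a pure translation, but any scheme that only uses the nearest dot behaves like this: zero error at the dots, growing error away from them, and a jump in the residual exactly where the nearest dot changes, which is what the cell boundaries in the error map look like.

# 1-D toy (explicitly NOT NI's algorithm, which is undisclosed): correcting each
# point with only its nearest calibration dot is exact at the dots but jumps at
# the boundary between two cells, because the applied offset switches abruptly.
import numpy as np

pitch = 10.0
world = np.arange(11) * pitch                       # true dot positions
pixels = world + 0.02 * (world - 50.0)              # assumed smooth distortion

def correct_local(p):
    """Shift an image coordinate by the offset of its nearest detected dot."""
    i = np.argmin(np.abs(pixels - p))
    return p + (world[i] - pixels[i])

for x_true in (41.0, 44.9, 45.1, 49.0):             # crossing a cell boundary
    x_img = x_true + 0.02 * (x_true - 50.0)         # same distortion model
    err = correct_local(x_img) - x_true
    print(f"true {x_true:5.1f} -> residual {err:+.3f}")

A mapping that blends neighbouring cells, or fits a smooth surface through all the dots at once, would not show these jumps.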
Any comments?
X.
10-27-2008 08:23 PM
Xavier,
I will agree that NI's image mapping is not perfect. It is fast, which is what most people need, and it is good enough for most use cases.
It sounds like you need a more advanced calibration/mapping algorithm. I developed one for a customer a few years ago that worked pretty well. We had to calibrate to an image of a grid that was distorted by the camera optics. We knew the exact spacing of the grid and needed to accurately calculate the real-world position of any pixel in the image. My algorithm is based on fitting splines to the dot grid. I also have an image interpolation algorithm that determines the values between pixels. It is not super fast, but it is reasonably quick.
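Just to give a flavor of the approach: this is not my actual code, the thin-plate-spline fit from SciPy below is only a convenient stand-in for my own spline construction, and the synthetic grid replaces the real dot-detection step, but the pixel-to-world mapping part looks roughly like this in Python.

# Rough sketch of a spline-based pixel->world calibration: fit a smooth mapping
# through the detected dot centres so the correction is exact at every dot and
# varies continuously in between (no Voronoi-cell jumps).
import numpy as np
from scipy.interpolate import RBFInterpolator   # thin-plate-spline fit, SciPy >= 1.7

# Synthetic stand-in for the calibration grid: known world coordinates of the
# dots and their (mildly distorted) positions as detected in the image.
pitch = 10.0
gx, gy = np.meshgrid(np.arange(11), np.arange(11))
world = np.column_stack([gx.ravel(), gy.ravel()]) * pitch

def distort(p, k=2e-4, c=(50.0, 50.0)):
    """Mild radial distortion standing in for the unknown camera optics."""
    c = np.asarray(c)
    d = p - c
    return c + d * (1.0 + k * (d ** 2).sum(axis=1, keepdims=True))

pixels = distort(world)                      # dot centres as seen in the image

# Smooth spline mapping from image coordinates to world coordinates.
pixel_to_world = RBFInterpolator(pixels, world, kernel='thin_plate_spline')

# Check the residual at an arbitrary off-dot point.
true_pt = np.array([[55.0, 47.0]])
img_pt = distort(true_pt)
print("recovered:", pixel_to_world(img_pt)[0], "true:", true_pt[0])

The image interpolation algorithm I mentioned (getting values between pixels) is a separate step and is not shown here.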
Unfortunately, it took me way too much time to develop the algorithms to be able to give them away for free. I would be willing to discuss selling them, though. I could try them out on some of your images to see if they work in your case.
Bruce
10-27-2008 09:07 PM
Hi Bruce,
thanks for your feedback. I guess we are thinking along the same lines. My email address ends in .edu, so I really don't have a budget for custom software (besides my own salary) 😞
I am just wondering, in general, why NI does not want to disclose their algorithms in more detail. Are they afraid of competition? Do they realize that they (also) have scientists as customers, who NEED to understand what an algorithm does before they can simply accept using it? For instance, I gave up on the pattern matching algorithm because I could not predict when it would work and when it would not (in some of my applications, it fails), nor, when it did work, how well it was doing it. Same thing now for the calibration set of VIs.
My point is the following: if somebody asks NI how an algorithm works, he/she is most likely capable of designing a better one. And if he/she is using LV (IMAQ), what kind of competitor would that be anyway? I just can't fathom the rationale behind this secrecy...
Sorry, I had to vent this minor source of frustration...
Best,
X.
10-28-2008 01:56 PM
Hi Xavier,
I am sorry that the calibration is not working for you. As you obviously already suspect, I cannot give any insight into how the algorithm works. The inner workings of the algorithms are proprietary information. I am sure you are already familiar with the Vision Concepts Manual (Start>All Programs>National Instruments>Vision>Documentation>Vision), but if not, I would suggest taking a look at it. It is the lowest-level look available into the workings of the Vision algorithms.