10-17-2008 01:17 PM
Hi,
still exploring the potential and limits of the Calibration VIs...
I am wondering whether it is possible to use those VIs to do the following: given a set of points in one image (image 1) corresponding to an identical set of points in another image (image 2), both images being deformed by unknown nonlinear processes, I want to find, for each point in image 1, its corresponding point in image 2. Note: the reference points used for calibration will not necessarily be on a grid, and the points I am interested in will of course NOT be those calibration points. It seems to me that IMAQ Learn Calibration Template does not exclude this situation. Of course I could simply try it and see whether what I described above works; however, if anybody has evidence that this is not possible, I'd rather not waste my time.
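For concreteness, here is a sketch of the kind of mapping I am after, written outside LabVIEW using SciPy's thin-plate-spline interpolator (a smooth nonlinear map fitted to scattered correspondences; the coordinates below are made up purely for illustration):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered (non-grid) correspondences: reference points located in
# image 1 and the same physical points located in image 2.
pts_img1 = np.array([[50.0, 50.0], [120.0, 60.0], [80.0, 140.0],
                     [160.0, 150.0], [60.0, 200.0]])
pts_img2 = np.array([[300.0, 100.0], [370.0, 115.0], [335.0, 195.0],
                     [415.0, 210.0], [310.0, 255.0]])

# Thin-plate spline: a smooth nonlinear map that passes exactly through
# the correspondences (smoothing=0 by default).
warp = RBFInterpolator(pts_img1, pts_img2, kernel="thin_plate_spline")

# Map an arbitrary point of interest (not one of the reference points).
mapped = warp(np.array([[90.0, 100.0]]))
```

This is only a stand-in for what I hope the Calibration VIs can do internally, not a claim about how they actually work.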
Thanks for your feedback.
X.
10-20-2008 02:16 PM
Hi Xavier,
I think I need to understand a little better what exactly you are trying to do. This is how I understand the problem: you have a camera taking images of a shape that is deforming. This shape has some points on it that you would like to use to calibrate your image. These points are part of what you are imaging, and they will be distorted between images by a nonlinear process. You then want to calibrate your images against each other. Is this correct? How are they distorted? Are they moving relative to the camera, or is the actual shape changing?
10-20-2008 04:53 PM
Hi Adam,
you understood the question perfectly. The deformations in both cases are nonlinear.
Note that I made some progress and implemented this calibration. One problem I have is that the "real world coordinates" I obtain, even though they seem to reproduce the deformation from image 1 to image 2, do so with an offset. In other words, if my "real world" is an image in which the deformed points are on the right side of the image, the "real world" coordinates I obtain usually put those points on the left side. I haven't figured out yet how this offset is calculated, but I suspect it has to do with the region of interest chosen to define where the calibration applies.
Sorry, it is not an easy problem to describe without a picture...
But I guess the answer to my question is therefore no, or at least not without some extra work.
X.
10-21-2008 04:19 PM
Hi Xavier,
Beyond the fact that the coordinates end up on the wrong side of the image, I am not sure I really understand the problem. Could you give a little more of an explanation, or attach an image, so that I can better understand the problem you are having?
10-22-2008 05:45 PM
Hi Adam,
thanks for offering to take a look. I am still working on the issue (all the more so as I keep discovering subtleties).
I am attaching 3 images. One (Mapping Points Example.png) shows a typical case: a set of dots on one side of the image (red ROI) and the same set imaged differently on the left side (green ROI). I can locate those points accurately and can therefore provide their coordinates to the calibration algorithm. The diagram of the calibration step is shown in Learn Calibration Diagram.jpg. The units in the real world are "undefined", but they are in fact pixel units in the original image.
Now, if I use this calibration information (with the VI shown in Map Points Diagram.jpg) to verify that a point from ROI 1 is mapped onto the corresponding point in ROI 2, I get the result shown in the first image. There you can see, in addition to a useless grid, pink, red and green dots. The pink dots are the locations of my imaged dots. The red dots are pixels I chose interactively in ROI 1 (generally on top of, or close to, one of the original dots), and the green ones are their "mapped" counterparts. As you can see, the green dots all end up next to the red ones, which means that the calibration does not return "real world" coordinates as expected, but something offset by an amount whose origin I am still trying to figure out.
Let me know whether this clarifies the question.
Thanks,
X.
PS: the subVI "Associate ROI Points" does just that: it gets the coordinates of the pink dots in ROI 1 (pixel coordinates) and those of the associated pink dots in ROI 2 (real world) to build an array of reference points.
10-23-2008 06:11 PM
Hi Xavier,
There should not be an offset, the way I understand your problem. I did notice that you have a coercion dot on the input of your Convert Pixel to Real World Value.vi. Have you looked into what is causing this coercion dot? Since I cannot see what is going on in the VI labeled ROI, I can't tell what it is doing. What is the data type on those wires? They should be 32-bit SGLs.
10-24-2008 01:31 PM
Hi Adam,
sorry about the "coercion" dot, but it's not my doing! The "ROI" VI is IMAQ Convert ROI to Point.vi, found in the Vision Utilities>>Region of Interest>>Region of Interest Conversion subpalette. If you take these two VIs and connect the output of IMAQ Convert ROI to Point.vi to the "Pixel Coordinates" input of IMAQ Convert Pixel to Real World.vi, that's what you'll get... It does not make much sense, except perhaps that the first one is linked to a typedef whereas the second is not. Otherwise, they are both clusters of SGLs, so I don't think that's the problem.
The advanced Vision Concepts manual says that when a set of user-defined points is used for calibration, the origin of the new coordinate system is taken as the point with the lowest x, then the lowest y. I don't understand that, since none of the points I used for calibration (the red dots on the image) is mapped to (0, 0) (none of the green dots ends up in the top left corner of the image)...
Still working on it in my spare time though...
Best,
X.
10-27-2008 01:49 PM
Hi Xavier,
When the concepts manual says the lowest x and y values, it means the lowest you entered. So if you defined a list of, say, four coordinate sets, the origin would be placed at the coordinate nearest the top left of the image.
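In other words (a sketch of the convention as I understand it from the manual, not NI's actual implementation):

```python
# Sketch of the origin convention described above: among the real-world
# points the user entered, the one with the lowest x (ties broken by
# lowest y) becomes the origin, and returned coordinates are expressed
# relative to it.
def origin_and_shift(world_points):
    origin = min(world_points)  # tuple ordering: lowest x, then lowest y
    shifted = [(x - origin[0], y - origin[1]) for x, y in world_points]
    return origin, shifted

origin, shifted = origin_and_shift([(370, 115), (300, 100), (335, 195)])
```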
Also, let me clarify how I understand the problem, as it is still a little unclear. When you do your calibration, the calibration dots (red) appear where you would expect. However, the dots that are calibrated to the red dots (green) are calibrated with the offset that can be seen in the image you posted. Is this correct? It might be helpful if I could see a sample image so that I can take a look myself.
10-27-2008 03:23 PM
Hi Adam,
thanks for the first part of the answer. I had eventually reached the same conclusion, but I question the relevance of this definition in the case where both the image and real world coordinates are provided by the user...
Anyhow, coming back to my not-so-clear question: I am attaching a binary image showing the two regions of the image I want to map to one another. The red dots are (for illustration purposes) the "image pixel coordinates" points, while the green ones are the "real world coordinates" points. Let's say the upper left red dot has coordinates (50, 50) and the upper left green dot has coordinates (300, 100). I would expect that, after mapping the red dots to the green dots, asking to convert (50, 50) to real world coordinates returns (300, 100). Instead, as suggested by the image I posted earlier, the result is something of the order of (50, 50). So the calibration algorithm totally ignores my real world coordinate instructions...
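To pin down the expectation in code (same coordinates as above; `calibrate` is a hypothetical stand-in for the IMAQ learn/convert VIs, exact only at the reference points):

```python
# Toy version of the check described above: calibrating with paired
# points, then converting a reference point's pixel coordinates, should
# return its paired real-world coordinates.  calibrate() is a
# hypothetical stand-in, not the actual IMAQ behavior I am observing.
def calibrate(pixel_pts, world_pts):
    pairs = dict(zip(pixel_pts, world_pts))
    return lambda p: pairs[p]  # exact at the reference points only

to_world = calibrate([(50, 50), (120, 60)], [(300, 100), (370, 115)])
```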
Hope this clarifies the question.
X.
10-27-2008 03:26 PM