10-18-2008 11:45 AM
Here is a technique that will work as long as the scaling factor is fairly small:
Find two items in the original image that are distinctive and not next to each other. Train a pattern for each item, then locate both patterns in the new image. From the XY coordinates of the items in the old and new images, you can determine the shift, scaling, and rotation. You could either work with the coordinates directly or write an equation that represents the mapping and fit its coefficients to the data.
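Working with the coordinates directly can be sketched as follows. This is a minimal illustration (not from the original post) that assumes a pure similarity transform (uniform scale, rotation, shift) and exact pattern-match coordinates; the function name is made up. Treating each point as a complex number makes the algebra compact: the ratio of the between-point vectors gives scale and rotation in one step.

```python
import math

def similarity_from_two_points(p1, p2, q1, q2):
    """Estimate scale, rotation, and shift mapping (p1, p2) -> (q1, q2).

    p1, p2: (x, y) locations of the two patterns in the original image.
    q1, q2: (x, y) locations of the same patterns in the new image.
    Assumes the mapping is q = s * R(theta) * p + t.
    """
    # Vector between the two pattern locations in each image
    dp = complex(p2[0] - p1[0], p2[1] - p1[1])
    dq = complex(q2[0] - q1[0], q2[1] - q1[1])
    # The complex ratio encodes scale (magnitude) and rotation (angle)
    ratio = dq / dp
    scale = abs(ratio)
    theta = math.atan2(ratio.imag, ratio.real)
    # Shift is whatever remains after scaling/rotating p1 onto q1
    t = complex(*q1) - ratio * complex(*p1)
    return scale, theta, (t.real, t.imag)
```

With only two points the fit is exact, so any measurement noise goes straight into the result; locating more than two patterns and fitting the coefficients is more robust.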
Bruce
10-22-2008 04:15 PM
Could you use the Learn Calibration functions? (Machine Vision Module 8.5.) These let you enter an image and a reference description, or two arrays containing points in both coordinate systems. You can select whether you want a simple mapping or a more complex one (rotation, translation, scaling, non-linear). There is a good example or two as well.
Hope that helps.