Sorry, that isn't what I need to know. Let me describe in detail what I'm referring to.
When using IMAQ Learn Calibration, an option called 'Learn Error Map?' is available. When this is set TRUE, it outputs a 2-D array the size of the image used for the calibration, with -1 for uncalibrated regions; otherwise the values appear to vary from 0 to 1, though I can't verify this. According to the help documentation for NI Vision for LabVIEW, "The image error map reflects error bounds on the calibration transformation".
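To illustrate what I mean by "the value appears to vary from 0 to 1": here is a small sketch of how I'm inspecting the array outside LabVIEW. The `error_map` values below are made-up placeholders in the format I described (-1 for uncalibrated pixels), not real calibration output.

```python
import numpy as np

# Hypothetical stand-in for the 2-D array returned when 'Learn Error Map?'
# is TRUE: -1 marks uncalibrated regions, other entries are the error values
# whose units are in question.
error_map = np.array([
    [-1.0, 0.02, 0.05],
    [0.10, 0.37, -1.0],
    [0.84, 0.99, 0.15],
])

# Drop the uncalibrated pixels, then look at the range of the remaining values.
calibrated = error_map[error_map != -1]
print(calibrated.min(), calibrated.max())        # observed range of error values
print(np.count_nonzero(error_map == -1))         # count of uncalibrated pixels
```

On real error maps from my setup, the calibrated values always land between 0 and 1, which is what prompted the question below.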
Also, in the help under 'Perspective and Nonlinear Distortion Calibration', it again refers to the error map and says the following:
"An error map helps you gauge the quality of your complete system. The error map returns an estimated error range to expect when a pixel coordinate is transformed into a real-world coordinate...."
This statement is where my question comes from. What are the units of the error map data? Are they real-world units, pixels, percent error, or something else?
Thanks for the help!