I am reading live image data from a FireWire camera and displaying it in the picture display. When I change the binning on my camera, the resolution changes. I want to programmatically scale the image to a fixed picture-display size, so that whenever the resolution changes, the image still fits entirely in my display. I have been scaling the Zoom Factor property of my picture display by the ratio of display size to resolution, but I am getting bad results...
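For reference, here is the fit calculation I mean, written as a hypothetical Python sketch (my real code just sets the picture control's Zoom Factor property; the function name and parameters here are made up for illustration):

```python
def fit_zoom(img_w, img_h, disp_w, disp_h):
    """Zoom factor that fits the whole image inside the display.

    Taking the smaller of the two axis ratios guarantees neither
    dimension overflows the display area.
    """
    return min(disp_w / img_w, disp_h / img_h)

# e.g. a 1280x960 binned-off frame shown in a 640x480 display area
print(fit_zoom(1280, 960, 640, 480))  # 0.5
```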
When the zoom factor drops below 1 (the image is larger than the display), the image shows horrible noise; it looks like crude decimation or something similar is going on. My image is grayscale, and the spots and distortions are unacceptable. Is there a better way to scale an image for the picture display?

I have also experimented with displaying my image as an array in the intensity graph. Can anyone describe the advantages and disadvantages of the picture display versus the intensity graph? I need to display live video from the IMAQ image data type, and I do not have the Vision toolkit.
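To illustrate what I suspect is happening, here is a rough Python/NumPy sketch (purely hypothetical, not my LabVIEW code): naively subsampling an image just drops pixels, which aliases fine detail into noise and speckle, whereas averaging each block before downscaling gives a clean result. I am guessing the picture display's sub-unity zoom does something like the first function.

```python
import numpy as np

def decimate(img, f):
    # naive downscale: keep every f-th pixel, drop the rest (aliases fine detail)
    return img[::f, ::f]

def box_downscale(img, f):
    # average each f x f block before shrinking (anti-aliased downscale)
    h, w = img.shape
    h, w = h - h % f, w - w % f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# A fine checkerboard is the worst case: decimation keeps only one phase
# of the pattern, while block averaging recovers the true mean brightness.
img = np.indices((8, 8)).sum(axis=0) % 2 * 255.0
print(decimate(img, 2))       # all zeros: the pattern aliased away entirely
print(box_downscale(img, 2))  # uniform 127.5: the correct average intensity
```

If the picture display only decimates, I assume I would need to pre-shrink the pixel array myself (something like the block average above) before handing it to the display.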