

Why do 16-bit greyscale images look significantly worse than 32-bit?

Howdy,
 
I am trying to display a greyscale image from a camera. Parts of the program were written by someone else, and the image comes in as a U16 array. I am using IMAQ Create and IMAQ ArrayToImage. The image displays perfectly if I use the 'float' inputs on those two subVIs, but if I use 16-bit or 8-bit, the image quality is terrible. I would like to be able to display the image as 16-bit.
 
If I save the 16-bit image and reopen it in another program, it's still just as bad. I have also tried converting the array to I16 and to U8, but it makes no difference to the image quality.
 
From what I understand, there should be very little visible difference between 8-, 16-, and 32-bit greyscale images. Does anyone have any ideas where the problem might be? My next guess is the camera settings, but I'd love it to be something in my code.
 
Cheers,
Andy
 
 
Andy,
Thank you for contacting National Instruments. The key thing to note is that the 16-bit image data type LabVIEW uses has a signed interpretation, so you need an additional conversion step to get an unsigned 16-bit array to display properly as an NI-IMAQ image. Refer to the Knowledge Base article 16-bit Images in NI-Vision for more information on how to do this. Thanks, and have a great day.
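
For future readers, here is a minimal sketch of that conversion in Python/NumPy, purely to illustrate the arithmetic. In LabVIEW the same shift is a simple Subtract (or XOR with 0x8000) on the block diagram; the function and frame names below are hypothetical.

import numpy as np

def u16_to_i16_for_imaq(frame_u16: np.ndarray) -> np.ndarray:
    """Shift a full-range U16 frame into the signed I16 range.

    A plain typecast leaves every pixel above 32767 wrapping to a
    negative value, which is why a naive 16-bit display looks broken.
    Subtracting the offset 32768 maps 0..65535 onto -32768..32767
    while preserving the ordering of grey levels.
    """
    return (frame_u16.astype(np.int32) - 32768).astype(np.int16)

# Hypothetical test frame: a full-range horizontal U16 gradient.
frame = np.tile(np.linspace(0, 65535, 640, dtype=np.uint16), (480, 1))
shifted = u16_to_i16_for_imaq(frame)
print(shifted.min(), shifted.max())   # -32768 32767

One caveat: if the camera actually delivers fewer than 16 significant bits (12-bit sensors packed into a U16 container are common), you may also need to left-shift the data to fill the full 16-bit range first, or the converted image will display dark.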

Regards,
Mark T
Applications Engineering | National Instruments

Hi Mark,

Thanks for that.

Cheers,

Andy.
