02-14-2007 04:26 PM
02-15-2007 03:30 AM
If you have a 16-bit variable storing a bit pattern of, say, 1111111110100010, the interpretation of that pattern depends on the variable's type. If it is unsigned, the pattern reads as 0xFFA2 (65442); if it is signed, it reads as -94. Either way, if you copy this value to an 8-bit variable with a simple assignment, only the lower 8 bits are copied. How the resulting bit pattern 10100010 is interpreted then again depends on the type of the 8-bit variable. The default CVI char is signed, so it will hold the value -94; if you use an unsigned char, it will show as 0xA2, i.e. 162.
JR