Instrument Control (GPIB, Serial, VISA, IVI)


byte to decimal conversion


@johnsold wrote:

Byte 2 (LSB): decimal 232 = hex E8 = binary 11101000. Byte 1 (MSB): Decimal and hex 3 = binary 11. Combined: hex 3E8 = binary 1111101000 = decimal 1000.


From that, it sounds like you should simply combine the two bytes into an I16 and then divide by 100. I am assuming 2's complement here, but I have seen this done many times.
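
In text form (outside LabVIEW), a minimal sketch of that combine-and-scale step could look like the Python snippet below; the byte values come from the quoted post, and the big-endian, two's-complement interpretation and the divide-by-100 scaling are assumptions:

    import struct

    # Hypothetical raw reply: MSB first, then LSB (values from the post above)
    msb, lsb = 3, 232

    # Combine the two bytes into a signed 16-bit integer (two's complement assumed)
    raw = struct.unpack(">h", bytes([msb, lsb]))[0]   # 0x03E8 -> 1000

    # Divide by 100 per the assumed instrument scaling
    value = raw / 100.0                               # -> 10.0
    print(raw, value)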




Agreed. I did not want to guess because there are so many different ways equipment manufacturers map the data.

 

Lynn


I was wondering...

 

If I know I should get a 16-bit number, which could be negative or positive, but represents a real number.

 

I don't get the conversion correct because only 16 bits of the double variable are used.

 

Is it possible to know which bits are used in the conversion and then mask them?

 

for example:

 

 

??????1101000 AND 0000001111111 = 0000001101000


Masking is easy. Knowing which bits to mask you can only get from the instrument manual or someone from the manufacturer who knows how they encoded the information. There is nothing in the data itself which tells you that.
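
As an illustration only (the actual bit positions must come from the instrument manual), masking a known field out of a 16-bit word could look like this in Python; the 10-bit mask and the raw value are made up:

    # Illustration only: suppose the manual says the lower 10 bits carry the reading
    word = 0xC3E8        # raw 16-bit word as received (made-up value)
    MASK = 0x03FF        # keep bits 9..0, discard the upper status/flag bits

    data = word & MASK   # -> 0x03E8 = 1000
    print(hex(data), data)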

 

Lynn


The data is always output as 16 bits (2 bytes).

 

The type casting to double (64-bit) is done in LabVIEW.

 

I can mask it if the casting places the bits in the first/last or other known positions.

 

This is a LabVIEW question.

The casting is done into 16 bits.
From that point on it is conversion not casting anymore!
If you have to mask bits, do it on the 16-bit value.
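A sketch of that order of operations, in Python for illustration (the mask value is hypothetical and the scaling is assumed from the earlier posts): mask on the 16-bit integer first, then convert numerically to double, rather than type casting the bit pattern.

    import struct

    raw_bytes = b"\x03\xe8"                  # the two bytes read from the instrument

    # Mask on the 16-bit integer first...
    word = struct.unpack(">H", raw_bytes)[0]
    masked = word & 0x03FF                   # hypothetical mask from the manual

    # ...then CONVERT to double: changes the representation, keeps the numeric value
    value = float(masked) / 100.0            # scaling assumed from the earlier posts
    print(value)                             # 10.0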
greetings from the Netherlands

Read the Detailed Help for the Type Cast function:

 

"

Type Cast Details

Effects of Mismatching the Sizes of X and Type

This function can generate unexpected data if x and type are not the same size. If x requires more bits of storage than type, this function uses the upper bytes of x and discards the remaining lower bytes. If x is of a smaller data type than type, this function moves the data in x to the upper bytes of type and fills the remaining bytes with zeros. For example, an 8-bit unsigned integer with value 1 type cast to a 16-bit unsigned integer results in a value of 256."
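
The 8-bit-to-16-bit example from that help text can be reproduced outside LabVIEW by padding on the right (Type Cast works big-endian); this Python snippet is only an illustration of the byte shuffling described above:

    import struct

    x = struct.pack(">B", 1)              # 8-bit value 1 -> b"\x01"
    padded = x + b"\x00"                  # data moved to the upper byte, lower byte zero-filled
    print(struct.unpack(">H", padded)[0]) # 256, as the help text says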

 

 

LabVIEW stores doubles like this:

 

"

Double

Double-precision floating-point numbers have a 64-bit IEEE double-precision format.

Bit 63 is the sign, bits 62-52 are the exponent, and bits 51-0 are the mantissa."

 

Type casting from 16 bits to double will likely NOT produce the results you expect.  Look at the attached VI for some insight into what might happen.
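
In text form, the same byte shuffling can be sketched like this (Python, purely for illustration): the 16 bits land in the sign/exponent field of the double, so a value of 1000 type cast this way comes out as a vanishingly small number rather than 1000.0.

    import struct

    raw = struct.pack(">h", 1000)              # two bytes: 0x03 0xE8
    padded = raw + b"\x00" * 6                 # Type Cast zero-fills the missing lower bytes
    print(struct.unpack(">d", padded)[0])      # ~7.7e-290, nothing like 1000.0

    # A numeric conversion, by contrast, preserves the value:
    print(float(struct.unpack(">h", raw)[0]))  # 1000.0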

 

Lynn


Thanks Lynn

 

The teacher in you explains it much better.

greetings from the Netherlands