LabVIEW


LabVIEW type conversions

I'm a bit new to LabVIEW and I'm sure this is a simple question, but I'm reading 12-bit values from an ADC and fitting the result into a U16. The values are greater than the 0 to 4095 range I would expect. So, what happens when LabVIEW converts a 12-bit number to a 16-bit one?

 

Thanks,

Message 1 of 6

Hi DeepSpace,

 

I don't know which VI you are using for the conversion, but I would probably use the Boolean Array To Number function.
As explained in the link, this function can adapt to its input, but you can also right-click it, select Properties, and on the second tab configure more details of the output data type.
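LabVIEW is graphical, so there is no text code to show, but the idea behind Boolean Array To Number (with LabVIEW's default settings, element 0 of the array becomes the least-significant bit of the integer) can be sketched in a few lines of Python:

```python
# Textual analogue of LabVIEW's "Boolean Array To Number" function:
# interpret a list of booleans as an integer, element 0 = LSB.
def bool_array_to_number(bits):
    """bits: list of bools; bits[0] is the LSB (LabVIEW's default ordering)."""
    value = 0
    for i, bit in enumerate(bits):
        if bit:
            value |= 1 << i
    return value

# Twelve True elements -> 0x0FFF = 4095, the largest unsigned 12-bit value.
print(bool_array_to_number([True] * 12))  # 4095
```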

If you are using LabVIEW FPGA there are some other considerations.

Could you give more information on how you are receiving the data?

 

I hope this helps.

Guillermo Oviedo
R&D Software Engineer
CLA | CTD
Message 2 of 6

LabVIEW never converts a 12-bit number to a 16-bit one directly, because there is no 12-bit data type.

The crucial question is how you read the 12-bit value and what manipulation you apply to adapt it. Please post your code.

Paolo
-------------------
LV 7.1, 2011, 2017, 2019, 2021
Message 3 of 6

As has been said, your description is way too ambiguous to give an answer.

 

  • How are you reading the 12-bit values? What is the raw datatype?
  • How are you "fitting it into a U16"? Since you have four more bits, it will fit just fine. 😄
  • If the values are larger than 4095, it is possible that you are filling the 12 high-order bits instead of the 12 LSBs.
  • LabVIEW does not natively convert 12 bits to 16 bits. Your program does.
  • We cannot tell what you are doing wrong until you show us what you are doing. 😄
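The third bullet is the likely culprit, and it is easy to illustrate. A hypothetical sketch in Python (the value 0x7FF is just an example reading): if the 12 data bits land in the high-order bits of the U16, every reading looks 16 times too large, and a right shift by 4 recovers it.

```python
# A 12-bit reading (0x7FF = 2047) mistakenly stored in bits 15..4 of a U16:
raw_msb_aligned = 0x7FF0

# Shifting right by 4 moves the data back down into bits 11..0.
value = raw_msb_aligned >> 4
print(value)  # 2047, back inside the expected 0..4095 range
```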
Message 4 of 6

Thanks to all of you who posted. I have been able to resolve the issue: the encoder documentation neglected some of the header information and only showed 12 bits. Sorry I did not reply sooner. Thanks again to all of you who tried to guide me here.

 

Cheers!

Message 5 of 6

I apologize for not noticing (and responding to) this question earlier.  Just in case someone else, new to LabVIEW, comes across this interesting question, gets to the end, and asks "So what's the answer, already?", I have some observations to share.

 

I suspect the Original Poster may be attempting to get data from a 12-bit A/D chip (such as the Maxim MAX11634) that outputs its data as a 12-bit quantity, communicated to the computer as a 16-bit quantity.  The chip I've been using (see previous sentence) communicates with the computer using the SPI protocol.  My Engineering friends tell me that if the chip is not programmed properly, its output pins will be "tri-stated", meaning that instead of returning a 12-bit value in the low 12 bits of a 16-bit number with the upper bits all zero (so valid data are in the range 0 .. 4095, or 0x0FFF), the chip will return a 1 in every bit, or 65535 (in hex, 0xFFFF).

 

What I do is read the two bytes from the chip as a U16.  If its value is 0xFFFF, I know something went wrong and the A/D conversion didn't take place.  But we're not done ...
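That check can be sketched in a few lines of Python (a hypothetical helper, assuming the 0xFFFF tri-state behavior described above; `check_adc_word` is not a real library call):

```python
# Treat an all-ones U16 as "conversion failed" before trusting the reading.
def check_adc_word(word):
    if word == 0xFFFF:
        raise ValueError("A/D returned 0xFFFF: chip tri-stated, no conversion")
    return word & 0x0FFF  # keep only the 12 data bits

print(check_adc_word(0x0ABC))  # 2748
```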

 

A/D data can be signed (representing, say, -10 V to +10 V) or unsigned (0 to +10 V).  The Maxim chip I'm using uses the high bit (bit 11, with bit 0 being the lowest bit) as the sign bit.  If your data are unsigned, then the high bit just represents the quantity 2048 (2^11), and it won't matter whether you treat the resulting 16-bit quantity as a U16 or an I16; it will represent numbers from 0 to 4095.

 

However, if the data are signed, then bit 11 is the "sign" bit -- if it is set, the quantity represents a negative voltage, suggesting you should represent it as an I16 (a signed integer).  But how do you get the I16 sign bit (bit 15) set?  When bit 11 is set, you want to set the upper "nibble" (4-bit quantity) to 0xF, so that 0x0FFF (from the A/D converter chip) becomes 0xFFFF (or -1, when read as an I16).
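This "fill the upper nibble" trick is ordinary two's-complement sign extension, and it can be sketched in Python (Python integers are unbounded, so the final line reinterprets the 16-bit pattern as a signed value; in LabVIEW a Type Cast to I16 would do that step):

```python
# Sign-extend a 12-bit two's-complement reading to a 16-bit signed integer.
def sign_extend_12(raw):
    raw &= 0x0FFF              # keep the 12 data bits
    if raw & 0x0800:           # bit 11 set -> negative reading
        raw |= 0xF000          # set bits 15..12 (the upper nibble) to 0xF
    # Reinterpret the 16-bit pattern as a signed (I16-style) value.
    return raw - 0x10000 if raw & 0x8000 else raw

print(sign_extend_12(0x0FFF))  # -1
print(sign_extend_12(0x0800))  # -2048
print(sign_extend_12(0x07FF))  # 2047
```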

 

I won't even get started on whether the bits come in Most- or Least-Significant-Bit first -- you might need to bit-reverse the 12-bit quantity before making sense of it.  Fortunately, the Data Sheet (and some experienced colleagues) can help you make sense of things.  Some experimentation also helps: put known voltages into the A/D and see what values come out when you read.  If you put in a very small voltage and increase it just a smidge, but see the values from the chip go from, say, 0 to 2048 to 1024 to 3072, you are looking at a bit-reversed version of 0, 1, 2, 3.  (Exercise for the reader -- is this chip looking at a signed or unsigned voltage?)
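The fix for that last symptom can also be sketched in Python: reversing the order of the 12 bits turns the garbled ramp back into a count.

```python
# Bit-reverse a 12-bit value (undo MSB-first wire order read as LSB-first).
def reverse12(x):
    out = 0
    for i in range(12):
        if x & (1 << i):
            out |= 1 << (11 - i)   # bit i moves to bit (11 - i)
    return out

# The ramp 0, 1, 2, 3 comes off the wire bit-reversed as 0, 2048, 1024, 3072.
print([reverse12(n) for n in (0, 1, 2, 3)])  # [0, 2048, 1024, 3072]
```

Note that `reverse12` is its own inverse, so the same function also decodes the readings: `reverse12(3072)` gives back 3.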

 

Bob Schor

Message 6 of 6