LabVIEW


Changing bit resolution

Hi all,

 

Is anyone familiar with changing the bit resolution of a binary file of data output from an NI analog-to-digital converter? The data was originally recorded at 24 bits, but I want to convert it down to, e.g., 12 or 16 bits.

 

Is there a routine in LabVIEW to read back those binary files, somehow turn them back into an analog signal, and resample at a lower bit resolution?

 

Thank you very much. 😐

Message 1 of 15
There is definitely no need to take a detour through analog. You can simply take your 24-bit data and rescale it to the desired final bit resolution. How many of the 24 bits were actually used?
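
In text form, the rescaling could look like this minimal Python sketch (Python rather than a LabVIEW diagram, and the sample values are made up; shifting right by bits_in - bits_out keeps only the top bits):

```python
import numpy as np

# Hypothetical 24-bit signed samples, standing in for data read from the file.
samples24 = np.array([-8388608, -12345, 0, 54321, 8388607], dtype=np.int32)

def requantize(samples, bits_in=24, bits_out=16):
    """Rescale by discarding the (bits_in - bits_out) least-significant bits."""
    return samples >> (bits_in - bits_out)

print(requantize(samples24, 24, 16))   # full 24-bit swing -> -32768 .. 32767
print(requantize(samples24, 24, 12))   # full 24-bit swing -> -2048 .. 2047
```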
Message 2 of 15

I'm not really sure how many of the 24 bits were used. That's why I wanted to test the data at different, lower bit resolutions to see if it is still good. What are my options for rescaling it down to a final bit resolution?

 

Thank you for giving me hope! 😄

Message 3 of 15
Are the binary values integers? You could use the Logical Shift function in the Numeric, Numeric Conversion palette to shift your bits so that the least significant bits fall off and the resulting number is the value rescaled into fewer bits.
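
As a rough illustration of the idea outside LabVIEW (Python, with a made-up value), shifting right by 8 discards the 8 least-significant bits of a 24-bit sample, leaving a 16-bit result:

```python
x = 0b101100111000110010101111    # a hypothetical 24-bit integer sample
x16 = x >> 8                      # drop the 8 least-significant bits
print(f"{x:024b} -> {x16:016b}")  # the top 16 bits survive unchanged
```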
Message 4 of 15

They were floating point. Is there a way to do this so that you know how many bits you are converting it to?

 

Thanks!!!

Jud~

Message 5 of 15

You can find the max and min values of your data, then scale it accordingly.


Jud~ wrote:

Is there a way to do this so that you know how many bits you are converting it to? 


I think you have this backwards! 🙂 How many bits you want is your choice, so you have to make that decision.
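
Sketched in Python (hypothetical data; the bit depth is whatever you decide on), the min/max scaling could look like this:

```python
import numpy as np

data = np.array([-0.93, -0.12, 0.0, 0.47, 0.88])  # placeholder float samples
bits = 12                                          # your chosen resolution

lo, hi = data.min(), data.max()
levels = 2**bits - 1                               # 4095 for 12 bits
quantized = np.round((data - lo) / (hi - lo) * levels).astype(np.int32)
print(quantized)                                   # integers in 0 .. 4095
```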

Message 6 of 15
I think I understand. Thanks! 🙂
Message 7 of 15
I tried to put that logical shift together, with y=8 and x = the data array read from the binary file at 24 bits. The data out of the 16-bit integer conversion (blue lines) isn't graphing as it previously had. Does anyone know why? I have already set the digits of precision high enough on the graphs. Thank you.
Message 8 of 15

I have no idea what you mean by y=8 or logical shift, and your program does not really help much without any of the data files.

 

For simplicity, could we focus on a single data file? Attach the data file and a simplified VI showing how you are trying to convert it.


Jud~ wrote:
I have already set the digits of precision high enough on the graphs.

Changing the digits of precision on the graph is a purely cosmetic property and will not change the underlying data.

Message 9 of 15

Sorry for being vague. I meant the Logical Shift function, http://zone.ni.com/reference/en-XX/help/371361D-01/glang/logical_shift/, with x and y referring to its inputs.
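
Note that the sign of y sets the direction: per that help page, a positive y shifts x left and a negative y shifts right. In Python terms (made-up value):

```python
x = 0b1011_0011_1000_1100_1010_1111   # a hypothetical 24-bit sample
print(bin(x << 8))   # y = 8: shift left, low bits padded with zeros
print(bin(x >> 8))   # y = -8: shift right, the 8 low bits fall off
```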

 

I reattached a simpler .vi and a .bin file. Hopefully that helps! Edit: attachment here:
http://www.megaupload.com/?d=TGVWSIU4

The bin file is too big for this forum 😞

 

Thanks~ 😛

Message 10 of 15