08-20-2009 02:40 AM
Hi,
I have the following piece of code that I have to implement in LabVIEW:
CRC_REG = CRC_REG * 2;              // one bit left shift
IF (CRC_REG >= 0x1000)              // achieving 12-bit restriction
    CRC_REG = CRC_REG - 0x1000 + 1;
where CRC_REG is a 16-bit register. The attached picture shows my implementation, in which CRC_REG is simply a control.
Since I am getting different values from the ones I am supposed to get, it would be nice if someone could take a look.
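For reference, the update step written out in plain C looks roughly like this (a minimal sketch; the function name crc_step is just mine for illustration):

#include <stdint.h>

/* One update step of the 12-bit CRC register described above.
   crc_step is an illustrative name, not from the original code. */
static uint16_t crc_step(uint16_t crc_reg)
{
    crc_reg = (uint16_t)(crc_reg << 1);   /* one bit left shift (CRC_REG * 2) */
    if (crc_reg >= 0x1000)                /* keep the register within 12 bits */
        crc_reg = crc_reg - 0x1000 + 1;   /* drop bit 12 and feed a 1 back in */
    return crc_reg;
}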
Thanks,
Iliya
08-20-2009 03:15 AM
08-20-2009 03:18 AM - edited 08-20-2009 03:20 AM
The hex string to number function you use has a "default" input. Wire a U16 constant there to define the representation of the output value.
Do you really need to convert the value so many times between string and number? Why not work with a number all the way (you can display it in hex representation on the front panel)?
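In text form, the idea is roughly this (a minimal C sketch; the starting value and the loop count are just placeholders):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Parse the hex string once, then stay numeric the whole time. */
    uint16_t crc_reg = (uint16_t)strtoul("0ABC", NULL, 16);  /* placeholder start value */

    for (int i = 0; i < 12; i++) {        /* placeholder iteration count */
        crc_reg = (uint16_t)(crc_reg << 1);
        if (crc_reg >= 0x1000)
            crc_reg = crc_reg - 0x1000 + 1;
    }

    printf("CRC_REG = 0x%04X\n", crc_reg); /* format as hex only for display */
    return 0;
}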
08-20-2009 03:22 AM
Hi,
Thanks for the reply. You mean just to connect a U16 constant, no matter what its value is?
08-20-2009 03:23 AM
Yes. The value only matters if the conversion fails; in that case the output will have the default value.
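In C terms the behavior is roughly this (a sketch; the helper name and signature are just for illustration):

#include <stdint.h>
#include <stdlib.h>

/* Rough analogy of the "default" input: if the hex string does not parse,
   fall back to the caller-supplied default value. */
static uint16_t hex_to_u16_or_default(const char *s, uint16_t default_value)
{
    char *end = NULL;
    unsigned long v = strtoul(s, &end, 16);
    if (end == s || v > 0xFFFF)   /* nothing parsed, or out of U16 range */
        return default_value;
    return (uint16_t)v;
}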
08-20-2009 03:25 AM
08-20-2009 03:27 AM
08-20-2009 03:28 AM - edited 08-20-2009 03:30 AM
I don't see a Number To Boolean Array in your code fragment.
But when you wire a U16 input to this function, it will return a boolean array of size 16.
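In C terms it behaves roughly like this (a sketch; the function name is mine):

#include <stdbool.h>
#include <stdint.h>

/* Rough equivalent of Number To Boolean Array for a U16 input:
   element i holds bit i of the value, 16 elements in total. */
static void u16_to_bool_array(uint16_t value, bool bits[16])
{
    for (int i = 0; i < 16; i++)
        bits[i] = (value >> i) & 1u;
}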
08-20-2009 03:33 AM
08-20-2009 03:44 AM
Wow, this code is really hard to read. Way too many locals and sequence structures... And way too many conversions between strings and numbers.