09-15-2009 09:15 PM
I have a 12-bit signed value read from hardware. I'm reading it into an I16, and I need to sign-extend the 12-bit value to 16 bits so that downstream arithmetic will interpret the I16 correctly. Is there an elegant way to do this? So far I can think only of using two For Loops to shift left and then back right again, or converting to a Boolean array and back to an I16 (with bit 11 replicated into all of bits 15:12). Is there a better way?
For example, if the value read from hardware is 0x800 (-2048), when I put that into the I16 I get 0x0800 (+2048), but what I actually need is 0xF800 (still -2048).
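To spell out the Boolean-array idea, here is a rough sketch in C; the real code is a LabVIEW diagram, so this is just for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the Boolean-array approach: unpack the 16 bits,
   copy bit 11 (the 12-bit sign bit) into bits 15:12, repack. */
static int16_t sign_extend_12_via_bits(uint16_t raw)
{
    bool bits[16];
    for (int i = 0; i < 16; i++)
        bits[i] = (raw >> i) & 1u;

    for (int i = 12; i < 16; i++)
        bits[i] = bits[11];              /* replicate the sign bit */

    uint16_t out = 0;
    for (int i = 0; i < 16; i++)
        out |= (uint16_t)(bits[i] << i);

    /* Converting values >= 0x8000 back to int16_t is
       implementation-defined in C but wraps as expected on
       two's-complement machines: 0x0800 in -> 0xF800 (-2048) out. */
    return (int16_t)out;
}
```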
Thanks!
-- J.
09-15-2009 10:22 PM
It depends on the type of math you are trying to do with it and whether there are any operations that could cause an issue.
But you could use the Logical Shift function on the Numeric/Data Manipulation palette to shift the number left 4 bits when you bring the number in, then shift it 4 bits back the other way. One caveat: the shift back down has to replicate the sign bit (an arithmetic shift), and a logical right shift fills the vacated top bits with zeros instead. Dividing the shifted value by 16 gets you the arithmetic shift, and the division is exact because the low 4 bits are zero at that point.
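In text form the shift trick looks like this; a minimal C sketch of the idea (the LabVIEW version would just be the corresponding shift and divide functions wired together):

```c
#include <stdint.h>
#include <stdio.h>

/* Shift the 12-bit sign bit up into bit 15, then arithmetic-shift
   back down.  Right-shifting a negative signed value is
   implementation-defined in C, but in practice compilers
   sign-extend; dividing by 16 instead is always safe here because
   the low 4 bits are zero after the left shift. */
static int16_t sign_extend_12_shift(int16_t raw)
{
    return (int16_t)(raw << 4) >> 4;
}

int main(void)
{
    printf("%d\n", sign_extend_12_shift(0x0800)); /* prints -2048 */
    printf("%d\n", sign_extend_12_shift(0x07FF)); /* prints  2047 */
    return 0;
}
```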
You could do things like you are doing now, if they are working for you, but instead of converting to Boolean arrays, you can use the Boolean functions (AND, OR) bitwise on the number itself. AND with 0x0800 to determine what bit 11 is (assuming that bit numbering begins with 0); if it is set, OR the number with 0xF000 to turn on bits 15:12. That gives you a true two's-complement I16, so the downstream math just works, and if you need to hand a result back to the hardware as 12 bits, the reverse operation is simply ANDing with 0x0FFF.
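A minimal sketch of that mask-and-OR idea in C, for concreteness (the hex constants are the point; in LabVIEW they would be wired into the AND and OR functions):

```c
#include <stdint.h>

/* Test the 12-bit sign bit; if set, turn on bits 15:12.  The result
   is an ordinary two's-complement I16, so no special handling is
   needed in the downstream arithmetic. */
static int16_t sign_extend_12_mask(uint16_t raw)
{
    if (raw & 0x0800u)       /* bit 11 set -> negative 12-bit value */
        raw |= 0xF000u;      /* replicate the sign into bits 15:12  */

    /* Conversion of raw >= 0x8000 to int16_t is implementation-
       defined in C but wraps as expected on two's-complement
       targets. */
    return (int16_t)raw;
}

/* Going back to 12 bits for the hardware is just a mask:
   uint16_t raw12 = (uint16_t)value & 0x0FFFu; */
```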
It should work, but the operations are still a little awkward. Why is the hardware working with 12 bits? Doesn't it have a mode to give you the data in a more normal 16-bit pattern?
One other possibility: the Numeric palette has a Fixed-Point palette. Perhaps some of those functions can do the conversion for you?