01-29-2012 11:57 AM - edited 01-29-2012 12:05 PM
Hi, guys. Let's say I have an array of 8 bytes and want to cast it to an I64; I can do it like this:
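(In C terms, the equivalent operation would be something like the sketch below; bytes_to_i64 is just an illustrative name, and I'm assuming the big-endian byte order that LabVIEW's Type Cast normally uses.)

```c
#include <stdint.h>
#include <stdio.h>

/* Assemble 8 bytes into a 64-bit integer, most significant byte first,
 * mimicking how LabVIEW's Type Cast interprets flattened data. */
static int64_t bytes_to_i64(const uint8_t b[8])
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | b[i];
    return (int64_t)v;
}

int main(void)
{
    uint8_t bytes[8] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x02};
    printf("%lld\n", (long long)bytes_to_i64(bytes)); /* prints 258 */
    return 0;
}
```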
Now I have a 12-byte array and want to cast it to at least a 12-byte value. How do I do that? It could be either a signed or an unsigned integer. Is there such a thing as a 12-byte or 16-byte integer type in LV?
Also, a quick question: an I16 takes 2 bytes. Does that mean a U16 takes 2 bytes of memory as well?
01-29-2012 12:19 PM
Hi lavalava,
"an I16 takes 2 bytes. Does that mean a U16 take a 2 bytes of memory also?"
Yes, as stated in the LabVIEW documentation...
"Is there such thing as a 12 bytes or 16 bytes literal in LV?"
Well, one could mention the timestamp, which consists of 128 bits (16 bytes), but does that count for your purposes? I would say the answer is "no"...
01-29-2012 12:21 PM
There are no 12- or 16-byte integer data types in LabVIEW. (Though there is one exception: the timestamp is 128 bits, or 16 bytes.)
Yes. A U16 and an I16 both take up 2 bytes.
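To make that concrete, here's a quick check in C, where int16_t and uint16_t are the counterparts of LabVIEW's I16 and U16:

```c
#include <stdint.h>
#include <stdio.h>

/* Signedness changes how the bits are interpreted, not how many there are:
 * int16_t and uint16_t both occupy exactly 2 bytes. */
int main(void)
{
    printf("sizeof(int16_t)  = %zu\n", sizeof(int16_t));  /* prints 2 */
    printf("sizeof(uint16_t) = %zu\n", sizeof(uint16_t)); /* prints 2 */
    return 0;
}
```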
01-29-2012 12:36 PM
I guess I'll settle for 8 bytes (I64) then. Oh well. Yes, I do need it as an integer.
01-30-2012 08:31 AM - edited 01-30-2012 08:32 AM
You could simply make your own. This is essentially what was done before native 64-bit integers were available. For example, for the 16-byte case, create a cluster that consists of a "high" part (8 bytes) and a "low" part (8 bytes); for the 12-byte case, use "high", "middle", and "low" parts (three I32s). A rough sketch of the 16-byte case follows below.
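Purely for illustration, here is what the cluster-of-two-U64s approach looks like in C (the u128_t type and helper names are my own invention, not anything built into LabVIEW or C), including the carry handling you would have to implement yourself for arithmetic:

```c
#include <stdint.h>
#include <stdio.h>

/* A hand-rolled 128-bit integer, analogous to a LabVIEW cluster with a
 * "high" U64 and a "low" U64. */
typedef struct {
    uint64_t high; /* most significant 8 bytes */
    uint64_t low;  /* least significant 8 bytes */
} u128_t;

/* Pack 16 big-endian bytes into the two halves. */
static u128_t bytes_to_u128(const uint8_t b[16])
{
    u128_t v = {0, 0};
    for (int i = 0; i < 8; i++) {
        v.high = (v.high << 8) | b[i];
        v.low  = (v.low  << 8) | b[i + 8];
    }
    return v;
}

/* Addition with carry propagated from the low half into the high half,
 * the kind of arithmetic you must implement yourself on such a cluster. */
static u128_t u128_add(u128_t a, u128_t b)
{
    u128_t r;
    r.low  = a.low + b.low;
    r.high = a.high + b.high + (r.low < a.low); /* carry on wraparound */
    return r;
}

int main(void)
{
    uint8_t bytes[16] = {0};
    bytes[15] = 0xFF; /* the value 255 */
    u128_t v = bytes_to_u128(bytes);
    u128_t sum = u128_add(v, v);
    printf("high=%llu low=%llu\n", (unsigned long long)sum.high,
           (unsigned long long)sum.low); /* prints high=0 low=510 */
    return 0;
}
```

The 12-byte variant is the same idea with three 32-bit fields; the carry logic just runs across one more boundary.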