LabVIEW

Type Casting Array Quirk?


I will be receiving an array of 1 to 8 bytes that needs to be converted into a U64. I am feeding the received array into Type Cast (see diagram below), but I'm confused about how the conversion happens. I expected that wiring in two bytes, as in the example, would join those bytes and place them in a U64 container, but Type Cast is putting the bytes into the most significant end rather than the least significant end.
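In C terms, what I'm seeing behaves roughly like this (a sketch of the observed behavior, not LabVIEW's actual implementation; the function name and byte values are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Sketch: the first array element appears to become the most
   significant byte of the U64, and the unused low bytes are
   filled with zeros. */
static uint64_t typecast_to_u64(const uint8_t *bytes, int n) {
    uint64_t v = 0;
    for (int i = 0; i < n; i++) {
        v |= (uint64_t)bytes[i] << (8 * (7 - i));
    }
    return v;
}

int main(void) {
    uint8_t rx[2] = {0x12, 0x34};   /* made-up example bytes */
    /* Prints 0x1234000000000000 rather than the
       0x0000000000001234 I expected. */
    printf("0x%016llx\n", (unsigned long long)typecast_to_u64(rx, 2));
    return 0;
}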

 

My current workaround is to rotate the resulting number to put the bytes into the least significant position. 
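In C terms the fix-up amounts to something like this (a sketch; because the low bytes are zero-filled, a plain right shift does the same job as a rotate):

#include <stdint.h>

/* n is the number of bytes actually received (1..8). Type Cast
   zero-fills the low bytes, so shifting right by the unused bit
   count moves the data down to the least significant end. */
uint64_t shift_to_lsb(uint64_t casted, int n) {
    return casted >> (8 * (8 - n));
}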

 

Is this how Type Cast should be working or am I missing something?

 

[Image: AustinLee23_0-1608066770611.png]

 

Message 1 of 5

Make the array the proper number of bytes.  (Read the help for Type Cast - it tells you exactly what happens if you don't.)
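In C terms, the idea is roughly this (a sketch, not LabVIEW code; pad at the front so the data lands in the least significant end):

#include <stdint.h>
#include <string.h>

/* Prepend zeros so the array is always 8 bytes before casting.
   n is the received byte count (1..8). */
uint64_t pad_then_cast(const uint8_t *bytes, int n) {
    uint8_t padded[8] = {0};                     /* leading zero bytes */
    memcpy(padded + (8 - n), bytes, (size_t)n);  /* data at the tail */
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | padded[i];                /* big-endian interpretation */
    }
    return v;
}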

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 2 of 5

Looking at the help, I found this:

 

If x is of a smaller data type than type, this function moves the data in x to the upper bytes of type and fills the remaining bytes with zeros. 

Now I understand why that is happening (thanks), but I don't understand why it was implemented that way. What discipline would ever convert a number in that manner? I cannot imagine that being a useful way to convert numbers for anyone in any field, with the possible exception of dealing with IPv6 addresses.

 

Given the requirements of what I need to do, it seems that counting the bytes and rotating is the workaround. I will not know how many bytes there will be in the array at any given moment.

Message 3 of 5
Solution accepted by topic author AustinLee23

Type Cast is an ancient function, and the only discipline that was considered is LabVIEW itself.  A true coercion, as the label implies, would give you the underlying bytes in memory, and the result would depend on the endianness of the host.  NI decided that you should be able to Type Cast a value on one machine, transport that string to another machine, and get the correct answer when Type Casting back, regardless of the local host order.  Big Endian was the dominant host order back in the day, so that is what persists.  It is what it is.
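To illustrate in C what "depends on the endianness of the host" means (a sketch, nothing LabVIEW-specific; the byte values are made up):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A true coercion: reinterpret the bytes sitting in memory.
   The answer depends on the host's byte order, which is exactly
   what NI wanted Type Cast to avoid. */
int main(void) {
    uint8_t bytes[8] = {0x12, 0x34, 0, 0, 0, 0, 0, 0};
    uint64_t v;
    memcpy(&v, bytes, sizeof v);
    /* A little-endian host prints 0x0000000000003412;
       a big-endian host prints  0x1234000000000000. */
    printf("0x%016llx\n", (unsigned long long)v);
    return 0;
}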

 

Some alternatives:

 

[Image: ByteArrayToU64.png]
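In text form, one common pattern is the shift-and-accumulate loop, sketched here in C (it handles any length from 1 to 8 bytes, with no rotate needed afterward):

#include <stdint.h>

/* Accumulate the received bytes directly into the least
   significant end of the result. n is the byte count (1..8). */
uint64_t bytes_to_u64(const uint8_t *bytes, int n) {
    uint64_t v = 0;
    for (int i = 0; i < n; i++) {
        v = (v << 8) | bytes[i];
    }
    return v;
}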

Message 4 of 5

Thanks, that's a great explanation, and the example workarounds give me something to benchmark mine against.

 

Much appreciated.

Message 5 of 5