LabVIEW

Type cast to an enum with U8 representation fails

When casting a U32 integer to an enumeration type with an internal U8 representation, nothing happens (the output enum will not change its value!) if the U32 is not explicitly converted to a U8 before type casting. See attached example. Bug or feature?
Message 1 of 7
I believe it's necessary to convert the U32 to a U8 before type casting to an enum with U8 representation. You always have to match the representation first.

Regards,
André (CLA, CLED)
Message 2 of 7


jumpinkiwi wrote:
Bug or feature?

Feature! Although I agree this behaviour is confusing.
 
The Help says:
Type Cast Details
This function can generate unexpected data if x and type are not the same size. If x requires more bits of storage than type, this function uses the upper bytes of x and discards the remaining lower bytes. If x is of a smaller data type than type, this function moves the data in x to the upper bytes of type and fills the remaining bytes with zeros. For example, a U8 with value 1 type cast to a U16 will result in a value of 256.
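To make that concrete, here is a minimal C sketch of the flat-byte view the Help describes (it models the documented big-endian behaviour; an illustration, not LabVIEW's actual implementation):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t x = 1;

    /* Flat byte string sized for the target type (U16 = 2 bytes), big endian. */
    uint8_t flat[2] = { 0, 0 };

    /* Smaller source: its data is moved to the UPPER bytes of the target
     * and the remaining bytes are zero-filled, per the Help text above. */
    flat[0] = x;

    /* Reassemble the U16 from the big-endian byte string. */
    uint16_t y = (uint16_t)(((uint16_t)flat[0] << 8) | flat[1]);

    printf("%u\n", (unsigned)y);   /* prints 256, matching the Help example */
    return 0;
}
```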

Chilly Charly    (aka CC)
Message 3 of 7
So why (and this is a big why) doesn't LabVIEW give me a hint (e.g. a red coercion dot, as usual for implicit type casts) that the input to this type cast is not really legal? I don't always have in mind what the representation of all my target enums is! That is an ugly pitfall. Furthermore, why the upper bytes and not the lower bytes, as e.g. in ANSI C? And why does an implicit casting operator do the right thing?
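In C terms, the difference looks roughly like this (the shift by 24 models LabVIEW's upper-byte behaviour; a sketch, not LabVIEW's actual code):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 3;   /* small value: only the LOWEST byte is non-zero */

    uint8_t c_narrowing = (uint8_t)x;          /* ANSI C: keeps the low byte  -> 3 */
    uint8_t lv_typecast = (uint8_t)(x >> 24);  /* models Type Cast: high byte -> 0 */

    printf("C: %u, LabVIEW-style: %u\n", c_narrowing, lv_typecast);
    return 0;
}
```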
Message 4 of 7
I agree with the point you make about showing a hint; I run into this problem regularly. It isn't difficult, but it would help if there were a hint.

Regards,
André (CLA, CLED)
Message 5 of 7

The only people capable of answering "why" questions are members of LabVIEW R&D who were present at the design meetings, or Rolf Kalbermatter.

I'll guess at some of them.

The Type Cast is an advanced function that is buried deep in the palettes.

It and the other functions on the Data Manipulation palette are powerful, and if misused they can be dangerous.

I have to guess that NI hopes users will first familiarize themselves with the details of data representation in LV before they use them (I broke this rule 😉).

Since enums can be used to control case structures, and in that case determine program flow, all of the "cases" have to be covered. A U32 with its upper bits set could cause some bad things when the case code executes.
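A rough C analogy of that risk (the Mode enum and its values are hypothetical, just for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical states, as a LabVIEW enum with U8 representation might define them. */
typedef enum { STOP = 0, RUN = 1, PAUSE = 2 } Mode;

int main(void)
{
    uint32_t raw = 0x01000000u;   /* upper bits set, lowest byte zero  */
    Mode m = (Mode)raw;           /* reinterpretation, no range check  */

    switch (m) {
        case STOP:  puts("stop");  break;
        case RUN:   puts("run");   break;
        case PAUSE: puts("pause"); break;
        default:    puts("value matches no case"); break;  /* the danger */
    }
    return 0;
}
```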

If you show the Context Help window and switch to the wiring tool, then when you hover over a wire, the window will show you what your wire's data type is.

I think I am done guessing now.

I hope this helps out a bit!

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion, Knight of NI and Prepper.
Message 6 of 7
Since the Type Cast function is polymorphic, there is no way for it to determine whether the programmer has done something that will yield undesirable results. Type Cast takes the raw binary representation of the data and applies it as a new data type. So, with a U32, you get 4 bytes, and only one of them is needed for the U8; I believe it takes the first byte, in this case the highest-order byte (bits 24-31).
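A hedged C model of that byte selection (the shifts emulate LabVIEW's big-endian flattened order; this is an illustration, not LabVIEW's implementation):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0x12345678u;

    /* The U32 seen as a big-endian flat byte string, the way LabVIEW
     * flattens data. */
    uint8_t bytes[4] = {
        (uint8_t)(x >> 24),   /* 0x12  bits 24-31, highest order */
        (uint8_t)(x >> 16),   /* 0x34                            */
        (uint8_t)(x >> 8),    /* 0x56                            */
        (uint8_t)(x)          /* 0x78  bits 0-7, lowest order    */
    };

    /* A U8 target consumes only the first byte of the flat string. */
    uint8_t result = bytes[0];

    printf("0x%02X\n", result);   /* prints 0x12, not 0x78 */
    return 0;
}
```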

The VI would have to look at the two inputs, determine at compile time that the byte sizes are different, and generate broken wires.

As for why the upper bytes, it probably has to do with the fact that LabVIEW uses big-endian format for its binary numbers, while most Windows programs use little-endian format. LabVIEW uses big endian because that is what the Mac OS used, and LabVIEW was created first for the Macintosh (from here).
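A quick C probe, if you want to see the host byte order for yourself (assuming a typical x86 Windows host; the contrast with LabVIEW's big-endian flattening is the point):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t x = 1;
    uint8_t first;

    /* Look at the first byte of the U32 as it sits in host memory. */
    memcpy(&first, &x, 1);

    /* On x86 the low byte comes first (little endian), while LabVIEW's
     * flattened/type-cast data is always big endian. */
    puts(first == 1 ? "host is little endian" : "host is big endian");
    return 0;
}
```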

It could also just be a decision made at some point in time. NI has shown in the past that they don't necessarily conform to ANSI. The Type Cast just takes the raw data from the beginning and processes it until it has used the bytes it needs. The numeric conversion functions know exactly what you are trying to do and can therefore handle the byte dropping properly.

If you're trying to get your enum to a U32, then you just need the To U32 conversion. If you're going from a U32 to the enum, you need the To U8 conversion and then the Type Cast to avoid coercion dots.
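In C terms, the working pattern looks roughly like this (the State enum is hypothetical; the shift line models a Type Cast straight from a U32):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical enum with U8 representation. */
typedef enum { IDLE = 0, BUSY = 1, FAULT = 2 } State;

int main(void)
{
    uint32_t raw = 2;   /* the value we actually want in the enum */

    /* Type Cast straight from the U32 (wrong): the upper byte is used,
     * so the 2 sitting in the low byte is lost.                       */
    uint8_t wrong = (uint8_t)(raw >> 24);   /* -> 0 */

    /* "To U8" first (right): match the representation, THEN type cast. */
    uint8_t narrowed = (uint8_t)raw;        /* -> 2 */
    State s = (State)narrowed;              /* -> FAULT, as intended    */

    printf("wrong: %u, right: %u\n", wrong, (unsigned)s);
    return 0;
}
```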


Message 7 of 7