01-01-2026 11:41 AM
On a Windows PC, the byte order is Little Endian.
When I run "vi.lib\Palette API\ResMgr\private\Is Big Endian.vi", it indicates that the system is Little Endian.
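For what it's worth, a quick check outside LabVIEW says the same thing (a minimal Python sketch, nothing LabVIEW-specific):

import sys
import struct

# The interpreter reports the host CPU's native byte order directly.
print(sys.byteorder)  # 'little' on a typical Windows/Intel PC

# Manual equivalent: pack 1 as a native-order 16-bit integer and
# check which byte ends up first.
native = struct.pack('=H', 1)
print('little' if native[0] == 1 else 'big')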
However, when I run this:
I get this:
Which looks like Big Endian to me.
What am I missing?
01-01-2026 12:20 PM
LabVIEW's flattened data is always big endian, while the system itself is little endian in this case.
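In other words (a rough Python analogy, not actual LabVIEW code): the flattened byte stream has a fixed byte order no matter what the host CPU uses natively.

import struct, sys

value = 0x12345678

# LabVIEW's default flattening behaves like an explicit big endian
# pack: the byte stream is the same on every platform.
print(struct.pack('>I', value).hex())  # '12345678' everywhere

# The host itself can still be little endian at the same time.
print(sys.byteorder)                   # 'little' on a Windows PC
print(struct.pack('=I', value).hex())  # '78563412' on such a host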
01-01-2026 03:07 PM - edited 01-01-2026 03:10 PM
To extend on Altenbach's answer: LabVIEW's default endianness is Big Endian. This goes back to the original Macintosh, whose 68000 CPU was a Big Endian machine; LabVIEW had to keep that as its binary stream byte order so existing routines stayed backwards compatible. Neither endianness has any inherent advantage over the other; what matters is that both sides use the same one. The so-called network byte order used by many internet protocols is also Big Endian. That the entire world eventually standardized on Intel x86 CPUs, which are Little Endian, is no justification for changing the standards after the fact. ARM CPUs support both byte orders, as did the PowerPC. The PowerPC was often run in Big Endian mode, which made the Macintosh's transition from the 68000 to the PowerPC a little easier. Conversely, most ARM CPUs nowadays run in Little Endian mode, since that makes porting software from Intel to ARM a little easier.
The Typecast function always uses the Big Endian standard when converting data to a byte stream or vice versa. The Flatten To String and Unflatten From String functions have a format selector with which you can choose the endianness the function uses. The default when the selector is left unwired is also Big Endian, as the functions' documentation states.
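The three selector choices map directly onto the byte-order prefixes of, for example, Python's struct module (just an analogy to make the options concrete, not LabVIEW API):

import struct

data = b'\x01\x00\x00\x00'  # four bytes off the wire

# big endian, LabVIEW's default when the selector is unwired:
print(struct.unpack('>I', data)[0])  # 16777216
# native order, whatever the host CPU uses:
print(struct.unpack('=I', data)[0])  # 1 on a little endian host
# little endian:
print(struct.unpack('<I', data)[0])  # 1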
01-02-2026 09:13 AM
So I made this
01-02-2026 09:38 AM - edited 01-02-2026 09:39 AM
01-02-2026 10:38 AM
@paul_a_cardinale wrote:
So I made this
It's definitely not doing exactly what you think it does. This converts binary data from native endianness to Big Endian, and then forcibly reverses the byte order again. Assuming your input data is in a known Little Endian format, it will work as you intended on an Intel machine, but it would fail on a Big Endian machine, of which admittedly none has been supported by any LabVIEW version since 2019. But it forces two byte-order reversals on Intel machines (and one on Big Endian machines, where the Typecast is a no-op as far as byte order is concerned). LabVIEW Real-Time for PowerPC (VxWorks) targets was the last Big Endian hardware.
But what do you have against the Unflatten From String function? It does exactly what your vim is doing, except in a more consistent way, and more efficiently depending on the platform (no byte swapping at all on Little Endian machines if the intended format is specified as Little Endian).
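To spell out why it appears to work (in Python terms, as an analogy; the real thing is of course a diagram): for a single scalar, reversing the byte stream and then doing a Big Endian read is equivalent to a Little Endian read, which is what Unflatten From String gives you directly.

import struct

data = b'\x78\x56\x34\x12'  # little endian encoding of 0x12345678

# What the vim effectively does on a little endian host:
# reverse the bytes, then let the big endian read undo it.
print(hex(struct.unpack('>I', data[::-1])[0]))  # 0x12345678

# What Unflatten From String with little endian selected does:
print(hex(struct.unpack('<I', data)[0]))        # 0x12345678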
01-02-2026 11:41 AM
@rolfk wrote:
@paul_a_cardinale wrote:
So I made this
It's definitely not doing exactly what you think it does. [...] it forces two byte-order reversals on Intel machines (and one on Big Endian machines, where the Typecast is a no-op as far as byte order is concerned). [...]
That's not consistent with what you wrote earlier: "The Typecast function always uses the Big Endian standard".
01-02-2026 11:54 AM - edited 01-02-2026 11:57 AM
@paul_a_cardinale wrote:
That's not consistent with what you wrote earlier: "The Typecast function always uses the Big Endian standard".
Of course it is. It means that the Typecast function ALWAYS converts to or from Big Endian on the byte-stream side. That is why you have to do the Reverse String voodoo before the Typecast: to undo the reversal that the Typecast function will perform. And that reversal goes really wrong if you try to convert anything but a single scalar number. Try it on an array of floats, for instance: your array elements get reversed too!
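The array case makes the failure concrete (again as a Python analogy): reversing the whole byte stream reverses the element order too, not just the bytes within each element.

import struct

# Two little endian 32-bit floats: 1.0 followed by 2.0.
data = struct.pack('<2f', 1.0, 2.0)

# Correct little endian read: element order is preserved.
print(struct.unpack('<2f', data))        # (1.0, 2.0)

# Reverse-the-stream trick: each float's bytes are fixed up,
# but the elements come out swapped.
print(struct.unpack('>2f', data[::-1]))  # (2.0, 1.0)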
Rather than trying to outsmart endianness conversion, which is a lot more complicated than you think, just use the right function to begin with. Unflatten From String already does exactly what you want, more clearly, and with all the options you could ever want: treating the byte-stream input as Big Endian (the default), native (whatever the current hardware uses), or Little Endian. No second-guessing about what hardware the code runs on, and no wondering whether you need a byte reversal to undo the byte reversal of the Typecast function. (That byte reversal would have to live in a Conditional Disable structure to be correct on all possible platforms, although, as explained before, this only really matters for LabVIEW 2019 and earlier, since every platform still supported after that is a Little Endian machine.)
So why insist on a less-than-perfect home-made solution when the built-in function does the same thing in a simpler and more efficient way, and doesn't make you rack your brain nearly as much over the complications of multiplatform development?