04-23-2021 02:54 AM - edited 04-23-2021 03:11 AM
Hi, I have a simple, basic question.
When using LabVIEW's TCP functions, what endianness and bit numbering do they use?
I never had to worry about it, since I was communicating from LV to LV, but now I have to communicate with a customer and he asks what I use.
I was surprised that I wasn't able to find a sufficient answer with Google...
04-23-2021 03:08 AM
Hi Bow,
@LabviewBow wrote:
When using LabVIEW's TCP functions, what endianness and bit numbering do they use?
I never had to worry about it, since I was communicating from LV to LV, but now I have to receive messages from a customer and he asks what I use.
How is TCP involved in answering that question?
There is a sender (your customer sending messages) and a receiver (you). You receive a message defined by the customer. You need to ask the customer about the message data formatting, not the TCP provider…
04-23-2021 03:27 AM
Hmm, I guess he wants me to define it, so I have to tell him what I want to use.
I have a byte array, use Flatten to String (with big-endian) and send it via TCP.
So it's safe to say that I use big-endian.
But what bit numbering does LabVIEW use at this level?
04-23-2021 04:27 AM
Hi Bow,
@LabviewBow wrote:
I have a byte array, use Flatten to String (with big-endian) and send it via TCP.
So it's safe to say that I use big-endian.
When you have a byte (aka U8) array, all you need is U8ArrayToString. There is no need for TypeCast or FlattenToString…
@LabviewBow wrote: But what bit numbering does LabVIEW use at this level?
This is irrelevant: the smallest entity you can send via TCP is a byte!
In general, within a byte the lowest bit (LSB) is considered bit 0 and the highest bit (MSB) is bit 7. The value encoded by each bit is 2^(bit number)…
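For example, here is a tiny sketch in plain Python (not LabVIEW, just basic binary arithmetic) showing this numbering: a byte with bits 0, 3 and 5 set has the value 2^0 + 2^3 + 2^5 = 41.

    # Bit 0 is the LSB, bit 7 the MSB; each set bit contributes 2**bit_number to the value.
    value = (1 << 0) | (1 << 3) | (1 << 5)
    print(value)                  # 41
    print(format(value, '08b'))   # '00101001' -> bit 7 printed first, bit 0 last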
04-23-2021 05:38 AM - edited 04-23-2021 05:46 AM
LabVIEW's TCP/IP nodes are byte oriented (yes, they use strings as data input and output, because in LabVIEW a byte array and a string used to be synonymous; while that should have been changed years ago, it wasn't, out of fear of breaking backwards compatibility).
As such, the TCP/IP (and UDP) nodes are totally endianness unaware and don't care about it at all. The endianness is defined when you convert binary data to the LabVIEW bytestream string or vice versa.
So the important thing is how you do that. Three functions are important in that respect:
1) Typecast: always uses big-endian format on the binary stream side and native endianness on the binary side. However, for your byte array there is no endianness at play, since a byte is the same size as a character element in the LabVIEW bytestream string, so there is nothing to swap. You could use the Typecast function to convert the byte array to the necessary LabVIEW string, but I prefer the explicit Byte Array To String node, which is a no-op at runtime and simply changes the wire datatype.
2) Flatten to String: since LabVIEW 8.0 it has a selector that lets you define which endianness it should use. The default is big-endian, but you can select whatever your remote side requires (see the small sketch after this list).
3) Unflatten from String: The same as under 2) applies here.
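To make the byte ordering concrete, here is a minimal sketch in plain Python (not LabVIEW; the 0x12345678 value is just a made-up example) showing what a big-endian versus a little-endian flatten of the same U32 produces on the wire:

    import struct

    value = 0x12345678                  # example 32-bit (U32) value
    big = struct.pack('>I', value)      # big-endian, like the default of Flatten To String
    little = struct.pack('<I', value)   # little-endian, the other selector option
    print(big.hex())                    # 12345678 -> most significant byte first
    print(little.hex())                 # 78563412 -> least significant byte first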
Bit order is defined by the TCP/IP standards and cannot be changed in any way. The network card (and its driver) is supposed to do the translation from the serial bit stream on the wire to whatever the native bit order is on a machine.