
LabVIEW TCP Bit Numbering and Endianness

Solved!

Hi, I have a simple, basic question.

 

When using LabVIEW's TCP, what endianness and bit numbering does it use?

I never had to worry about it, since I was communicating from LV to LV, but now I have to communicate with a customer and he asks what I use.

 

I was surprised that I wasn't able to find a sufficient answer with Google…

Message 1 of 5

Hi Bow,

 


@LabviewBow wrote:

When using LabVIEW's TCP, what endianness and bit numbering does it use?

I never had to worry about it, since I was communicating from LV to LV, but now I have to receive messages from a customer and he asks what I use.


How is TCP involved in answering that question?

There is a sender (your customer sending messages) and a receiver (you). You receive a message defined by the customer. You need to ask the customer about the message data formatting, not the TCP provider…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 5

Hmm, I guess he wants me to define it. So I have to tell him what I want to use.

 

I have a byte array, use Flatten to String (with big-endian), and send it over TCP.

So it's safe to say that I use big-endian.

But what bit numbering does LabVIEW use at this level?

Message 3 of 5
Solution
Accepted by topic author LabviewBow

Hi Bow,

 


@LabviewBow wrote:

I have a byte array, use Flatten to String (with big-endian), and send it over TCP.

So it's safe to say that I use big-endian.


When you have a byte (aka U8) array, all you need is Byte Array To String. There's no need for Type Cast or Flatten To String…
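As a rough text-based sketch of the same idea (Python here, only as an analogy for the graphical LabVIEW code), sending a U8 array involves no endianness choice at all; the bytes leave in array order, one per element. The host and port below are hypothetical placeholders:

    import socket

    HOST, PORT = "192.168.1.10", 6340  # hypothetical peer, for illustration only

    # A byte (U8) array: each element is exactly one byte, so there is no
    # byte order to choose -- the bytes go out in array order.
    payload = bytes([0x01, 0x02, 0x03, 0xFF])

    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(payload)  # comparable in spirit to Byte Array To String -> TCP Write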

 


@LabviewBow wrote:

But what bit numbering does LabVIEW use at this level?


This is irrelevant: the smallest entity you can send via TCP is a byte!

 

In general, in a byte the lowest bit (LSB) is considered bit 0 and the highest bit (MSB) is bit 7. The value encoded by each bit is 2^(bit number)…
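A quick sketch of that numbering (Python, just to make it concrete):

    value = 0b10010110  # 0x96, an arbitrary example byte

    # Bit 0 is the least significant bit, bit 7 the most significant.
    for bit in range(8):
        print(f"bit {bit} = {(value >> bit) & 1}  (weight 2^{bit} = {1 << bit})")

    # The byte's value is the sum of bit * 2^(bit number):
    assert value == sum(((value >> bit) & 1) << bit for bit in range(8))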

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 4 of 5
Solution
Accepted by topic author LabviewBow

LabVIEW TCP/IP nodes are byte oriented. (Yes, they use strings as data input and output, because in LabVIEW a byte array and a string used to be synonymous; while that should have been changed years ago, it wasn't, out of fear of backwards incompatibilities.)

 

As such, the TCP/IP (and UDP) nodes are totally endianness-unaware and don't care about that at all. The endianness is defined when you convert binary data to the LabVIEW bytestream string, or vice versa.

 

So the important thing is how you do that conversion. Three functions matter in that respect.

 

1) Type Cast: it always uses big-endian format on the byte stream side and the native endianness on the LabVIEW data side. However, for your byte array there is no endianness at play, since a byte is the same size as a string character element in the LabVIEW bytestream string, so there is nothing to swap. You could use the Type Cast function to convert the byte array to the necessary LabVIEW string, but I prefer the explicit Byte Array to String node, which is a no-op at runtime and simply changes the wire datatype.

 

2) Flatten to String: since LabVIEW 8.0 it has a byte-order selector that lets you define which endianness it should use. The default is big-endian, but you can select whatever your remote side requires (see the sketch after this list).

 

3) Unflatten from String: The same as under 2) applies here.
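For points 2) and 3), here is a minimal text-based sketch of the same idea using Python's struct module (an analogy only, since LabVIEW is graphical): ">" selects big-endian, "<" little-endian, and the receiver has to unflatten with the matching byte order:

    import struct

    value = 0x12345678  # a U32 to flatten

    big    = struct.pack(">I", value)  # b'\x12\x34\x56\x78' -- big-endian, the Flatten default
    little = struct.pack("<I", value)  # b'\x78\x56\x34\x12' -- little-endian

    # The receiver must unflatten with the same byte order to get the value back.
    assert struct.unpack(">I", big)[0] == value
    assert struct.unpack("<I", little)[0] == value

    # Unflattening with the wrong order silently yields a different number:
    assert struct.unpack("<I", big)[0] == 0x78563412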

 

Bit order on the wire is defined by the lower network layers and cannot be changed in any way. The network hardware and its drivers take care of translating between the serial bit stream and whatever the native bit order is on a machine.

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 5 of 5