
TCP/IP: Read and Write U32. Sample rate up to 150 kHz.

Solved!

Hi All.

 

I have an application that collects data from a sensor at 150 kHz. This runs on a RIO target, where the samples are transferred from the FPGA target to the RT application. The RT application then needs to send the data to the host over TCP using an open protocol (meaning NOT LabVIEW-only). I must not lose data, so I thought I would use plain and simple TCP/IP.

 

The data is always a 32-bit integer (U32), so I was thinking I could drop the header that normally tells the reader how many bytes to expect (as in the Simple TCP/IP example that ships with LabVIEW), since I always know how much data is coming.

However, if the integer is e.g. “5”, the string I send doesn’t take up 32 bits, and I no longer know how many bytes to read.

So I made an example application that converts the number to its 32-bit binary representation. The downside is that the 32 bits turn into a 32-byte string, so maybe that’s a bad idea, as there is a lot of overhead. See the attached example.
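
To make the overhead concrete, here is a rough sketch in Python (Python only stands in for the LabVIEW VIs here, and the value 5 is just an example):

```python
import struct

value = 5

# Formatting the number as a string of binary digits: every one of the
# 32 bits becomes a full ASCII character on the wire.
bit_string = format(value, "032b")   # "00000000000000000000000000000101"
print(len(bit_string))               # 32 bytes -> 8x overhead

# Sending the raw 32-bit pattern instead (big-endian, as LabVIEW flattens
# data) costs exactly 4 bytes.
raw = struct.pack(">I", value)       # b'\x00\x00\x00\x05'
print(len(raw))                      # 4 bytes
```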

 

For my proof of concept, I need two communication channels.

1) One channel that streams data from RT to Host (one-way only) with U32 values at 150 kHz.

2) One channel for commands, bi-directional, with U32 commands.

 

(Side note: for the example, the communication just has to be done between two local VIs - no need to use the RT target for this test. A rough sketch of the layout follows below.)
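
A minimal sketch of that two-channel layout, assuming Python sockets stand in for LabVIEW's TCP Listen/Open primitives (the port numbers are arbitrary placeholders):

```python
import socket

HOST = "127.0.0.1"   # two local VIs, per the side note
DATA_PORT = 6340     # channel 1: one-way U32 stream, RT -> Host
CMD_PORT = 6341      # channel 2: bi-directional U32 commands

# Host side: listen on both ports; the RT side connects to each.
data_listener = socket.create_server((HOST, DATA_PORT))
cmd_listener = socket.create_server((HOST, CMD_PORT))

data_conn, _ = data_listener.accept()   # only ever read from this connection
cmd_conn, _ = cmd_listener.accept()     # read and write commands here
```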

 

Any suggestions for how to achieve this?

Best Regards

Alex E. Munkhaus
Certified LabVIEW Developer (CLD)
System Engineer
Message 1 of 19

Well, you can either use a string-based protocol or a binary one. If you type cast a U32 to a string, you will always get a 4-byte string (a string is essentially an array of U8 characters), which is your binary format. If you go for the string protocol, you can either prepend the number of bytes to the packet, terminate the packet with a CRLF, or use a fixed-length string (e.g. padded with 0s or spaces) to send your data.

 

Edit: I had a look at your example - if you want to send the binary data, use Type Cast instead of Format Into String. Type Cast just changes the 'type' of the data without modifying it.
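
In text form, the options might look like this (a Python sketch; struct.pack(">I", ...) plays the role of LabVIEW's big-endian Type Cast):

```python
import struct

value = 123

# Binary: always exactly 4 bytes, bit pattern untouched.
packet = struct.pack(">I", value)                 # b'\x00\x00\x00{'
assert struct.unpack(">I", packet)[0] == value

# The three string-protocol options mentioned above:
length_prefixed = struct.pack(">I", 3) + b"123"   # prepend the byte count
crlf_terminated = b"123\r\n"                      # terminate with CRLF
fixed_length = b"123".rjust(10, b"0")             # pad to a fixed width
```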


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 2 of 19

Thanks for your reply.

 

Hmm, my first try was to use the Type Cast function. For some reason (that I can no longer reproduce) I got different string lengths, and so went with the other solution. I must have done something wrong, because I now have it working as you describe: my U32 is type cast to a 4-byte string, which significantly reduces the overhead compared to my other solution 🙂 This solution is attached.

 

Is this the way to do it, if you want to transfer data at a rate of 150 kHz?
You mention a binary transfer is also an option - but I was under the impression that TCP always uses strings?

 

Best Regards

Alex E. Munkhaus
Certified LabVIEW Developer (CLD)
System Engineer
Message 3 of 19

Type casting a U32 to a string *is* a binary transfer. A string is just a variable-length array of bytes (and there are functions in the palettes to convert a string to a U8 array and vice versa).

 

If you were doing a string-based protocol, you would convert the number to ASCII characters and then transmit that.

 

For example, to send the number 123456 via TCP you could send it either as:

- a 4-byte U32 (by type casting), or

- an ASCII string (via Format Into String): the 6 bytes "1" (which is 0x31 in the ASCII table), "2", "3", etc.

 

When you were converting to binary in your original VI, rather than sending 32 bits (4 bytes), you were actually sending a string of 32 characters (bytes) containing the ASCII characters "0" and "1".
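
A quick sketch of the three encodings of 123456 side by side (Python, purely for illustration):

```python
import struct

n = 123456

as_binary = struct.pack(">I", n)        # 4 bytes: b'\x00\x01\xe2@' (type cast)
as_ascii = str(n).encode("ascii")       # 6 bytes: b'123456', '1' == 0x31
as_bits = format(n, "032b").encode()    # 32 bytes of ASCII '0'/'1' (original VI)

print(len(as_binary), len(as_ascii), len(as_bits))   # 4 6 32
```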


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 4 of 19

Awesome, thanks for the explanation.

 

The basic structure is now in place, after being able to transmit 1 element 🙂

Now the challenge is how to handle the TCP buffering so that I can stream data at 150 kHz.

 

I have now changed the test VIs: the server is supposed to stream data (a software-generated sine). Normally you would write/read multiple elements at a time, and I suppose the same “rules” apply to TCP communication. But how is such a structure implemented? I have attached an example to give you an idea (called “Type cast with data”).

Best Regards

Alex E. Munkhaus
Certified LabVIEW Developer (CLD)
System Engineer
Message 5 of 19

The TCP stack is *supposed* to be clever about bunching multiple writes up into larger packets to send over the network - this is called Nagle's algorithm, and you can have a read about it on Wikipedia.

 

If you have multiple data points to send, there's nothing to stop you from just concatenating the strings and sending them as a single write.

 

You may run into sync issues - if you miss one byte, you could be reading 3 bytes from one number and 1 byte from the next. In binary protocols you usually get around this problem with some sort of message framing (e.g. a start and end byte or sequence, perhaps with a checksum). In ASCII protocols you normally use a termination character (as you would with serial data).
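
A sender-side sketch of that batching idea (Python; the host/port and the 250-sample batch size are arbitrary examples):

```python
import math
import socket
import struct

BATCH = 250   # 250 U32 samples -> one 1000-byte write instead of 250 writes

sock = socket.create_connection(("127.0.0.1", 6340))

# A software-generated sine, scaled into the U32 range, as in the test VI.
samples = [int(2048 + 2047 * math.sin(2 * math.pi * i / BATCH))
           for i in range(BATCH)]

# Concatenate the whole batch and send it as a single write.
payload = struct.pack(">%dI" % BATCH, *samples)
sock.sendall(payload)
```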


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 6 of 19

So the only thing I need to worry about is my program's CPU usage. And I expect that many small writes use far more resources than one large write.

E.g. when reading data, instead of reading 4 bytes multiple times, I could read 1000 bytes and thus get 250 elements at a time. Correct?

 

But how do I split the 1000 bytes into 250 elements?

 

Regarding the message framing: I thought the whole point of TCP/IP was that you don't have to worry about such issues - that it is lossless and all packets arrive in order?

 

Best Regards

Alex E. Munkhaus
Certified LabVIEW Developer (CLD)
System Engineer
Message 7 of 19
Solution accepted by topic author A.E.M

@A.E.M wrote:

So the only thing I need to worry about is my program's CPU usage. And I expect that many small writes use far more resources than one large write.

E.g. when reading data, instead of reading 4 bytes multiple times, I could read 1000 bytes and thus get 250 elements at a time. Correct?

 

 


Yes


@A.E.M wrote:

 

But how do I split the 1000 bytes into 250 elements?

 


Use the string functions in a for loop to get 4 characters at a time from the string and type cast each back to a U32.


Regarding the message framing: I thought the whole point of TCP/IP was that you don't have to worry about such issues - that it is lossless and all packets arrive in order?


Yes, that's true. Normally I read all of the bytes that are available, and when I do that I need to make sure that the first byte I read is the start of my 'data' to keep things in sync. If you're reading 4 bytes at a time you should be OK.
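
In text form, that read-then-split might look like this (a Python sketch; recv_exact is a hypothetical helper, since LabVIEW's TCP Read with a fixed byte count already blocks until the requested bytes arrive):

```python
import struct

def recv_exact(sock, nbytes):
    """Block until exactly nbytes have arrived (what TCP Read does in LabVIEW)."""
    buf = b""
    while len(buf) < nbytes:
        chunk = sock.recv(nbytes - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def read_batch(sock, count=250):
    data = recv_exact(sock, 4 * count)   # one 1000-byte read
    # The for-loop split: take 4 bytes at a time and cast each back to a U32.
    return [struct.unpack(">I", data[i:i + 4])[0]
            for i in range(0, len(data), 4)]
```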


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 8 of 19

Ahh, I get your point about the sync issue now.

 

I'll go ahead and try to implement a small test app.

Thanks for all your help!

 

Best Regards

Alex E. Munkhaus
Certified LabVIEW Developer (CLD)
System Engineer
Message 9 of 19

@Sam_Sharp wrote:

@A.E.M wrote:

But how do I split the 1000 bytes into 250 elements?


Use the string functions in a for loop to get 4 characters at a time from the string and type cast each back to a U32.


That seems like the long way to solve the problem. Use Unflatten From String or Type Cast to convert the string to an array of U32s; no loop necessary.
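
In a Python sketch, that one-shot conversion would be (struct.unpack with a repeat count standing in for Unflatten From String / Type Cast to a U32 array):

```python
import struct

data = bytes(1000)                      # e.g. one 1000-byte TCP read
values = struct.unpack(">250I", data)   # all 250 U32s at once - no loop
assert len(values) == 250
```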

Message 10 of 19