LabVIEW


TCP server stream loses data

Solved!
Go to solution
Hi,

I have a high-visibility application with a bug in it, and I'd like the
sharp minds of the NI discussion forum to suggest a fix.

The app is on the International Space Station. I have a LV executable on a
laptop on the ISS that sends TCP and UDP data to a LV executable on an earth-bound
computer.

The path is not a standard or "trivial" IP route, but it is intended to be
equivalent to one.  The catch comes when some data bytes are lost en route.
When this happens, the earth-bound LV executable reports something like
"Not enough memory to complete operation."

Here's what we found...
=======================================================
The TCP Server (on the ISS) sends a TCP data stream with the following format...

...LLLLSTART$d1$d2...$dN#llllSUMccccENDLLLLSTART...

where

    * LLLL is a 32-bit binary integer giving the number of bytes in the following packet, which begins with the text string START and ends with the text string END.
    * $, #, START, SUM, and END are delimiters.
    * d1, d2, ... dN are text string data values.
    * llll is a text string with the decimal number of bytes beginning with the text string START and ending with the # symbol.
    * cccc is a text string with the decimal checksum of the bytes beginning with the text string START and ending with the # symbol.
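The framing described above can be sketched in Python (not LabVIEW) on the sending side. The exact checksum rule isn't spelled out in the post, so the byte-sum used here, and the helper name build_packet, are assumptions:

```python
import struct

def build_packet(values):
    """Build one packet in the stream format described above.
    Assumption: cccc is the decimal sum of the byte values from
    START through #; the post does not define the checksum rule."""
    # The span counted by llll and checksummed by cccc:
    # START, then $-prefixed data values, then the # terminator
    body = "START" + "".join("$" + v for v in values) + "#"
    llll = str(len(body))            # decimal byte count, as text
    cccc = str(sum(body.encode()))   # decimal checksum, as text (assumed rule)
    packet = (body + llll + "SUM" + cccc + "END").encode()
    # LLLL: 32-bit binary length of everything from START through END
    return struct.pack(">I", len(packet)) + packet
```

Big-endian (">I") is assumed for LLLL; the post only says "32-bit binary integer".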

The TCP Client (on earth) reads 4 bytes as LLLL, then reads LLLL bytes (usually about 85 bytes) as the packet beginning with the text string START and ending with the text string END.

The error comes when the LLLL data is lost: the TCP Client then reads four other bytes (probably S, T, A, and R) as LLLL, which convert to a very large number, and then tries to read LLLL bytes (maybe 50M).  That read fails because there is not enough memory to hold that much data.

The fix entails ensuring that LLLL is a reasonable size (say, 20 to 200) and that the data packet begins with START.  The difficulty is how to restructure the code to reset the packet loop when LLLL is out of bounds or when the START header is not present.
=================================
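The bounds check described above can be sketched in Python (not LabVIEW); recv is a hypothetical callback that returns exactly n bytes from the stream, and the 20-to-200 bounds come from the post:

```python
import struct

MIN_LEN, MAX_LEN = 20, 200  # sanity bounds suggested in the post

def read_packet(recv):
    """Sketch of the proposed fix: sanity-check LLLL and the START/END
    framing before committing to a large read.  On failure, raise so the
    caller can resynchronize instead of allocating LLLL bytes."""
    header = recv(4)
    (length,) = struct.unpack(">I", header)   # big-endian assumed
    if not (MIN_LEN <= length <= MAX_LEN):
        raise ValueError("LLLL out of bounds; resync needed")
    packet = recv(length)
    if not packet.startswith(b"START") or not packet.endswith(b"END"):
        raise ValueError("framing lost; resync needed")
    return packet
```

The key point is that the bounds test happens before the big read, so a garbled LLLL can never trigger the out-of-memory failure.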

My quandary is that in LV, I can't see what's going on inside the TCP Server or
TCP Client VIs.

Any suggestions will be appreciated.

JIM

Message 1 of 7

Have you used a network sniffer to check the raw TCP data? TCP should prevent any data loss "en route"...

 

Br, Mike5

Message 2 of 7
Solution
Accepted by topic author jim-henry

As Miha said, a TCP connection should not lose any data (not silently, at least), but from your description I understand that you can't really control the IP implementation.

 

Since this is a stream, what you can probably do is simply read N bytes each time (say, 200) and keep a buffer of the last N*X bytes. Then you can check the data in that buffer. If you're missing the LLLL bytes for some reason (and you don't mind losing that packet of data), you can look for the end of the message, make sure what follows it is the correct beginning of the next message, and just continue from there.
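The buffer-scanning idea above can be sketched in Python (not LabVIEW); the function name resync and the assumption that a 4-byte LLLL sits between END and the next START are illustrative:

```python
def resync(buffer):
    """Scan a rolling byte buffer for END followed by a plausible
    4-byte LLLL and then START; return the offset where the next
    packet's LLLL begins, or -1 if no sync point is found."""
    i = buffer.find(b"END")
    while i != -1:
        j = i + 3                          # first byte after END
        if buffer[j + 4 : j + 9] == b"START":
            return j                       # next LLLL starts here
        i = buffer.find(b"END", i + 1)     # keep scanning
    return -1
```

After a resync, the reader would discard everything before the returned offset and resume the normal LLLL-then-packet read cycle.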


___________________
Try to take over the world!
Message 3 of 7

Jim,

 

Without seeing how your packet loop is currently structured, it is hard to suggest changes.  I would keep all the received data in a shift register until a complete, valid packet is present.  Then extract the valid packet string and pass it to the parser.  Keep any bytes after "END" to begin the next packet.  Discard any data before the valid packet as invalid (or save it for a human user to interpret if the value is high).
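A minimal Python (not LabVIEW) sketch of this approach, with the shift register modeled as a byte buffer carried between calls; the 20-to-200 bounds, the big-endian LLLL, and the helper name extract_packets are assumptions:

```python
import struct

def extract_packets(buffer):
    """Pull every complete, valid packet out of the accumulated
    buffer; return (packets, leftover).  Leftover bytes stay in the
    buffer (shift register) for the next call."""
    packets = []
    while True:
        start = buffer.find(b"START")
        if start == -1:
            break                          # no packet yet; wait for data
        if start < 4:
            buffer = buffer[start + 5:]    # LLLL prefix lost; skip this START
            continue
        (length,) = struct.unpack(">I", buffer[start - 4 : start])
        if not (20 <= length <= 200):
            buffer = buffer[start + 5:]    # bad LLLL; discard and rescan
            continue
        end = start + length
        if len(buffer) < end:
            buffer = buffer[start - 4:]    # incomplete; keep LLLL + partial
            break
        candidate = buffer[start:end]
        if candidate.endswith(b"END"):
            packets.append(candidate)      # complete, valid packet
        buffer = buffer[end:]              # bytes after END begin next packet
    return packets, buffer
```

Each call hands back the unconsumed tail, which the caller feeds into the next iteration together with newly read bytes, just like a shift register.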

 

Lynn 

Message 4 of 7


The error comes when the LLLL data is lost: the TCP Client then reads four other bytes (probably S, T, A, and R) as LLLL, which convert to a very large number, and then tries to read LLLL bytes (maybe 50M).  That read fails because there is not enough memory to hold that much data.

Hi!

First, a question: why not put LLLL after a delimiter? (Or at least check that START follows LLLL...)

 

It would be useful to see the part of your code where you read the TCP stream...

 

p.s. What about a trip to the ISS as a prize for an "accepted solution"??

Message 5 of 7

I would look at configuring the Nagle Algorithm and/or Delayed ACK options on your remote and earth-bound stations.

 

Do LabVIEW TCP Functions Use the Nagle Algorithm?

 

TCP Performance problems caused by interaction between Nagle's Algorithm and Delayed ACK

 

From wikipedia:

 

Applications such as networked multiplayer video games expect that actions in the game are sent immediately, while the algorithm purposefully delays transmission, increasing bandwidth efficiency at the expense of latency. For this reason, applications with low-bandwidth time-sensitive transmissions typically use TCP_NODELAY to bypass the Nagle delay.
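For reference, here is what disabling Nagle looks like in Python (not LabVIEW; the linked NI article covers how LabVIEW's TCP functions relate to this option):

```python
import socket

# Disable Nagle's algorithm so small writes are sent immediately,
# trading some bandwidth efficiency for lower latency.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```

Note that TCP_NODELAY affects latency and packet coalescing only; it does not cause or prevent the data loss described in the original post.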

 

Message 6 of 7
Hi,

THX for all the help on my issue.

I learned that the incoming TCP stream is effectively equivalent to
a string: I can read a few bytes and analyze them, and none get lost.
If I don't see what I want, I read some more until I get it.

Reminds me of a deck of punched cards or digital paper tape
of my youth ;>}

At least that's what it seems like.
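The "read some more till I get it" loop can be sketched in Python (not LabVIEW); recv_some is a hypothetical callback that returns whatever bytes are currently available:

```python
def read_until(recv_some, buffer, token=b"END"):
    """Keep appending small chunks to the buffer until `token`
    appears; return (data through the token, leftover bytes)."""
    while token not in buffer:
        buffer += recv_some()          # block until more bytes arrive
    cut = buffer.index(token) + len(token)
    return buffer[:cut], buffer[cut:]  # leftover seeds the next read
```

The leftover bytes are carried into the next call, so nothing between packets is dropped even when the token straddles a chunk boundary.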

You may be interested to know that communication TO the ISS
is about 200 bits/second (yes, bits); communication FROM
the ISS is several hundred Bytes/sec.  So, up-links are quite
reliable and down-links less so.

Message 7 of 7