Overview
Sometimes you may wish to interface with instruments or devices over TCP without using the VISA library. There are many reasons why this might be the case, but I will not go into that discussion here.
In such cases, it is common not to know how many bytes an incoming message will consist of, and most of the time you have no control over the transmitting side, which may or may not implement CRLF (aka \r\n) as the termination character(s).
This is not groundbreaking or special in any way, and chances are you are already doing something similar or the same. However, this seems to be one of those things that everyone “learned from someone else” rather than it being the first solution people come up with intuitively, so I figured putting this up here would help expose more people to it.
Your mileage may vary; use, abuse, and modify as you wish!
Description
I have a lot of text here, but feel free to skip to the code and refer back to this document for detailed reasons and explanations, if you are interested in the murky, labyrinthine ways that seem to be how my brain works.
Below is a LabVIEW 2011 snippet of the “essentials”, just to break the wall of text and keep your attention! (I squished the code and slightly altered labels and label positions to get the width under 800 pixels so the upload tool would not re-size the PNG snippet.)
The rest of this document gives a simple example that can be used as a starting point for creating "efficient" ways to read incoming TCP messages.
In this case, I'm using "efficient" to mean "without using excessive time-outs and/or polling/looping". The trade-off is that some care must be taken downstream to handle cases of "no message received" or "message may not be valid/complete".
One of the governing assumptions here is that the sending device may need some undetermined time to process a request (during which we wish to sleep!), but once the request is processed, it can and will quickly complete its response message as a burst/stream over TCP/IP.
One might think that a straightforward way to do a query would be to send the request and then immediately start a TCP Read to get the reply. This works great if you know how long the reply is, and as long as a reply is indeed forthcoming. It may not perform satisfactorily if the device "disappears" and never replies (a dropped connection), or if it replies with an unexpected (shorter) message length.
For example, if your query results in the device responding with an error code instead of the data you expected, the TCP Read will have to wait for the time-out to occur before returning the (presumably shorter) error message. Clearly, many applications require the program to respond quickly to errors or to sudden loss of the remote device. (If the network cable is unplugged, for example, the TCP connection is still considered valid, so the TCP Read would not error out with a session error. This situation is often called a half-open or dropped connection.)
A common approach to at least partially deal with these issues is to issue a TCP Read for a single byte, using an acceptable time-out that trades off error-recovery/response time against the risk that the remote device just needed some extra time to respond. If you know that your device typically responds "quickly" to all commands except for a few specific ones, this initial time-out value should be tailored to the requested query type.
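To make the idea concrete in a text-based language, here is a minimal Python sketch of this "read one byte first" step. Python's standard socket module stands in for LabVIEW's TCP functions here, and the 2-second time-out is just a placeholder to be tailored per query type, as described above:

import socket

def read_first_byte(sock: socket.socket, initial_timeout_s: float = 2.0) -> bytes:
    """Wait up to initial_timeout_s for the first byte of a reply.

    The generous initial time-out absorbs the device's unknown processing
    time; if it expires, we report "no message received" instead of
    blocking for the full length of an expected reply.
    """
    sock.settimeout(initial_timeout_s)
    try:
        first = sock.recv(1)      # block until the first byte arrives (or time-out)
    except socket.timeout:
        return b""                # caller handles the "no message received" case
    if first == b"":
        raise ConnectionError("remote side closed the connection")
    return first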
The attached code (LabVIEW 2011) combines relatively quick and responsive query and read operations (on the scale of the instrument I happen to communicate with) with some degree of scalability and fault checking built in. Additional knowledge of your transmitting device may let you optimize further.
The various controls and constants used for time-outs, "number of bytes" to read, etc. should be tweaked depending on your application requirements, the expected behavior of the device you are connecting to, and the network your communication is travelling across.
Let us start by briefly considering the network. I tend to take a fairly “conservative” approach:
A 10 Mbit/second link speed will (almost) never result in 10 Mbit/second end-to-end performance. I usually de-rate by at least 50% to accommodate other traffic and protocol overhead. Next, to make the number more usable in this context, I typically convert from bits/second to bytes/ms, in this case yielding an estimated 625 bytes/ms. Keep this number in the back of your mind when you read the following paragraph on balancing the behavior after the “read first byte” step.
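As a quick sanity check of that arithmetic (the link speed and de-rating factor are just the assumptions from above):

link_bits_per_s = 10_000_000   # nominal 10 Mbit/s link
derate = 0.5                   # assume only 50% is usable end-to-end
bytes_per_ms = link_bits_per_s * derate / 8 / 1000
print(bytes_per_ms)            # -> 625.0 bytes/ms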
Balancing the “Read until time-out error” loop
Assuming the TCP Read successfully reads the first byte (no time-out or other errors), I chose to allow some extra time for the rest of the message to arrive before attempting the next read. I have a hard time justifying this, since we know we will need at least one iteration of the while loop regardless, but I feel that this up-front wait gives me some extra cushion for ping, follow-up network packets (not to be confused with the more abstract “message” that we are interested in), and so on, especially since I use the “Immediate” mode… 😕
Once we enter the while loop, our goal is to get all the data in as few iterations as possible and with a minimum of wait time. To achieve this, we use the “Immediate” mode of the TCP Read function. At first glance it appears to be pretty much the same as the default ‘standard’ mode, but the difference is that if there is ANY data in the TCP in-buffer at all, the function will return it immediately, without waiting for more bytes and without waiting for the time-out to occur; it will only return a time-out error if NO bytes were received at all.
Thus, with an up-front wait outside the while loop, we would nominally expect the first iteration to immediately read and return the rest of the message, followed by a second iteration that sees no bytes and thus times out, marking the completion of the message. This allows us to reduce the time-out inside the while loop dramatically, under the assumption that the transmitting device sends its message as one more or less contiguous stream.
Obviously, running the loop at a high rate over many iterations is undesirable from a CPU load perspective. This can be somewhat countered by increasing the bytes-to-read and/or the time-out of the immediate read. At any rate, the bytes-to-read should be larger than the size of most/all messages, so that in nominal cases the while loop runs exactly twice.
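The attached code is LabVIEW, but the same balancing act can be sketched in Python, where socket.recv() with a time-out naturally behaves like the “Immediate” mode: it returns whatever bytes are already available, and only raises a time-out when nothing arrived at all. The wait, time-out, and chunk-size values below are placeholders to be tuned as discussed:

import socket
import time

def read_message(sock: socket.socket,
                 pre_wait_s: float = 0.005,      # up-front cushion after the first byte
                 loop_timeout_s: float = 0.020,  # short per-iteration time-out
                 chunk_size: int = 4096) -> bytes:
    """Accumulate a burst reply until one iteration sees no bytes at all."""
    time.sleep(pre_wait_s)       # let the rest of the burst land in the in-buffer
    sock.settimeout(loop_timeout_s)
    message = bytearray()
    while True:
        try:
            chunk = sock.recv(chunk_size)   # returns immediately if ANY bytes are waiting
        except socket.timeout:
            break                           # zero bytes this iteration: message complete
        if chunk == b"":
            break                           # peer closed the connection
        message.extend(chunk)
    return bytes(message)

In the nominal case, the first iteration returns the remainder of the message and the second times out immediately, mirroring the two-iteration behavior described above.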
When tweaking the pre-loop wait and the loop time-out, keep in mind the typical message lengths you are expecting vs. the bytes/ms rate of your connection to at least have a cursory check on your expectations.
Using the estimated bytes/ms for my network along with what I typically expect to see from the remote device, I may decide that most messages will be 5 ms × 625 bytes/ms = 3125 bytes or shorter. Given that the message quite possibly transferred at a better effective link speed than my conservative estimate anyway, I would expect this to catch most messages in only two iterations of the while loop, with premature time-outs occurring only rarely.
For really large streams of data, you probably need the loop time-out to be on the same order as the network ping time to reduce the chance of timing out prematurely… but for such a use case, building the string on a shift register in a while loop quickly becomes very inefficient as well, and you should probably take a different approach altogether!
If termination character(s) are expected, I check for their presence after the while loop exits. This was a design decision to avoid scanning every single byte received while inside the while loop. The code flags a warning if the last byte(s) do not match the expected termination character(s). This gives some flexibility downstream of the query/read on whether to process the potentially partial message, attempt to read further bytes, or ignore the warning altogether.
If the termination character is an empty string, the warning will always be false.
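In Python terms, that post-loop check might look like the sketch below (the b"\r\n" default is just an example terminator):

def terminator_warning(message: bytes, terminator: bytes = b"\r\n") -> bool:
    """Return True if a terminator is expected but the message does not end
    with it. An empty terminator disables the check (warning always False)."""
    if terminator == b"":
        return False
    return not message.endswith(terminator)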
Requirements
Software
LabVIEW Base Package, 2011 or later.
Example code from the Example Code Exchange in the NI Community is licensed with the MIT license.
This document came to be after I initially posted my thoughts and questions at this thread: