LabVIEW


How to get the number of bytes at the ethernet port using TCP/IP?

I agree with rolfk that a protocol requiring such a hack is poorly designed; however, I disagree that the presence of VISA Bytes at Serial Port is a big error. IMHO there are instances where you might want to know if, and how many, bytes are waiting at the port without actually reading those bytes. This is where the VISA Bytes at Serial Port property is useful, and likewise a similar property for the ethernet port would also be useful.

 

In the absence of such a property node for TCP and UDP connections, the method described by Philippe_RSA works nicely; it certainly got me out of a pickle that would otherwise have required a major re-juggle of the code in order to convert an application from reading an RS232 serial port data stream to an ethernet UDP data stream. Note that in this instance the protocol was properly designed and included a fixed-length header which contained message-length info.

 

For anyone new to using the Call Library Function Node (as I was when I stumbled across this thread), I've attached my coded version of Philippe_RSA's solution, which hopefully will save you a bit of time and head scratching. Note my version gets the bytes waiting at the ethernet port for a UDP connection; it could easily be modified to work with TCP connections instead.
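For readers who want to see the underlying call in text form, here is a minimal C sketch of what the Call Library Function Node in the attachment wraps. This is my own reconstruction from Philippe_RSA's description (not the attached VI itself): ioctlsocket with the FIONREAD command on the raw socket handle returned by UDP Get Raw Net Object.vi.

```c
#include <winsock2.h>   /* ioctlsocket, FIONREAD (0x4004667F on Windows) */

/* Return the number of bytes currently waiting on the socket, or -1 if
   the call fails (call WSAGetLastError for details). 's' is the raw
   socket handle from UDP Get Raw Net Object.vi / TCP Get Raw Net Object.vi. */
long bytes_at_port(SOCKET s)
{
    u_long pending = 0;
    if (ioctlsocket(s, FIONREAD, &pending) != 0)
        return -1;
    return (long)pending;
}
```

In the VI this is simply one Call Library Function Node configured against ws2_32.dll with the prototype Philippe_RSA gave.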

 

UDP bytes at port

 

Hope this is helpful for someone.

 

Message 11 of 19

How to use this VI in TCP/IP Communication?

Can you please attach the VI for TCP also.

 

Regards,

S Nagaraju

Message 12 of 19

@Sonti_11532 wrote:

How to use this VI in TCP/IP Communication?

Can you please attach the VI for TCP also.

 

Regards,

S Nagaraju


It should be as simple as replacing UDP Get Raw Net Object.vi with 

vi.lib\Utility\tcp.llb\TCP Get Raw Net Object.vi

 

I'm not at my computer at the moment, so I can't check, but I think this should work.

Message 13 of 19

Thank you very much. It's working fine.

Message 14 of 19

If you are receiving a web page, which returns variable length strings, what do you do?

Or you are working with a data source written a long time ago, and you cannot add further information to it. 

It would still be nice to know how many bytes are available.

 

Message 15 of 19
wrote:

If you are receiving a web page, which returns variable length strings, what do you do?

The people who designed HTTP were smarter than to make you rely on a bytes-at-TCP-socket function.

 

Let's look at an HTTP message:

HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
Content-Length: 88
Content-Type: text/html
Connection: Closed

<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>

Basically the header consists of a number of <cr><lf>-terminated lines that contain header fields (properties) of the message. The header is terminated by an empty line, and immediately after follows the message body, if the message type allows one.

 

The length of that body is determined either by the Transfer-Encoding or the Content-Length header field. The HTTP/1.1 standard has the following to say about this:

 

4.4 Message Length
The transfer-length of a message is the length of the message-body as it appears in the message; that is, after any transfer-codings have been applied. When a message-body is included with a message, the transfer-length of that body is determined by one of the following (in order of precedence):

1. Any response message which "MUST NOT" include a message-body (such as the 1xx, 204, and 304 responses and any response to a HEAD request) is always terminated by the first empty line after the header fields, regardless of the entity-header fields present in the message.

2. If a Transfer-Encoding header field (section 14.41) is present and has any value other than "identity", then the transfer-length is defined by use of the "chunked" transfer-coding (section 3.6), unless the message is terminated by closing the connection.

3. If a Content-Length header field (section 14.13) is present, its decimal value in OCTETs represents both the entity-length and the transfer-length. The Content-Length header field MUST NOT be sent if these two lengths are different (i.e., if a Transfer-Encoding header field is present). If a message is received with both a Transfer-Encoding header field and a Content-Length header field, the latter MUST be ignored.

4. If the message uses the media type "multipart/byteranges", and the transfer-length is not otherwise specified, then this self-delimiting media type defines the transfer-length. This media type MUST NOT be used unless the sender knows that the recipient can parse it; the presence in a request of a Range header with multiple byte-range specifiers from a 1.1 client implies that the client can parse multipart/byteranges responses. (A range header might be forwarded by a 1.0 proxy that does not understand multipart/byteranges; in this case the server MUST delimit the message using methods defined in items 1, 3 or 5 of this section.)

5. By the server closing the connection. (Closing the connection cannot be used to indicate the end of a request body, since that would leave no possibility for the server to send back a response.)

For compatibility with HTTP/1.0 applications, HTTP/1.1 requests containing a message-body MUST include a valid Content-Length header field unless the server is known to be HTTP/1.1 compliant. If a request contains a message-body and a Content-Length is not given, the server SHOULD respond with 400 (bad request) if it cannot determine the length of the message, or with 411 (length required) if it wishes to insist on receiving a valid Content-Length.

All HTTP/1.1 applications that receive entities MUST accept the "chunked" transfer-coding (section 3.6), thus allowing this mechanism to be used for messages when the message length cannot be determined in advance.

Messages MUST NOT include both a Content-Length header field and a non-identity transfer-coding. If the message does include a non-identity transfer-coding, the Content-Length MUST be ignored.

When a Content-Length is given in a message where a message-body is allowed, its field value MUST exactly match the number of OCTETs in the message-body. HTTP/1.1 user agents MUST notify the user when an invalid length is received and detected.

Item 4) only applies if you were bold enough to indicate to the server in your request that you do understand "multipart/byteranges" decoding for specific content, which basically means that you understand the actual byte-stream format itself, such as streaming movie content. If you don't want to parse that on the fly, don't send that indication to the server.

 

Basically you have three cases: a Content-Length header field that tells you exactly how many bytes to read for the message body; the server terminating the connection to indicate the end of the body; or, as the third and "most" complicated case, a Transfer-Encoding header field indicating that the message is sent in chunks, with each chunk carrying its own header that states the chunk size.

 

Header reading is easily done with the TCP Read function in CRLF mode, reading line by line until you get an empty line.
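For illustration, here is a rough C sketch of that pattern over a plain Winsock socket (LabVIEW's TCP Read in CRLF mode does the line reading for you). The helper names and the byte-at-a-time line read are my own simplifications, not anything from the thread, and only the Content-Length framing is handled:

```c
#include <winsock2.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read one CRLF-terminated line, byte by byte (slow but short).
   Returns the line length without the CRLF; 0 means the empty line
   that terminates the HTTP header. */
static int read_line(SOCKET s, char *buf, int max)
{
    int len = 0;
    char c, prev = 0;
    while (len < max - 1 && recv(s, &c, 1, 0) == 1) {
        if (prev == '\r' && c == '\n') { len--; break; }   /* drop the stored CR */
        buf[len++] = c;
        prev = c;
    }
    buf[len] = '\0';
    return len;
}

/* Read exactly 'count' bytes; TCP may deliver them in several pieces. */
static int read_exact(SOCKET s, char *buf, long count)
{
    long got = 0;
    while (got < count) {
        int n = recv(s, buf + got, (int)(count - got), 0);
        if (n <= 0) return -1;
        got += n;
    }
    return 0;
}

/* Read header lines until the empty line, then read Content-Length bytes.
   Returns a malloc'ed body, or NULL for the framings not handled here
   (chunked transfer-coding, connection close). */
static char *read_http_body(SOCKET s, long *body_len)
{
    char line[1024];
    long content_length = -1;

    while (read_line(s, line, sizeof line) > 0)
        /* _strnicmp: MSVC case-insensitive compare; header names are case-insensitive */
        if (_strnicmp(line, "Content-Length:", 15) == 0)
            content_length = atol(line + 15);

    if (content_length < 0)
        return NULL;

    char *body = malloc(content_length + 1);
    if (!body || read_exact(s, body, content_length) != 0) {
        free(body);
        return NULL;
    }
    body[content_length] = '\0';
    *body_len = content_length;
    return body;
}
```

For a chunked response you would instead loop: read a line containing the hexadecimal chunk size, read that many bytes plus the trailing CRLF, and stop when you reach a chunk size of 0.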

 

Nowhere do you need to read the number of bytes at the incoming wire to guess when the message might be finished, which it might not be even if you have waited 100 seconds, as part of the message may still be in transit somewhere on a lazy (proxy) server.

Yes, that is an extreme example, but the principle simply is that you cannot rely on the TCP socket (or serial port) having received all the data at the moment you read the bytes-at-socket/serial-port value. You may read it just fractions of a second before the last byte has been put in the buffer and then rely on that wrong number of bytes to receive the whole message; you either live with that flaw or start to do elaborate message parsing to try to determine that you haven't received everything and do another read for the remainder. And if you leave data in the buffer, you will run into problems when trying to parse the answer to the next request you send over the wire, as there is still some "garbage" in front of the new response.

 

Basically, using "bytes at port" for anything other than a teletype-style terminal that simply presents the data to a user as a continuous stream is always the wrong solution.

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 16 of 19

Thanks very much Rolf - excellent answer!

regards, Bart

Message 17 of 19

@rolfk wrote:

@Philippe_RSA wrote:

So many responses saying your question is wrong... typical of this site, and no decent answer after 5 years!

The answer I have used is to use a call library function:

 

short int ioctlsocket(unsigned long socket, unsigned long fionread, unsigned long *len);    

where fionread is a Windows-defined constant = 0x4004667F

 

The socket can be obtained by using TCP Get Raw Net Object.vi, which comes with LabVIEW (even as far back as version 7).

 

Good luck.

 


A protocol requiring such a hack is IMHO very poorly designed. You should always have some way on the wire to determine the data stream size. If the data is fixed size, that would be inherent to the protocol; if it's variable sized, there should be a fixed-size header or a known message-termination indication that can be used to determine how to read the rest of the message.

 

As a side note, I do consider the existence of VISA Bytes at Serial Port a big error, and that is most likely where this question originally came from. Use of "bytes at port" to decode a protocol will ALWAYS lead to protocol errors sooner or later, and to code that is unnecessarily complicated by forcing the routine to deal with the asynchronous reading of the "bytes at port" in the protocol decoding.

 

If a protocol can't be decoded with fixed-size reads, fixed-size reads followed by variable-size reads determined from information in the header, or a specific message-termination indication, then it is very badly flawed.


I need a way to change the VISA TCP/IP termination character to "0x00 0x0D" (NULL and Carriage Return).  How do I do that in VISA?

 

Thanks!

Message 18 of 19

@BigApple0 wrote:

I need a way to change the VISA TCP/IP termination character to "0x00 0x0D" (NULL and Carriage Return).  How do I do that in VISA?

 

Thanks!


VISA only allows a single termination character. Depending on which of the two only occurs as part of the end-of-message indication, I would set it to that character and either (in the case of CR) cut off or ignore those last two bytes when parsing the response, or (when using NULL) always read one additional byte afterwards.

If your device is weird or you feel paranoid, you can also explicitly verify that these characters are actually present and throw an error when they are not.
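If it helps, here is a rough sketch of that approach in VISA C terms, terminating on CR and stripping the NULL that precedes it after the read. The attribute and function names are the standard VISA C API ones; the session setup, timeouts, and full error handling are omitted, and the helper name is my own:

```c
#include <visa.h>

/* Read one reply terminated by 0x00 0x0D, using CR (0x0D) as the VISA
   termination character and cutting the trailing NULL/CR pair off. */
ViStatus read_reply(ViSession instr, char *buf, ViUInt32 size, ViUInt32 *len)
{
    ViStatus st;

    st = viSetAttribute(instr, VI_ATTR_TERMCHAR, 0x0D);       /* terminate on CR */
    if (st < VI_SUCCESS) return st;
    st = viSetAttribute(instr, VI_ATTR_TERMCHAR_EN, VI_TRUE); /* enable termchar */
    if (st < VI_SUCCESS) return st;

    st = viRead(instr, (ViBuf)buf, size - 1, len);
    if (st < VI_SUCCESS) return st;

    /* The reply should end in 0x00 0x0D; verify and strip those two bytes. */
    if (*len >= 2 && buf[*len - 2] == '\0' && buf[*len - 1] == '\r')
        *len -= 2;
    buf[*len] = '\0';
    return VI_SUCCESS;
}
```

In LabVIEW the equivalent is a VISA property node setting the termination character and enabling it before the VISA Read, with the same trimming done on the returned string.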

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 19 of 19