Read and write simultaneously to same file from two VIs, or 'open when available?'

@Mark_Yedinak, I haven't been able to do much due to the holidays and the government shutdown, but I did figure out a couple of things:

 

I don't appear to have a firewall issue; if I just create a TCP connection, I can connect to it just fine.

 

The error is introduced in GetRawSocketFromConnectionID.vi, which is called inside TCP_NoDelay.vi. The TCP connection itself looks active; only that VI fails. The VI is password protected, so I can't see what it's actually doing. Is it possible there is a problem between the 2015 version of the package and Windows 10? What exactly does disabling the Nagle algorithm in TCP_NoDelay do?

 

Thanks for your continued help on this!


Alex

Message 21 of 24

I found this other topic, which seems to imply that this might be a Windows 10 (or even Windows 7) issue:

https://forums.ni.com/t5/LabVIEW/Disabling-Nagle-s-algorithm/td-p/862733/page/2

 

 

Message 22 of 24

The Nagle algorithm is a mechanism in the TCP/IP protocol to reduce network bandwidth usage. When you write a small number of bytes to a connection and they are sent to the wire immediately, the TCP/IP socket driver needs to wrap them in a TCP segment with its own header and checksum, and then in an IP packet around that with similar information again. This makes the few bytes you want to send suddenly use at least 40 extra bytes for these headers (TCP can also carry option negotiation, which increases the header overhead even more, and the overhead is larger still if you happen to use IPv6).

Nagle keeps these bytes in the send buffer for a while (typically around 100 ms, or until a minimum amount has accumulated) so that more small writes to the same connection can be added to that buffer and then sent in one go, incurring the 40-byte header overhead only once for the whole data frame rather than for each individual little packet the application originally wrote. This is a good thing to do, and for many protocols it is not a problem at all. It is also no violation of the TCP contract: TCP guarantees that data arrives in the same order it was sent, but it does not provide a means to inherently preserve message (data packet) boundaries from the sender. TCP is a stream-based protocol that simply guarantees a continuous, ordered stream of bytes in the same logical order as it was sent. UDP, by contrast, is a datagram protocol that does not combine message datagrams, so the message boundaries are maintained (aside from fragmentation caused by the network infrastructure, which can and will fragment datagrams that do not fit into the natural frame size of the involved links), but the data can arrive out of order.

Nagle turns bad, however, if you have a command-response protocol that is supposed to do many transactions per second. With Nagle enabled you can only get around 5 command/response transactions per second over the line. Now, for some zealots that is enough reason to disable Nagle for all connections, and with modern multi-gigabit network infrastructure that may seem like an easy fix to all problems, as the extra bandwidth overhead is likely to be unnoticeable anyway. But IMHO it is a pretty short-sighted approach. Disabling Nagle has its uses when you have this fast command/response requirement, but for many protocols it is simply unnecessary and just causes more data transfer on the network and, with that, more power consumption.
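
For what it's worth, disabling Nagle comes down to setting a single socket option. Since TCP_NoDelay.vi is password protected, the following is only a minimal Winsock sketch of what it presumably does internally; the function name disable_nagle is mine:

/* Minimal sketch, assuming TCP_NoDelay.vi simply sets the standard
   TCP_NODELAY socket option on the raw Winsock socket. */
#include <winsock2.h>
#include <stdio.h>

#pragma comment(lib, "ws2_32.lib")

/* Disable the Nagle algorithm on an already-connected socket. */
int disable_nagle(SOCKET s)
{
    BOOL on = TRUE;  /* TRUE: send small segments immediately */
    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                   (const char *)&on, sizeof on) == SOCKET_ERROR) {
        fprintf(stderr, "setsockopt(TCP_NODELAY) failed: %d\n",
                WSAGetLastError());
        return -1;
    }
    return 0;
}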

 

I would also guess that in at least half of the cases where a LabVIEW programmer uses this TCP_NoDelay.vi, he has no idea what it does, and it is in fact totally useless for the problem at hand. But since nobody understands what it does, and everything seemingly works, it will never be removed again, as nobody is going to fix a working program.

Rolf Kalbermatter
My Blog
Message 23 of 24

@AlexGSFC wrote:

I found this other topic, which seems to imply that this might be a Windows 10 (or even Windows 7) issue:

https://forums.ni.com/t5/LabVIEW/Disabling-Nagle-s-algorithm/td-p/862733/page/2


Not really! At least as far as the linked thread is concerned, it is a 64-bit issue instead. On Windows, the socket used by the Winsock library is a Windows handle, which is a pointer-sized value. If you run 64-bit LabVIEW, this means the socket handle can be a number that does not fit in the 32-bit integer the old LabVIEW VI used. NI did upgrade TCP Get Raw Net Socket.vi (and UDP Get Raw Net Socket.vi) in vi.lib to support 64-bit socket handles, but quite a few of the libraries floating around were developed before LabVIEW 2009, which was the first version available in 64-bit and also the first to support pointer-sized integers in the Call Library Node. And those libraries often come with their own version of a Get Raw Socket.vi instead of using the one in vi.lib.
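
To illustrate why that truncation breaks things, here is a tiny stand-alone sketch; the handle value is made up, since real Winsock handle values are opaque:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Hypothetical 64-bit Winsock handle value, invented for
       illustration only. */
    uint64_t handle64 = 0x0000000100000204ULL;

    /* What a pre-LabVIEW-2009 Get Raw Socket VI effectively did:
       squeeze the pointer-sized handle into a 32-bit integer. */
    uint32_t handle32 = (uint32_t)handle64;

    printf("64-bit handle: 0x%016llx\n", (unsigned long long)handle64);
    printf("truncated:     0x%08lx\n", (unsigned long)handle32);

    /* Any later setsockopt() call on the truncated value refers to a
       different (usually invalid) socket and fails. */
    return 0;
}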

 

There is a chance that earlier versions of Windows did not trigger this problem, as they may have arbitrarily limited the range of a Winsock socket handle to always be small enough to fit in 32 bits, as some sort of compatibility measure. But that was at best a workaround, and nothing in the data type guarantees it.

 

Replacing the Get Raw Socket function is only half of the fix. You also need to edit all Call Library Nodes that call into Winsock to change this parameter to a pointer-sized integer. And no, that is not the right setting when calling the setsockopt() function on non-Windows platforms: there a socket is an int file descriptor, so it stays 32-bit, independent of the bitness of LabVIEW.
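
Here is a sketch of that platform difference; the typedef and wrapper names are purely illustrative, not part of any library:

/* The native socket type is pointer sized on Windows but a plain
   int on POSIX systems. */
#ifdef _WIN32
  #include <winsock2.h>
  typedef SOCKET raw_socket_t;   /* UINT_PTR: pointer sized, so 64-bit
                                    in 64-bit LabVIEW */
#else
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  typedef int raw_socket_t;      /* file descriptor: stays 32-bit,
                                    independent of LabVIEW's bitness */
#endif

/* A Call Library Node calling this must declare the first parameter
   as a pointer-sized integer on Windows and a signed 32-bit integer
   on other platforms. */
int set_no_delay(raw_socket_t s, int enable)
{
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                      (const char *)&enable, (int)sizeof enable);
}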

Rolf Kalbermatter
My Blog
Message 24 of 24