LabVIEW

Problems with breaking a TCP/IP connection

I have a LV application which connects to a proprietary controller running embedded UNIX. The connection uses the LV TCP/IP VIs. The problem occurs when I try to disconnect: using the LV Close Connection VI leaves the thread active on the remote device, while closing my application does terminate the thread successfully. Any suggestions on how to inspect the packets being sent and get LV to send the correct close sequence? Thanks.
Message 1 of 4
Are you using LabVIEW on both sides of the TCP communication or just on the client side? Also, when you say that the thread is left "active" on the server side, how are you verifying this and what do you mean by "active"?

The reason I ask is that I am curious whether you are actually seeing a problem or whether you are observing one of the unfortunate (albeit normal) symptoms of TCP. TCP termination works by both sides agreeing that a connection is done: each tells the other that it is finished sending. Suppose we have a connection with sides A and B. When side A closes, it sends a "FIN" to B. That FIN means that A is done sending; it says nothing about receiving. At this point, on side B, once all pending data has been read, further reads will start failing. Side B may, however, continue to write, and the write will appear to succeed locally. But the other side will usually respond with a "RST" (reset), telling side B that side A isn't accepting any more data. Once B receives the RST, it knows that the other side is truly gone.
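In plain socket-API terms, the FIN half of that sequence can be sketched like this. This is a minimal Python illustration, not LabVIEW; the variable names and messages are arbitrary, and it uses `shutdown(SHUT_WR)` to show the half-close (a full `close` additionally gives up the receive direction, which is what provokes the RST described above):

```python
import socket

# Build a connected pair of TCP sockets over localhost.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

side_a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
side_a.connect(listener.getsockname())
side_b, _ = listener.accept()

# Side A sends its last data, then closes its sending direction.
# The shutdown causes a FIN to be sent: "A is done sending."
side_a.sendall(b"last message from A")
side_a.shutdown(socket.SHUT_WR)

# Side B first drains the pending data...
data = side_b.recv(1024)

# ...after which a read on B sees end-of-stream (empty bytes),
# the socket-API signal that the peer's FIN has arrived.
eof = side_b.recv(1024)

# B may still write: the FIN said nothing about A's receive path,
# which in this half-close sketch is still open.
side_b.sendall(b"reply from B")
reply = side_a.recv(1024)

side_a.close()
side_b.close()
listener.close()
```

After the shutdown, `data` holds the last message, `eof` is the empty byte string marking end-of-stream, and `reply` still gets through because side A only closed its write direction.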

Hope this gives you some more insight into things, and if you still think you have a problem, please respond with further detail!
E. Sulzer
Applications Engineer
National Instruments
Message 2 of 4
The server side is not a LV application. We know the connection is still active from our debug tools. I expected that invoking the LV Close TCP/IP VI would use the normal termination sequence (i.e., the FIN and ACK flags). This is NOT the case: in fact, no packets are sent by LV when executing Close TCP/IP. This VI seems to be simply a housekeeping routine which closes the connection on the LV side ONLY. So my problem is how do I initiate a real TCP/IP close sequence to signal the server to close the connection. The only ways I see of doing this are either to use a real TCP/IP stack with DLLs or to use timeouts on both sides (which is not possible for this application). NI really needs to expose more parameters of the TCP datagram to make this communication scheme work correctly. Any other suggestions? Thanks.
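As a point of comparison, at the OS socket layer a plain close of a connected socket does transmit a FIN, which the peer observes as end-of-stream. A quick plain-Python check (not LabVIEW, purely illustrative; names are arbitrary):

```python
import socket

# Connected localhost pair standing in for the LV client and the controller.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
server, _ = listener.accept()

# Closing the client entirely; the OS sends a FIN for its send direction.
client.close()

# The server's next read returns b"", the EOF marker meaning the peer's
# FIN has arrived and there is no more data to consume.
got = server.recv(1024)

server.close()
listener.close()
```

Here `got` comes back as the empty byte string, i.e. the server-side stack did see the close as a proper FIN rather than the connection simply vanishing.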
Message 3 of 4
Sorry...after further analysis I have found that the Close TCP/IP VI executes exactly as it should. A TCP datagram with the FIN bit set is issued to properly request closing the connection. The controller is responding as it should. Thanks for taking the time to respond.
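For anyone retracing this thread, one way to confirm the FIN on the wire is a packet capture. A filter along these lines (the interface and port number are placeholders, not values from this thread) shows only the segments with FIN or RST set:

```shell
# Illustrative capture: display only FIN/RST segments on the
# controller's TCP port (5000 here is a placeholder).
tcpdump -i any -n 'tcp port 5000 and (tcp[tcpflags] & (tcp-fin|tcp-rst) != 0)'
```

Running this while executing the Close TCP/IP VI should show the outgoing FIN and the controller's reply.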
Message 4 of 4