10-21-2015 05:41 AM
Thanks for the tip - works very well 🙂
The example is now working (code is attached)
Do you have any comments / recommendations? What should my delay times, time-out times, etc. be? Any rules of thumb?
10-21-2015 09:11 AM
10-22-2015 04:32 AM
Thanks.
With for-loop:
Without for-loop:
Huge improvement
(and YES, the code is running)
Regarding the timing:
Should the server and client loops run at the same rate? Or should the client run a little faster, since (in my case) it is the one reading the data and needs to keep up to stay on the safe side?
10-22-2015 11:00 AM
On the client side I'd remove the Wait entirely, and let the rate of incoming data determine how fast the loop runs. That way you're not at risk of falling behind. As you're aware, any loop that runs continuously in LabVIEW should contain a timing mechanism, but it doesn't have to be a wait function - a timeout on a TCP Read, Dequeue, Wait on Notifier, etc. are all fine too. It's usually a good idea to have only one function in a loop dictate the loop rate to avoid confusion about what actually determines the loop rate.
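Since LabVIEW code is graphical and can't be pasted here, a rough textual analogue may help. This is a hedged Python sketch (the host, port, and print stand-in are made up) of a client loop paced only by the read timeout:

```python
import socket

HOST, PORT = "127.0.0.1", 6340      # assumed address and port for the example

def client_loop() -> None:
    conn = socket.create_connection((HOST, PORT))
    conn.settimeout(1.0)            # plays the role of the TCP Read timeout
    try:
        while True:
            try:
                # Blocks for up to 1 s; the rate of incoming data sets the pace.
                data = conn.recv(4096)
            except socket.timeout:
                continue            # no data this cycle -- the timeout IS the wait
            if not data:            # empty read: the peer closed the connection
                break
            print(f"received {len(data)} bytes")   # stand-in for real processing
    finally:
        conn.close()
```

Note there is exactly one timing source in the loop: the read timeout.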
10-22-2015 11:04 AM
@nathand wrote:
On the client side I'd remove the Wait entirely, and let the rate of incoming data determine how fast the loop runs. That way you're not at risk of falling behind. As you're aware, any loop that runs continuously in LabVIEW should contain a timing mechanism, but it doesn't have to be a wait function - a timeout on a TCP Read, Dequeue, Wait on Notifier, etc. are all fine too. It's usually a good idea to have only one function in a loop dictate the loop rate to avoid confusion about what actually determines the loop rate.
One caveat with this is that if there is an error out of the TCP Read other than the timeout error, you should add a wait in a case structure or handle the errors correctly so your loop doesn't spin as fast as your CPU allows. If you just have a loop that does a read and a non-timeout error occurs, it won't wait for the timeout.
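In the same hypothetical Python terms, the fix is an explicit back-off on the non-timeout error path:

```python
import socket
import time

def read_loop(conn: socket.socket) -> None:
    conn.settimeout(1.0)
    while True:
        try:
            data = conn.recv(4096)
            if not data:             # orderly close by the peer
                break
            print(f"received {len(data)} bytes")
        except socket.timeout:
            continue                 # expected path: the timeout paces the loop
        except OSError as err:
            # Any other error returns immediately, so without this wait a
            # persistent fault would spin the loop as fast as the CPU allows.
            print(f"read error: {err}")
            time.sleep(0.1)
```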
11-04-2015 04:26 AM
Hi again.
I have a follow-up question for you guys 🙂
A normal data flow is as follows (like the built-in LabVIEW TCP example):
* Server: Send header with data length, then send data.
* Client: Read header with data length, then read data based on that length.
I understand this flow. However, say the server is sending a lot of data, but the client is not reading. The connection IS established, but the client simply doesn't read the data.
Then, after many hours, the client decides to start reading data. How do I know that the first data the client reads is the header, and not some random point in the data stream? If the first read isn't the header, then things will go wrong 🙂
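For concreteness, here is a minimal sketch of the length-prefixed flow described above (Python, assuming a 4-byte big-endian length header, which is the common convention):

```python
import socket
import struct

def send_message(conn: socket.socket, payload: bytes) -> None:
    conn.sendall(struct.pack(">I", len(payload)))   # header: data length
    conn.sendall(payload)                           # data

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes; a single recv may return fewer than requested."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(conn: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(conn, 4))  # read the header
    return recv_exact(conn, length)                       # read that many bytes
```

As long as both sides only ever read whole messages this way, the stream can't get out of step: TCP delivers bytes in order, so unread data just queues up, and the first thing the client reads is still the first header the server sent.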
11-04-2015 07:00 AM
11-04-2015 07:33 AM
If the client is connected but not reading, you will eventually get a full send buffer on the server. If the client isn't connected, then you will get a not-connected error on the server. With TCP, the server always knows if the client didn't get the data. You can use this knowledge to do some sort of handshaking when you re-establish the connection.
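As a hedged Python sketch of that behavior (the timeout value is arbitrary):

```python
import socket

def send_with_feedback(conn: socket.socket, payload: bytes) -> bool:
    conn.settimeout(5.0)          # assumed value; tune for your link
    try:
        conn.sendall(payload)
        return True               # TCP accepted every byte for delivery
    except socket.timeout:
        # Client is connected but not reading: its receive buffer and our
        # send buffer have filled up. The stream may now hold a partial
        # message, so re-handshake before sending more.
        return False
    except OSError:
        return False              # e.g. connection reset: the client is gone
```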
11-04-2015 08:47 AM
Thanks for your quick replies 🙂