LabWindows/CVI

Client Connecting to Multiple Servers

Hello all!

 

I have a TCP client that runs on CVI and communicates with three different servers to get data from them and send them commands.

 

I can successfully connect to each one of them, but I see an odd behavior where I can only communicate with one of them at a time. Even though I send commands to all three servers, the program receives data from only one server for a few seconds, then receives data from another server for a few more seconds, and so on. While one of them is "focused", the program cannot run the other TCP callbacks.

 

Each server connection runs on its own thread, and I call ProcessTCPEvents() in each of those threads. The main UI thread uses RunUserInterface().
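For context, each worker thread looks roughly like the sketch below. The hostname, port, buffer size, and timeout values are placeholders rather than my exact code:

#include <tcpsupp.h>
#include <utility.h>

static volatile int gQuit = 0;

/* Callback registered for one server connection; it runs in the
   thread that called ConnectToTCPServer. */
int CVICALLBACK ServerCallback (unsigned int handle, int event,
                                int error, void *callbackData)
{
    char buf[512];

    if (event == TCP_DATAREADY)
        ClientTCPRead (handle, buf, sizeof buf, 2000);  /* read timeout in ms */
    return 0;
}

/* One of these threads per server, e.g. scheduled with
   CmtScheduleThreadPoolFunction. */
int CVICALLBACK ServerThread (void *functionData)
{
    unsigned int handle;

    ConnectToTCPServer (&handle, 5000, "server1.local",
                        ServerCallback, NULL, 5000);

    while (!gQuit)
    {
        ProcessTCPEvents ();   /* dispatch TCP callbacks for this thread */
        Delay (0.001);         /* don't spin the CPU */
    }

    DisconnectFromTCPServer (handle);
    return 0;
}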

 

Any ideas why this happens?

 

Thanks!

Message 1 of 6

Hi hashimoto,

 

Are you reading off one socket? You won't be able to receive data from all three servers simultaneously on one socket.

Humphrey H.
Applications Engineer
National Instruments
Message 2 of 6

I am not sure actually. This may be the case.

 

How would I read from multiple sockets and have each server connect to a specific socket within LabWindows? From what I can see, each server connects to a different port on my computer.

Message 3 of 6

Did you explicitly assign a port for the client? If not, one was probably assigned automatically, so you likely have three local ports, each connected to one server. Since you connect to each server on a separate thread, you'll want to receive the data from each server separately. How did you verify that data was received from one server for a few seconds before switching to the next?
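For example (the hostnames and ports below are placeholders), each ConnectToTCPServer call returns its own conversation handle, and the handle passed into the callback tells you which server fired the event. Logging it is one simple way to verify the alternation you described:

#include <ansi_c.h>
#include <tcpsupp.h>

static unsigned int gHandleA, gHandleB, gHandleC;

int CVICALLBACK ClientCB (unsigned int handle, int event, int error,
                          void *callbackData);

static void ConnectAll (void)
{
    /* Only the server port is specified; the local client port for
       each connection is assigned automatically. */
    ConnectToTCPServer (&gHandleA, 5000, "serverA", ClientCB, NULL, 5000);
    ConnectToTCPServer (&gHandleB, 5001, "serverB", ClientCB, NULL, 5000);
    ConnectToTCPServer (&gHandleC, 5002, "serverC", ClientCB, NULL, 5000);
}

int CVICALLBACK ClientCB (unsigned int handle, int event, int error,
                          void *callbackData)
{
    char buf[512];

    if (event == TCP_DATAREADY)
    {
        int n = ClientTCPRead (handle, buf, sizeof buf, 1000);

        /* The handle identifies which connection the data came from. */
        if (n > 0)
            printf ("%d bytes from server %c\n", n,
                    handle == gHandleA ? 'A' :
                    handle == gHandleB ? 'B' : 'C');
    }
    return 0;
}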

Humphrey H.
Applications Engineer
National Instruments
Message 4 of 6

I found the solution! Changing the TCP read function's timeout value apparently solved the problem.

 

At first it was set to 2000 ms, which I think was too high; then I set it to 5 ms, which was too low. Now at 1000 ms it seems to behave properly!
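Concretely, the only change was the timeout argument of ClientTCPRead inside my data-ready callback (the buffer size here is arbitrary):

#include <tcpsupp.h>

int CVICALLBACK ServerCallback (unsigned int handle, int event,
                                int error, void *callbackData)
{
    char buf[512];

    if (event == TCP_DATAREADY)
    {
        /* Timeout in ms: 2000 was too high and 5 too low for my
           setup; 1000 behaves properly. */
        ClientTCPRead (handle, buf, sizeof buf, 1000);
    }
    return 0;
}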

 

But I still have one doubt: is there an optimal way to determine how long the timeouts for all the TCP-related functions should be?

Message 5 of 6

It depends on how long it takes for messages to reach the client. My guess is that 1 second (your current setting) is more than enough. You could time how long it takes to receive all the data and set the timeout slightly higher.
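A rough sketch of that measurement, using Timer() from the Utility Library; the buffer size and the 1.5x safety margin are arbitrary choices:

#include <utility.h>
#include <tcpsupp.h>

/* Time one representative read, then size the timeout with headroom. */
static unsigned int SuggestTimeoutMs (unsigned int handle)
{
    char   buf[512];
    double start = Timer ();                          /* seconds */

    ClientTCPRead (handle, buf, sizeof buf, 10000);   /* generous cap */

    /* Convert to ms and add ~50% margin. */
    return (unsigned int)((Timer () - start) * 1000.0 * 1.5);
}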

Humphrey H.
Applications Engineer
National Instruments
Message 6 of 6