01-25-2024 03:12 PM
Good afternoon,
I know there is a term in software systems that describes a connection that is not permanent; I can't think of the term, but I know there is one.
The question I have is this: I have a user interface to a target that will not always be connected. What would be the best way, if any, to use network endpoints on a system that runs on a target and periodically has a computer with a user interface application connect to it?
In my experience with endpoints, you set a timeout, and if it times out, the actor stops. The alternative is to set no timeout, but then you have hung code.
My thought would be to use the Network Endpoint Actor with the TCP interface, then create a listener that listens for a TCP connection. Have a piece of code attempt a connection, get the TCP reference, and then spin up the network endpoint with that TCP reference?
The listener would time out, say, once per second, and the code would ignore the timeout error.
I am doing this with the STM library with good results, but I would like to use the Network Endpoints if possible so I can encapsulate and type-protect the data packages.
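To make the idea concrete, here is roughly what I mean, sketched in Python since I can't paste a block diagram here (the names are made up; in the real code the handler would be the Network Endpoint actor launch):

```python
import socket

# Hypothetical sketch of the loop described above: listen with a short
# timeout, ignore the timeout error, and hand any accepted connection
# off to a separate handler.
def listen_loop(port: int, handle_connection) -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(1)
    listener.settimeout(1.0)  # time out roughly once per second
    while True:
        try:
            conn, _addr = listener.accept()
        except socket.timeout:
            continue  # the "ignore the error" step: just listen again
        # This is where the Network Endpoint actor would be spun up
        # with the accepted TCP reference.
        handle_connection(conn)
```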
01-29-2024 11:05 AM
If memory serves, I've done this two ways:
- ...
- Restart the Network Endpoint actor if it closes.
Would one of those options work?
01-29-2024 11:40 AM
Thanks Casey,
I will give those a try.
01-29-2024 02:57 PM
@CaseyM wrote:
If memory serves, I've done this two ways:
- ...
- Restart the Network Endpoint actor if it closes.
This was my design intent when I wrote the package, and it's what I do in my own code.
I have a connection manager I've been meaning to publish that handles restarts. I can probably throw a package together for you as a way to start that process.
01-30-2024 09:45 AM
Thanks Allen and Casey,
My main concern is what happens when I am starting the user interface up.
The target should run a listener that just listens for a connection, and then a separate actor is spun up to actually handle the connection.
When the UI is stopped, the launched handler stops and then the listener goes back to listening for a connection.
01-30-2024 01:50 PM
@StevenHowell wrote:
Thanks Allen and Casey,
My main concern is what happens when I am starting the user interface up.
The target should run a listener that just listens for a connection, and then a separate actor is spun up to actually handle the connection.
When the UI is stopped, the launched handler stops and then the listener goes back to listening for a connection.
You can do it that way if you want. Just have your connection manager maintain the listener and serve up Network Endpoints as needed. I've considered building that for scenarios where I had multiple remote clients connecting to the same resource. It's why the endpoint "connected" message contains the endpoint's enqueuer.
I'd probably create a modified version of the TCP Listener with a separate Listen method. I would call this method in the Nested Endpoint's caller, before launching the endpoint. Then, when that VI returned with a valid connection, I'd pass the TCP Listener to an instance of Network Endpoint Actor, and then launch that actor. (The modified version's Connect.vi would do nothing, since there would already be a connection.) The caller could then, in parallel, create a new Listener class to wait for the next connection.
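In rough text form (Python standing in for G here, and all names invented), the handoff I'm describing looks something like this:

```python
import socket
import threading

# Sketch of the listener/endpoint handoff: the caller blocks in a
# separate "Listen" step, and only a live connection is handed to the
# endpoint, whose own connect step can then be a no-op.
def serve_forever(port: int, run_endpoint) -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(5)
    while True:
        conn, _addr = listener.accept()  # the separate Listen step
        # Hand the live connection to an endpoint running in parallel,
        # then immediately go back to waiting for the next client --
        # the equivalent of creating a new Listener class.
        threading.Thread(target=run_endpoint, args=(conn,), daemon=True).start()
```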
But honestly, if you aren't maintaining multiple connections, you can have the endpoint do that job. Have the server launch a nested endpoint with a TCP Listener with a fixed timeout, and when it fails, catch it in the caller's Handle Last Ack, and just relaunch it. It is both simple and effective.
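And the simpler single-connection version, again as a rough Python sketch with invented names: the endpoint accepts with a fixed timeout, and the caller just relaunches it whenever it stops (the moral equivalent of catching it in Handle Last Ack):

```python
import socket

# One "launch" of the endpoint: accept with a fixed timeout, then
# service the connection until the client goes away.
def endpoint_once(listener: socket.socket, timeout_s: float) -> None:
    listener.settimeout(timeout_s)
    conn, _addr = listener.accept()  # raises socket.timeout if no client
    try:
        while True:
            data = conn.recv(4096)
            if not data:
                break  # client disconnected; this endpoint shuts down
            conn.sendall(data)  # placeholder for real message handling
    finally:
        conn.close()

# The caller: catch the timeout and relaunch, over and over.
def supervisor(port: int) -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(1)
    while True:
        try:
            endpoint_once(listener, timeout_s=30.0)  # 15-30 s works well
        except socket.timeout:
            pass  # no client yet; "relaunch" and keep waiting
```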
01-31-2024 02:52 PM
Allen,
Thanks, I had thought about that, but my concern was how much overhead and processing power it would use to stop and relaunch that actor over and over again on, say, a 500 ms or faster timeframe. This would be on a cRIO target. I could see once per second, but I wasn't clear on the overhead or resource usage.
01-31-2024 02:58 PM
Overhead for launching an actor isn't bad; I want to say less than a second, though I haven't benchmarked it lately. To be fair, I've generally kept my timeouts in the 15 - 30 second range. The only issue I've found is that it can take a while for the application to fully shut down, but that's not terribly relevant on an RT application (which tends to run until the unit is rebooted).
Why do you need to time out after 500 ms?
02-01-2024 08:04 AM
@justACS wrote:
Overhead for launching an actor isn't bad; I want to say less than a second, though I haven't benchmarked it lately. To be fair, I've generally kept my timeouts in the 15 - 30 second range. The only issue I've found is that it can take a while for the application to fully shut down, but that's not terribly relevant on an RT application (which tends to run until the unit is rebooted).
Why do you need to time out after 500 ms?
The user interface will not always be connected. I would like to have the target listen for a user interface connection and then spin up the endpoints when needed.
What would be a reasonable rate at which to have the listener time out?
What would happen if the user interface were attempting a connection at precisely the same time that the listener timed out, and the code took several ms to recycle the listener? Would the user interface throw an error, or would you set a timeout on the user interface as well?
02-01-2024 11:09 AM
@StevenHowell wrote:
The user interface will not always be connected. I would like to have the target listen for a user interface connection and then spin up the endpoints when needed.
I've implemented this exact scenario with an endpoint on the server running, waiting for a client connection, and intermittently timing out to reset the connection, with no issues.
What would be a reasonable rate at which to have the listener time out?
I have typically set the listener timeout at 15 - 30 seconds.
What would happen if the user interface were attempting a connection at precisely the same time that the listener timed out, and the code took several ms to recycle the listener? Would the user interface throw an error, or would you set a timeout on the user interface as well?
Network Endpoints just wrap TCP/IP (or Network Streams), so they should respond like TCP/IP. I just ran a test with the Simple TCP example that ships with LabVIEW. Although the example says to run the server VI first, and then run the client VI, I find that I can start the client and then start the server, and everything works as expected. The client timeout just needs to be longer than the server cycle time. Launching an actor takes less than a second (I haven't benchmarked it lately, but I was surprised by how fast it actually is), so a timeout of a few seconds on the client side should be more than adequate.
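As a toy illustration of that timing (plain Python sockets, not Network Endpoints, and the host/port are made up): the client just needs a retry window longer than the server's listen/recycle cycle.

```python
import socket
import time

# Keep retrying the connection until an overall deadline; a momentary
# gap while the server recycles its listener just means the next
# attempt succeeds instead.
def connect_with_retry(host: str, port: int, timeout_s: float = 5.0) -> socket.socket:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            return socket.create_connection((host, port), timeout=1.0)
        except OSError:
            time.sleep(0.1)  # brief pause before the next attempt
    raise TimeoutError("server did not accept a connection in time")
```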