08-11-2011 07:58 AM
LV 2010, Windows, LVRT
I need to communicate with several (5-50, I don't know) separate identical instruments via TCP.
I have to send requests and receive answers, more or less continuously.
I like state machines for doing this, but this particular model doesn't fit my way of doing things.
The driver that comes with it was built with the attitude that this is the only unit tied to the CPU, so you have nothing better to do than wait on this one unit.
The responses are ASCII, and they come back terminated with a LF (not CRLF).
I can't change that in the instrument, so I cannot use TCP READ (CRLF mode).
What the driver does is to issue a command to the box, then a TCP READ (immediate), with a timeout of 100-200 mSec.
Some responses come back in 1-2 mSec, some take 130-150 mSec.
I can't sit there and wait for 150 mSec, I have other stuff to do.
What I have done if it's BINARY or CRLF terminated is to use TCP READ(buffered) with a zero timeout.
If I get a timeout error, the complete message is not there yet, so I go do something else and come back later.
That lets me keep 10 of them busy without waiting on any of them.
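That zero-timeout trick could be sketched roughly like this (a Python approximation of LabVIEW's TCP Read in buffered mode; using `MSG_PEEK` to leave a partial message in the OS buffer is an assumption of this sketch, not how LabVIEW does it internally):

```python
import socket

def read_buffered(sock, nbytes):
    """Rough equivalent of TCP Read (buffered mode) with a zero timeout:
    return exactly nbytes, or None if the full message isn't there yet."""
    sock.setblocking(False)
    try:
        # Peek without consuming, so a partial message stays queued.
        pending = sock.recv(nbytes, socket.MSG_PEEK)
    except BlockingIOError:
        return None           # nothing there yet; go do something else
    if len(pending) < nbytes:
        return None           # partial message; come back later
    return sock.recv(nbytes)  # consume the complete message
```

The caller treats `None` the way the LabVIEW code treats a timeout error: service the next unit and poll this one again on the next pass.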
Since it is ASCII, I don't know how long the messages will be (can't use BUFFERED mode).
Since it's LF terminated, I cannot use CRLF mode.
I could implement my own buffering scheme, where I retrieve whatever is there with zero timeout, append it to a buffer for this device, search for LF characters and report messages that way.
Anybody got a better idea?
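That buffering scheme is simple enough to sketch; in text form (Python here, since a LabVIEW diagram can't be pasted), with a hypothetical `buffers` dictionary standing in for the per-unit STATE cluster:

```python
def extract_messages(buffers, device_id, chunk):
    """Append whatever a zero-timeout read returned to this device's
    buffer, then split out any complete LF-terminated messages."""
    buf = buffers.get(device_id, b"") + chunk
    *messages, remainder = buf.split(b"\n")
    buffers[device_id] = remainder  # keep the incomplete tail for later
    return messages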
Blog for (mostly LabVIEW) programmers: Tips And Tricks
08-11-2011 08:05 AM
CoastalMaineBird wrote:
I could implement my own buffering scheme, where I retrieve whatever is there with zero timeout, append it to a buffer for this device, search for LF characters and report messages that way.
Anybody got a better idea?
That approach sounds reasonable. I used it as well without problems (even though only for a single connection).
I didn't get, though, how you handle the different connections. Do you have an array of 5-50 TCP references?
Another approach I could think of is to spawn a handler for each of the connections. Create an instance of a reentrant VI for each connection; then you don't care if the TCP read in a specific handler VI is waiting for data. The other (independent) handler VIs can continue to run.
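A rough sketch of that spawning idea, with Python threads standing in for reentrant VI instances (the "TCP read" is faked with a placeholder string; the shared `results` dictionary and names are hypothetical):

```python
import threading

def handler(conn_id, results, lock):
    """One instance runs per connection, like a reentrant VI clone.
    A real handler would block on its own TCP read here without
    holding up any of the other handlers."""
    reply = "reply from " + conn_id   # placeholder for the TCP read
    with lock:
        results[conn_id] = reply

results, lock = {}, threading.Lock()
threads = [threading.Thread(target=handler, args=("unit%d" % i, results, lock))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```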
Just my 2c.
08-11-2011 08:39 AM
I didn't get though how you handle the different connections. Do you have an array of 5-50 TCP references?
Yeah, there's a STATE cluster for each unit, with a CONN ID (and other info) in each one.
Another approach I could think of is to spawn a handler for each of the connections. Create an instance of a reentrant VI for each connection; then you don't care if the TCP read in a specific handler VI is waiting for data. The other (independent) handler VIs can continue to run.
I didn't think of spawning one. I have previously used a MASTER, which maintains N instances of a re-entrant CORE, which does the work.
I didn't do that here because I don't know what N will be. I also don't know which one needs service the soonest, and I'm trying to avoid servicing units that don't need it.
But I'll give it another think. Thanks!
08-11-2011 08:58 AM - edited 08-11-2011 08:59 AM
Spawning handlers is a nice approach in exactly those cases where you don't know the N.
It's used e.g. in the DateServerUsingReentrantRun example. For each connection to the server it spawns a process to respond, passing the connection ID.
Of course the best use scenario for spawning reentrant handlers is if they are "passive", i.e. the other end sends requests and the handler responds.
If the handler needs to be active it becomes a bit more complex. If the "intelligence" can be in the handler then I think it's perfectly fine, but if you need a "central" point (master) making decisions, then this master needs to be aware of all connections and the advantage of using handlers might be lost again...
08-11-2011 10:21 AM - edited 08-11-2011 10:21 AM
Spawning handlers will still work even if there is a need for a master controller. You will require additional messaging for the master controller to change state for the handlers but all communications can be handled in the spawned tasks. I have done this in the past without major issues. You will need a broadcast mechanism which can be accomplished using user events or notifiers. Each process can have a message queue for directed messages.
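One possible shape for that messaging, sketched in Python with per-handler queues (the `Handler` class and the message names are hypothetical; in LabVIEW, user events or notifiers would play the broadcast role and a queue per process the directed-message role):

```python
import queue

class Handler:
    """Each spawned task owns a message queue for directed messages."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.state = "idle"

    def poll(self):
        # Drain any pending commands without blocking the TCP work.
        while True:
            try:
                msg = self.inbox.get_nowait()
            except queue.Empty:
                return
            self.state = msg

def broadcast(handlers, msg):
    """Master's broadcast: one copy of the message per handler inbox."""
    for h in handlers:
        h.inbox.put(msg)
```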
08-11-2011 10:25 AM
My problem is that I need to interact with them.
The CPU will ask for data, and then receive it some time later.
If I have 5 or 50 of these handlers running in their own loops (I don't envy the thread scheduler in that case), then unit #4 will still be in the middle of the waiting, when I want data.
When the time comes to collect the latest data from all these instruments, I want it all, and I want it NOW. It doesn't matter if a unit is almost finished reading a new scan, I can't wait.
I suppose I could have them deposit their individual data in some central place and collect it at will from there...
08-11-2011 10:27 AM
Thanks, Mark.
I need to decide if the overhead of all the messaging between all the handlers, and the burden on the thread scheduler is better than the overhead of the buffering/searching for LF myself. On the face of it, it doesn't seem so.
08-11-2011 10:40 AM
@CoastalMaineBird wrote:
If I have 5 or 50 of these handlers running in their own loops (I don't envy the thread scheduler in that case), then unit #4 will still be in the middle of the waiting, when I want data.
When the time comes to collect the latest data from all these instruments, I want it all, and I want it NOW. It doesn't matter if a unit is almost finished reading a new scan, I can't wait.
I suppose I could have them deposit their individual data in some central place and collect it at will from there...
Exactly. You can deposit the data of all targets in the master (updating the data whenever a handler receives new data) or, probably even simpler, ask all handlers for their current data. The handler can respond to that request immediately (using the latest data, probably with a timestamp). All you need for that is a buffer for the latest data. Whenever the handler receives new data from the target it will update the data in the buffer.
I think the messaging between the master and the handlers needs to be independent from the TCP communication.
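The "central place" idea might be sketched like this (hypothetical Python; in LabVIEW the buffer could be a functional global or a single-element queue). The master never waits on any handler at sample time, it just snapshots whatever each handler last deposited:

```python
import threading, time

class LatestData:
    """Shared buffer: each handler deposits its newest reading;
    the master snapshots all of them instantly at sample time."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def update(self, unit, value):
        # Called by a handler whenever its TCP read completes.
        with self._lock:
            self._data[unit] = (time.time(), value)

    def snapshot(self):
        # Called by the master: copy the latest values, no waiting.
        with self._lock:
            return dict(self._data)
```

The timestamp stored alongside each value lets the master tell how stale a reading is if a handler's instrument was mid-scan at sample time.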
08-11-2011 10:49 AM
probably even simpler, ask all handlers for their current data.
I don't think I can do that. I'm sampling at 10 Hz. At SAMPLE time, I need the data and I need it NOW (in addition to these 5-50 instruments, I have 250+ channels of other stuff [SCXI, CAN, MIO, DIO, other TCP instruments] which I need to sample NOW).
I don't want to be asking 50 questions to 50 different instances to get it.
If one of the handlers is in the WAIT state, waiting for data, then it's not able to respond to a request from the host.
08-11-2011 10:49 AM
I am basically on the same page as Dan and Mark, differing only in the details.
I have spawned over 100 background threads to maintain connections with little negative impact on the PC with all items updating at 100 Hz.
The details will depend greatly on your preferences and app requirements.
Have fun,
Ben