11-02-2012 05:00 PM
to niACS: I agree with your thoughts except one. The AF messaging really is based on queues, but there is a performance drawback because it uses priority queues. Just look at my document, "AF with pure queue": https://decibel.ni.com/content/docs/DOC-24589
I ran an experiment substituting the priority queue with a simple queue and saw quite good improvements. I don't want to quote numbers here, but you can test it out yourself (a conceptual sketch of such a comparison follows below). I actually believe that using a simple queue should be an option in the AF.
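Since the AF's priority queue is written in LabVIEW G, here is only a rough, hypothetical Python analogue of the kind of comparison described above. ToyPriorityQueue, its four-bucket structure, and any numbers you would get from it are assumptions for illustration, not measurements of the AF itself.

```python
# Conceptual sketch: time a plain FIFO against a toy multi-bucket "priority"
# wrapper. This is NOT the AF implementation, only an illustration of the
# kind of experiment you could run.
import queue
import time

class ToyPriorityQueue:
    """Hypothetical: four internal FIFOs, dequeued highest-priority-first."""
    def __init__(self, levels=4):
        self.buckets = [queue.SimpleQueue() for _ in range(levels)]

    def enqueue(self, priority, item):
        self.buckets[priority].put(item)

    def dequeue(self):
        # Scan buckets from highest priority (index 0) to lowest.
        while True:
            for bucket in self.buckets:
                try:
                    return bucket.get_nowait()
                except queue.Empty:
                    continue
            time.sleep(0)  # yield and retry; a real queue would block properly

def time_round_trips(n=100_000):
    plain = queue.SimpleQueue()
    prio = ToyPriorityQueue()

    t0 = time.perf_counter()
    for i in range(n):
        plain.put(i)
        plain.get()
    t_plain = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(n):
        prio.enqueue(2, i)   # "normal" priority
        prio.dequeue()
    t_prio = time.perf_counter() - t0

    print(f"plain FIFO:    {1e6 * t_plain / n:.2f} us per round trip")
    print(f"priority FIFO: {1e6 * t_prio / n:.2f} us per round trip")

if __name__ == "__main__":
    time_round_trips()
```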
11-03-2012 03:08 AM
If you're tempted to make more than one or two while loops in actor core, in addition to calling the parent actor core.vi, is that a sign you should consider making another actor one rung lower? It sounds too neat as an aphorism to be true in all cases, so I'm just wondering what example would disprove it.
(and apologies, really, for putting words in AQ's mouth. Didn't realize I did when I wrote that, only realized after the replies. Normally I try rather hard not to do that, even when I namedrop.)
11-03-2012 07:51 AM
ChuckDiesel wrote:
...
Should I be seeing an issue using the Debug AF this way or am I doing something wrong? See some code snippets below. BTW, this problem does not seem to happen with the shipping version of AF in LV 2012, just the debug fork.
Thanks,
Chuck, using AF 4.1.1.34 DEBUG FORK on LV 2012, Win 7 x64
A few questions on your requirements:
Top Level Actor -> Motion Control Actor/Control Manager -> DAQ
What are the requirements for passing data up the chain? Can any of the actors receive the data as "last value"/lossy?
The debug fork does run slower than the shipping version of the AF due to the debug code...
I would suggest running the AQ AF Debug Logger, which will allow you to see which messages are being processed by which actor. This should let you know where the bottleneck is occurring. To get names rather than numbers for the actors in the debug version of the AF, you can wire a name into each "Launch Actor.vi".
See the instructions at https://decibel.ni.com/content/docs/DOC-23398
There are a few optimizations I could suggest:
1. Lossy transfer of data
Fire a notifier in the Do override (unbundled from the Actor's private data) to a parallel loop in your Actor Core (see the sketch below).
2. "Registered Listener"/Short-Circuiting the Actor Tree as named by Daklu and suggested by niACS
3. Creating a DVR in the Send Msg Method and destroying in the Do override. Note: You will need a "Drop Msg" override to destroy the DVR so you don't leak when the actor shuts down.
As far as the proper way to "optimize" your code, we are still discussing the "best" AF-approved approach.
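To make option 1 concrete in text form: a LabVIEW notifier keeps only the most recent value, which the hypothetical LatestValue class in this rough Python analogue imitates. None of this is AF or LabVIEW API; it is only a sketch of the lossy, latest-value pattern.

```python
# Sketch of a lossy "latest value" channel, analogous to a LabVIEW notifier.
import threading

class LatestValue:
    """Holds only the most recent value; readers wait for a newer one (lossy)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._version = 0

    def send(self, value):
        # Overwrites any unread value -- older samples are intentionally lost.
        with self._cond:
            self._value = value
            self._version += 1
            self._cond.notify_all()

    def wait(self, last_seen, timeout=None):
        # Block until a value newer than `last_seen` arrives, or time out.
        with self._cond:
            if self._cond.wait_for(lambda: self._version > last_seen, timeout):
                return self._value, self._version
            return None, last_seen

# The message handler (the "Do override") would call send() with each new
# DAQ block; a parallel processing loop in Actor Core waits like this:
def processing_loop(latest, stop_event):
    seen = 0
    while not stop_event.is_set():
        data, seen = latest.wait(seen, timeout=0.1)
        if data is None:
            continue  # timed out; re-check the stop flag
        # ... process only the freshest block; stale blocks were dropped ...
```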
11-03-2012 01:06 PM
Ben_Phillips wrote:
If you're tempted to make more than one or two while loops in actor core, in addition to calling the parent actor core.vi, is that a sign you should consider making another actor one rung lower?
I wouldn't say it is an indicator that it is time to convert a loop to an actor as much as it is time to review the design/code to see if any of them are already actors in their behavior.
The decision to formally convert a loop to an actor should be based on what the loop is doing or needs to do, not on the number of loops in Actor Core. For me, a loop becomes a separate actor when it gains the ability to receive and process messages (via queue, notifier, etc.). My implementations of Continuous Process loops and Metronome loops aren't really suitable as standalone actors by themselves; they need a separate message-handling loop to service the receive queue. (A rough sketch of that distinction is below.)
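Since LabVIEW loops can't be quoted in text, here is a loose, hypothetical Python sketch of that distinction: the first loop is actor-like because it sleeps on an inbox and dispatches messages, while the metronome loop only ticks and has no way to receive messages on its own. The handle and tick functions are placeholders, not anything from the AF.

```python
# Illustration only: an actor-like message-handling loop vs. a metronome loop.
import queue
import threading

def handle(msg):
    print("handling", msg)      # placeholder dispatch

def tick():
    print("tick")               # placeholder periodic work

def message_handling_loop(inbox: queue.Queue):
    # Actor-like: sleeps until a message arrives, then dispatches on it.
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        handle(msg)

def metronome_loop(period_s: float, stop_event: threading.Event):
    # Not actor-like on its own: no inbox, only a stop flag and a tick.
    # It relies on a separate message-handling loop to service messages.
    while not stop_event.wait(period_s):
        tick()
```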
11-05-2012 11:32 AM
komorbela wrote:
to niACS: I agree with your thoughts except one. The AF messaging really is based on queues, but there is a performance drawback because it uses priority queues. Just look at my document, "AF with pure queue": https://decibel.ni.com/content/docs/DOC-24589
OK, that's fair. There is some additional overhead because the priority queue manages four separate queues. But I don't think that is contributing to the OP's problem. A performance hit transmitting between actors should add to latency, but not affect throughput.
11-05-2012 01:38 PM
I guess the major question here was, "Why am I breaking the debug fork but not the main fork?" and the follow-up, "Is my programming practice at fault, or is the debug fork just not able to handle this in general?" In this simple test case the consumer loop was VERY stripped down; the motion controller was basically just receiving the 2D array and sending one column to the front panel. It is possible that my computer had too much running (virus scanner, firewall, possibly a web browser), which seems to affect these issues somewhat.
I have since switched to using a separate queue outside the AF strictly for sending the DAQ data array to the motion controller, and another queue strictly for sending the processed and PID'd DAQ data to the analog output actor. I haven't had any issues with stop messages not getting received, and as long as I am careful about using the non-AF queues correctly it seems to be a safe method.
Optimizing the consumer loop and making sure my computer isn't bogged down by other processes are another story that I definitely need to look into, but for this particular example they did not seem to be the limiting factors.
to LVB: The top-level actor in this case was basically just the UI, so updates at 10 Hz are enough for the user to see and mentally process an indicator for each sensor. The motion controller, however, needs the data to arrive and be processed as close to the specified control rate (500-5000 Hz) as possible (the PID controller needs to know the dT of the inputs).
to niACS: You are right that these low levels of latency would not have caused my problem, but latency is still an issue in closed-loop motion control. Would the latencies we are talking about (regular queue vs. priority queue) become an issue at around 5000 Hz (a 200 µs period)? For this reason alone it may be better to just use the non-AF queues for the high-frequency transfers. I read the post that komorbela linked above, and someone reports around 40 µs per message for the AF (although the sample size was questionable); that would already be a fifth of a 200 µs loop budget. Maybe the debug fork adds to this significantly?
11-06-2012 04:55 PM
LVB wrote:
If this is the case, I would like to hear this from AQ directly. I am pretty sure that sharing references between actors is an anti-pattern.
It is *nearly* an anti-pattern. It might also be necessary if we really are fighting against some serious upper performance bound.
The problem with sharing any type of reference or shared variable between Actors is that it reintroduces one of the prime problems that Actors were created to solve -- establishing a coherent communication model among modules. If you have side channels, then you have the problem of one channel saying "stop" while the other keeps going, of failures in one channel not being reported to the other, and of two different channels that have to be listened to, which usually results in polling one and then polling the other instead of going to sleep and waiting for a message on a single channel.
Now, within the AF, using a message to establish that second side channel has some of these issues, but they can be managed. The first actor launches the second actor. Then the first actor sends a message to the second actor that says, "Please start processing this queue." The first actor then starts filling the queue. At some point, someone sends a message to the first actor to stop (it might be the second actor saying, "I need you to stop"). The first actor then sends a message to the second actor to tell it the queue is going away now.
By managing the queue within the context of AF messages, you can establish such a side channel that handles the one specific piece of data. (A rough sketch of that handshake is below.)
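To make the ordering of that handshake concrete, here is a hypothetical Python sketch with the AF message classes replaced by plain tuples and the actors by threads. It only illustrates the sequence of control messages around the raw data queue; it is not AF code.

```python
# Sketch of the side-channel handshake: launch, hand over the queue, fill it,
# then warn the reader before the queue goes away.
import queue
import threading

def actor_b(inbox: queue.Queue):
    data_q = None
    while True:
        msg, payload = inbox.get()
        if msg == "start processing this queue":
            data_q = payload            # side-channel queue handed over by A
        elif msg == "queue going away":
            data_q = None               # stop reading; A will release it
        elif msg == "stop":
            break
        # ... between control messages, a helper loop would drain data_q ...

def actor_a():
    b_inbox = queue.Queue()
    b = threading.Thread(target=actor_b, args=(b_inbox,))
    b.start()                                               # "launch" the second actor

    data_q = queue.Queue()
    b_inbox.put(("start processing this queue", data_q))    # 1. hand B the side channel
    for sample in range(10):
        data_q.put(sample)                                   # 2. A fills the queue
    # 3. someone asks A to stop (here A simply decides to on its own) ...
    b_inbox.put(("queue going away", None))                  # 4. warn B before releasing it
    b_inbox.put(("stop", None))
    b.join()

if __name__ == "__main__":
    actor_a()
```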
Now, all of that is only necessary if you really can't process the message queue fast enough for direct communication. When I originally created the AF, I used raw queues. At some point, we decided that a priority queue was needed. I have never benchmarked the exact overhead of the priority queue with respect to the non-priority queue, other than to verify that the data swapping -- normally the most expensive part of any communication system in LV -- is just as high performance as the plain queues.
If that latency is significant, I invite anyone who wishes to peek inside the priority queue VIs and see if you can improve the performance. I've gone over and over them for functional correctness. You may be able to simplify something down, just be careful -- multithreaded systems are twitchy.
You could also modify the AF to use regular queues instead of the priority queues. That's an easy enough change to make just by swapping out the priority queue class (structurally, something like the sketch below). See if that fixes your issue.
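Not AF code, but a Python-flavoured sketch of what "swapping out the priority queue class" amounts to structurally: two queue flavours behind one enqueue/dequeue interface, so the caller doesn't care which one it gets. PlainMessageQueue and PriorityMessageQueue are hypothetical names, not classes from the framework.

```python
# Two interchangeable message-queue implementations behind one interface.
import heapq
import itertools
import queue

class PlainMessageQueue:
    def __init__(self):
        self._q = queue.SimpleQueue()

    def enqueue(self, msg, priority=0):     # priority accepted but ignored
        self._q.put(msg)

    def dequeue(self):
        return self._q.get()

class PriorityMessageQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # keeps FIFO order within a priority

    def enqueue(self, msg, priority=0):
        heapq.heappush(self._heap, (priority, next(self._counter), msg))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

# Swapping implementations is then a one-line change at construction time:
msg_queue = PlainMessageQueue()   # or PriorityMessageQueue()
msg_queue.enqueue("do something", priority=1)
print(msg_queue.dequeue())
```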
11-06-2012 04:57 PM
ChuckDiesel wrote:
I guess the major question here was, "Why am I breaking the debug fork but not the main fork?"
The debug fork adds A LOT of latency. The debugger does logging for every message sent and received, and the bookkeeping, although I tried to keep it minimal, is not cheap compared to the very low overhead of the queues themselves.