Actor Framework Discussions


Controlling parallel continuous loops in actor core


Daklu wrote:

The continuous process of updating the waveform graph is handled in a separate while loop. The data for the waveform graph comes from a queue that another actor's actor core enqueues on.

The data acquisition actor sends messages directly to the waveform loop?  This is (imo) a mistake and is causing your difficulty with the waveform loop.  The data acquisition actor should send the data to UI.ActorCore, and UI.ActorCore should forward the data to the waveform loop.  Implement the pause functionality in ActorCore by not forwarding the data to the waveform loop.

The waveform loop doesn't need any message handling functionality.  It can be as simple as a dequeue feeding the data into the waveform indicator.
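
In textual form, the suggested pattern is roughly this (a Python sketch, since a LabVIEW diagram can't be pasted as text; all names here are invented):

import queue
import threading
import time

data_from_daq = queue.Queue()     # enqueued by the DAQ actor
to_waveform_loop = queue.Queue()  # read only by the display loop
forwarding = threading.Event()    # set = forward, cleared = "paused"
forwarding.set()

def update_waveform_indicator(chunk):
    print(f"plotting {len(chunk)} samples")    # stand-in for the graph terminal

def actor_core_forwarder():
    """UI.ActorCore's role: pass DAQ data through to the display loop,
    or silently drop it while paused."""
    while True:
        item = data_from_daq.get()
        if item is None:                       # shutdown sentinel
            to_waveform_loop.put(None)
            return
        if forwarding.is_set():                # pause = just don't forward
            to_waveform_loop.put(item)

def waveform_loop():
    """The waveform loop: nothing but a dequeue feeding the indicator."""
    while True:
        chunk = to_waveform_loop.get()
        if chunk is None:
            return
        update_waveform_indicator(chunk)

t1 = threading.Thread(target=actor_core_forwarder)
t2 = threading.Thread(target=waveform_loop)
t1.start(); t2.start()
data_from_daq.put([0.0] * 10_000)   # one 10k-sample block -> displayed
time.sleep(0.1)                     # let it through before pausing
forwarding.clear()                  # "pause": later blocks are dropped
data_from_daq.put([0.0] * 10_000)   # never reaches the display
data_from_daq.put(None)             # shut both loops down
t1.join(); t2.join()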

I think we may be misunderstanding each other, or maybe I included too little information in my original post (to keep it shorter / more precise, or so I thought).

The DAQ actor generates and enqueues data on a queue.

The GUI actor handles UI events and user-generated events in an event structure, and uses an internal queue and state machine setup (the bottom while loop in the picture in the OP) to move the actions/consequences of the user events away from the event structure (to keep the UI responsive).

So to answer your post: The "waveform loop" is inside the actor core, in the separate while loop. So I don't understand what you mean by having the data sent to UI.ActorCore and having it forward the data to the waveform loop.

And yes, having a loop that only dequeues and feeds the data to the indicator is pretty much exactly what I want, BUT I am not sure what the best way to control such a loop is. How do I decide when it should start dequeuing and when it should stop/pause? What if I want to be able to dequeue from another queue instead? I think you may be correct, but I just don't quite understand what you mean.

Lastly, the shift register debate, while relevant and appreciated as general advice, originates from a bad description / error on my part. The only reason the shift register is there is because I obtain the data queue reference in a separate case from where I do the actual dequeuing and output to the indicator.

_____________________________________________

As I stated in the original post, I had played around with another possible solution. This new screenshot might make things clearer, although it is a little different from the code in the original post.

The "dequeue from internal_queue_gui and switch state" while loop is still there, but the case with the actual "dequeue and output to indicator" loop is now a seperately nested while loop. The problem here is that in order to make the "state switching loop" responsive I have to stop the "dequeue and indicate" loop and I'm doing that with local variables attached to the front panel buttons.

[Attached screenshot: Gui.Actorcore.png]

Message 11 of 20

LVB wrote:

^^^ Like he said

If you want an example, take a look at the Angry Eagles project.  You could use a queue or a notifier to display the data (depending on where you want to store any "buffered data" and how you want to display it).

[Attached image: Angry Eagles DVR with Image.png]

If I understand that example correctly, the front panel indicator (2D Picture) is only updated whenever a "Send notification" is fired from somewhere (an actor message or something). If you are updating a waveform graph at a relatively high speed, that results in a lot of messages (each firing a "Send notification" or "Generate User Event") being sent. This doesn't seem very smart to me, but I might be misunderstanding the code in the picture.

Message 12 of 20

"How do I decided when it should start dequeuing and when it should stop/pause."

One way is to create a time-delayed message (at an "acceptable" rate) that calls an actor method that takes data from the queue and sends it to the actor core's event structure (ES) as an event (or directly to the control ref).

Another way is to put timing logic in the ES so that it doesn't display the value except every 50-100ms.
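
In Python-flavoured pseudocode, that second option looks something like this (update_indicator and maybe_display are made-up names standing in for the ES case and the control-ref write):

import time

DISPLAY_INTERVAL = 0.1   # 100 ms, i.e. at most ~10 UI updates per second
_last_display = 0.0

def update_indicator(value):
    print("display:", value)          # stand-in for the waveform graph terminal

def maybe_display(value):
    """Called for every incoming data message; only touches the UI when
    at least DISPLAY_INTERVAL has passed since the last update."""
    global _last_display
    now = time.monotonic()
    if now - _last_display >= DISPLAY_INTERVAL:
        _last_display = now
        update_indicator(value)

for i in range(1000):                 # data arriving much faster than 10 Hz
    maybe_display(i)
    time.sleep(0.001)                 # ~1 kHz input -> ~10 Hz display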

"What if I want to be able to dequeue from another queue instead?"

Ah, this is why you want the queue in the actor's private data - so you can change the queue, or have additional queues. Sometimes, I'll have a small Viewer Actor that simply receives data messages and decides how and when to display.
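
A minimal sketch of that Viewer idea (Python; the class and message-handler names are hypothetical):

import queue

class Viewer:
    """A small 'Viewer actor': the data queue lives in its private data,
    so a message handler can swap it for a different source at runtime."""

    def __init__(self, source):
        self.source = source            # queue reference kept in private data

    def on_switch_source(self, new_queue):
        """Handler for a hypothetical Switch Source message."""
        self.source = new_queue

    def on_update(self):
        """Handler for a timed Update message: take data, if any, and display."""
        try:
            chunk = self.source.get_nowait()
        except queue.Empty:
            return                      # nothing new; try again next tick
        print(f"displaying {len(chunk)} samples")

raw = queue.Queue()
filtered = queue.Queue()
viewer = Viewer(raw)
raw.put([0.0] * 100)
viewer.on_update()                      # displays from the raw queue
viewer.on_switch_source(filtered)       # user picked a different source
filtered.put([1.0] * 50)
viewer.on_update()                      # now displays from 'filtered'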

Message 13 of 20

Hi, I'm working with Kaspercj on this same "learning the actor framework" project.

We are very actively trying to avoid sending messages from the DAQ actor to the GUI actor each time new data is available. We do this because once the DAQ task is started, there will almost always be data available at a very high rate. So why indicate this with 10-100 messages per second when we could just tell the GUI _once_ "hey, listen on this queue, data will appear here at a constant (high) rate until you hear otherwise"?

E.g. if we sample at 200 kHz and transfer 10k samples at a time, then we would send 20 messages per second notifying the GUI of the same thing, namely "hey, 10k samples available (again...)".

Is it stupid to try to avoid these messages? 20 msg/sec seems like a lot to me, but on the other hand maybe it's not so much. However, in the application we aim to build in the end, we will have quite a few actors needing to notify each other that data is available.

The DAQ Actor actually samples the DAQ device and sends the data to other actors.

The "Write to disk" Actor will write the data to disk (i.e. it needs the data from the DAQ Actor).

Different analysis actors, e.g. an FFT or Trending Actor, will perform the required analysis and forward data to the GUI (i.e. they need the data from the DAQ Actor).

The GUI Actor displays the data on graphs etc., and it will need data from the analysis actors (e.g. it could need data from the FFT Actor).

And all of a sudden we maybe have 5-10 actors sending 20 msgs/sec each (a total of 100-200 msgs/sec) that do nothing but notify other actors that some data is available. Is this a problem (e.g. if we decide to take this path)?

Message 14 of 20

JonasCJ -

AF is designed for sending lots of messages quickly, like, tens of thousands of them per second. The act of sending and receiving messages between AF actors  is not going to cause performance issues in 99% of your applications. (You can cause problems for yourself with the code inside a "Send Message" or "Do" VI, of course.)

Some general advice: You're falling into the trap of optimizing performance before a performance problem exists. If you write some code and it runs too slowly, then you can work on making it faster. Until that happens, don't worry about trying to squeeze maximal performance out of a design.

Message 15 of 20

Kasper, sorry for hijacking (our) your thread. Punch me this afternoon if you feel offended.

David Staab, thank you for your reply.

We are well aware of the "optimizing performance before having performance issues" trap, and maybe we've fallen into it anyhow.

But now that we know (taking your word for it) that the AF is designed for a high message rate (10k/s maybe), we can think about other ways of doing this.

Do you also know if the Actor Framework is designed to have messages containing data? E.g. is it okay to attach 10k samples (10k/ch * 8 byte/double * 4 channels = 320 kbyte) to a message and deliver the data that way? If the message handling is done with queues (which it is), then maybe it's stupid to set up another queue for data transfer and use the messages (which also travel by queue) to send payload-free "data available" notifications.

Now I remember why we were taking this queue approach to data transfer. It was because we originally planned on having multiple executables communicating and exchanging data. E.g. the DAQ actor could be on one PC and the GUI on another. Before we found the Linked Network Actor, all we could think of was to transfer data with a TCP/IP queue. That must somehow have slowly evolved into using local queues for transfer within the same executable.

Message 16 of 20
Solution (accepted by Kaspercj)

JonasCJ wrote:

Do you also know if the actor framework is designed to have messages containing data? E.g. is it okay to attach 10k samples (10k/ch * 8 byte/double * 4 channels = 320 kbyte) to a message and deliver the data that way? If the message handling is done with queues (which it is)...

Yes, yes, and exactly. In an app I'm working on right now, I'm sending 2k samples at 400 messages per second via Network Streams and then through a series of AF messages, some of which are duplicated in a broadcast mechanism. There are examples from NI using DAQ in AF (somewhere, I swear I saw one from Eli...) that send samples in messages. AF is built on the LV Queue, and we've been sending samples in queues for years. It's safe unless you do something truly unique, I promise.
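
The payload-in-message idea, reduced to a Python sketch (DataMsg is a made-up stand-in for an AF message class whose private data is the sample block itself):

import queue
from dataclasses import dataclass

@dataclass
class DataMsg:
    channel_data: list                  # e.g. 4 ch x 10k doubles ~ 320 kB

mailbox = queue.Queue()                 # the receiving actor's message queue

# Sender: the samples ride along inside the message object...
mailbox.put(DataMsg([[0.0] * 10_000 for _ in range(4)]))

# Receiver: ...so no separate "data available" queue is needed.
msg = mailbox.get()
print(len(msg.channel_data), "channels x", len(msg.channel_data[0]), "samples")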

Message 17 of 20

Great. We'll revise our whole data transfer approach then.

When you say "400 messages per second via Network Streams", do you mean you use the Linked Network Actor (developed by NI), or have you done something yourself with Network Streams and AF?

Message 18 of 20

Just a thought about not spending a lot of effort sending very large / fast data over multiple message streams:

..

Why not use a dynamic call to a VI that gets the data, and use the hardware buffer to hold the very large / fast data? Usually there is just one place where the full data is needed, then other places like a GUI where a processed-down / slower set of data is used (example: graphs are limited to the number of screen pixels they occupy, so send just enough, which will also speed up the GUI). The VI that handles the very large / fast data stream could prepare much smaller subsets of data to be used elsewhere.
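
One common way to "send just enough" is min/max decimation, sketched here in Python (decimate_minmax is an invented helper, not anything from an NI library):

import math

def decimate_minmax(samples, max_points=1000):
    """Shrink a big sample block to roughly max_points for display,
    keeping one (min, max) pair per bucket so peaks stay visible."""
    if len(samples) <= max_points:
        return list(samples)
    bucket = max(1, len(samples) * 2 // max_points)  # two outputs per bucket
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append(min(chunk))
        out.append(max(chunk))
    return out

raw = [math.sin(i / 50) for i in range(200_000)]     # "very large / fast" data
print(len(decimate_minmax(raw)))                     # ~1000 points for the graph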

..

If the end user needs to zoom deep into the data, put the big data on the graph when that happens, not while the very large / fast data is streaming in real time. Or adjust what is sent to the graph in real time.

..

Waiting until there is a performance problem to optimize is also known as bloatware, or CPU and memory hog design. The time when the performance problem appears may not come until the end user (or the network service that the user must use) adds other things to the same computer. There is a balance to be found. I try to keep my applications using well under half the CPU and memory in anticipation of other things being added over the years.

Message 19 of 20

I developed my own networking components (on top of the Network Streams API) around the same time "niACS" was polishing and releasing his LNA. The LNA requires an AF Actor on the other end of the link; my components don't. Since I don't use AF on LVRT targets, I stuck with my stuff.

Message 20 of 20