LabVIEW


A better and efficient alternative to the Queued State Machine

@Bob_Schor

Thanks for the comments. 

In particular, I like the idea of setting the control to default to Read. Yes, this is one area where in the past I have struggled with a wrong setting that went unnoticed until strange things happened!

 
Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 21 of 25

@crossrulz wrote:

MogaRaghu wrote:

So, two questions:

1. Where is the possibility for a race here?

2. Can I simply use Global Variables if FGVs only add to latency?


1. As long as each value is only updated in one place, you will be fine. You only care about the latest value, and only one place is updating it.

2. Yes, you can simply use Global Variables. There is less for you to code, they are more performant, and you can have multiple variables in a single global VI file. I have also found the global VI useful for debugging.
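As a rough text-language analogy, here is a minimal Python sketch of that single-writer rule. The names and values are invented purely for illustration, and LabVIEW globals obviously have no direct Python equivalent; the point is simply that one writer plus any number of latest-value readers is benign:

import threading
import time

# Module-level "global": a rough analogue of a value stored in a global VI.
latest_pressure = 0.0

def acquisition_loop():
    """The ONE place that writes the value."""
    global latest_pressure
    for reading in range(5):
        latest_pressure = float(reading)   # single writer, latest value only
        time.sleep(0.01)

def ui_loop():
    """Readers only poll the latest value; they never modify it."""
    for _ in range(5):
        print("UI sees:", latest_pressure)
        time.sleep(0.01)

writer = threading.Thread(target=acquisition_loop)
reader = threading.Thread(target=ui_loop)
writer.start(); reader.start()
writer.join(); reader.join()

As soon as more than one place starts doing a read-modify-write on that value, the protection discussed further down in this thread becomes necessary.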

 

And in case you are still curious, you can find my NI Week 2016 session here: TS9454 - Are Global Variables Truly Evil?


Thanks ... that settles it. 

Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 22 of 25

@crossrulz wrote:

@MogaRaghu wrote:

I read the document at the link: https://decibel.ni.com/content/docs/DOC-41734

This is purportedly about AEs and FGVs... but I have to admit that even after reading the whole document twice, the only thing I got out of it was the sarcastic and humorous statements. The starting premise and the ending summary seem to contradict each other.


I am curious what article you were actually reading in there. There was nothing sarcastic in that article, and the only remotely humorous statement was a request to tell me if I did something stupid in my benchmarks. Perhaps you are referring to the article I linked to (http://labviewjournal.com/2011/08/race-conditions-and-functional-global-variables-in-labview/)?

 

The whole issue with the Get/Set AE (what some of us have started to call an FGV) is that it does not protect the "critical section": the point where you read a value, update it, and write it back. If you have multiples of these happening at the same time, what happens to the data? With the FGV and the GV there is no protection around the data, so whoever writes last wins. But an AE that does the processing inside of it blocks others from using the data until the whole read-modify-write process is complete.

 

For example:

You have $100 in your bank account.  You deposit $50 at the same time your wife purchases an item for $25.  With the FGV, you have both the deposit process and the withdrawal reading the account balance ($100).  The deposit adds the $50 while the withdrawal subtracts $25.  Then the deposit writes the value ($150).  Then the withdrawal writes its value ($75).  You just lost your $50 deposit.

 

But with an AE, the withdrawal is not even allowed to read the balance until the deposit has completed its process.  So the final value is $125.

 

Please don't get into the hypothetical situations where you come out ahead.  The point is that the final value is wrong when the entire read-modify-write process is not protected.
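The same bank-account race can be sketched in a text language. This is a minimal Python illustration (not LabVIEW, and not anyone's production code): the unprotected function mirrors the FGV/GV read-modify-write, while the locked function mirrors the blocking behaviour of an Action Engine:

import threading
import time

balance = 100                      # starting balance from the example above
lock = threading.Lock()

def unprotected_update(amount):
    """FGV/GV style: read, modify, write, with nothing stopping other writers in between."""
    global balance
    current = balance              # read  (both threads may read 100)
    time.sleep(0.01)               # widen the race window for the demo
    balance = current + amount     # write (last writer wins)

def protected_update(amount):
    """AE style: the whole read-modify-write completes while everyone else is blocked."""
    global balance
    with lock:
        current = balance
        time.sleep(0.01)
        balance = current + amount

for worker in (unprotected_update, protected_update):
    balance = 100
    deposit  = threading.Thread(target=worker, args=(+50,))
    purchase = threading.Thread(target=worker, args=(-25,))
    deposit.start(); purchase.start()
    deposit.join(); purchase.join()
    print(worker.__name__, "->", balance)   # typically 150 or 75 vs. always 125

With the unprotected version both threads read 100, so the last writer wins and the balance ends up at 150 or 75; with the lock, the result is always 125.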


That's a very considerate wife. I do hope banks don't use any FGVs! It was a nice way to explain the concept. Thanks.

 

Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 23 of 25

@Ben wrote:

@MogaRaghu wrote:

I am attaching a very simple QSM. It is just a proof of concept to change the user-interface language based on a selection. In real-time machine controls I have been using the QSM architecture for sequential logic: just create a type def for the enumerated queue states and code away. It works, but at times the number of discrete states runs into the hundreds and it gets difficult to maintain.

 

Just thinking aloud: is there any better method in LV to handle sequential logic? There are times when you keep doing the same thing and miss the opportunity to migrate to new methods!


THAT is exactly the reason I rail against the QSM every chance I get!

 

Sure, it is simple to start with, and a good developer can guard against the problems, but let just one novice who does not know the dangers take over and an avalanche of problems occurs.

 

1) Look at a QMH, which processes requests but does not feed itself like the Worm Ouroboros known as the QSM.

 

2) As mentioned above, use sub-VIs instead of "states" in the QSM. If you are tempted to queue up three states in a row in a QSM, just call three sub-VIs instead.

 

3) Use multiple QMHs if there is more than one thing happening. Do not mix multiple functional areas into a single QMH; create a unique QMH (or whatever) for each area of functionality.
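As a text-language sketch of points 1 and 2 (Python, with invented message and function names, purely for illustration): the handler below only consumes requests from an external queue, calls sub-routines in sequence instead of queueing up states, and never enqueues anything to itself:

import queue

requests = queue.Queue()       # filled only by OTHER loops (UI, TCP, ...), never by the handler

# "Sub-VIs": plain functions called in sequence instead of self-enqueued states
def initialize():
    print("initialize")

def acquire():
    print("acquire")

def save():
    print("save")

def run_measurement():
    # One external request = one complete action, done by calling sub-routines in order
    initialize()
    acquire()
    save()

def message_handler():
    while True:
        message = requests.get()   # blocks until an external loop posts a request
        if message == "stop":
            break
        if message == "run":
            run_measurement()      # the handler never puts anything back on 'requests'

# Example usage: some other loop posts the requests
requests.put("run")
requests.put("stop")
message_handler()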



I can certainly sympathise with a lot of what Ben writes here. But I have come up with a different approach to a queued message handler which replaces the Worm Ouroboros with a possibly worse alternative: a Hydra.

 

As Ben correctly notes, when a QSM starts queueing up states for itself, things get really complicated really quickly. Firstly, I am of the opinion that any given state of a QSM should not enqueue itself. Ever. If an application needs periodically occurring commands to be sent, make a parallel loop to do exactly that. Don't make an infinite loop in a QSM. If, like me, you like making very fine-grained states which get called in various orders via "macro" states, it is also possible for your state machine to pass through several meta states: states where certain operations have only been partially completed and should not be interrupted. With a standard QSM this cannot be guaranteed, as any message from outside wreaks havoc with the current command queue.

 

When my "QSM" receives a message from its external API, it is receiving not a series of messages, but a message Queue.  This message Queue contains 0 or more commands which my code should execute.  This Queue Reference now becomes Queue level zero and the QSM starts processing it.  If, during the processing, we reach a macro state which wants to enque other states, it outputs another Queue Reference (freshly instantiated which contains the states it requires to execute).  Once we have this Queue, it becomes Queue level 1, and we start processing it, leaving Queue Level zero idle for the time being.  When any given Queue has no more elements, the Queue reference is destroyed and we move back to the previous Queue again to resume where we left off.  This can be repeated indefinitely until we have executed all of our states, regardless how many nested macro states we call.  Only when this has been completed do we again check the External Queue to see if a new job is available for processing.

 

So essentially, instead of enqueueing to the current job queue, we create a new job queue and process it until it is empty before returning to where we left off.
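In text-language terms the mechanism can be sketched as a stack of queues: a macro state pushes a fresh queue of sub-states, that queue is drained completely, and only then does execution fall back to the queue below. The following Python sketch uses invented state names and is only an illustration of the idea described above, not the actual implementation:

from collections import deque

# Macro states expand into a fresh queue of sub-states; leaf states just execute.
MACROS = {
    "home_axes": ["move_x_home", "move_y_home", "report_position"],
    "startup":   ["self_test", "home_axes"],   # macros may nest further macros
}

def execute(state):
    print("executing:", state)

def process_job(job_states):
    queue_stack = [deque(job_states)]          # level 0: the externally received job
    while queue_stack:
        current = queue_stack[-1]
        if not current:                        # this queue is empty: destroy it and
            queue_stack.pop()                  # resume the previous level
            continue
        state = current.popleft()
        if state in MACROS:
            queue_stack.append(deque(MACROS[state]))   # new queue level, drained before we return
        else:
            execute(state)
    # Only now would the external queue be checked for the next job.

process_job(["startup", "measure"])

Running this drains startup's sub-states (including the nested home_axes macro) to completion before the outer job continues with measure, which is exactly the "process the new queue until it is empty, then return" behaviour described above.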

 

This approach sounds complex (and it is a bit) but it offers the following features:

1) We can call batch commands from our external API which are executed in series and uninterrupted by any other process.*

2) We can intermix macro states and non-macro states as we desire without having to worry about execution order being disrupted.

3) We have a defined beginning and end for each and every "command" received from external processes. This allows a host of other functionality to be implemented: transactions, synchronous feedback (by incorporating a feedback channel into the external communications path), delayed broadcasts and so forth. We know at any given time how deep our nested queues are. We know when each individual macro state is finished (its queue is empty). We retain a lot more detailed information about where we are in the order of execution.

 

*It is of course typically necessary to define a small set of commands which ARE allowed to interrupt any given execution, such as an emergency stop. These are simply incorporated into the queue/event hierarchy as required (i.e. always check for emergency stop before each state) but are specifically handled in a different manner than the rest.

 

Note that this approach was born of our specific requirements. It is not meant to be, nor can it ever be, a one-size-fits-all approach. I do, however, find that the ability to control the execution order of commands in such a granular fashion reaps a lot of benefits. We have eliminated lots of known (and certainly lots of as-yet-unknown) bugs and race conditions by adopting this approach.

 


In a sense, it's quite similar to a recursive "Pushdown Automaton" as opposed to a "Finite State Machine".  I'm pretty sure I'm mischaracterising these terms, but they keep recurring when I search for similar architectures.

 

 

Message 24 of 25

MogaRaghu wrote:

Just thinking aloud: is there any better method in LV to handle sequential logic? There are times when you keep doing the same thing and miss the opportunity to migrate to new methods!


You're asking for a better way to handle sequential logic. Sequential logic doesn't need anything sophisticated; just put the VIs one after the other.

 

If it gets more complicated, a state pattern works well if you're into OO. It takes some getting used to, though, but traditional state machines fit nicely into the state pattern.
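For anyone unfamiliar with the state pattern, here is a minimal sketch of the idea in Python (hypothetical state names, not LVOOP): each state is an object that does its work and hands back the next state, so the sequence stays explicit and easy to follow:

class State:
    def run(self):
        """Do this state's work and return the next State (or None when finished)."""
        raise NotImplementedError

class Fill(State):
    def run(self):
        print("filling")
        return Pressurize()

class Pressurize(State):
    def run(self):
        print("pressurizing")
        return Vent()

class Vent(State):
    def run(self):
        print("venting")
        return None                 # sequence complete

def run_sequence(state):
    while state is not None:
        state = state.run()         # the current state decides what comes next

run_sequence(Fill())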

 

Queued state machines are just too complex for me; I stay away from them. Don't worry, I do understand them from a LV point of view. But as soon as someone asks you to draw a flowchart, you find out it's near impossible, because you used all the liberties the QSM offered. Flushing the queue and prepending to the queue will not fit in a drawn flowchart.

 

Ever inherited a QSM or PC from someone? They can set you back hours or even days just to figure out what's happening at a basic level.

 

The same problem occurs with a Producer/Consumer. It works nicely, until both loops start pushing stuff onto the queue.

 

It's nice and all that you can do fancy stuff, but I don't want fancy stuff. I want to solve fancy problems in the simplest manner.

 

And as I warned before, once you start to like those hammers ((Q)SM, PC, AF), everything starts looking like a nail. KISS is one of the most important principles.

 

In the title you mention "better and efficient". Better how? Efficient how? CPU, memory, execution time, programming effort, maintainability, clarity, simplicity, flexibility?

 

The simplest solution that fits a problem is not an (or any) out-of-the-box solution. It's a specific solution that fits the specific problem.

 

Analyse, Design, Program (sounds like an NI slogan). The analysis and design could of course result in picking a QSM, PC, AF or whatever (though usually not for me).

Message 25 of 25