LabVIEW

Any penalty to using one big cluster in a state machine?

I've been working on a lot of small/medium sized state machines lately, and I've gotten into a habit of putting most of my data (single values, arrays, strings, even LVOOP objects) in one cluster that I pass from state to state with a shift register.  I just unbundle the data I need in each state, work on it, and bundle it back in for the next state.  Part of me says that this is a bad idea - that I should separate the big cluster into a set of smaller clusters that group the data by logical categories.  Another part of me says that if I do it that way, I'm just creating needless clutter on my diagram.

 

So my question is simply this: is there any significant performance penalty for using a single cluster rather than multiple clusters in this fashion?  I never run the whole cluster into subVIs, and I never split the main cluster wire, so it doesn't seem like there should be... but I've been wrong before!

 

Thanks,

Jason

0 Kudos
Message 1 of 10
(4,142 Views)
I don't think there is a penalty for using a large cluster... have you searched the forum to see if anyone else has had this question?
Harold Timmis
htimmis@fit.edu
Orlando,Fl
*Kudos always welcome:)
Message 2 of 10
(4,133 Views)

Well, "penatly" is perharps not a very fitting word....but it can easily happen that a complex datatype has a negativ impact on performance. In order to achieve the best performance, the data structure should be as flat as possible.

As long as the nesting depth of the cluster is not too great, you will not encounter any noticeable drawback.

 

An example of a deeply nested data structure: an array of clusters, where each cluster contains an array of clusters, which in turn contains an array of something... and so on.
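
(As a rough text-language illustration of the two shapes described above - flat versus deeply nested - here is a small Python sketch; the type and field names are made up purely for illustration.)

from dataclasses import dataclass, field
from typing import List

# Flat "cluster": every element sits at the top level of one record.
@dataclass
class FlatState:
    setpoint: float = 0.0
    samples: List[float] = field(default_factory=list)
    status: str = "idle"

# Deeply nested "cluster": an array of records that themselves hold arrays of records.
@dataclass
class Channel:
    readings: List[float] = field(default_factory=list)

@dataclass
class Module:
    channels: List[Channel] = field(default_factory=list)

@dataclass
class NestedState:
    modules: List[Module] = field(default_factory=list)  # array -> cluster -> array -> cluster -> array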

 

hope this helps,

Norbert 

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 3 of 10
(4,109 Views)
For most applications I use a single cluster to hold all of my variable data, and I have not seen a performance hit. That said, I write functional test code, and we generally have to wait on the instruments and/or the Unit Under Test (UUT), especially when dealing with analog signals, as the UUT typically has filters on those signals.
Visualize the Solution

CLA

LabVIEW, LabVIEW FPGA
Message 4 of 10
(4,083 Views)
Performance hits are generally not a concern with large clusters. Performance hits come more from poor coding practices.
PaulG.
Retired
Message 5 of 10
(4,068 Views)

I use a large cluster much like what you describe and have had no problems.  Make the cluster a typedef so that any changes propagate through the program with minimal effort.  I often call it "Indicators and Flags.ctl" or "InF.ctl."

 

If I have large arrays of data I may put those in a separate shift register or in an Action Engine. 

 

Rather than putting the large cluster on the front panel, I usually create a subVI with only the cluster as an indicator.  Set default values there and use the subVI terminal to connect to the initialization terminal of the shift register - saves block diagram space.
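
(In text-language terms, the pattern described here is roughly: define the state record once as a shared type, and give it a single init routine that owns the default values. A minimal Python sketch, with illustrative field names only:)

from dataclasses import dataclass, field
from typing import List

@dataclass
class StateData:                 # plays the role of the typedef'd cluster ("InF.ctl")
    run_count: int = 0
    message: str = ""
    results: List[float] = field(default_factory=list)

def init_state() -> StateData:
    # Plays the role of the init subVI: one place that owns the default values.
    return StateData(run_count=0, message="ready", results=[])

state = init_state()             # feed this into the loop's shift-register equivalent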

 

Lynn 

Message 6 of 10
(4,062 Views)

As Norbert mentioned, the answer to your question depends on your data structure. Provided you can do all of your data manipulation "in place," there should not be any issues. More complex structures can still be handled in place, but you may have to use the in-place operations to achieve that effect. Depending on how comfortable you are with those operators, they can complicate the appearance of your code.

 

If you can't do everything "in place" and you have a super-cluster, then it's time for me to quote Rolf again, who wrote: "Once all of your physical memory is filled up with a single cluster, your application is probably going to suck."

 

So let's say you have cleared all of the above hurdles and still want to use a single shift register that holds all data for all states. I ask you to carefully examine where the app may go in the future and what type of animal it could turn into. If there is even a small chance that the app may turn into something that non-computer users will use (requiring a robust app that protects itself from dumb users), the single-cluster approach is going to get in your way when the app gets big.

 

1) If you have to add another field to the cluster, every function that uses that cluster should be re-tested. We have an app in-house that was developed by our customer and that we support. There is an 800+ step procedure required to re-verify the app!

 

2) As more functionality is added, you will add more states to your state machine. Personally, I cringe when I see the support developer having to choose from a list of 300 states when working with the state machine.

 

3) You will have a hard time reusing code aside from cutting and pasting.

 

4) When you unbundle, manipulate, and replace, you are in danger of creating duplicate data, which will impact performance.

 

I never went through the formal IS training, but my wife did, and she gave me the short version of how to normalize a DB. There is actually a science to the process that results in only related items being grouped together, and if you take it to the full extreme of a "fully normalized DB," there are absolutely no duplicates of data. For LV apps a fully normalized DB adds some overhead, so I don't go that far.

 

The following ignores the object-oriented approach and sticks with old-school ideas.

 

So after I analyze my data structures, group related items together, and review who touches what when, THEN I try to wrap up the data in Action Engines. More often than not, the actions will replace some or most of the work done in some states. From your question it sounds like your "read-modify-replace" constructs could be moved into AEs with little effort.
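
(A loose text-language analogy of that wrapping, for anyone unfamiliar with the Action Engine idea: one routine owns the data and exposes a handful of named actions on it, so the read-modify-replace work lives in exactly one place. The class and action names below are made up for illustration.)

from typing import List

class ResultsEngine:
    """Loose Python analogy of an Action Engine / functional global."""

    def __init__(self) -> None:
        self._results: List[float] = []      # the data the engine protects

    def init(self) -> None:                  # action: reset
        self._results.clear()

    def append(self, value: float) -> None:  # action: add one measurement
        self._results.append(value)

    def mean(self) -> float:                 # action: derived read
        return sum(self._results) / len(self._results) if self._results else 0.0

# States call named actions instead of bundling/unbundling a shared cluster:
engine = ResultsEngine()
engine.append(1.2)
engine.append(3.4)
print(engine.mean())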

 

Now for OOP

 

I'm still learning OO system design, but I have found myself "turning my apps inside out" with LVOOP. By this I mean that rather than thinking of the data AND the function as living inside the AE, the data is outside and acted on by what is inside the LVOOP methods. I have been amazed at the degree to which LVOOP can operate in place, but I digress. There is an argument/design pattern that says that if you have a function, like a test, that uses other classes, then you can create a class for the test and slam all of the required objects into it.

 

Done rambling for now. As usual, if there is anyone out there who wants to correct me on any of the above, please do so!

 

Ben

 

 

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 7 of 10
(4,057 Views)

OK - from the comments, it's about what I had figured.  I tend to use a very flat data structure - I only very rarely put a cluster in a cluster, and I never go deeper than that.  Frankly, if you nest too deeply, the bundle/unbundle nodes tend to take up too much screen space... I also put it in a typedef to save myself a lot of edit issues.  I like the idea of putting it in a subVI for initialization purposes... I'll have to try that.

 

To Ben's point, I could easily use action engines, but I opted for this approach because it felt more in line with the data flow paradigm (Hey, that sort of rhymes!).  I typically reserve AEs for apps where I am communicating between VIs or loops.  No particular reason - just my habit.

 

The subVI re-test issue doesn't really seem to apply here - again, I never run the "super cluster" into a subVI.  I *always* break out just the elements I need and pass them to the subVIs, for this very reason.

 

Thanks to everyone who responded - I appreciate the sanity check!

 

Jason

0 Kudos
Message 8 of 10
(4,034 Views)

Hi Jason P,

 

That is how I do it.  Having it all in "one big cluster" also makes it easy to save the state machine setup to an .xml file :).
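
(In LabVIEW this is commonly done by wiring the whole typedef'd cluster into Flatten To XML. As a rough text-language analogy of the "one record in, one file out" idea, here is a hypothetical Python sketch with made-up field names:)

from dataclasses import dataclass, asdict
import xml.etree.ElementTree as ET

@dataclass
class SetupCluster:              # stands in for the "one big cluster" of setup values
    motor_speed: float = 100.0
    step_count: int = 2000
    direction: str = "cw"

def save_setup(cfg: SetupCluster, path: str) -> None:
    root = ET.Element("setup")
    for name, value in asdict(cfg).items():
        ET.SubElement(root, name).text = str(value)   # one element per field
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

save_setup(SetupCluster(), "setup.xml")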

 

-Nic

 

Edit: My state machines are typically small (fewer than 50 states) and my users are all within arm's reach.   My most recent state machine dealt with a stepper motor.  I divided the state machine into two parts: one I declared to be stepper motor "sequences," and the other was a state machine that handled the flow control of the automation.  The state machine consumed the sequences (there was a drop-down list in each state with a list of sequences).  So in this particular example, my state machine was divided into two arrays of clusters.

Message Edited by Nickerbocker on 10-13-2009 11:07 AM
0 Kudos
Message 9 of 10
(4,029 Views)

Before the In Place Element structure came along, I did notice a performance penalty when using large arrays inside clusters.

 

For example, and this was relevant for LabVIEW 7.1 (it's probably not relevant when using the In Place Element), I had a cluster containing an array (of several million data points) and a boolean flag being passed around a shift register. Merely unbundling, inverting, and rebundling the boolean caused a massive performance hit, no doubt due to the reallocation of memory for the big array. I separated the boolean flag out of the cluster and the performance problem went away immediately.
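
(As a loose text-language analogy of that fix - keep the cheap-to-update flag out of the structure that holds the huge array - here is a hedged Python sketch. LabVIEW's actual memory behaviour is version-specific; the deep copy below just stands in for the by-value reallocation described above, and the names are illustrative only.)

import copy
from dataclasses import dataclass, field
from typing import List

@dataclass
class BigCluster:
    samples: List[float] = field(default_factory=list)  # "several million data points"
    flag: bool = False

def toggle_by_value(state: BigCluster) -> BigCluster:
    # Emulates unbundle / invert / rebundle on a by-value wire: the whole record,
    # including the big array, gets copied just to flip one boolean.
    new_state = copy.deepcopy(state)
    new_state.flag = not new_state.flag
    return new_state

# The fix described above: keep the flag outside the big structure,
# so flipping it never touches the array at all.
def toggle_flag_only(flag: bool) -> bool:
    return not flag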

Message 10 of 10
(4,000 Views)