‎11-16-2012 08:10 PM
I'm looking for suggestions or recommendations for how to best handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously, that is, binding network shared variables to each indicator, then using several subVIs to process the particular piece of data and write to the appropriate variables.
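To make the decoding step concrete, here is a rough sketch of the kind of unpacking I mean, written as Python-style pseudocode with made-up field offsets (the real work is of course done on the block diagram):

import struct

def decode_packet(raw: bytes) -> dict:
    # Illustrative layout only: a 16-bit status word followed by a
    # 12-bit offset-binary temperature packed into a 16-bit field.
    status, temp_raw = struct.unpack(">HH", raw[:4])
    return {
        "Heater On":   bool(status & 0x0001),       # bit 0 of the status word
        "Fault":       bool(status & 0x0002),       # bit 1 of the status word
        "Temperature": (temp_raw & 0x0FFF) - 2048,  # offset binary: subtract the midpoint
    }

Multiply that by every indicator on the panel and the diagram gets ugly fast.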
I was curious what others have done in similar circumstances.
Bill
‎11-17-2012 04:01 PM
So I'm running into the same issue. I think the solution I'm leaning towards is being very careful about how I label my indicators. I'm sticking to a strict naming convention for each indicator type.
You can then pass an array of indicator references to a processing loop and programmatically write values there using property nodes. MGI has some good open-source subVIs for searching for control/indicator references on your GUI VI. You could write something yourself, but either way you can pass this array of references to your subVI and then search for the indicator reference that corresponds to the data you are working on.
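In text form the lookup amounts to something like this (a Python sketch; the Indicator class is just a stand-in for a VI Server reference and its Value property, and the labels are made up):

class Indicator:
    """Stand-in for a front-panel indicator reference."""
    def __init__(self, label):
        self.label = label  # strict naming convention, e.g. "TEMP_01"
        self.value = None   # stands in for the Value property node

def update_by_label(references, label, new_value):
    # Search the reference array for the matching label, then write the value.
    for ref in references:
        if ref.label == label:
            ref.value = new_value
            return True
    return False

refs = [Indicator("TEMP_01"), Indicator("PRESS_01"), Indicator("HEATER_ON")]
update_by_label(refs, "TEMP_01", 23.5)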
‎11-17-2012 05:32 PM
I do as much processing as possible in subVIs, and use clusters both as display elements and simply to group data together as VI inputs and outputs. If there's a logical grouping for several elements, use a cluster on the front panel. It doesn't have to look like a cluster - one of my favorite techniques is to use a classic cluster and set the border to be transparent. That way you're writing several related elements, which can be processed together in a subVI, to a single indicator. I also don't mind having a cluster output from a subVI and a huge unbundle by name connected to individual indicators. If you don't need all the indicators to update simultaneously, another option is to update, say, 1/4 of the indicators every 250 ms instead of updating everything once per second, with a different frame of a case structure for each group of indicators so that there aren't so many visible on the block diagram at a time.
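The staggered-update idea, written out as a rough Python sketch (in LabVIEW this would be a case structure inside a 250 ms loop; the group names are made up):

import time

GROUPS = 4       # update 1/4 of the indicators each pass
PERIOD_S = 0.25  # 250 ms per pass, so every indicator still updates once per second

indicator_groups = [
    ["Temp 1", "Temp 2"],    # group 0 -- frame 0 of the case structure
    ["Pressure", "Flow"],    # group 1
    ["Heater On", "Fault"],  # group 2
    ["Voltage", "Current"],  # group 3
]

for tick in range(8):                         # two full cycles, just for illustration
    group = indicator_groups[tick % GROUPS]   # pick this pass's "frame"
    for name in group:
        print(f"tick {tick}: update {name}")  # stands in for writing the indicator
    time.sleep(PERIOD_S)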
‎11-17-2012 06:06 PM - edited ‎11-17-2012 06:06 PM
Can you show us a simple example of your front panel?
I often use arrays of indicators.
‎11-18-2012 05:41 PM - edited ‎11-18-2012 05:43 PM
Unfortunately, I can't provide a screenshot. It's basically three graphs and about 30 numeric and boolean indicators. I've decided to put related indicator references into groups of arrays and pass those to subVIs that will process the incoming telemetry for that group. In some cases I could put several individual booleans into an array to speed things up.
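For the status bits in particular, the idea is that one subVI call produces a whole boolean array instead of a separate write per indicator. A quick Python-flavored sketch of that decode (the bit layout is made up):

def decode_status_bits(status_word: int, n_bits: int = 8) -> list:
    # Each bit becomes one element of a boolean array indicator,
    # rather than its own front-panel boolean.
    return [bool(status_word & (1 << i)) for i in range(n_bits)]

print(decode_status_bits(0b10100110))
# [False, True, True, False, False, True, False, True]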
‎11-19-2012 01:18 PM
I highly recommend that you avoid references. They are useful if you need to update properties of an indicator (color, font, visibility, etc.) or when you need to decide at runtime which indicator to update, but they are not a good general solution for writing values to indicators. Do the processing in a subVI, but bundle the data into a cluster output and then unbundle it for display. It's more efficient (writing to references is slow) - and while that won't matter at a 1 Hz update rate, it's still not a good practice. It takes about the same amount of block diagram space to build an array of references as it does to unbundle data, so you're not saving space. I know I sound very adamant about this; earlier in my career I took over maintenance of an application that made excessive use of references, and it made it very hard to follow where data came from and how it got there. (As an aside, that application also maintained both a cluster of references and a cluster of data, the idea being that you would update the front panel indicator through the reference any time you changed the associated value in the data cluster; unfortunately, someone often updated only one or the other, leading to unexpected behavior.)
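The cluster-output pattern, in rough text form (a Python sketch; the dataclass plays the role of the output cluster, and the scaling and bit assignments are made up):

from dataclasses import dataclass

@dataclass
class TelemetryDisplay:
    """Plays the role of the cluster output from the processing subVI."""
    bus_voltage: float
    heater_on: bool
    fault: bool

def process(raw_voltage_counts: int, status: int) -> TelemetryDisplay:
    # All decoding lives here; the caller just "unbundles" the result
    # and wires each field to its indicator.
    return TelemetryDisplay(
        bus_voltage=raw_voltage_counts * 0.01,  # illustrative scaling
        heater_on=bool(status & 0x01),
        fault=bool(status & 0x02),
    )

display = process(1250, 0b01)
print(display.bus_voltage, display.heater_on, display.fault)  # 12.5 True False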
‎11-19-2012 02:04 PM
I can certainly feel your pain.
Note that's really what is going on in that PNG. You can see the Action Engine responsible for updating the display at the far right.
In my own defence: the FP concept was presented to the client's customer before they had a person familiar with LabVIEW identified, so it was worked this way through no choice of mine. I knew it would get ugly before I walked in the door and chose to meet the challenge head on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing views (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
(The GUI did scale poorly though! That is a lot of wires! I was grateful to Jack for the idea to make Align and Distribute work on wires.)
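For readers outside LabVIEW, Defer Panel Updates amounts to batching the redraws: set the property true, write everything, set it false again. In sketch form (Python, with a hypothetical Panel class; in LabVIEW it is a single property on the front panel reference):

from contextlib import contextmanager

class Panel:
    """Hypothetical stand-in for a front panel reference."""
    def __init__(self):
        self.defer_updates = False
        self.values = {}

    def write(self, label, value):
        self.values[label] = value
        if not self.defer_updates:
            print(f"redraw after writing {label}")  # one redraw per write: slow

@contextmanager
def deferred(panel):
    panel.defer_updates = True       # Defer Panel Updates = TRUE
    try:
        yield panel
    finally:
        panel.defer_updates = False  # Defer Panel Updates = FALSE
        print("single redraw for the whole batch")

panel = Panel()
with deferred(panel):
    for label in ("Sensor 1", "Sensor 2", "Sensor 3"):
        panel.write(label, 0.0)      # no redraws inside the batch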
‎11-19-2012 02:35 PM
Yes, this is your friend!
‎11-19-2012 11:11 PM
Thanks for the reminder on using the "Value" property node. I'm now simply processing the data in several subVIs and unbundling on the output. I don't need "Defer Panel Updates" in this case, as the display updates fast enough that you can't even tell the difference. This was more a case of keeping the block diagram as clean as possible than a performance problem.