LabVIEW


Tag Data design for parallel read/write

I'm working on a project with a handful of dynamically loaded, parallel, instrument modules (Actors). I want their latest public state to be globally available within the program. So I'm trying to decide the best way to store this tag data.

 

  • Shared Variables
    • Must be defined at edit time, so instrument modules can't be loaded dynamically
    • Clunky deployment; the Network Variable Engine service is essentially a 3rd-party dependency outside LabVIEW
  • Global: Map( tag: value )
    • Race conditions; cannot update in place
  • DVR: Map( tag: value )
    • Atomic updates eliminate race conditions
    • Allows parallel read access
    • Writes will still lock the DVR for all instruments (see the sketch after this list)
  • LabVIEW Current Value Table library
    • Under the hood this is an FGV, so it's worse than a DVR; both reads & writes will block all other instruments
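Since LabVIEW is graphical, here is a rough text-language analogue of the DVR-backed Map( tag: value ) option (Go is used purely for illustration; the names are invented and nothing here is actual LabVIEW API): a tag map behind a readers-writer lock, so reads run in parallel much like a DVR In Place Element with parallel read-only access, while a write briefly takes exclusive access.

```go
// Conceptual sketch only: a tag store behind a readers-writer lock.
// Reads take a shared lock and can run concurrently (the analogue of a
// DVR with parallel read-only access); writes take the exclusive lock,
// which mirrors "writes will still lock the DVR for all instruments".
package tagstore

import "sync"

// TagStore maps tag names to their latest published values.
// float64 is just an illustrative value type; a variant-like "any"
// type would be closer to the LabVIEW idea.
type TagStore struct {
	mu   sync.RWMutex
	tags map[string]float64
}

func NewTagStore() *TagStore {
	return &TagStore{tags: make(map[string]float64)}
}

// Read acquires the shared lock, so many instruments can read at once.
func (s *TagStore) Read(tag string) (value float64, ok bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	value, ok = s.tags[tag]
	return value, ok
}

// Write acquires the exclusive lock, briefly blocking all readers.
func (s *TagStore) Write(tag string, value float64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.tags[tag] = value
}
```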

 

Do you have any experience with tag data solutions, or suggestions how to approach this?

Message 1 of 16

Don't forget channel wires.

 

You can also use a private (lib or class) global. Much better than a normal global.

 


@OneOfTheDans wrote:

I want their latest public state to be globally available within the program.


Maybe the question should be how to avoid this.

 


@OneOfTheDans wrote:

Do you have any experience with tag data solutions, or suggestions how to approach this?


My experience is mostly in avoiding global/by-ref data, usually with events or some other data distribution system. Anything to avoid global or by-reference data.

 


@OneOfTheDans wrote:
  • LabVIEW Current Value Table library
    • Under the hood this is an FGV, so it's worse than a DVR; both reads & writes will block all other instruments

Eh... When would a FGV block other instruments while a DVR wouldn't?

 

Both a DVR and a FGV will block all parallel access.

 

Both DVRs and FGVs can be split up to limit the blocking time, and in both cases this can cause race conditions.

 

Add normal globals to that list. Read and Write are 'split' by default, so you always get (the risk of) race conditions. If you wrap the global access in a VI, you're back to blocking...
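To make that "split read and write" risk concrete, here is a small illustrative sketch (Go as a stand-in for block-diagram code; nothing here is a LabVIEW API): two parallel writers update the same value, one with the read and the write as separate protected steps (like a plain global, or a split FGV), one with the whole read-modify-write done as a single locked action. The split version loses updates.

```go
// Sketch of the "split read and write" race: separate read and write
// steps let another writer sneak in between them, so updates get lost.
// Doing the whole read-modify-write under one lock (one atomic FGV
// action, or an In Place Element on a DVR) does not have this problem.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex
	counter := 0

	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(2)

		// Racy: read and write are two separately protected steps,
		// so this writer can overwrite someone else's newer value.
		go func() {
			defer wg.Done()
			mu.Lock()
			v := counter // "read global"
			mu.Unlock()
			mu.Lock()
			counter = v + 1 // "write global" based on a possibly stale read
			mu.Unlock()
		}()

		// Safe: the entire read-modify-write happens under one lock.
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	// 2000 increments were attempted; the racy pattern usually loses some.
	fmt.Println("counter =", counter)
}
```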

 

The main difference is that a (functional) global will persist, while a DVR is destroyed when the top-level VI that created it stops.

Message 2 of 16

wiebe@CARYA wrote:

Don't forget channel wires.


I've always considered these to be merely wrappers around standard queues. I tried using them recently and was put off by the pop-up configurator, like an Express VI, and all the class wrapping dependencies they pulled into the project. I'm writing code for a cRIO, so I'm trying to minimize bloat where I can.

 


wiebe@CARYA wrote:

You can also use a private (lib or class) global. Much better than a normal global.

Maybe the question should be how to avoid this.

My experience is mostly in avoiding global/by-ref data, usually with events or some other data distribution system. Anything to avoid global or by-reference data.

All my instruments still have private state data. The global tag data is the instrument's public API, which should be globally accessible. What are some of the "other data distribution systems" you've used/designed?

 


wiebe@CARYA wrote:

Eh... When would a FGV block other instruments while a DVR wouldn't?

Both a DVR and a FGV will block all parallel access.


DVRs can allow parallel read-only access.

 


wiebe@CARYA wrote:

Add normal globals to that list. Read and Write are 'split' by default, so you always get (the risk of) race conditions. If you wrap the global access in a VI, you're back to blocking...


With one global per tag or one global per instrument, I still have the issue of needing to pre-define all my instruments at edit time (no dynamic loading). I think Shared Variables are essentially the same as globals. Either could be made dynamic, like the Global: Map( tag: value ), but that's so race-prone that parallel writes are practically guaranteed to clobber each other's data.

 

Your comments reminded me that Notifiers exist and could maybe be used here... I wonder how bad the overhead is for LabVIEW to maintain a few hundred Notifiers?

Message 3 of 16

I'm thinking that this might just be a good use of Wait on Notification from Multiple, feeding tags to a DVR. I'd have to heat up the laptop and try a POC code example myself. Performance should be high for writes, but limited to a single context. With a parallel-read DVR you could expose the tag data context-wide, but... WHY you would want to expose component state data widely makes me wonder where that code smell is coming from. Pushing it to a debug/trace/log module makes more sense.


"Should be" isn't "Is" -Jay
Message 4 of 16

Maybe my first post wasn't clear. I'm not trying to store all my program state in one global location. I'm only trying to store the latest public API values from numerous dynamically loaded instruments. I think this concept of global tag data is the de facto standard for PLCs doing industrial communication. I'm just looking for similar functionality in LabVIEW.

 

The Wait on Multiple is interesting. So one loop would own the global DVR and populate it with incoming tag data, and anyone could read the DVR. At that point maybe I should just use named notifiers?
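A rough sketch of that single-writer idea, again in Go purely as a stand-in for LabVIEW (the names and the channel-based fan-in are assumptions, not the Notifier API): every instrument publishes (tag, value) updates into one stream, a single collector loop applies them to the store, and everyone else only ever takes the parallel read path.

```go
// Sketch only: one collector goroutine owns all writes to the tag map
// (playing the role of the loop that owns the DVR); instruments publish
// updates through a channel (standing in for notifiers), and readers
// take the shared read lock in parallel.
package main

import (
	"fmt"
	"sync"
	"time"
)

type TagUpdate struct {
	Tag   string
	Value float64
}

func main() {
	var mu sync.RWMutex
	tags := make(map[string]float64)
	updates := make(chan TagUpdate, 64) // fan-in point for all instruments

	// Collector: the only writer, so write ordering is well defined.
	go func() {
		for u := range updates {
			mu.Lock()
			tags[u.Tag] = u.Value
			mu.Unlock()
		}
	}()

	// Dynamically "loaded" instruments only need the updates channel;
	// nothing has to be predefined at edit time.
	for _, name := range []string{"PSU1", "DMM1", "Scope1"} {
		go func(name string) {
			for i := 0; i < 10; i++ {
				updates <- TagUpdate{Tag: name + ".reading", Value: float64(i)}
				time.Sleep(50 * time.Millisecond)
			}
		}(name)
	}

	// Any consumer can read the latest published values concurrently.
	time.Sleep(time.Second)
	mu.RLock()
	for tag, v := range tags {
		fmt.Printf("%s = %v\n", tag, v)
	}
	mu.RUnlock()
}
```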

 

I'll have to think about this more over the weekend. It's more of a thought experiment anyway. I'm using Shared Variables now, and I just have to hard-code the maximum number of each type of instrument. Not the end of the world, but it'd be nice to have a cleaner, more expandable solution.

Message 5 of 16

And if that were a DVR to a void variant, the tags (as variant attributes) and values would be stored in a red-black tree for fast parallel read access.

 

Since Wait on Notification from Multiple takes an array of notifiers of some type (a cluster of tag and variant sounds like my first approach, or maybe tag, datatype, and flattened string), naming the notifiers would be unnecessary; the notifiers-out array (of refnums) is enough to differentiate the source. A quick compare of the current notifications against the previous (z-1) notifications held in a feedback node would reduce the DVR variant-attribute writes to changes only. You may want to do some sorting of the notifiers-in array if the notifier array size is highly dynamic, but if your instruments are even close to an open-once, close-once paradigm you only take a hit during startup and cleanup. (Do you need to delete tags when an instrument closes?)

 

The risk would be a possibility of deadlock in edge cases, but there is the Wait on Notification from Multiple with History function for that (at some performance hit that won't affect the DVR access times). Either Wait on Notification flavor should perform well, but it would not be a substitute for an RTOS or a true high-performance PLC; it would give "PLC-like" performance, if you want a "juice drink with artificial flavor".
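A small sketch of that "changes only" compare, under the same illustrative Go assumptions as the earlier snippets (invented names, not a LabVIEW API): the collector keeps the previously seen value per tag and skips the exclusive-lock write when nothing changed, which is roughly what comparing the current notifications against the z-1 notifications in a feedback node would buy.

```go
// Sketch only: reduce writes into the shared tag map to changes only.
// The single collector remembers the last value it saw per tag and only
// takes the exclusive write lock when the value actually changed.
package tagstore

// ChangeFilter tracks the last value the collector saw for each tag.
// It is used from the collector loop only, so it needs no locking itself.
type ChangeFilter struct {
	previous map[string]float64
}

func NewChangeFilter() *ChangeFilter {
	return &ChangeFilter{previous: make(map[string]float64)}
}

// Changed reports whether the value differs from the last one seen for
// this tag, remembering the new value as a side effect.
func (f *ChangeFilter) Changed(tag string, value float64) bool {
	if old, seen := f.previous[tag]; seen && old == value {
		return false // unchanged: skip the write, don't block readers
	}
	f.previous[tag] = value
	return true
}
```

In the collector loop from the earlier sketch, the store write would then simply be wrapped in `if filter.Changed(u.Tag, u.Value) { ... }`.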


"Should be" isn't "Is" -Jay
Message 6 of 16

You say you are using "Actors", so note that part of the Actor Model is to only communicate information via messages, and not to have parallel communication methods. So all these suggestions break the Model. You can do this within the Model using messages, with some kind of "Broker" actor that other actors use to publish and subscribe to various information.
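For what a message-only "Broker" might look like, here is a minimal sketch (Go again, with invented names; it is not the Actor Framework API): instruments publish tag updates as messages, other actors subscribe, and only the broker's own message loop ever touches the stored values.

```go
// Minimal broker sketch: instruments publish tag updates as messages,
// subscribers receive them as messages, and the broker's message loop
// is the only code that touches the latest-value map or subscriber list.
package main

import "fmt"

type TagUpdate struct {
	Tag   string
	Value float64
}

type Broker struct {
	publish   chan TagUpdate
	subscribe chan chan TagUpdate
}

func NewBroker() *Broker {
	b := &Broker{
		publish:   make(chan TagUpdate),
		subscribe: make(chan chan TagUpdate),
	}
	go b.run()
	return b
}

// run is the broker actor's message loop; there is no parallel access
// to its state because nothing else can reach it.
func (b *Broker) run() {
	latest := make(map[string]float64)
	var subs []chan TagUpdate
	for {
		select {
		case u := <-b.publish:
			latest[u.Tag] = u.Value
			for _, s := range subs {
				s <- u // a full, unread subscriber queue would stall the
				// broker; a real design needs a policy for slow consumers
			}
		case s := <-b.subscribe:
			subs = append(subs, s)
			for tag, v := range latest { // replay current state to the newcomer
				s <- TagUpdate{Tag: tag, Value: v}
			}
		}
	}
}

func main() {
	b := NewBroker()
	sub := make(chan TagUpdate, 16)
	b.subscribe <- sub
	b.publish <- TagUpdate{Tag: "PSU1.voltage", Value: 12.0}
	fmt.Println(<-sub)
}
```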

Message 7 of 16

@OneOfTheDans wrote:

wiebe@CARYA wrote:

Don't forget channel wires.


I've always considered these to be merely wrappers around standard queues. I tried using them recently and was put off by the pop-up configurator, like an Express VI, and all the class wrapping dependencies they pulled into the project. I'm writing code for a cRIO, so I'm trying to minimize bloat where I can.


I agree. They're useful for a quick start, but I never used them in production.

Message 8 of 16

@OneOfTheDans wrote:

wiebe@CARYA wrote:

You can also use a private (lib or class) global. Much better than a normal global.

Maybe the question should be how to avoid this.

My experience is mostly in avoiding global/by-ref data, usually with events or some other data distribution system. Anything to avoid global or by-reference data.

All my instruments still have private state data. The global tag data is the instrument's public API, which should be globally accessible. What are some of the "other data distribution systems" you've used/designed?



Queues, (user) events. Ideally, the normal data flow of course.

 

As an example, a logging loop doesn't need access to global data of the DAQ loop. It just needs all the data, but it could be copies of that data.

 

Now copying might seem undesirable, but it has benefits too. If you sample at 1 Hz and log at 1 Hz, and the logging stalls for 5 seconds, reading global data will give you the same value 5 times (the global data at the time of logging), while a queue will log the correct data.
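A tiny sketch of that 1 Hz example (Go with invented names, timescales shortened for the demo): the DAQ loop produces samples while the logger is stalled; catching up by polling a "global" yields the same latest value several times, while a queue still holds every sample in order.

```go
// Sketch: latest-value "global" vs. a queue when the consumer stalls.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex
	var latest int               // the "global": only the newest value survives
	queue := make(chan int, 100) // the queue keeps every sample

	// DAQ loop: one sample per "second" (shortened here for the demo).
	go func() {
		for i := 1; i <= 5; i++ {
			mu.Lock()
			latest = i
			mu.Unlock()
			queue <- i
			time.Sleep(10 * time.Millisecond)
		}
		close(queue)
	}()

	time.Sleep(100 * time.Millisecond) // the logger was stalled for "5 seconds"

	// Catching up via the global: five reads of the same (last) value.
	for i := 0; i < 5; i++ {
		mu.Lock()
		fmt.Println("global read:", latest)
		mu.Unlock()
	}

	// Catching up via the queue: the five distinct samples, in order.
	for v := range queue {
		fmt.Println("queued sample:", v)
	}
}
```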

Message 9 of 16

@OneOfTheDans wrote:

wiebe@CARYA wrote:

Eh... When would a FGV block other instruments while a DVR wouldn't?

Both a DVR and a FGV will block all parallel access.


DVRs can allow parallel read-only access.


That's new to me. How?

 

Still, it is not an advantage over a FGV (or globals). Reading data from a FGV won't block execution, as the read is very fast.

Message 10 of 16