LabVIEW Development Best Practices Discussions


A New Paradigm for Large Scale LabVIEW Development

What is actually new is that it really is a different paradigm from native dataflow. The fact that many people say it isn't new because they already built something like it years ago actually underscores the need for this alternative way of programming.

Sure, we can all build our own framework, or buy a toolkit. But I do think it would be nice if LabVIEW had some native form of by-reference programming. Thinking about it, the new 'New Data Value Reference' node might actually be a step in that direction.

Message 11 of 30

OK, I'm confused a bit by this.  I don't see the conflict with dataflow, or why that necessitates by-reference tools.

What I think the statement is getting at, as I would express it, is that the Model, View, and Controller each run separately (and a system should have independent components as well) and therefore we don't connect them with wires.  (Internally the components use wires, but between components communication is wireless.)  Certainly this should be so.  (I would advocate having independent components that respond to data/signals.)  Then we need some form of data communication (middleware, say).  This doesn't necessarily mean by-reference programming.  (I'd advocate using networked shared variables, but that is just one option.)

Perhaps I misunderstood the statement, however?

Message 12 of 30

It depends on your intentions. Queueing has an additional advantage: it allows different daemons (dynamically launched VIs) to run at different rates, because the queue buffers burst data from a producer until the consumer can process it. If a notifier were used instead, that data would drop on the floor whenever the consumer was not ready to consume it; this might have critical implications in certain situations but be acceptable in others. I've never used networked shared variables or events, but my understanding is that they would not provide the "buffering" effect that occurs with a queue, and that the producer VI would have to wait for the consumer VI to consume the previous value or event before it could send the next one.
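To make the buffering difference concrete outside LabVIEW, here is a minimal Python sketch. This is an analogy only: `queue.Queue` stands in for a LabVIEW queue, and the hand-rolled single-slot class approximates a notifier's "latest value only" behavior.

```python
import queue
import threading

# A queue buffers every element from a bursty producer until consumed:
q = queue.Queue()
for sample in range(5):
    q.put(sample)                  # producer bursts 5 samples
received = [q.get() for _ in range(5)]
# received == [0, 1, 2, 3, 4] -- nothing was lost

class Notifier:
    """Single-slot mailbox: a new send overwrites any unread value."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def send(self, value):
        with self._lock:
            self._value = value    # older, unread value is dropped

    def wait(self):
        with self._lock:
            return self._value

n = Notifier()
for sample in range(5):
    n.send(sample)                 # same burst of 5 samples
latest = n.wait()                  # only the last sample (4) survives
```

If the consumer keeps up, the two behave the same; under burst load, only the queue preserves every sample.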

In some situations it might be desirable to make the producer and consumer behave in a synchronous manner, but generally I would find this to be a disadvantage in the types of data acquisition systems I have developed in the past.

Sometimes it might be undesirable to let a queue grow too large if the goal is to be able to send an "abort" or "exit" command to another VI. I generally allow for this by providing the ability to dump the contents of an existing queue before sending such an "abort" command.

I should look into the use of events (other than the user interface event structure) in LabVIEW as a means of signalling between threads (loops) and between dynamically launched VIs, to see if they offer any additional advantages. I have used the WM message queue in C++ Windows programming to insert messages into the user interface thread from other places; perhaps this will be similar. I should also look into shared variables. I have generally avoided anything that looks "global" in the past as bad coding practice in terms of race conditions, deadlocks, etc., but I suppose those concerns don't really apply in this case, and it is just a programming prejudice I need to get past.
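The "dump the queue before aborting" idea can be sketched like this (a Python analogy; the tuple of command name plus data is a stand-in for whatever message format the queue carries):

```python
import queue

cmd = queue.Queue()
for i in range(100):
    cmd.put(("process", i))        # a backlog of pending work

# Dump the existing contents so "abort" is the very next element:
dropped = 0
while True:
    try:
        cmd.get_nowait()
        dropped += 1
    except queue.Empty:
        break

cmd.put(("abort", None))
next_cmd = cmd.get()               # the consumer sees "abort" immediately
```

Without the flush, the consumer would have to chew through all 100 pending items before it ever saw the abort.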

Message 13 of 30

You are looking too deeply into my statement. I simply mean that NI/LabVIEW could promote asynchronous programming alongside dataflow programming right from the get-go. They already tried that with OOP: they implemented it such that even beginners can at least use it, and LVOOP doesn't even require a framework to function. But it is still dataflow.

And no, I don't know what this would look like.
My applications tend to become random objects delivering 'services' to the main application. Ideally, these services would all come together in some kind of 'service market.' The problem is that in this market, all services must speak the same language.

I used a text script to create the objects, configure them, and invoke the services. Ideally, though, one VI would fetch a service from the market all by itself; requesting a service should cause new VIs to be launched, and so on.

I am not currently working with such a 'market place,' but I do use the service model.

Message 14 of 30

Network-Published Shared Variables do offer buffering capabilities and can be used like queues between applications. The "middleware" becomes the NI Variable Engine service, which handles the networking functions as well. You might consider String SVs before Variant SVs since, aside from being more stable than Variant SVs for large data sets, not all data types are supported by Variant SVs (e.g. DAQmx Task Name, VISA Resource Name, BV Tag, DAQmx Global Channel, IMAQdx, FieldPoint IO, nirioResource, IVI, etc. are NOT supported).
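The reason a string-typed channel can carry "anything" is that both ends agree on a serialization. As a rough Python analogy (with `json.dumps` standing in for LabVIEW's Flatten To String, and a plain dict standing in for the Variable Engine):

```python
import json

variable_engine = {}   # stand-in for the Variable Engine hosting a string SV

def publish(name, payload):
    # Flatten the structured payload to a string; the transport
    # only ever sees strings, so any serializable type will fit.
    variable_engine[name] = json.dumps(payload)

def read(name):
    # The reader reconstitutes the original structure from the string.
    return json.loads(variable_engine[name])

publish("plant/status", {"temp_c": 21.5, "valves": [1, 0, 1]})
status = read("plant/status")
```

The trade-off is that both publisher and reader must share the flatten/unflatten convention, whereas a typed SV enforces the type for you.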


Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 15 of 30

I have been wanting to move to SVs, but something deterred me last time I tried to use them for this type of application. It is a very appropriate technology, I think.

Message 16 of 30


Doug - I reread my post and feel it sounded a little critical; for that I apologise. Your contribution here is more than valuable, and I appreciate you taking the time to write the content.

Regarding 'event based' communication, I find the advantages for me are:

  • Native queuing is part of the event management in LabVIEW. Events are received and buffered to be dealt with by the event case within a producer thread of a daemon. Typically I then place them into a local Queue for processing by the consumer thread.
  • Events can be broadcast, allowing multiple daemons simultaneous access to the data contained in the event call. I know queues can achieve this, but I find it a little less graceful, as one has to 'review' elements in the queue rather than simply dequeue one.
  • Only one event register is required for all daemons - no need for multiple queues; all messages can be handled by one event register. Each subscribing daemon receives notification of the event, can browse the data and decide whether the information is intended for itself, and thereby handle the content or discard it as appropriate.
  • Programmatic registration - like queues, any daemon launched can register to receive event calls at any time.
  • Ideal for sending critical commands, such as 'abort', so long as the producer loop does not build up a large buffer of event calls. The producer loop must receive and handle the event call very quickly, passing the data into an internal (local) Queue for processing. This ensures an 'abort' request is received instantly and can be handled uniquely by the producer loop (as you stated, it can flush its local Queue of tasks to ensure the abort is handled immediately).
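The "one register, every daemon browses and filters" pattern can be sketched in Python (an analogy for LabVIEW user events: each subscriber's `queue.Queue` plays the role of an event registration, and the `target` field in the event dict is an assumed convention, not anything LabVIEW-specific):

```python
import queue

class EventRegister:
    """One register for all daemons: every event is broadcast to every subscriber."""
    def __init__(self):
        self._subscribers = []

    def register(self):
        # Each registration gets its own buffered mailbox.
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def fire(self, event):
        for q in self._subscribers:
            q.put(event)           # broadcast: each daemon gets its own copy

bus = EventRegister()
daemon_a = bus.register()
daemon_b = bus.register()

bus.fire({"target": "a", "cmd": "start"})

# Each daemon browses the event and decides whether it is the intended recipient:
evt_a = daemon_a.get()
handled_by_a = evt_a["cmd"] if evt_a["target"] == "a" else None
evt_b = daemon_b.get()
handled_by_b = evt_b["cmd"] if evt_b["target"] == "b" else None  # discarded
```

Note that daemon_b still receives (and must discard) the event, which is the data-duplication downside mentioned below.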

The downsides are presumably plentiful, but the only one I can bring to mind right now is the disadvantage of carrying a lot of data in an event call. The data is propagated to every daemon, so there may be a lot of data duplication. I'm not entirely sure about this, but for the communication of large data sets I wouldn't put the data in the event itself; I would store it elsewhere (an Action Engine, perhaps) and use the event to notify everyone of the data's existence, along with a reference to its location.
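That "store the data once, broadcast only a reference" idea looks roughly like this in Python (the dict stands in for an Action Engine, and the key name `"waveform/run42"` is purely illustrative):

```python
import queue

action_engine = {}       # stand-in for an Action Engine holding the large data
events = queue.Queue()

big_waveform = list(range(1_000_000))
action_engine["waveform/run42"] = big_waveform            # stored exactly once

# The event itself is tiny: just a notification plus the data's location.
events.put({"type": "new_data", "where": "waveform/run42"})

evt = events.get()
payload = action_engine[evt["where"]]   # fetched on demand, not copied per daemon
```

Every subscriber that cares can fetch the same stored object, so the big data set is never duplicated into each event copy.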

PS. A quick note on Shared Variables. I have twice had experience with these, both cases involving a networked Real-Time system and PC. They were easy to set up, easy to implement, and easy to use, but they were unreliable for long-term stability. We (the company I work for) used these in a plant control system to relay information between the controlling hardware (FieldPoint) and a supervising PC. After a relatively random time (usually between 2 and 8 days) the communication would be lost and a whole world of hurt would open up. Eventually we replaced our Shared Variable communications with a TCP/IP protocol, and everything has been sweet ever since. We had plenty of help from NI Technical Support, but it was just impossible to determine why the Shared Variable system would stop responding. Despite our determined efforts, we had to just move away from SV and go with the 'devil you know' - TCP/IP.

Thoric (CLA, CLED, CTD and LabVIEW Champion)


Message 18 of 30

Regarding the use of AEs (Action Engines) for storing large data sets, as with any common storage area (local variables, global variables, unbuffered single-process or network-published shared variables, FGVs (LV2s) / AEs, shift registers): watch out for data collisions (clobbering your data) when two processes / subVIs / loops push a function / state / case with data in this way.

Confession: I used to do this via a shift register when pushing states within the same loop. The problem arises when State A enqueues State B and puts State B's data in a common storage area (i.e. the shift register). There were instances where State C was the next state to run, and it too enqueued State B, only this time writing different data to the common storage area. Because of the preceding data collision, State B would execute both times with the data that State C stored in the common storage area (rather than once with State A's data and then again with State C's data).
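The collision is easy to reproduce deterministically in a Python sketch (the dict plays the role of the shift register / common storage area; the state names are illustrative):

```python
from queue import Queue

shared_slot = {}     # common storage area (the shift-register analogue)
states = Queue()

# State A enqueues State B and drops B's input data into the shared slot:
states.put("B")
shared_slot["data"] = "from A"

# State C runs before B ever executes and does the same thing,
# clobbering the data State A left for State B:
states.put("B")
shared_slot["data"] = "from C"

results = []
while not states.empty():
    if states.get() == "B":
        results.append(shared_slot["data"])   # both iterations read C's data
```

State B runs twice, but both runs see `"from C"`; State A's data is silently lost.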

So, a couple of years ago, we (the company) moved away from this method and went to enqueueing a cluster of [enum + variant], as has been described / prescribed elsewhere in these forums.
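The fix amounts to making the data travel with its message. As a Python analogy for the [enum + variant] cluster (a tuple of state name plus payload stands in for the cluster):

```python
from queue import Queue

messages = Queue()

# The data rides along with the state it belongs to, so a later
# enqueue cannot clobber the data of an earlier one:
messages.put(("B", "from A"))
messages.put(("B", "from C"))

results = []
while not messages.empty():
    state, data = messages.get()
    if state == "B":
        results.append(data)
```

Now State B runs once with A's data and once with C's data, because there is no shared slot left to collide on.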

Some disclaimers:

- There are (apparently) performance hits with variant and large data sets.

- There are vocal contributors who recommend string-based queued state machines (or event message handlers). On projects with multiple developers and potentially hundreds of states, I don't like continually digging during development to see what someone else (or I, for that matter) named a certain state (yes, I can touch type, but I prefer the drop-down item selection of an enum).

- We don't use [enum+variant] when sending [message+data] between applications (EXEs) or between targets (PC to cRIO) since that would require coupling application specific development code.

Event-based architectures (broadcast: one-to-many / many-to-many) seem appropriate when multiple operations need the same information at the same time. Queue-based architectures (point-to-point: one-to-one / many-to-one) seem appropriate otherwise. They're both tools, and there's no reason the two cannot be combined when appropriate.


Message 19 of 30

Thoric wrote:

PS. A quick note on Shared Variables. I have twice had experience with these, both cases involving a networked Real-Time system and PC. They were easy to set up, easy to implement, and easy to use. But they were unreliable for long-term stability. We (the company I work for) used these in a plant control system to relay information between the controlling hardware (FieldPoint) and a supervising PC. After a relatively random time (usually between 2 and 8 days) the communication would be lost and a whole world of hurt would open up. Eventually we replaced our Shared Variable communications with a TCP/IP protocol, and everything has been sweet ever since. We had plenty of help from NI Technical Support, but it was just impossible to determine why the Shared Variable system would stop responding. Despite our determined efforts, we had to just move away from SV and go with the 'devil you know' - TCP/IP.

Yes, there have been some growing pains with shared variables -- I've had a dozen SRs (Service Requests) with NI over the last 30 months regarding them.  At one point R&D had to remote into my PC and cleanse the deployed processes out of the NIVE with special tools because things had gotten corrupted.  In the past I've seen weird behavior with the Variable Engine as well.  That said, there have been significant functional or performance improvements with each release since LV 8.0.


Message 20 of 30