LabVIEW

Passing Large Clusters to SubVIs

I am creating a fairly large application with some large cluster arrays shared between several subVIs. I have read the app notes on optimizing performance and memory allocation and have done my best to break up the structures. I have also checked my memory and buffer allocations: the number of copies made of my cluster arrays is minimized, memory usage is well below 5 MB, and the execution speed is acceptable.

Within the subVIs I use shift registers to pass around the cluster arrays. I know this is a tradeoff between performance and clean block diagrams, but the diagrams have cleaned up immensely.

My question concerns 3 VIs running in parallel. All of these VIs can read and write the main cluster arrays being passed around. After much thought I have decided to implement the data exchange using invoke nodes and a state variable for each VI interaction. A sequence of execution might be as follows:

1. VIOne changes a value in the cluster array.
2. The value is sent to the receiving VI(s) using invoke nodes.
3. VIOne sets a state variable (or semaphore) for each receiving VI to signal the change and specify an action.
4. The receiving VI then processes the change.

Using this method the changes would be interlocked and, if properly implemented, would avoid race conditions and missed or unexpected updates.
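For what it's worth, the sequence above has a rough text-language analogue. This Python sketch is hypothetical (LabVIEW has no direct textual equivalent): a threading.Event stands in for the per-VI state variable and a lock guards the shared cluster, showing the signal-then-process handshake:

```python
import threading

# Shared "cluster array" stand-in; the Event plays the role of the
# per-receiver state variable that signals a pending change.
shared_data = {"channel_0": 0.0}
data_lock = threading.Lock()
change_flag = threading.Event()

def vi_one():
    # Steps 1-3: change a value, then set the state variable.
    with data_lock:
        shared_data["channel_0"] = 42.0
    change_flag.set()

def receiving_vi(results):
    # Step 4: wait for the signal, then process the change.
    change_flag.wait(timeout=1.0)
    with data_lock:
        results.append(shared_data["channel_0"])

results = []
t = threading.Thread(target=receiving_vi, args=(results,))
t.start()
vi_one()
t.join()
print(results)  # → [42.0]
```

Because the event is set only after the write completes under the lock, the receiver is guaranteed to see the updated value.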

Comments and suggestions would be greatly appreciated! Thanks!
Message 1 of 13
I wouldn't use the invoke node for this. I would use a queue to pass the new information around. By using one queue for each target subVI you also get around the problem of some loops not running fast enough to receive the data: if a subVI is sporadically running too slowly, it will simply build up a backlog of the values sent (set the queue size to more than 1).

If you need to make sure that the target subVI has finished processing before the main VI continues, you can perform two-way communication via queues: send the new data and wait for an OK (preferably not over the same queue) to come back, which forces the same "interlock" you describe.
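As a rough illustration (a Python stand-in, not LabVIEW; the queue depth and the None sentinel are my own choices), the send-then-wait-for-OK pattern looks like this:

```python
import queue
import threading

# One data queue per receiver plus a separate reply queue gives the
# "send, then wait for an OK" interlock described above.
data_q = queue.Queue(maxsize=16)  # depth > 1 buffers a sporadically slow loop
ack_q = queue.Queue()

def receiver():
    while True:
        item = data_q.get()
        if item is None:            # sentinel value: shut down the loop
            break
        processed = item * 2        # stand-in for real processing
        ack_q.put(("ok", processed))

t = threading.Thread(target=receiver)
t.start()

data_q.put(10)
status, value = ack_q.get()         # sender blocks here until the receiver is done
data_q.put(None)                    # tell the receiver to exit
t.join()
print(status, value)  # → ok 20
```

Blocking on the reply queue is what serializes the two loops; dropping that ack_q.get() would let the sender run ahead of the receiver.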

Of course, this may be the perfect use for a notifier, but I don't really use them, so I can't speak from experience there.

Invoke node calls (I presume you mean setting the values via VI Server?) are, AFAIK, quite inefficient and force the actions into the UI thread.

Hope this helps

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 2 of 13
Thanks for your response. The queue idea seems to answer all the issues. Does having multiple queues affect performance? I can see having up to 5 or 6 different queues, one for each data type plus a command/response queue. It seems efficient since something happens only when enqueueing. Thanks.
Message 3 of 13
Queue primitive operations are very fast. I use multiple queues in many projects without difficulty. You will likely find processing or acquiring your data takes much longer than passing it around via queues.

Lynn
Message 4 of 13
I agree with all of the above.

If you are going to go the queue route, it would be a good idea to look at this link

http://sine.ni.com/apps/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3D9CD56A4E034080020E74861&p_node=DZ52061&p_submitted=N&p_rank=&p_answer=&p_source=External

where Jim Kring not only gives you a good starting point but also provides an excellent example of how to write code.

Ben
Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 5 of 13
You might consider using a more object-oriented approach to your problem. Consider the clusters as the data of your object, and reading and writing them as methods or accessors. Using semaphores and shift registers, you can encapsulate your data in such a way as to avoid most race conditions and still enable reading from several places "simultaneously." Attached is an NI-Week presentation I gave on the topic, complete with code, that should explain this better. Good luck.
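In a text language this idea maps onto a lock-protected accessor class. The Python sketch below is my own illustration (the class and method names are invented), loosely mirroring an LV2-style functional global where all access to the data goes through read/write/modify "methods":

```python
import threading

class Repository:
    """Encapsulates the shared 'cluster'; all access goes through methods."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def read(self, key):
        with self._lock:
            return self._data.get(key)

    def write(self, key, value):
        with self._lock:
            self._data[key] = value

    def modify(self, key, func, default=0):
        # Read-modify-write performed as one atomic operation, so two
        # callers cannot interleave and lose an update.
        with self._lock:
            self._data[key] = func(self._data.get(key, default))

repo = Repository()
repo.write("gain", 1.0)
repo.modify("gain", lambda g: g * 2)
print(repo.read("gain"))  # → 2.0
```

The point of the modify method is that callers never hold the raw data between a read and a write, which is where race conditions creep in.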
Message 7 of 13

Two observations about Mr. Gray's programs above:

1. He uses the semaphore VIs within his LV2-style global VI (called "..Repository"). Can anybody tell me why the semaphore VIs are even necessary? I thought that the entire VI could only execute one instance at a time, so there shouldn't be a possibility for race conditions.

2. He uses the 'Release Semaphore' vi, but nowhere uses the 'Acquire Semaphore' vi. Can anybody explain to me how that works?

Message 8 of 13
The repository is only that - a repository.  It contains data.  The semaphore prevents the following sequence from causing problems.
  1. Location 1 reads the repository data set
  2. Location 2 reads the repository data set
  3. Location 1 modifies the data and writes it back to the repository.
  4. Location 2 modifies the data and writes it back to the repository.
At this point, the changes made by Location 1 are lost - a classic race condition.  The semaphores prevent this problem by introducing the concept of "checking out" data.  Instead of using the normal read from the repository, the read for write VI is used to acquire the semaphore, then read the repository.  This VI should always be used to get data from the repository if data will be modified and written to the repository.  The above sequence now looks like this.
  1. Location 1 locks and reads the repository data set
  2. Location 2 attempts to lock the repository data set, but cannot since it is already in use, so waits on the semaphore
  3. Location 1 modifies and writes the repository data, releasing the semaphore
  4. Location 2 can now lock the repository and read the data
  5. Location 2 modifies and writes the repository data
The semaphore acts as a serialization mechanism to prevent the data from getting out of sync.  Let me know if you have further questions.
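The locked read-modify-write described above can be sketched in Python (a hypothetical stand-in; threading.Lock plays the role of the semaphore) to show that no updates are lost when two "locations" write concurrently:

```python
import threading

# Shared repository data and the "semaphore" that serializes
# read-modify-write access, as in the checked-out sequence above.
repository = {"count": 0}
sem = threading.Lock()

def location(n_updates):
    for _ in range(n_updates):
        # "Read for write": lock, read, modify, write back, release.
        with sem:
            data = repository["count"]
            repository["count"] = data + 1

threads = [threading.Thread(target=location, args=(5000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(repository["count"])  # → 10000 (no updates lost)
```

Without the lock around the read and the write-back, the two threads could interleave exactly as in steps 1-4 above and the final count would come up short.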
Message 9 of 13
Good explanation of semaphores and race conditions, but my questions were trying to get at something else.
I took another look at your code, and now I'll be more specific.
 
In the program above, are the semaphore VIs necessary? Since the 'Acquire Semaphore' VI isn't used anywhere, I think the answer is "No". I suspect that you built 'Repository' from a standard template you have developed, because it provided a quick way to answer the original question on passing large clusters, and that template just happened to have the semaphore VIs in it.
Message 10 of 13