
Dynamic cluster member selection and output of selected values to an analog output.

Please repost as LV 7.1.

 

This will get you more advice and also make it easier for me to "sneak a peek" to see what I can do to help. Unless I completely misunderstand, doing this in 1 ms or less should be doable.

 

Ben

Message Edited by Ben on 01-03-2006 02:10 PM

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
0 Kudos
Message 11 of 18
(1,438 Views)

Hi woutert,

I've been watching this thread for a week - finally a VI - in LV8 - darn!

I think your initial solution might have worked, had you used strict-type references. You mentioned that it was time-consuming to convert variants, but if the references were strict-type references, they would not return variant VALUEs. :)

Is it true that the values in the [cluster] data-structure might change from one millisecond to the next?

Might it be possible to maintain this nested-cluster data structure on the FP of a VI, create/obtain STRICT-type references to the specific controls (all DBLs, I assume?), assemble an array of references based on operator selection, and update the AO array from the array of references? Again: by using strict-type references, the VALUEs will not be variants. ;)
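Since LabVIEW diagrams can't be shown as text, here is a rough C++ analogue of that idea (the struct and field names are made up for illustration): gather typed pointers to the selected DBL fields once, instead of type-erased values (the variant's role) that would need converting on every read, then fill the AO array through them.

    // Hypothetical C++ analogue of strict-type references: typed pointers
    // to the selected fields are gathered once, so each update is a plain
    // typed read with no variant-style conversion.
    #include <cstddef>
    #include <vector>

    struct Channels {            // stands in for the nested cluster of DBLs
        double pressure;
        double temperature;
        double flow;
    };

    int main() {
        Channels ch{1.0, 2.0, 3.0};

        // "Obtain references" once, based on operator selection.
        std::vector<const double*> selected = { &ch.flow, &ch.pressure };

        // Update the AO array from the array of references.
        std::vector<double> ao(selected.size());
        for (std::size_t i = 0; i < selected.size(); ++i)
            ao[i] = *selected[i];
        return 0;
    }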

When they give imbeciles handicap-parking, I won't have so far to walk!
0 Kudos
Message 12 of 18
(1,427 Views)

Hi Woutert,

I took a quick peek at your cluster first thing this AM (sorry, I did not have time to save it as an earlier version, and I do not have LV 8 on this machine).

It is time for me to break out my favorite quote by Rolf Kalbermatter, who posted the following to Info-LabVIEW on 3 Dec 2002 (thank you tst for finding this for me!):

"

Once your physical memory is used up by a single shift register storage, your application is probably going to suck.

"

It looks like you have packed every possible value into a single cluster. This leads to bad performance because the entire contents of the cluster have to be rewritten every time any part of it changes.

If you were contracting me to fix this up, the first thing I would do is attempt to reorganize the data so that it is not all in a single cluster. The sub-clusters that are of the same cluster type but differ only in name should be stored as an array of clusters. If you need to get at the elements of these arrays of clusters fast, then build an "index array" that can be used to determine which array element needs to be acted on, and then index, modify, and replace the array subset.
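A minimal C++ sketch of that reorganization (struct and names are hypothetical): the identically typed sub-clusters become elements of one array, an index built once maps names to elements, and only the touched element is rewritten.

    // Identically typed sub-clusters stored as an array of structs, with a
    // small "index array" (here a name->index map built once) that selects
    // which element to modify in place.
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct TurbineData {         // one sub-cluster's worth of fields
        double speed;
        double load;
    };

    int main() {
        std::vector<TurbineData> turbines(4);   // four plants, one element each

        // Build the index once, not on every access.
        std::unordered_map<std::string, std::size_t> index = {
            {"plantA", 0}, {"plantB", 1}, {"plantC", 2}, {"plantD", 3}};

        // Index, modify, and replace just the one element: in-place update.
        TurbineData& t = turbines[index.at("plantC")];
        t.speed += 1.0;
        return 0;
    }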

If you have interrelated clusters that play off each other, put the interacting data structures in the same action engine (a smart LV2 global).
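For readers who haven't met the pattern, here is a hedged C++ analogue of an action engine (LabVIEW's LV2 global keeps the data in an uninitialized shift register; a function-local static plays that role here): the data never leaves the function, and callers just request named actions.

    // C++ analogue of an action engine: the data lives in static storage
    // inside one function and is updated in place, never copied out
    // wholesale just to touch one field.
    #include <cstdio>

    enum class Action { Increment, Read };

    double ActionEngine(Action action, double value = 0.0) {
        static double stored = 0.0;          // the shift-register equivalent
        switch (action) {
            case Action::Increment: stored += value; break;
            case Action::Read:      break;   // fall through to return
        }
        return stored;
    }

    int main() {
        ActionEngine(Action::Increment, 5.0);
        std::printf("%f\n", ActionEngine(Action::Read));  // prints 5.000000
        return 0;
    }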

The above comments will just get you started. The key to doing things like this fast in LV is to NOT move or copy any data that does not absolutely have to be moved. Do as much as you can "in place".

If you have access to a database guru, you may want to ask them what a "fully normalized database" for this data set would look like. You will probably denormalize the data a little (for better performance), but that could give you a good start on figuring out how to store all of these details without taking forever to find and update them.

BUT LET ME REPEAT...

Repost your code as LV 7.1 so more advisors can step in and help!

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 13 of 18
(1,415 Views)

Here is my cluster in LV 7.1.

It is a lot larger than the LV 8.0 version.

This cluster is used by four power plant simulations in parallel.

So all variables in it are altered 400 times a second.

In the "Powerplantcluster" the clusters themselves are hidden because so I gain a lot of time.

Each Powerplantcluster is stored in a shift register of a stacked sequence.

Each function retrieves the subcluster, reads the inputs and stores the results back into the main cluster.

A little benchmark shows the time it takes; as you can see, it isn't a lot of overhead.

To see it in nanoseconds, just run the loop a million times.

To see the overhead of loading/saving into the storage VI, add a get/set sequence into the loop.
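The harness below is an assumed C++ equivalent of that benchmark recipe, not Wouter's VI: time a million iterations of the work under test and divide to get nanoseconds per update.

    // Rough benchmark sketch: run the loop a million times and divide the
    // elapsed time by the iteration count to get nanoseconds per update.
    #include <chrono>
    #include <cstdio>

    int main() {
        constexpr long N = 1'000'000;
        double value = 0.0;
        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            value += 1.0;                    // stand-in for the get/set sequence
        auto t1 = std::chrono::steady_clock::now();
        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
        std::printf("%.1f ns per iteration (result %f)\n", double(ns) / N, value);
        return 0;
    }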

 

 

I have programmed a realtime simulator for power plants in C++. I translated the RT sim to NI components and software (LabVIEW).

My RT PXI turbine simulator for simulating grid incidents was successfully used in a nuclear plant in 2006.

See http://sine.ni.com/cs/app/doc/p/id/cs-755
0 Kudos
Message 14 of 18
(1,406 Views)

Hi Woutert,

I still would like to urge you to break this up into separate clusters instead of putting everything in one "super cluster".

The attached zip contains major rewrites of your global and its benchmark VI (in LV 7.1).

I moved the data manipulation inside the global, turning it into an action engine. An example of one of these actions is shown in the image below.

By keeping the data manipulation inside the action engine, I can avoid the expense of copying the contents of the "Super Cluster" (did I mention this should be broken into separate clusters? :) ) out just to pull a single field, and then back in to update the value and share it with the world. (Note: the "floating control" is for symmetry. ;) )

The benchmarking VI now looks like this:

It demonstrates that the incrementing, decrementing, and monitoring can all be done in parallel, with results on my laptop of about 7 usec per update (note: my benchmark does two fields to emulate your version).
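As a rough illustration of why the parallel callers stay safe (hypothetical C++: a non-reentrant LabVIEW VI serializes its callers automatically, and a mutex plays that role here), two threads hammer the same two fields in place:

    // C++ analogue of the parallel benchmark: a mutex serializes access the
    // way a non-reentrant action engine does, so increment and decrement
    // threads (a monitor would be a third caller) update the two shared
    // fields in place safely.
    #include <cstdio>
    #include <mutex>
    #include <thread>

    static std::mutex m;
    static double fieldA = 0.0, fieldB = 0.0;  // "two fields", as in the benchmark

    void bump(double delta) {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);
            fieldA += delta;                   // update in place, no copy out
            fieldB += delta;
        }
    }

    int main() {
        std::thread inc(bump, +1.0), dec(bump, -1.0);
        inc.join();
        dec.join();
        std::printf("%f %f\n", fieldA, fieldB);  // both 0.000000 after +/-
        return 0;
    }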
 
Have fun,
 
Ben

Message Edited by Ben on 01-07-2006 11:32 AM

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Download All
Message 15 of 18
(1,401 Views)

Hi,

 

Thanks for your effort and solution.

Now consider the user of the program: he has to choose one of the values in one of the clusters to show it on a plot.

What is the smoothest solution here?

 

:O

 

Thanks anyway for your effort.

Wouter

I have programmed a realtime simulator for power plants in C++. I translated the RT sim to NI components and software (LabVIEW).

My RT PXI turbine simulator for simulating grid incidents was successfully used in a nuclear plant in 2006.

See http://sine.ni.com/cs/app/doc/p/id/cs-755
0 Kudos
Message 16 of 18
(1,337 Views)

Hi,

I tested your example and it works fine.

On a dual-core CPU it takes 10 µs to run on a single CPU, with affinity set to CPU 1.

When using both CPUs the result is 16 µs, with affinity set to CPUs 0 and 1.

:(

I have a Dell computer with a 2.80 GHz Pentium D.

Can we select the CPU affinity at runtime? Or in edit mode?
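For what it's worth, on Windows the pinning itself can be done at runtime with the Win32 API, shown below in C++. Whether LabVIEW can reach it (for instance through a Call Library Function Node) is an assumption on my part, not a confirmed recipe.

    // Illustration only: pin the current thread to the second logical CPU
    // at runtime using the Win32 call SetThreadAffinityMask.
    #include <windows.h>
    #include <cstdio>

    int main() {
        // Mask bit 1 selects CPU 1; bit 0 would select CPU 0.
        DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1 << 1);
        if (previous == 0)
            std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 0;
    }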

Work continues on my RT sim.

 

 

I have programmed a realtime simulator for power plants in C++. I translated the RT sim to NI components and software (LabVIEW).

My RT PXI turbine simulator for simulating grid incidents was successfully used in a nuclear plant in 2006.

See http://sine.ni.com/cs/app/doc/p/id/cs-755
0 Kudos
Message 17 of 18
(1,321 Views)

Thanks for the update.

Please start a new thread for your new question.

(I believe the answer is yes).

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
0 Kudos
Message 18 of 18
(1,314 Views)