08-07-2010 02:09 PM
WouterG wrote: I looked for a setting to turn off debugging; I guess you meant turning off the option "Enable automatic error handling" in VI Properties -> Execution?
Uncheck "Allow debugging" in the Execution properties (same screen, below the priority setting).
08-07-2010 02:27 PM
altenbach wrote:As you can see, the boolean is set to false only once, delayed by dataflow.
Does this constitute a race condition? You can't be certain the Local write will be scheduled before the terminal write. Of course, once the execution order is compiled, there is no run-time race condition (the two writes will always occur in the same order), but this creates a design-time "race condition": which one will the compiler schedule first? Control dependency is uncertain where execution order matters.
Is there another term for this? "Race condition" might not be the right one.
08-07-2010 02:38 PM
@altenbach wrote:
WouterG wrote: I looked for a setting to turn off debugging; I guess you meant turning off the option "Enable automatic error handling" in VI Properties -> Execution?
Uncheck "allow debugging" on the execution properties (same screen, below the priority setting).
Ah yes, I see; I overlooked that (again).
08-07-2010 03:11 PM
JackDunaway wrote: Does this constitute a race condition? You can't be certain the Local write will be scheduled before the terminal write.
Interesting comment...
The terminal write is dependent on the completion of the generate data subVI via the sequence frame, while the local write executes in parallel to that subVI (and anything else on that loop diagram that can execute right away).
This subVI is relatively slow, so in real life we won't have a race condition. Even if the subVI were infinitely fast, the local would most likely still be written first, because of the subVI call overhead alone.
Sequencing the local write would prevent certain parallel executions and possible compiler optimizations. I would probably leave it as is. 😉
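The scheduling question above can be sketched in a text language. This is only an analogy (LabVIEW's dataflow scheduler is not Python threading, and the names here are illustrative), showing the general point that two writes with no data dependency between them have no guaranteed order:

```python
import threading

# Analogy only: two "writes" to the same indicator with no data
# dependency between them, like the local-variable write and the
# terminal write discussed above. Which one lands last is decided
# by the scheduler, not by the diagram.
value = None

def write_local():
    global value
    value = False          # the fast path: the Local write

def write_terminal():
    global value
    value = True           # the slow path: the terminal write

t1 = threading.Thread(target=write_local)
t2 = threading.Thread(target=write_terminal)
t1.start(); t2.start()
t1.join(); t2.join()
# 'value' may end up True or False depending on scheduling order
print(value)
```

In the actual VI the subVI delay makes one ordering overwhelmingly likely, which is why altenbach calls it a non-issue in practice.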
08-07-2010 03:44 PM
Well, one thing is for sure: LabVIEW isn't a champion at deallocating memory. When I first run with 20 iterations, my memory consumption is about 260 MB. When I then change the total iterations to 22, the memory skyrockets to 860 MB (which is of course expected, because you have 2^22 - 2^20 more datapoints).
However, when you then change the number of iterations from 22 back to 20, the memory stays at around 780 MB. It looks like LabVIEW doesn't free those 2^22 - 2^20 datapoints.
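For reference, the number of extra datapoints between 2^20 and 2^22 iterations works out as follows (plain arithmetic, not a statement about LabVIEW's allocator):

```python
# Quick check of the datapoint difference mentioned above.
extra = 2 ** 22 - 2 ** 20
print(extra)  # → 3145728
```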
08-07-2010 04:40 PM
@WouterG wrote:
However, when you then change the number of iterations from 22 back to 20, the memory stays at around 780 MB. It looks like LabVIEW doesn't free those 2^22 - 2^20 datapoints.
Why should it? From your usage patterns, chances are high that it will need the larger data sizes soon again in the future. 😉
For subVI's where you know that you no longer need the memory once the VI has finished, you can request deallocation.
Also, your "Generate strings" subVI is very inefficient because of the constant resizing of arrays. You should initialize the shift registers with fixed-size arrays corresponding to the number of iterations, then fill in the real data based on [i] using "Replace Array Subset".
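The "initialize then replace" pattern altenbach describes maps to preallocation in any language. Here is a sketch in plain Python (the size and the data are illustrative assumptions, not taken from the original VI):

```python
# Sketch of the two patterns discussed above.

n = 16  # assumed total number of iterations

# Pattern the post warns against: growing the array every iteration
# (like Build Array feeding a shift register) reallocates repeatedly.
grown = []
for i in range(n):
    grown = grown + [i]        # a new, larger array each time

# Recommended pattern: preallocate a fixed-size array once, then
# overwrite in place (Initialize Array + Replace Array Subset).
filled = [0] * n
for i in range(n):
    filled[i] = i              # in-place replace, no reallocation

print(grown == filled)  # → True
```

Both loops produce the same data; the second one never reallocates, which is the whole point of the advice.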
08-07-2010 05:19 PM
You should initialize the shift registers with fixed-size arrays corresponding to the number of iterations, then fill in the real data based on [i] using "Replace Array Subset".
Or even better: get rid of the shift registers and use autoindexing. 😉
08-07-2010 06:11 PM
@altenbach wrote:
You should initialize the shift registers with fixed-size arrays corresponding to the number of iterations, then fill in the real data based on [i] using "Replace Array Subset".
Or even better: get rid of the shift registers and use autoindexing. 😉
Can you explain the quoted sentence and your sentence? I don't understand how I can use your method without losing data each iteration. My array grows by 2^(i-j) datapoints each iteration. If I initialize the array again each iteration, I need to generate the 2^j datapoints again...
Furthermore, I don't get how autoindexing is going to replace my shift register... :S
And one last thing: I already knew the "Request Deallocation" function, and I also called it after the benchmark, but it didn't change the situation.
08-07-2010 08:13 PM
WouterG wrote: Can you explain the quoted sentence and your sentence? I don't understand how I can use your method without losing data each iteration. My array grows by 2^(i-j) datapoints each iteration. If I initialize the array again each iteration, I need to generate the 2^j datapoints again...
It seems more reasonable to create the maximum-size data in one step before the loop (now you can use autoindexing!), and then take subsets of increasing size inside the benchmarking loop.
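altenbach's suggestion, sketched in Python (the exponents j and i_max and the benchmark body are illustrative assumptions): generate the maximum-size dataset once before the loop, then benchmark on growing subsets, so nothing is regenerated and nothing is lost between iterations:

```python
# Build the maximum-size data once before the loop, then benchmark
# on subsets of increasing size.

j, i_max = 4, 10
full = list(range(2 ** i_max))     # max-size data, generated once
                                   # (this step can autoindex in LabVIEW)
results = []
for i in range(j, i_max + 1):
    subset = full[:2 ** i]         # growing subset, nothing regenerated
    results.append(sum(subset))    # stand-in for the real benchmark body
```

Because `full` already holds every datapoint, the loop only selects from it; no shift register has to carry accumulated data from one iteration to the next.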
08-08-2010 05:54 AM
I'm really sorry, but I still don't understand how I avoid losing data that way.
Outside loop -> initialize array sizes -> Inside loop -> Create subset array -> Add array subsets...
                                                                  ^                       ^
                                                                  |                       |
                                          Still need shift registers here ........... and here
I made a small example and attached my current version; I created a new subVI which initializes the data arrays. Could you maybe look into it?