
Benchmarking VI (getting offset each time when benchmarking)

Solved!

 


WouterG wrote:

I looked for a setting to turn off debugging; I guess you meant turning off the option "Enable automatic error handling" under VI Properties -> Execution?


Uncheck "Allow debugging" on the Execution properties page (same screen, below the priority setting).

 

Message 11 of 24

 


altenbach wrote:

As you can see, the boolean is set to false only once, delayed by dataflow.


Does this constitute a race condition? You can't be certain the local write is going to be scheduled prior to the terminal write. Of course, once the execution order is compiled there's no run-time race condition (the two writes will always happen in the same order), but this creates a design-time "race condition": which write will the compiler schedule first? Control dependency is undefined in a place where a specific execution order is required.

 

Is there another terminology for this? "Race condition" might not be the right term.
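As a rough text-language analogue of the ambiguity described above (Python sketch, not LabVIEW; all names here are illustrative): two writes to the same variable with no dependency between them. In LabVIEW the compiler fixes one order at compile time, so the ambiguity below shows up at run time instead, but the root cause is the same — nothing constrains which write happens first.

```python
import threading

shared = None

def write_local():
    # Stands in for the local variable write
    global shared
    shared = "local variable"

def write_terminal():
    # Stands in for the terminal write
    global shared
    shared = "terminal"

threads = [threading.Thread(target=write_local),
           threading.Thread(target=write_terminal)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The final value depends entirely on scheduling order
print(shared)
```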

 

[attached image: block diagram screenshot]

Message 12 of 24

 


@altenbach wrote:

 


WouterG wrote:

I looked for a setting to turn off debugging; I guess you meant turning off the option "Enable automatic error handling" under VI Properties -> Execution?


Uncheck "Allow debugging" on the Execution properties page (same screen, below the priority setting).

 


Ah yes, I see. I overlooked that (again).

 

Message 13 of 24

 


JackDunaway wrote:

Does this constitute a race condition? You can't be certain the local write is going to be scheduled prior to the terminal write.


 

Interesting comment...

 

The terminal write is dependent on the completion of the generate data subVI via the sequence frame, while the local write executes in parallel to that subVI (and anything else on that loop diagram that can execute right away).

 

This subVI is relatively slow, so in real life we won't have a race condition. Even if the subVI were infinitely fast, the local would most likely be written first, just because of the subVI call overhead alone.

 

Sequencing the local write would prevent certain parallel executions and possible compiler optimizations. I would probably leave it as is. 😉

 

Message 14 of 24

Well, one thing is for sure: LabVIEW isn't a champion at deallocating memory. When I first run with 20 iterations, my memory consumption is about 260 MB. When I then change the total iterations to 22, the memory skyrockets to 860 MB (which is of course to be expected, because you have 2^22 - 2^20 more datapoints).

 

However, when you then change the number of iterations back from 22 to 20, the memory stays at around 780 MB. It looks like LabVIEW doesn't free the 2^22 - 2^20 datapoints in memory.

Message 15 of 24

 


@WouterG wrote:

However, when you then change the number of iterations back from 22 to 20, the memory stays at around 780 MB. It looks like LabVIEW doesn't free the 2^22 - 2^20 datapoints in memory.


 

Why should it? From your usage patterns, chances are high that it will need the larger data sizes soon again in the future. 😉

 

For subVIs where you know that you no longer need the memory once the VI has finished, you can use the Request Deallocation function.

 

Also, your "Generate strings" subVI is very inefficient because of the constant resizing of arrays. You should initialize the shift registers with fixed-size arrays corresponding to the number of iterations, then fill in the real data based on [i] using "Replace Array Subset".
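The difference between the two approaches can be sketched in a text language (Python analogue, not LabVIEW; the function names and the element count are illustrative): growing a buffer one element at a time copies everything on every step, while a single up-front allocation followed by in-place writes does not.

```python
def grow(n):
    # Analogue of Build Array inside a shift register: each
    # concatenation allocates a new list and copies all existing
    # elements, so the total work is O(n^2).
    arr = []
    for i in range(n):
        arr = arr + [i]          # full copy every iteration
    return arr

def preallocate(n):
    # Analogue of Initialize Array + Replace Array Subset:
    # one allocation up front, then in-place writes indexed by i.
    arr = [0] * n
    for i in range(n):
        arr[i] = i
    return arr

# Both produce identical data; only the allocation pattern differs.
assert grow(1000) == preallocate(1000)
```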

Message 16 of 24

 


You should initialize the shift registers with fixed-size arrays corresponding to the number of iterations, then fill in the real data based on [i] using "Replace Array Subset".


Or even better: get rid of the shift registers and use autoindexing. 😉
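The autoindexing suggestion has a direct text-language analogue (Python sketch; names are illustrative): instead of managing a preallocated buffer yourself, let the loop construct collect the per-iteration outputs into the array for you.

```python
n = 8

# "Shift register" style: you manage the output buffer yourself.
out = [0] * n
for i in range(n):
    out[i] = 2 ** i

# "Autoindexing" style: the loop boundary assembles the array for you,
# one element per iteration, with no manual indexing.
out_auto = [2 ** i for i in range(n)]

assert out == out_auto   # [1, 2, 4, 8, 16, 32, 64, 128]
```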

 

Message 17 of 24

 


@altenbach wrote:

 


You should initialize the shift registers with fixed-size arrays corresponding to the number of iterations, then fill in the real data based on [i] using "Replace Array Subset".


Or even better: get rid of the shift registers and use autoindexing. 😉

 


 

Can you explain the quoted sentence and your sentence? I don't understand how I can use your method without losing data each iteration. My array grows each iteration by 2^(i-j) datapoints. If I initialize the array again each iteration, I need to generate 2^j datapoints...

 

Furthermore, I don't get how autoindexing is going to replace my shift register... :S

 

And one last thing: I already knew the "Request Deallocation" function, and I also called it after the benchmark, but it didn't change the situation.

Message 18 of 24

 


WouterG wrote:

Can you explain the quoted sentence and your sentence? I don't understand how I can use your method without losing data each iteration. My array grows each iteration by 2^(i-j) datapoints. If I initialize the array again each iteration, I need to generate 2^j datapoints...


It seems more reasonable to create the maximum-size data in one step before the loop (now you can use autoindexing!), and then take subsets of increasing size inside the benchmarking loop.
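The suggestion above can be sketched as follows (Python analogue, not LabVIEW; the exponent range `j, i_max` and the variable names are illustrative, loosely matching the thread's 2^j .. 2^i sizes): generate the full dataset once, then slice out ever-larger subsets inside the benchmark loop, so nothing is regenerated per iteration.

```python
import random

j, i_max = 4, 10   # hypothetical exponent range

# Generate the maximum-size dataset once, before the benchmark loop.
full = [random.random() for _ in range(2 ** i_max)]

sizes = []
for i in range(j, i_max + 1):
    data = full[: 2 ** i]    # subset of increasing size; no regeneration
    # ... run the operation being benchmarked on `data` here ...
    sizes.append(len(data))

print(sizes)
```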

 

Message 19 of 24

I'm really sorry, but I still don't understand how I will not lose data then.

 

Outside loop -> initialize array sizes -> Inside loop -> Create subset array -> Add array subsets...

(I still need shift registers at "Create subset array" and at "Add array subsets".)

 

I made a small example and attached my current version; I created a new subVI which initializes the data arrays. Can you maybe look into it?

Message 20 of 24