
How to handle massive array operations

Jason,

Why delete, rotate and then reinsert?

Why not just extract the row, rotate it, and replace it in place? I would think that might be faster.
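Since I can't paste a block diagram into a post, here is a rough textual stand-in for the two approaches in NumPy (the array size, row index and shift amount are just illustration values, not from our actual application):

```python
import numpy as np

data = np.arange(20, dtype=np.float64).reshape(4, 5)  # hypothetical 4x5 array
i = 2                                                  # row to rotate
shift = 1                                              # rotation amount

# Delete, rotate, reinsert: both the delete and the insert rebuild
# the array, so memory gets reallocated twice.
row = np.roll(data[i].copy(), shift)
tmp = np.delete(data, i, axis=0)          # new, smaller array
out = np.insert(tmp, i, row, axis=0)      # new, full-size array again

# Extract, rotate, replace in place: the array keeps its size,
# so no reallocation is needed.
data[i] = np.roll(data[i], shift)
```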

- Gurdas

Gurdas Sandhu, Ph.D.
ORISE Research Fellow at US EPA
Message 11 of 31
Saverio,
 
That clarifies it.
 
All,
We are currently taking upwards of 60 seconds to process our array. The target is less than 1 second!
We have hit upon a few ideas that involve 'tricks' specific to our application. I shall post results/insights as they become available.
 
I still need input on which is faster: replacing a row in an array, or inserting a row into an array? Further, if the insertion is always at the end of the array, isn't that the same as building an array (which I know to be bad for speed because of periodic reallocation)?
 
Thanks,
Gurdas
Gurdas Sandhu, Ph.D.
ORISE Research Fellow at US EPA
Message 12 of 31

Hi Gurdas,

In general, replacing array elements is more memory efficient than inserting array elements, because replacing an element does not involve a memory reallocation, while increasing the size of an array does.

Of course, this rule goes out the window if your array elements are not flat datatypes.  All the memory reallocation advice everybody is giving assumes your arrays are numerics or booleans.
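To put the same point in text form, with NumPy standing in for a LabVIEW array of flat numerics (the size is made up and kept small so the slow version still finishes), the difference between growing an array and replacing into a preallocated one looks roughly like this:

```python
import time
import numpy as np

n = 50_000  # hypothetical element count

# Build by repeated insertion at the end: every append reallocates
# and copies everything accumulated so far.
t0 = time.perf_counter()
grown = np.empty(0)
for k in range(n):
    grown = np.append(grown, k)
t_append = time.perf_counter() - t0

# Preallocate once, then replace elements in place: one allocation total.
t0 = time.perf_counter()
prealloc = np.empty(n)
for k in range(n):
    prealloc[k] = k
t_replace = time.perf_counter() - t0

print(f"append (reallocating): {t_append:.3f} s")
print(f"replace (in place):    {t_replace:.3f} s")
```

In LabVIEW terms this is the usual Initialize Array plus Replace Array Subset pattern versus Build Array in a loop.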

-D

Message 13 of 31
60 seconds down to 1 second. Well, that can be a bit of a challenge.

If you post what you're doing with the array perhaps someone can offer suggestions to improve the speed...
Message 14 of 31

"If you post what you're doing with the array perhaps someone can offer suggestions to improve the speed..."

Yes, please do!

Curious,

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 15 of 31
Sorry folks, I cannot post the code (or even sections of it).
If the questions start becoming tricky and it would help to have code accompany them, I shall build dummy VIs for the forums. But let's hope I don't need to do that, because I am really short of time right now!
 
- Gurdas
Gurdas Sandhu, Ph.D.
ORISE Research Fellow at US EPA
Message 16 of 31


@Gurdas wrote:

2) How do we pass/retrieve this array data to/from sub-VIs without creating indicators or controls?


I haven't read the posts in this thread, but you might want to have a look at JPD's attachment here for an example of a global array which keeps the memory consumption down.
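I can't attach JPD's VI here, but the general idea behind that kind of global array (sketched below in Python with hypothetical names, not JPD's actual code) is to keep a single copy of the array in one place and expose only small operations on it, so the full array never gets copied as it passes between callers:

```python
import numpy as np

_buffer = None  # the single shared copy of the array lives here


def init(rows, cols):
    """Allocate the shared array once, like initializing the global."""
    global _buffer
    _buffer = np.zeros((rows, cols))


def replace_row(i, row):
    """Overwrite row i in place; the full array is never copied."""
    _buffer[i, :] = row


def get_row(i):
    """Hand back a copy of just the requested row."""
    return _buffer[i].copy()


# Hypothetical usage:
init(1000, 16)
replace_row(3, np.arange(16))
print(get_row(3))
```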


___________________
Try to take over the world!
Message 17 of 31

All,

We managed to beat the problem and our own expectations by a huge margin. And it feels great   🙂

While there was some scope for improvement in our use of LV syntax, we soon realised that it might return a maximum benefit of only a few seconds (say 5 seconds, while the target was to go from 60 s to 1 s!). So our problem forced us to think out of the box, and we looked at the application again from scratch. Some 3 days of brainstorming later, we now have the array worked, reshaped and cooked in less than 5 ms!! This is such an important milestone that we stand in awe at the sheer power of an idea. More so because the array operation (which was taking 60 s) has to be done at least a few thousand times to get a final optimised array. With speeds of 5 ms, we can now confidently run the optimiser for long enough to guarantee a global optimum.

We learnt a few lessons, which I would like to share with the LV community:

1) Most of the time (shall I say 95%?), if you strictly adhere to the basic rules of thumb of good programming, you should be fine. The esoteric features of any language should be left for esoteric functions. Our LV team might get just 4/10 on LV skills, but we still manage to write some very good and robust software. I am tempted to write down my list of top-10 rules to follow, but I'll leave that for another day and thread.

2) More than programming skills, focus on domain knowledge and a true understanding of the problem. The advantages from good coding are just a fraction of the advantages that come from good algorithms.

3) Use "engineering" tools (like the VI Profiler) to trap the larger pitfalls in your code. You can achieve an 80% improvement in application speed and robustness just by focussing on the largest 20% of the pitfalls.

4) When the going gets tough, send a post to discussion forums, sip a cup of piping hot coffee and sleep over the problem for a night 🙂

To know a little more about the application (SilentRoll), you could visit http://www.qagetech.com/SilentRoll.html

Rgds,

Gurdas

Gurdas Sandhu, Ph.D.
ORISE Research Fellow at US EPA
Message 18 of 31

What did you actually do to speed up the operation?

Was it anything clever that we would like to know about, or was it just cleaning up the program to improve the algorithm and avoid things like multiple copies?


___________________
Try to take over the world!
Message 19 of 31
I've been reading this thread and would love to know the tricks you used to pull it off. I think it would be beneficial to the LV community to know how you really solved the problem.



Joe.
"NOTHING IS EVER EASY"
Message 20 of 31