LabVIEW


passing array taking lots of time

Dear all,

I am working on an application where I need 3 huge arrays (2 with 600,000 elements and 1 with 1,800,000 elements). I read data from my hardware every 25 ms, process it, and display it on a graph.

The processing time is very short, just 2-3 ms. The most time-consuming part of the algorithm is passing the array into the cluster: just passing the array into the cluster (excluding the processing) takes around 40 ms.

 

I am attaching a VI here. I am just taking the time difference between reading the array from the cluster and writing it back.

Please have a look at it and let me know why it takes so much time just to pass the array.

 

thanks,

Ritesh 

speed test.png

Message 1 of 11

The way you did the timing is not very accurate. When I timed it the way shown below, it takes about 400 microseconds to update the cluster, and that is with the arrays made 10 times larger.
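In text form, the same amortized-timing trick can be sketched like this (a hypothetical Python analogue, not the attached VI; the names are made up):

```python
import time

# Stand-in for one of the large arrays from the original post.
data = list(range(600_000))

N = 10_000  # repeat the fast operation many times...
start = time.perf_counter()
for _ in range(N):
    cluster = {"array": data}  # stand-in for writing the array into the cluster
elapsed = time.perf_counter() - start

# ...and divide, so timer resolution no longer dominates the measurement.
print(f"~{elapsed / N * 1e6:.1f} microseconds per update")
```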

 

Lynn

 

time test check.png 

Message 2 of 11

Lynn,

 

Is that For Loop being folded?

 

Using a control instead of a constant can prevent structure folding.

 

Ritesh,

 

"Data copies on wire branch" is probably hitting you. Use Tools >> Profile >> Show Buffer Allocations to see the duplicates.
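For readers outside LabVIEW: branching a wire can force the compiler to duplicate the whole buffer. A loose analogy in NumPy terms (a hypothetical sketch; in LabVIEW the compiler makes these decisions for you, and Show Buffer Allocations is the way to see them):

```python
import numpy as np

a = np.zeros(1_800_000)  # ~14 MB of float64, like the largest array in the post

view = a[:]     # shares the buffer: the cheap case, like a branch the compiler optimizes away
dup = a.copy()  # a real duplicate: the whole buffer is copied, like an unavoidable wire-branch copy

assert view.base is a    # the view shares memory with the original
assert dup.base is None  # the duplicate owns its own, freshly allocated memory
```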

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 3 of 11

Ben,

 

It did not show up as folded, but I changed to a control. The time is the same, or a few milliseconds faster, for 10,000 iterations.

 

Correction: the For Loop does not fold, but the cluster and array data do. Let me think about it a bit more.

 

Lynn 

Message Edited by johnsold on 06-09-2010 09:46 AM
Message 4 of 11

Putting controls on the arrays to be written and moving them into the For Loop slows things way down, but I cannot tell whether that is a result of reading the control or of moving the data.

 

Thanks for keeping the timing honest, Ben. 

 

Lynn 

Message 5 of 11

I just figured out that the slowdown is not because of moving the array, but because of using the control in the VI. If I convert the control to a constant, the time required to move the array becomes 0 ms.

I just don't understand the reason behind this behavior. Why would passing the arrays take 30x more time when using a control rather than a constant?

John, how did you measure time in microseconds? The minimum timer resolution that Windows supports is a millisecond.

Pardon my ignorance, but what did you mean by the For Loop being folded?

Message Edited by ritesh024 on 06-09-2010 08:24 PM
Message 6 of 11

ritesh024 wrote:

I just figured out that the slowdown is not because of moving the array, but because of using the control in the VI. If I convert the control to a constant, the time required to move the array becomes 0 ms. [...] What did you mean by the For Loop being folded?

Lynn was doing the operation multiple times and dividing by the iteration count. That is the standard approach to measuring things that happen fast.

 

If you search on the word "folding" you will find this thread, which has links to others on the same topic.

 

Folding is why your new code now takes "0 ms". ;)
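The same pitfall exists in textual languages: if the compiler can see that an expression is constant, it computes the result once at compile time, and the "benchmark" measures nothing. A small CPython illustration of the idea (not LabVIEW, just an analogy):

```python
import dis

# CPython folds constant expressions at compile time...
code = compile("1234 * 5678", "<bench>", "eval")

# ...so the bytecode just loads the precomputed result; timing this
# expression would tell you nothing about multiplication speed.
print(code.co_consts)  # the folded product, 7006652, is already in here
dis.dis(code)
```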

 

Ben

 

Message 7 of 11

The reason your timing changes when you convert your control to a constant is that you are timing the write to the control as well as the write to the cluster.  Look carefully at your data flow.  It has a couple of major issues in the timing.

 

  1. The start value can possibly be generated any time before or after the cluster is written.  There is nothing to determine this time.  You can easily solve this by running your data wire through the case structure instead of just connecting it.
  2. The end value can be generated either before or after the write to the control.  In your case, the compiler seems to have put it after the control write.  You can solve this the same way.

 

A better way is to use the structure johnsold used.  However, you will probably get a zero if you don't use a loop to do it many times, since the operation is probably comfortably under a millisecond.  Loop multiple times and divide the final result by the loop count to get your answer.  Be aware that if you are using a desktop operating system (such as Windows XP or Windows 7), other processes can easily preempt your process, adding extra time.  Shut down all extraneous processes and do it several times, taking the shortest number.
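These steps, repeating many times, dividing by the count, and keeping the shortest trial, are exactly what Python's `timeit` module automates; a hypothetical sketch of the same methodology (the statement being timed is a made-up stand-in, not the VI):

```python
import timeit

# Stand-ins for the data and the operation being timed.
setup = "data = list(range(600_000))"
stmt = "cluster = {'array': data}"

# Run the statement 10,000 times per trial, repeat 5 trials,
# and keep the minimum: the trial least disturbed by other processes.
times = timeit.repeat(stmt, setup=setup, number=10_000, repeat=5)
best_us = min(times) / 10_000 * 1e6
print(f"best: ~{best_us:.2f} microseconds per operation")
```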

 

I would recommend you read Managing Large Data Sets in LabVIEW. Then read the LabVIEW help on the In Place Element Structure.  You may also want to read up on the Data Value Reference.  Let us know if you need more help.
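The idea behind the In Place Element Structure, modifying the data in its existing buffer instead of allocating a new one, has analogues in other languages; a hypothetical NumPy sketch of the difference:

```python
import numpy as np

a = np.ones(600_000)
addr = a.__array_interface__["data"][0]  # address of the underlying buffer

a = a + 1.0  # out of place: allocates a brand-new ~4.8 MB buffer for the result
assert a.__array_interface__["data"][0] != addr

b = np.ones(600_000)
addr_b = b.__array_interface__["data"][0]

b += 1.0     # in place: reuses b's existing buffer, no new allocation
assert b.__array_interface__["data"][0] == addr_b
```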

Message 8 of 11

DFGray wrote:

The reason your timing changes when you convert your control to a constant is that you are timing the write to the control as well as the write to the cluster. [...]


Yes, important note!

 

Tip:

 

1) Get yourself a laptop with 8 cores and use Windows Task Manager to figure out which cores get used and which ones are mostly idle.

 

2) Put your benchmark code inside a Timed Sequence structure and use the property node to run the benchmark on one of the idle cores.
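Outside LabVIEW, the same "pin the benchmark to an idle core" trick can be done at the OS level; a hypothetical sketch using Python's Linux-only affinity API:

```python
import os

# Linux-only: restrict this process to a single core so the benchmark
# is not migrated between CPUs mid-measurement.
if hasattr(os, "sched_setaffinity"):
    core = min(os.sched_getaffinity(0))  # pick one core we are allowed to use
    os.sched_setaffinity(0, {core})
    print("pinned to core", core)
else:
    print("CPU affinity control not available on this platform")
```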

 

Works for me!

 

Ben

Message 9 of 11

DFGray wrote:

The reason your timing changes when you convert your control to a constant is that you are timing the write to the control as well as the write to the cluster. [...]


The constant cluster I mentioned is not the indicator at the end, but the one at the start. In the VI I posted it is already a constant, so the measured time is very low (0 ms). But if I change the starting constant cluster to a control, the time increases enormously.
Even with the same data flow, changing the starting constant cluster to a control should not affect the timing, but it does.

 

Message 10 of 11