recursion performance

Hi,

 

I am relatively new to LabVIEW and am currently evaluating what can be done with it. I played a bit with recursion and I am disappointed with the performance. I used the naive Fibonacci algorithm to test recursion. Here is how it looks in the block diagram:

fib.PNG

Cases 0 and 1 are just wired through. The VI is marked for "shared clone reentrant execution".

 

I did the same algorithm in Python:

 

def fib(n):
  if n == 0:
    return 0
  if n == 1:
    return 1
  return fib(n-1) + fib(n-2)

 

When I run these programs, the Python script takes about a second for n=30, whereas I had to kill the LabVIEW program after 3 hours, by which time it had consumed 1.5 GB of RAM. I am aware that there are algorithms which perform better, and I don't expect LabVIEW to be as fast as the Python script, since the overhead of a recursive call is probably larger in LabVIEW than in Python or C. Did I do anything wrong, or is heavy recursion in LabVIEW just not a good idea?
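The "algorithms which perform better" are easy to illustrate; here is a minimal iterative sketch (my own, not from the thread) that computes the same values in O(n) time:

```python
def fib_iter(n):
    # Iterative Fibonacci: n additions instead of an
    # exponentially growing tree of recursive calls.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_iter(30))  # → 832040, effectively instant
```

This sidesteps call overhead entirely, so it is a useful baseline when what you actually want to benchmark is the cost of the recursive calls themselves.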

 


Regards

 

Lars

Message 1 of 12

Lars,

 

I reprogrammed your VI using LV 2012 to benchmark the issue. My benchmark shows about 300ms for n=30. So it seems to me that something is going wrong on your machine, or you missed some important optimizations.

 

The most important one is to disable debugging in the VI settings of the recursive Fibonacci VI. You can also improve performance by eliminating the redundant decrement function in your screenshot (the first decrement is done for both branches, so why split the wire before decrementing?).

 

hope this helps,

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 2 of 12

Lars,

 

I wrote a caller which displays the time required to compute the Fibonacci values from 0 up to a selectable n. The benchmark result is displayed in a graph once all values are computed. When you run the application and look at the graph, you will see that the time consumption grows non-linearly with rising n. It would be interesting to know whether Python behaves differently here, so maybe you want to reprogram the caller in Python and attach the result in a reply.

 

Norbert

Message 3 of 12

Norbert,

 

thanks for trying this. I did the same and reimplemented the algorithm in a blank VI, and voilà, it runs in reasonable time. So back to the initial VI: I removed everything and reimplemented it, and it works too. By trial and error I found that the subVIs caused the trouble. I removed them, dragged two new ones in from the upper left, rewired everything, and it works.

 

If you want to try it yourself, take the attached VI and run it with n=20. This takes about half a minute on my laptop. After exchanging the subVIs in the block diagram with new copies, it runs in the blink of an eye. This is a bug, isn't it?

 

Regards

 

Lars

 

PS: Of course the recursive version is not linear. If I remember right, the naive algorithm is actually exponential: the number of calls grows roughly in proportion to Fib(n) itself, i.e. by about the golden ratio per step (not O(n²) as I first thought).
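The growth rate is easy to measure by counting the recursive calls directly; a quick sketch (mine, not code from the thread):

```python
def fib_calls(n):
    # Return (fib(n), total number of calls made), so the cost
    # of the naive recursion can be measured exactly.
    if n < 2:
        return n, 1
    a, ca = fib_calls(n - 1)
    b, cb = fib_calls(n - 2)
    return a + b, ca + cb + 1

for n in (10, 20, 30):
    value, calls = fib_calls(n)
    print(n, value, calls)
```

The call count works out to 2·Fib(n+1) − 1, so it grows by roughly the golden ratio (≈1.618×) per increment of n, which matches the timing tables in this thread.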

Message 4 of 12

Lars,

 

The question wasn't whether the algorithm is non-linear, but "how does Python compare to the LV code?". Well, misbehavior caused by corrupted code is a bug, but I would expect it to be hard to reproduce. So if there is no step-by-step recipe to reproduce the corruption, there is not much NI can do about it....

 

Norbert

Message 5 of 12

Norbert,

 

hm, this is a really annoying bug, because there is no clue about it. The code runs correctly; it is just too slow. How would one dig into such a problem in a real (big) project? I cannot exchange tons of subVIs in dozens of block diagrams.

 

 

Regarding your question here is the Python code that does the same performance measurement:

 

import timeit

def fib(n):
  if n == 0:
    return 0
  if n == 1:
    return 1
  return fib(n-1) + fib(n-2)

for i in xrange(31):
  print "%d\t%f" % (i, timeit.timeit(stmt="fib(%d)" % i, setup="from __main__ import fib", number=10) / 10)

 

The performance of Python and the working LV code does not differ significantly.

 

n    Python (ms)   LabVIEW (ms)
0    0.0           0.0
1    0.0           0.0
2    0.0           0.0
3    0.0           0.0
4    0.0           0.0
5    0.0           0.0
6    0.0           0.0
7    0.0           0.0
8    0.0           0.0
9    0.0           0.0
10   0.1           0.1
11   0.1           0.1
12   0.1           0.1
13   0.2           0.3
14   0.4           0.4
15   0.7           0.6
16   1.0           1.0
17   1.6           1.6
18   2.7           2.4
19   4.3           4.5
20   10.5          6.8
21   13.5          10.9
22   17.5          16.6
23   28.7          27.1
24   46.6          43.8
25   75.9          73.4
26   121.7         115.1
27   201.4         186.0
28   320.0         304.9
29   515.4         490.1
30   841.3         800.7
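As an aside (my sketch, not code from the thread): if recursion itself is the goal, memoization collapses the exponential call tree to O(n) calls in either language. In Python 3 it is one standard-library decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each value is computed once and cached, so fib_memo(30)
    # triggers ~31 real evaluations instead of ~2.7 million calls.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # → 832040
```

With memoization the per-call overhead becomes negligible, which separates "recursion is slow" from "this algorithm makes exponentially many calls".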

 

Regards,

 

Lars

 

Message 6 of 12

I got 326ms for n=30, which feels like a fast enough Fibonacci calculation (if anything can ever be fast enough). 🙂

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 7 of 12

Yamaeda,

 

did you get those 326ms even with the VI I posted in message 4? If yes, which LabVIEW version did you use? If not, what time do you get with the VI from message 4?

 

Regards,

 

Lars

Message 8 of 12

Lars,

 

I see what you are talking about.  I have a couple of observations, and some speculation.

 

1.  The VI seems slow during editing to begin with. When I pick the first subVI to delete, it highlights, but then it takes a while before I can delete it.
2.  If I delete one subVI and then drag the subVI back in from the icon, the VI is still slow to run. Repeat for the 2nd subVI: still slow to run.
3.  The VI only begins to run fast if you delete both subVIs before dragging in the new subVIs.
4.  If you make a copy of the subVI, or drag one instance of itself in, before fixing the 2 used subVIs, the problem is still there.
5.  You basically have to clear all instances of itself before you create new instances to get it to run fast.

 

Do you remember how you created the VI originally? Did you save the VI before you started using it inside itself?

 

Speculation:

It seems like the VI is struggling to compile itself. Before a VI can run, it needs to compile, and it needs to compile its subVIs as well. This basically sets up an endless loop of needing to compile itself before it can compile itself. (It sounds crazy to say.) I don't know what is forcing all of this activity; there is nothing in your VI that indicates there are changes that have been made and/or need to be resolved. But the fact that it is slow during editing makes me suspect this.

 

I think once you eliminate the subVIs from itself, you've broken the cycle of it needing to compile itself for its own subVIs. The recompilation goes quickly, even though the VI is broken because of the broken wires, because you have broken the chain to itself. Then when you drag itself back in, the compilation of itself as its own subVI goes smoothly. As you finish constructing the VI until it is runnable, it basically compiles as it goes, always "agreeing" with itself. The editing environment behaves more smoothly, and the execution is as quick as you'd expect.

 

What version of LV are you running?  Mine is LV 12 SP1, fix2.  (12.0.1f2)

 

Hopefully others can work with your VI from message 4 and confirm the behavior I see (especially an NI employee), and that following your steps of deleting both subVIs and then dragging itself back in fixes the problem. Anything you can remember about how you originally created the VI saved in message 4 that might be a bit unusual (like when you saved, or didn't save) might help explain it.

 

It sounds like the problem is fixed by the steps you took, or by rebuilding from scratch, but there does seem to be something odd about the VI posted in message 4. If you want NI to investigate that specific VI further, I'd suggest unmarking that post as the solution; it seems like you still have a problem. It's not Python vs. LabVIEW, and it's not the performance of recursion in LabVIEW. The problem is whatever is not right about that specific VI file.

Message 9 of 12

@merula wrote:

Yamaeda,

 

do you have these 326ms even for the VI I have posted in message 4? If yes which LabVIEW version did you use? If no what time do you get with the VI I posted in message 4?

 

Regards,

 

Lars


Nah, I made a VI based on the code posted; the attached file is what I came up with. LV2012.

/Y

Message 10 of 12