Bay Area LabVIEW User Group


BALUG LabVIEW Coding Challenge!

Logan,

I’ve played a little bit with methods for measuring solution timing. Here is a brief summary:

1.       A Call By Reference takes ~20x longer than calling a static VI (at Subroutine priority). On a Dell Latitude D630 (dual-core Pentium M), the two calls averaged 0.700 microseconds and 0.036 microseconds per call, respectively, over 10,000,000 iterations.

2.       I think we need to measure and subtract the overhead time for each solution – the 0.7 microseconds of call overhead may skew results for the faster solutions (especially when claiming that X runs N times faster than Y).
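The overhead subtraction in point 2 can be sketched in text form (Python here, since the actual solutions are LabVIEW VIs; `noop` stands in for an empty solution call, and the names are illustrative, not from the attached code):

```python
import time

def measure(fn, iterations):
    """Average wall-clock time per call of fn, in seconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations

def noop():
    """Empty call: measures only the harness/call overhead."""
    pass

def corrected_time(solution, iterations=100_000):
    overhead = measure(noop, iterations)   # empty-call overhead
    raw = measure(solution, iterations)    # solution including overhead
    return max(raw - overhead, 0.0)        # subtract the harness cost
```

The correction matters most when the solution's own time is close to the overhead itself.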

3.       Regarding the reported 20% to 30% time difference between subsequent runs – the numbers I see are more like a 1% to 3% CV (standard deviation / mean, in %) over a sample of 25 subsequent runs. To get the jitter this low I used Tick Count instead of Get Date/Time In Seconds and set the measurement VI to run at Time Critical priority in an execution system other than User Interface (the Data Acquisition execution system in this case). I ran 4 very different solution implementations – all stayed in this range as long as no 'external' activities (like Norton Internet Security or SQL Server) were actively running on Windows.
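The CV figure quoted in point 3 is simply standard deviation over mean, expressed in percent (Python sketch; the run times below are made up for illustration):

```python
import statistics

def coefficient_of_variation(samples):
    """CV in percent: sample standard deviation / mean * 100."""
    mean = statistics.mean(samples)
    return statistics.stdev(samples) / mean * 100.0

# Hypothetical per-run averages in seconds (not measured data)
run_times = [1.00, 1.01, 0.99, 1.02, 0.98]
cv = coefficient_of_variation(run_times)   # roughly 1.6% for this sample
```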

4.       Attached please find the test bench, (Test) Measure Solution.vi, that I used to get these numbers. It would be interesting to run it on different computers with other solution implementations to check the jitter level. To use it:

a) Unzip all files to a folder;

b) Drop your solution in that folder;

c) Wire your solution's relative path to the 2nd Measure Solution Performance.vi call (instead of DS_Simulated_Solution.vi);

d) Set the desired # of Runs and # of Iterations (in each run) and run the VI.
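For readers without the attachment, the measurement in steps a) through d) boils down to a runs × iterations timing loop. A minimal Python sketch, with a callable `solution` standing in for the solution VI (names and structure are assumptions, not the attached VI's internals):

```python
import time

def measure_solution(solution, num_runs=25, num_iterations=100_000):
    """Time `solution` num_iterations times in each of num_runs runs;
    return the per-call average (seconds) for each run, mirroring the
    # of Runs and # of Iterations controls on the test bench."""
    per_run = []
    for _ in range(num_runs):
        start = time.perf_counter()
        for _ in range(num_iterations):
            solution()
        per_run.append((time.perf_counter() - start) / num_iterations)
    return per_run
```

The per-run averages are what the jitter (CV) statistics are computed over.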

5.       I ran it on a Dell Latitude D630 (dual core) with Windows XP and on a Dell Precision M4500 (quad-core i7 with Hyper-Threading – it actually shows 8 processors in the Task Manager Performance tab) under 64-bit Windows 7 (32-bit LabVIEW 2011 SP1). It was quite interesting to see how LabVIEW leverages the extra cores: I got a 1.6x runtime improvement for a solution that creates lots of string copies, while another solution (gentle on memory management) ran about the same on both machines. It looks like LabVIEW tries running its memory-management tasks on the extra cores when available. In both cases the CV was around 1.2%.

6.       Under Windows we don't have much control over the services and tasks running in parallel with LabVIEW. It may be a good idea to run each solution multiple times and throw away the outliers (anything longer than average + 3×StdDev, say). I can easily tweak the attached code to do that.
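The outlier rejection proposed in point 6 could look like this (Python sketch; `k=3` matches the average + 3×StdDev cutoff suggested above):

```python
import statistics

def drop_outliers(samples, k=3.0):
    """Discard runs slower than mean + k * standard deviation;
    these are assumed to be OS interference, not the solution itself."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return [s for s in samples if s <= mean + k * sd]
```

One pass like this removes a single gross outlier cleanly; if several runs are contaminated, the cut could be applied iteratively until no sample exceeds the bound.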

7.       I think there may be a number of solutions clocking within 3% to 5% of each other, and it may be hard to get measurement precision within this range. I would suggest setting a threshold (like 5%) that declares a tie between two or more solutions whose results fall within that range.
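The tie rule in point 7 amounts to a simple relative comparison (Python sketch; the 5% default and the choice to normalize by the faster time are assumptions, not something the thread settled on):

```python
def is_tie(time_a, time_b, threshold=0.05):
    """Treat two timings as a tie when they differ by less than
    `threshold` (5% by default) relative to the faster of the two."""
    faster = min(time_a, time_b)
    return abs(time_a - time_b) / faster < threshold
```

With the measured jitter around 1% to 3% CV, a 5% tie band keeps rankings from hinging on run-to-run noise.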

Dmitry
