07-17-2017 09:32 AM
Create a time trial program to compare the average execution times of the “Formula Node” and the native LabVIEW Math Functions. This program will require a For Loop, a Flat Sequence Structure, and a Case Structure. The For Loop is required to run the time trial N times so that the results can be averaged using the “Statistics” function in the Probability & Statistics sub-palette. The Sequence Structure is required to sample the “Tick Count” before and after the code executes. The Case Structure is required to determine whether the user would like to execute the Formula Node or the native LabVIEW Math Functions. To test the timing, use the following formulas:
a = X^2 / 4;
b = (2*X) + 1;
Y = sin(a + b);
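(For reference, since the VI itself is graphical: the three formulas written out as a rough Python sketch, with names matching the assignment.)

    import math

    def compute_y(x):
        a = x ** 2 / 4            # a = X^2 / 4
        b = 2 * x + 1             # b = (2*X) + 1
        return math.sin(a + b)    # Y = sin(a + b)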
This is a homework question, but I have run into a wall. If you could help me, or at least point me toward what I should research, that would be appreciated 🙂
1. Tick Count is not re-initializing
I run the program and the tick count just gets larger and larger.
2. The tick counts before and after are the same, so when I subtract the before value from the after value (then divide by the sample size to get the average time), I get 0 or "infinity".
Either I do not fully understand Tick Count (I think it just times your program), or the problem is my Flat Sequence Structure. I made three frames, so why are the tick counts before and after the same? Would a Stacked Sequence be better?
Thank you! By the way, I'm using LabVIEW 15.
07-17-2017 09:44 AM
To get a meaningful difference, you need to iterate a sufficient number of times to get a difference in tick count. Your formula is simple enough that you should probably iterate at least 10,000 times. Oh, and your subtraction is backwards; subtract the start time from the end time.
07-17-2017 09:50 AM
This is the basic setup for time trials. I can't open your VI to see where you went wrong but I suspect you have the tick count in the wrong frame. Dataflow dictates that execution of each frame cannot continue until the previous frame is complete. If you have the tick count outside of the frame, that is the first thing to execute because the frame can't start execution until all of its inputs are available.
07-17-2017 09:56 AM
Also, do any statistical analysis outside of the sequence, otherwise you throw the timing off. And turn off debugging.
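Putting the last few replies together as a plain-text sketch (Python here only because LabVIEW is graphical; the 10,000 iterations and the formula come from this thread, everything else is illustrative): iterate many times, subtract the start tick from the end tick, and keep setup and statistics outside the two timer reads.

    import math
    import time

    N = 10000
    x = 3.7                                       # setup / control reads happen before the timed part

    start = time.perf_counter()                   # "Tick Count" before
    for _ in range(N):
        y = math.sin(x ** 2 / 4 + 2 * x + 1)      # the code under test, and nothing else
    end = time.perf_counter()                     # "Tick Count" after

    elapsed = end - start                         # end minus start, never the reverse
    print("average per iteration:", elapsed / N)  # statistics / display after the timed part

With the millisecond-resolution Tick Count, a single pass through such a small formula reads as 0 ms, which is why the difference only becomes meaningful once the loop count is large (or once a higher-resolution timer is used, as mentioned below).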
07-17-2017 09:57 AM - edited 07-17-2017 10:03 AM
The slowest part of your test VI is the controls and indicators INSIDE the FOR loop! Move them out of the loop, otherwise you mostly measure their read/update times instead of that tiny calculation!
LabVIEW also has that HighResRelativeSeconds function!
Edit: You should also move the statistics part (the Mean function) into the 3rd frame - or even calculate it after the 3rd frame. And you should move the controls of the 2nd frame in front of the sequence:
Even now the measurement isn't "accurate" (apart from debugging still being enabled in my snippet): you also measure the time LabVIEW's memory manager needs to create the output array…
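The effect is easy to reproduce in any language: anything that updates a display inside the timed loop dwarfs the arithmetic. A rough Python analogue (the print standing in for a front-panel indicator; purely illustrative, not the snippet from this post):

    import math
    import time

    N = 1000
    x = 3.7

    start = time.perf_counter()
    for _ in range(N):
        print(math.sin(x ** 2 / 4 + 2 * x + 1), end="\r")   # stand-in for updating an indicator
    slow = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(N):
        y = math.sin(x ** 2 / 4 + 2 * x + 1)                # same arithmetic, no display
    fast = time.perf_counter() - start

    print()
    print("with display in the loop:", slow, "   without:", fast)

On a typical machine the first loop takes far longer than the second, even though the arithmetic is identical.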
07-17-2017 10:32 AM
Thank you everyone! I fixed the subtraction error. I can't believe I overlooked it lol. I made the iterations 10,000+ as well. Also, I turned off debugging and it made the time calculations faster. Thank you Mancho00! I also moved the statistical components to the end, and that really helped to reduce the time from about 2000 ms to 300 ms when doing a lot of iterations. Thank you GerdW and Mancho00! I decided to report total calculation time instead of the average sample time; it was easier for me to see the difference. Honestly, I thought the formula node was going to be faster because it looks more efficient. The results I have been getting show that the native math function operations are a few milliseconds faster than the Formula Node after doing a lot of iterations. I might move the difference calculation out of the third frame to see if it makes a difference. Thanks again everyone!
07-17-2017 10:45 AM
@Tnorm007 wrote:
Honestly, I thought the formula node was going to be faster because it looks more efficient.
With debugging on, the formula node will be slightly faster due to all of the probe points the native functions have (all of the wires). With debugging turned off, you will see the native functions are faster since the compiler can do some optimizations.
But here is what I came up with for you. Notice it is actually doing the math on the time instead of the result. Also notice I used a random number for X in order to make sure that code was actually being executed (LabVIEW has been known to optimize out a loop, which makes your benchmark completely wrong). BTW, I was getting average times in the 35-45 ns range.
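The "random number for X" point as a plain-text sketch (Python, purely illustrative; Python does not constant-fold the way the LabVIEW compiler can, so this only shows the intent): feed the loop values that cannot be precomputed, and make sure the result is actually used.

    import math
    import random
    import time

    N = 1_000_000
    xs = [random.random() for _ in range(N)]        # inputs the compiler/runtime cannot predict

    start = time.perf_counter()
    total = 0.0
    for x in xs:
        total += math.sin(x ** 2 / 4 + 2 * x + 1)   # use the result so the work cannot be skipped
    end = time.perf_counter()

    print(f"average: {(end - start) / N * 1e9:.1f} ns per iteration")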
07-17-2017 11:20 AM
To get a less biased estimate of the absolute execution time of the two versions, I would use the "array min" of a sufficient number of trials. All external influences (e.g. other computer tasks, etc.) tend to increase the time, so a single long outlier can heavily bias one of the measurements when looking at the mean. I often look at a histogram of the times to get a feeling for the range. The distribution is typically not Gaussian and is sometimes even bimodal.
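As a plain-text sketch of that approach (Python, illustrative only): collect one time per trial, then compare the mean with the minimum.

    import math
    import statistics
    import time

    def one_trial(n=10000, x=3.7):
        start = time.perf_counter()
        for _ in range(n):
            y = math.sin(x ** 2 / 4 + 2 * x + 1)
        return (time.perf_counter() - start) / n    # average time per iteration for this trial

    times = [one_trial() for _ in range(100)]        # repeat the whole measurement many times
    print("mean:", statistics.mean(times))           # easily skewed by a single slow outlier
    print("min :", min(times))                       # closer to the undisturbed execution time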
Looking at the code, things like ×2 or /16 can be done with "Scale By Power Of 2", which is a much simpler operation (a bit shift for integers, a simple adjustment of the exponent for DBL).
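The same idea written out (Python, illustrative; in LabVIEW this is a single primitive):

    # Integer: x * 2 is a left shift, x / 16 is a right shift by 4
    n = 100
    print(n << 1, n >> 4)        # 200 and 6 (100 // 16)

    # Float: scaling by a power of two only adjusts the exponent
    import math
    x = 3.7
    print(math.ldexp(x, 1))      # x * 2   -> 7.4
    print(math.ldexp(x, -4))     # x / 16  -> 0.23125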