06-04-2025 06:41 AM
Dear colleagues,
I observed some strange behavior during a simple benchmarking experiment.
The code is trivial: two U16 arrays are added (no need to worry about overflow; this is just a dummy calculation):
On my Xeon W5-2445, the execution takes around 125–130 ms for both runs, which is expected. Nothing unusual so far.
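For reference, a rough textual analogue of what the VI does (a sketch in Python/NumPy, not the actual G code; the array length is my assumption, not taken from the original benchmark):

import time
import numpy as np

# Hypothetical stand-in for the benchmark: add two U16 arrays and
# time the operation twice (the two "runs" mentioned above).
N = 100_000_000                    # assumed array length
a = np.ones(N, dtype=np.uint16)    # first input array
b = np.ones(N, dtype=np.uint16)    # second input array

for run in (1, 2):
    t0 = time.perf_counter()
    c = a + b                      # element-wise U16 addition (wraps on overflow)
    dt_ms = (time.perf_counter() - t0) * 1e3
    print(f"run {run}: {dt_ms:.1f} ms")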
However, when I added a Flat Sequence around the constant (without any fundamental change in dataflow), the result changed — the first test now takes almost twice as long:
It seems like the buffer allocation was moved into the benchmarked section. Let's check; indeed it was:
Interestingly, if I apply "Copy Buffer" only to the upper array, the timing remains the same:
But if I apply it only to the second array, the doubled execution time returns:
Tested on LabVIEW 2025 Q1 (64-bit) 25.1.2f2 and LabVIEW 2018 SP1 (32-bit) 18.0.1f4.
Yes, the buffer allocation dot is here:
But why was it "asymmetrically" moved to this location just because a "meaningless" sequence was added? It looks like some hidden compiler "optimizations" are at play...
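A plausible reading of the asymmetry (my interpretation, illustrated with a Python/NumPy sketch rather than the LabVIEW compiler's actual behaviour): when the second input's buffer is free to be reused, the result can be written in place and no allocation happens inside the timed section; once a copy or a sequence border blocks that reuse, a fresh output buffer has to be allocated during the benchmark itself.

import time
import numpy as np

N = 100_000_000                    # assumed array length
a = np.ones(N, dtype=np.uint16)
b = np.ones(N, dtype=np.uint16)

# Case 1: the result is written into an existing buffer (analogous to the
# compiler reusing an input array in place) -- no allocation in the timed region.
t0 = time.perf_counter()
np.add(a, b, out=b)
print(f"reuse existing buffer: {(time.perf_counter() - t0) * 1e3:.1f} ms")

# Case 2: a new output buffer is allocated inside the timed region
# (analogous to the allocation dot moving into the benchmarked frame).
t0 = time.perf_counter()
c = a + b
print(f"allocate new buffer:   {(time.perf_counter() - t0) * 1e3:.1f} ms")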
06-04-2025 08:51 AM
After running example 1
After running example 2
It looks like adding the sequence reduces the data usage; would that explain the speed? It runs faster but consumes more memory. (Don't ask me why...)
06-04-2025 03:29 PM - edited 06-04-2025 03:30 PM
Hi Andrey,
I have seen some strange effects when using flat sequences during speed tests in the past. I am not certain about the details, but I suspect there are situations where the first frame can start executing before all the data is available at the sequence.
However, the effect you describe is new to me.
I recommend using the stacked sequence structure for time measurements. I haven't seen such effects there so far. If you replace the flat sequence with a stacked sequence, then this effect will also disappear.
06-05-2025 07:40 AM
That's an interesting find indeed!