07-17-2019 08:24 AM
07-17-2019 09:07 AM
Gu
07-17-2019 09:16 AM
Hi altenbach,
I am glad to let you know that a 20% time reduction was achieved by using fixed-size arrays in my application compared with dynamically growing arrays. In other words, the sample rate increased by about 20%.
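In case it helps to see what the change amounts to, here is a rough Python sketch of the two approaches (my actual code is a LabVIEW VI, so this is only a conceptual analogue of Build Array versus Initialize Array + Replace Array Subset; the array size here is made up):

```python
import time
import numpy as np

N = 20_000

# Dynamic growth: analogous to Build Array inside the loop
t0 = time.perf_counter()
grown = np.empty(0)
for i in range(N):
    grown = np.append(grown, i * 0.5)   # reallocates and copies as it grows
t_grow = time.perf_counter() - t0

# Fixed size: analogous to Initialize Array + Replace Array Subset
t0 = time.perf_counter()
fixed = np.zeros(N)
for i in range(N):
    fixed[i] = i * 0.5                  # writes into preallocated memory
t_fixed = time.perf_counter() - t0

print(f"growing: {t_grow:.3f} s, preallocated: {t_fixed:.3f} s")
```

The growing version has to reallocate and copy the data over and over, which is where the extra time goes.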
Gu
07-17-2019 09:29 AM
@edmonton wrote:
... I did not find it in my 2015 LV. Anyway, I typed it in and it works.
Because this old idea is only implemented in LabVIEW 2019. 😄
Other special values you can type into a DBL directly are e.g. +Inf and -Inf. No need for special constants. 😉
07-17-2019 03:37 PM
Hi altenbach,
I posted an improved result using fixed array sizes previously; that was done with everything in a single loop, meaning data collection, logging, and display were all together.
Then I changed the program to do data collection and logging in a producer loop (enqueue) and data display in a consumer loop (dequeue), expecting to see better performance; however, the result was the opposite of the previous one. Attached is a chart showing the time interval between two consecutive streaming data sets as a function of the number of streaming data sets; one data set consists of results from 8 channels (X, Y1, Y2 through Y8). The comparison test lasted 15 minutes.
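For reference, the structure I mean is the standard producer/consumer pattern; below is a minimal Python sketch of it (the names, rates, and channel values are invented for illustration, the real program is a LabVIEW VI using queue functions):

```python
import queue
import threading
import time

q = queue.Queue()

def producer(n_sets):
    """Acquisition + logging side: enqueue one data set per iteration."""
    for i in range(n_sets):
        data_set = [float(i)] + [i * 0.1 * ch for ch in range(1, 9)]  # X, Y1..Y8
        q.put(data_set)
        time.sleep(0.02)        # stand-in for ~20 ms of acquisition work
    q.put(None)                 # sentinel so the consumer knows when to stop

def consumer():
    """Display side: dequeue and update the UI."""
    while True:
        data_set = q.get()
        if data_set is None:
            break
        print(f"X = {data_set[0]:.1f}, backlog = {q.qsize()}")

t = threading.Thread(target=producer, args=(100,))
t.start()
consumer()
t.join()
```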
It can be seen that the dynamically growing array option needed longer and longer to process the data as the number of data sets increased, while the fixed-array-size option needed much longer at the beginning and then seemed to level off. I do not know how to explain why the fixed option needed so much longer at the beginning.
On average, the dynamic and fixed options achieved 45.9 and 38.5 data sets per second, respectively.
In addition, I observed that the dynamic option can update the XY graph page 3-5 times per second, while the fixed option can only update the XY graph page once every 2-3 seconds, which is unacceptably slow. I have the XY graph on one page and all data indicators on another page of a tab control. The interesting thing is that all indicators (40 in total) updated so fast that I was not able to read any of the 7-digit values clearly.
I do not know why I cannot paste a picture/image into the text body here; I tried Ctrl+V, so I had to add it as an attachment.
Regards,
Gu
07-18-2019 12:35 AM
Benchmarking is a very tricky thing and most likely there are many other things that come into play here. I cannot comment further without seeing the full benchmarking code.
(for some ideas, have a look at our presentation)
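Just to illustrate the kind of care a benchmark needs, independent of language: repeat the measurement, discard the warm-up runs, and look at the spread instead of a single number. A tiny Python sketch (work() is only a stand-in for the code under test):

```python
import statistics
import time

def work():
    sum(i * i for i in range(100_000))    # placeholder for the code under test

samples = []
for run in range(12):
    t0 = time.perf_counter()
    work()
    samples.append(time.perf_counter() - t0)

steady = samples[2:]                       # discard warm-up iterations
print(f"median {statistics.median(steady) * 1e3:.2f} ms, "
      f"min {min(steady) * 1e3:.2f} ms, max {max(steady) * 1e3:.2f} ms")
```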
@edmonton wrote:
I do not know why I cannot paste a picture/image into the text body here; I tried Ctrl+V, so I had to add it as an attachment.
To insert a picture, use the tool that looks like a camera.
07-18-2019 03:20 AM
@edmonton wrote: Then I changed the program to do data collection and logging in a producer loop (enqueue) and data display in a consumer loop (dequeue), expecting to see better performance; however, the result was the opposite of the previous one.
What parts are you doing in the producer and what in the consumer?
07-18-2019 05:09 AM
Hi altenbach,
See the attached code, which is similar to my application.
You can see that the XY graph updates very slowly and at times even looks like it has stopped; however, the data display shows that the enqueue loop runs much faster than the dequeue loop.
I also moved the waveform out of the tab control, and it is still slow.
Regards,
Gu
07-18-2019 06:36 AM
Hi edmonton,
@edmonton wrote: ... however, the data display shows that the enqueue loop runs much faster than the dequeue loop.
That's the point of using queues: the producer can create data much faster than the consumer sinks it (for a short time).
Your consumer is very slow compared to the producer!
- you create new plots each iteration and send them to the graph indicator
- handling those larger arrays is slower than creating the 10 elements in the producer
- why do you use a queue for the stop condition? A notifier would be easier.
- why do you sequence both queue operations?
Place a 1ms wait in the consumer…
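To make the idea concrete, here is a small Python sketch of a throttled consumer (the function names are invented for illustration, they are not LabVIEW functions): dequeue with a short timeout, keep the per-iteration work cheap, and only rebuild the expensive graph every ~100 ms instead of on every data set.

```python
import queue
import threading
import time

def redraw_graph(data):
    print(f"redraw with {len(data)} data sets")     # stand-in for the XY graph update

def run_consumer(q, redraw_period=0.1):
    buffered = []
    last_redraw = time.monotonic()
    while True:
        try:
            item = q.get(timeout=0.001)             # ~1 ms wait instead of spinning
        except queue.Empty:
            item = None
        if item == "STOP":
            break
        if item is not None:
            buffered.append(item)                   # cheap work on every iteration
        now = time.monotonic()
        if buffered and now - last_redraw >= redraw_period:
            redraw_graph(buffered)                  # expensive UI update, throttled
            last_redraw = now

def feed(q):
    for i in range(200):
        q.put([float(i)] + [i * 0.1] * 8)           # X, Y1..Y8
        time.sleep(0.005)                           # producer at ~200 data sets/s
    q.put("STOP")

q = queue.Queue()
threading.Thread(target=feed, args=(q,)).start()
run_consumer(q)
```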
07-18-2019 09:25 AM
Well, your consumer spends an unreasonable amount of time in the UI thread and shuffles huge amounts of data around. Graphing 5000 points in a plot that is only ~800 horizontal pixels wide is also unreasonable. Maybe you want to decimate the data to a reasonable amount. Also note that you are producing an infinite amount of data, but everything after 5000 points is lost in the consumer because you run out of preallocated array.
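A simple decimation step before the graph could look like the following Python sketch (assuming a graph roughly 800 pixels wide; decimate_xy is a made-up helper, not a built-in function):

```python
import numpy as np

def decimate_xy(x, y, max_points=800):
    """Keep at most max_points by taking every k-th sample."""
    n = len(x)
    if n <= max_points:
        return x, y
    step = -(-n // max_points)          # ceil(n / max_points)
    return x[::step], y[::step]

x = np.linspace(0.0, 100.0, 5000)
y = np.sin(x) + 0.05 * np.random.randn(5000)
xd, yd = decimate_xy(x, y)
print(len(xd), "points go to the graph instead of", len(x))
```

A min/max decimation per pixel column would preserve peaks better, but even this naive version cuts the amount of data sent to the graph by a large factor.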
You know the max number of iterations for both loops, so they should be FOR loops.
Here's a quick rewrite that maintains all data in the final data structure for the XY graph; that seems easier. (Just a draft, no error handling and such.)
You definitely should add some timing to ensure reasonable loop rates. Running full bore without a wait makes the program behavior strongly dependent on the computer hardware, i.e. unpredictable.
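For example, in Python terms (the 10 ms period is arbitrary), each loop gets an explicit target period so the iteration rate is defined by the code rather than by whatever the CPU happens to manage:

```python
import time

PERIOD = 0.010                       # 10 ms target -> ~100 iterations/s
deadline = time.monotonic()
for i in range(200):
    # ... acquire / process one data set here ...
    deadline += PERIOD
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)        # same role as a Wait function in the loop
print("finished 200 iterations at a nominal", 1 / PERIOD, "per second")
```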