

Most efficient way to implement my own buffer

Hi all,

 

My program collects AI voltage data from 16 channels at 15,000 Hz per channel. With only around 8 channels turned on, the consumer loop (which saves the data) mostly keeps up with the producer loop, which grabs 15 samples/channel at a time, essentially using the read as a 1 ms hardware clock (a method Bob_Schor recommended; thanks, Bob! 🙂).

The program saves to disk every second, so I created a 2D double array of 16 (or however many channels are being saved) by 15,000, and I use Replace Array Subset to place 15 samples per channel into the array every millisecond.
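For anyone who wants to play with the pattern outside LabVIEW, here is a minimal Python/numpy sketch of the same idea (names and sizes are just illustrative): preallocate the 16 x 15,000 array once, then overwrite 15-column slices in place, which is the analog of Replace Array Subset.

```python
import numpy as np

CHANNELS = 16
RATE = 15_000          # samples per channel per second
CHUNK = 15             # samples per channel per 1 ms read

# Preallocate one second of data, analogous to initializing the 2D array once.
buffer = np.zeros((CHANNELS, RATE))

def accumulate(buffer, chunk, ms_index):
    """In-place analog of Replace Array Subset: overwrite columns
    [ms_index*CHUNK, (ms_index+1)*CHUNK) with the new chunk."""
    start = ms_index * CHUNK
    buffer[:, start:start + CHUNK] = chunk   # no reallocation, no full copy
    return buffer

# The chunk would come from the DAQ read; simulated here with random data.
for ms in range(1000):
    accumulate(buffer, np.random.rand(CHANNELS, CHUNK), ms)
# After 1000 iterations the buffer holds one full second, ready to save.
```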

 

The problem, though, is that with more than 8 or 9 channels, the save loop lags behind the read loop by a very large margin (20 seconds over 2 minutes, for example). This is surely going to cause the queue to overflow at some point...

 

What's a more efficient way for me to implement a 16 x 15,000 buffer? I understand that I am updating the graph every 1 ms, and that's something I will get around to fixing, but my testing shows that the Replace Array Subset in the accumulation stage of the consumer loop is the culprit: removing anything else does not dramatically speed up the loop the way removing it does.
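One thing worth ruling out: Replace Array Subset itself operates in place and should be cheap; what hurts is if the full 16 x 15,000 array (about 1.8 MB of doubles) gets copied every millisecond, for example by a branched wire. A rough, purely illustrative Python sketch of the difference:

```python
import numpy as np, time

CHANNELS, RATE, CHUNK = 16, 15_000, 15
buf = np.zeros((CHANNELS, RATE))
chunk = np.random.rand(CHANNELS, CHUNK)

# In-place replacement: touches only 16 x 15 doubles per iteration.
t0 = time.perf_counter()
for ms in range(1000):
    buf[:, ms * CHUNK:(ms + 1) * CHUNK] = chunk
in_place = time.perf_counter() - t0

# Copy-then-replace: models an extra full-buffer copy (~1.8 MB) on
# every iteration, i.e. roughly 1.8 GB/s of pure memory traffic.
t0 = time.perf_counter()
for ms in range(1000):
    tmp = buf.copy()
    tmp[:, ms * CHUNK:(ms + 1) * CHUNK] = chunk
    buf = tmp
with_copy = time.perf_counter() - t0

print(f"in-place: {in_place:.4f} s, with full copy: {with_copy:.4f} s")
```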

 

In the ZIP below, "June 29th DAQ.vi" is the main VI. It's a big (oops) VI, so I have also attached a picture of the area I am talking about to save you some time.

 

Thank y'all.

 

DAQ PIC.png

Message 1 of 29

I may have found a solution to my own problem, but I need your input...

 

My solution was to not use a buffer at all.

 

Since I am writing in binary form, I figured I might as well try saving once every millisecond, which eliminates the need for a 15,000-sample buffer.
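In case it helps anyone reading later, the pattern in plain Python looks something like this (the file name and the read stub are made up): open the binary file once, write each 1 ms chunk as it arrives, close at the end.

```python
import numpy as np

CHANNELS, CHUNK = 16, 15

def read_daq_chunk():
    # Stand-in for the hardware read; returns one 1 ms chunk (16 x 15 doubles).
    return np.random.rand(CHANNELS, CHUNK)

# Open once before the loop, close once after it. Each write is small and
# sequential, so the OS file cache absorbs the 1 kHz write rate rather
# than hitting the physical disk every millisecond.
with open("experiment.bin", "wb") as f:
    for _ in range(60_000):            # e.g. one minute of 1 ms chunks
        f.write(read_daq_chunk().tobytes())
```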

 

The performance is much better now. Each iteration of the save loop finishes in under 1 ms, and my consumer loop is almost always caught up to the producer (read) loop.

 

Any downside to saving to disk every millisecond? Our MATLAB analysis is set up so that if we are missing even just 1 ms worth of samples in the file, the chart will not compute.

 

Thanks.

 

Best,

Ray

Message 2 of 29

@RaymondLo wrote:

The performance is much better now. Each iteration of the save loop finishes in under 1 ms, and my consumer loop is almost always caught up to the producer (read) loop.


As long as you open the file before the loop starts and close it only after the loop, you should have no issues.  Windows handles buffering the hard drive writes for you (most likely in the hard drive's cache).  And it sounds like your logging loop is keeping up with the producer loop, so I see no issues there.

 

My only possible concern is how you are stopping the consumer loop.  If you just destroy the queue, you could be throwing data away (whatever was left in the queue when it was destroyed).  So you should be sending a command of some sort to the logging loop telling it when it is safe to stop and close the file.
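A minimal sketch of that shutdown pattern, in Python rather than G (the queue and the sentinel value stand in for the LabVIEW queue and the stop command):

```python
import queue, threading
import numpy as np

q = queue.Queue()
STOP = None   # sentinel: tells the consumer it is safe to finish and close the file

def producer():
    for _ in range(1000):
        q.put(np.random.rand(16, 15))   # stand-in for the DAQ read
    q.put(STOP)                         # command the consumer to shut down

def consumer():
    with open("experiment.bin", "wb") as f:
        while True:
            item = q.get()
            if item is STOP:            # queue drains naturally; nothing is lost
                break
            f.write(item.tobytes())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```

The point is that only the consumer decides when to stop, after it has seen every queued item; destroying the queue from the producer side discards whatever is still in flight.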


Message 3 of 29

Why not read more than 15 samples at a time? That would help, too. Does your consumer loop really need to run every 1 ms (assuming your sample rate is 15 kHz)?
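For what it's worth, if anyone drives this kind of acquisition from Python with NI's nidaqmx package, reading bigger chunks looks roughly like this (the device name and chunk size are assumptions; the DAQmx driver buffers the samples between reads, so nothing is lost by reading less often):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# "Dev1" and the channel range are assumptions; substitute your own device.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:15")
    task.timing.cfg_samp_clk_timing(
        rate=15_000, sample_mode=AcquisitionType.CONTINUOUS)
    for _ in range(10):
        # 1500 samples/channel = 100 ms per read instead of 1 ms:
        # 100x fewer loop iterations for the same data throughput.
        data = task.read(number_of_samples_per_channel=1500)
```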

 

You should get rid of the outer while loop that never stops, because the only way you can stop a while loop like that is with the Abort button, and as a wise user on this forum wrote years ago, "using the Abort button to stop your VI is like using a tree to stop your car. It works, but there ma..."

 

Message 4 of 29
(4,826 Views)

I want to be able to use the read loop's iteration counter essentially as a clock. I will be implementing digital outputs next, and I want to be able to control the DOs down to the millisecond, so that's why I restricted it to 1 ms.

 

Yeah, I was planning on fixing the outer while loop. It's important that the user doesn't have to restart the program to start a new experiment... What's the best way to go about this? A state machine, with a "wait for user input" state and an "experimenting" state?

Message 5 of 29

@crossruz

Message 6 of 29

@RaymondLo wrote:

I want to be able to use the read loop's iteration counter essentially as a clock. I will be implementing digital outputs next, and I want to be able to control the DOs down to the millisecond, so that's why I restricted it to 1 ms.


Do note that Windows is a non-deterministic OS.  What this means is that you cannot reliably count on a 1 ms loop rate for your digital outputs.  If you can handle a few tens of ms of jitter, then you will be fine.  If not, you really should be using a real-time OS.
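You can see this for yourself with a quick software-timed loop test; a throwaway Python sketch (the exact numbers will vary from machine to machine):

```python
import time

# Measure how far a nominal 1 ms software-timed loop strays on a desktop OS.
worst = 0.0
prev = time.perf_counter()
for _ in range(5000):
    time.sleep(0.001)                     # ask for 1 ms
    now = time.perf_counter()
    worst = max(worst, (now - prev) - 0.001)
    prev = now
print(f"worst-case overshoot: {worst * 1e3:.2f} ms")
# On Windows, occasional multi-millisecond (sometimes tens of ms) overshoots
# are normal; only a real-time OS or hardware timing eliminates them.
```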


Message 7 of 29

@RaymondLo wrote:

@crossruz


That is exactly what I do.  And then let the consumer destroy the queue as part of that shutdown sequence.


Message 8 of 29

Got it. This is the third rendition of the software in this lab. The guys before always just used tick counts or other LabVIEW built-in software clocks. If that was good enough, I hope what I did is at least as good time-wise. Without using a real-time OS, is using the loop count to control the DOs a sound strategy?

More importantly, for reading, can I count on 15,000 data points always being exactly 1 second of data? I believe this timing is driven by the DAQ card, which promises a 1 MHz maximum sample rate, so I was hoping the DAQ card would be more accurate than the Windows timer. Yesterday, I compared our "clock" against LabVIEW's Tick Count, and it seemed like our clock was faster than the tick count by 0.01% (6 ms over the course of a minute), which doesn't seem too problematic, because Windows should only jitter and not jump ahead (that's how I see it, anyway).
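The arithmetic checks out: 6 ms over 60 s is 100 ppm, which is within the range you would expect between two independent crystal oscillators. A one-liner to confirm:

```python
# Sanity check on the observed drift between the DAQ sample clock and
# LabVIEW's millisecond Tick Count: 6 ms over one minute.
drift_ms, interval_s = 6, 60
ppm = drift_ms / (interval_s * 1000) * 1e6
print(f"{ppm:.0f} ppm")   # 100 ppm, i.e. 0.01%
```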

Message 9 of 29

@RaymondLo wrote:

I want to be able to use the read loop's iteration counter essentially as a clock. I will be implementing digital outputs next, and I want to be able to control the DOs down to the millisecond, so that's why I restricted it to 1 ms.


Although Bob Schor's suggestion on this might seem clever, it's a poor way to handle your timing. What DAQ device are you using? There's probably a better way to do what you want. You should have one high-speed loop to handle the DOs, and let the rest of your code handle larger chunks of data at a slower rate. Instead, you're trying to make your entire program loop excessively fast to keep up with the data being generated from the producer loop, because you've given the producer loop the dual roles of reading inputs and handling high-speed outputs. This actually increases the chances that your DOs will have timing glitches, because now you have so much other code trying to run at high frequency as well.


@RaymondLo wrote:

Yeah, I was planning on fixing the outer while loop. It's important that the user doesn't have to start the program again to start a new experiment... What's the best way to go about this? A state machine? "wait for user input" state and an "experimenting" state?


Something like that, yes. You should have an event structure that responds to user input to start (and maybe also pause or stop) an experiment. That event structure should send commands to another loop, over a notifier or queue, that cause the experiment to start (or stop, etc.). You might be tempted to put the event structure and everything else inside a single state machine, where waiting on the event structure is a single state, but I don't recommend that approach; it will undoubtedly cause problems later.
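A skeleton of that two-loop arrangement in Python (threads and a standard-library queue standing in for the LabVIEW loops and the notifier/queue; the state names come from the post above):

```python
import queue, threading

commands = queue.Queue()      # plays the role of the notifier/queue between loops

def experiment_loop():
    state = "wait for user input"
    while True:
        try:
            cmd = commands.get(timeout=0.1)   # poll for commands from the UI loop
        except queue.Empty:
            cmd = None
        if cmd == "start":
            state = "experimenting"
        elif cmd == "stop":
            state = "wait for user input"
        elif cmd == "quit":
            break
        if state == "experimenting":
            pass              # acquire / log one chunk of data here

worker = threading.Thread(target=experiment_loop)
worker.start()
# The UI loop (the event structure in LabVIEW) just enqueues commands:
commands.put("start"); commands.put("stop"); commands.put("quit")
worker.join()
```

The UI loop stays responsive because it only ever enqueues commands; all the acquisition work lives in the other loop.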

Message 10 of 29