05-10-2017 09:42 AM
Hi, All
I built a multi-channel DAQ system based on DAQmx and set it to continuous samples mode (rate = 1000, samples per channel = 1000). After two days the system stopped and showed an error telling me to increase the buffer size, read more frequently, and so on. Could anyone give some suggestions on how to fix it? Thank you so much.
Thanks.
lh
05-10-2017 09:48 AM
05-10-2017 09:58 AM
Hi, GerdW
Thanks. I did not put a wait function in the while loop, and there is only DAQmx Read in the while loop. How can I read more often?
Thanks.
Best,
lh666
05-10-2017 10:00 AM
05-10-2017 02:25 PM
Hi, GerdW
Thanks a lot. I have attached the VI.
Best,
lh666
05-10-2017 02:39 PM
Since you are using continuous acquisition, the buffer size is set by the "samples per channel" input.
Do something like the snippet below to increase the buffer size.
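(The snippet is posted as an image, so here is a rough text-based sketch of the same idea using the nidaqmx Python API. The device name "Dev1", the channel range, and the 10x factor are placeholder assumptions, not values taken from the attached VI; the key point is making the configured buffer comfortably larger than what you read per iteration.)

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 1000            # samples/s per channel, as in the original post
SAMPS_PER_READ = 1000  # samples pulled per loop iteration

with nidaqmx.Task() as task:
    # Placeholder for the multi-channel setup in the OP's VI
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")

    # In continuous mode, DAQmx uses samps_per_chan to size the input buffer,
    # so pass something much larger than one read's worth of samples.
    task.timing.cfg_samp_clk_timing(
        RATE,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=10 * SAMPS_PER_READ,
    )

    task.start()
    while True:  # replace with your real stop condition
        data = task.read(number_of_samples_per_channel=SAMPS_PER_READ)
        # Process or log the data quickly; keep slow work (UI updates,
        # file dialogs) out of this loop so the read keeps up with the hardware.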
05-10-2017 08:14 PM
Thanks. I will try it.
Best,
lh666
05-10-2017 10:21 PM
@mcduff wrote:
Since you are using continuous acquisition, the buffer size is set by the "samples per channel" input.
Do something like the snippet below to increase the buffer size.
Note: posting from my phone, I cannot see the OP's VI. With buffer overflows occurring over long periods like described, look for:
Unbounded array growth. Reallocating a larger buffer takes time.
Unneeded implicit DAQmx state transitions. They can cause some nasty memory leaks while cleaning up garbage state transitions.
05-11-2017 10:27 AM
The snippet is just a modification of the original poster's VI.
I have found that for continuous acquisition the buffer size should be at least 4 times the number of samples requested. On my system, at least, multiples of 4 times the samples requested, i.e., 4x, 8x, 12x, ..., seem to work best (at 1 kS/s, a 4000-sample buffer holds four seconds' worth of data). No idea why it follows this pattern.
Cheers,
mcduff
05-11-2017 11:11 AM
Ah. That is about what I expected and feared!
@mcduff, I strongly suspect that something has been ripped out of the acquisition loop (like the display); you just hid the real problem.
@OP, what does the VI really look like? If you have a waveform GRAPH on one of those tab pages, replace it immediately with a waveform CHART, and consider also moving it off the tab!
Graphs grow and grow and grow, so after 2 days you have 172,800,000 points per channel trying to be displayed. Your tab is 1647 pixels wide; simple math shows you would be asking it to plot more than 100k points per pixel (insanity!). So the control itself has to decimate that huge number of samples down to a meaningful display (using interesting algorithms behind the scenes). Meanwhile, LabVIEW needs to find memory for that MASSIVE amount of data (172,800,000 samples x 4 channels x 8 bytes per DBL is a whopping ~5.5 GB). I'm not surprised that the display updates and memory-allocation delays slow the loop to less than one iteration per second (at which point the DAQmx Read can't keep up with the samples coming in and the buffer overflows) long before LabVIEW itself starves for memory, since it can't really move massive chunks of data like that and effectively update the UI while also dealing with some nasty side effects of charts and graphs on tabs.
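(To illustrate the chart-versus-graph point outside of LabVIEW: the fix is to keep only a bounded, decimated history for display instead of accumulating every sample. A minimal Python sketch of that idea, with made-up sizes; the full-rate data would go to disk, not to the screen.)

from collections import deque

DISPLAY_POINTS = 2000   # bounded history, roughly what a waveform chart keeps
DECIMATION = 10         # keep every 10th point for the screen

history = deque(maxlen=DISPLAY_POINTS)  # old points fall off automatically

def update_display(new_block):
    # Decimate before displaying; stream the raw data to a file instead.
    history.extend(new_block[::DECIMATION])
    # Memory used for display stays bounded at DISPLAY_POINTS values,
    # no matter how many days the acquisition runs.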