04-29-2025 01:01 PM
Hello,
I have created a VI that will allow me to collect voltages from two thermocouples. I need to sample at 1 kHz, so I am using an NI 9205. Using the DAQ Assistant, I created a function like the one in the attached image. In the settings for the box called "Thrmcpl" I set the VI to Continuous Samples with a rate of 1 kHz and set Samples to Read to 10k. While this appears to run without a problem, when I view the data later I see very large discontinuities every 10 seconds (coinciding with the Samples to Read). If I reduce the 10k, however, I get an error:
"Possible reason(s): The application is not able to keep up with the hardware acquisition. Increasing the buffer size, reading the data more frequently, or specifying a fixed number of samples to read instead of reading all available samples might correct the problem."
I've tried every Google search I can think of to help me fix this problem, but I have not had any luck. Any help would be greatly appreciated.
Attached for reference are the VI, a screenshot of the settings for the "Thrmcpl" DAQ Assistant, and a plot of the data showing the discontinuities.
Thanks.
04-29-2025 01:13 PM
You will need to ditch the DAQ Assistant and use the DAQmx VIs directly if you want a gapless acquisition. In addition, use the built-in logging feature of DAQmx to save to a TDMS file, then post-process if you need a different format. The DAQ Assistant restarts the task on each loop iteration, which is why you see a discontinuity.
Look in the Example Finder for the Voltage - Continuous Input example for a starting template.
Lastly, for temperature measurements, does your thermocouple have a response time less than 1 ms?
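LabVIEW code is graphical, so here is the read-loop pattern the example implements, sketched in Python with a stubbed driver read (no hardware or NI library involved; `read_chunk` is a hypothetical stand-in for DAQmx Read). The point is the structure: the task is started once, a fixed number of samples is read each iteration, and chunks are appended in order, so there are no gaps.

```python
def read_chunk(state, n):
    """Stub for a driver read: returns the next n samples of a
    simulated continuous acquisition. Real DAQmx code would pull
    these from the driver's circular buffer instead."""
    start = state["next"]
    state["next"] += n
    return list(range(start, start + n))

def acquire(total, chunk):
    """Start once, then read a fixed chunk size every loop and
    append in order -- no task restart, hence no discontinuity."""
    state = {"next": 0}  # stands in for the started task
    data = []
    while len(data) < total:
        data.extend(read_chunk(state, chunk))  # append, never prepend
    return data

# 10 reads of 1000 samples give one continuous 10 000-sample record.
samples = acquire(10_000, 1_000)
```

The fixed chunk size is what the error message means by "specifying a fixed number of samples to read": each read is cheap and predictable, so the loop keeps up with the hardware.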
04-30-2025 06:26 AM
I got suspicious when I saw the graph, and the VI proved me right. You add the new data _in front_ of the old data.
04-30-2025 08:14 AM
@Yamaeda wrote:
I got suspicious when i saw the graph, and the VI proved me right. You add the new data _in front_ of the old one.
Oh yes, this is an extremely inefficient way:
Difference:
04-30-2025 08:57 AM
@Andrey_Dmitriev wrote:
@Yamaeda wrote:
I got suspicious when i saw the graph, and the VI proved me right. You add the new data _in front_ of the old one.
Oh, yes, this is a very, extremely unefficient way:
Difference:
Yes, it's inefficient, but more important is that it's wrong. 🙂 If he wants to do it like that he'd have to Reverse each chunk before the Insert and then Reverse the whole array at the end before plotting to get the right result. 😄
When you insert at the start, I think LabVIEW has to copy and reallocate the full array each time; if you append to the end, it reserves extra space until a new allocation is needed, which is what causes the spikes you can see. (I think you can calculate how much is reserved: a starting array gets some initial capacity (100 elements?), and then it basically doubles each time more space is needed.) If the size is known, as with an auto-indexed array, it should allocate enough from the beginning.
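The two problems (wrong order and quadratic copying) can be seen in a few lines of Python, using lists as a stand-in for LabVIEW arrays. Front-insertion copies the whole array every iteration and leaves the chunks in reverse order, while the samples inside each chunk stay forward, so the result is not even a simple reverse of the correct data. The double-Reverse fix mentioned above is sketched at the end.

```python
chunks = [[1, 2], [3, 4], [5, 6]]  # successive reads from the DAQ

# What the original VI does: insert each new chunk at the front.
front = []
for c in chunks:
    front = c + front              # copies the entire array each time: O(n^2)

# The correct (and cheap) way: append each chunk at the end.
back = []
for c in chunks:
    back.extend(c)                 # amortized O(1) per element

# The "Reverse twice" repair: reverse each chunk before front-insert,
# then reverse the whole array once at the end.
fixed = []
for c in chunks:
    fixed = list(reversed(c)) + fixed
fixed.reverse()                    # now matches the appended result
```

`front` ends up as `[5, 6, 3, 4, 1, 2]`, chunk-reversed but sample-forward, which is exactly the kind of periodic discontinuity visible in the plot.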
04-30-2025 09:30 AM
@Yamaeda wrote:
which is the spikes you can see (i think you can calculate how much is reserved for a starting array (100 elements?) and then it basically doubles each time you need more space). If it's a know size, as an autoindexed array it should allocate enough from the beginning.
It is always double the current size that is allocated in advance. These spikes happen at approximately 2, 4, 8, 16, 32 MB, and so on. Every time the allocated space exceeds the reserved limit, LabVIEW moves all the data to a new (lower) memory address and reserves exactly as much extra space as is currently allocated. Everything is done in upper high memory: starting from a 1 MB array, LabVIEW allocates at the highest possible address, like VirtualAlloc() called with the MEM_TOP_DOWN flag. This reduces memory fragmentation, but can be slower than regular allocations.
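The arithmetic behind those spikes can be checked with a short sketch (the 1 MB starting reservation is an assumption based on the post above). Doubling on overflow means the reallocation points form a geometric series, so the total amount of data ever moved stays below twice the final size, which is why the scheme is amortized linear despite the visible spikes.

```python
def realloc_points(final_mb, start_mb=1):
    """Capacities (in MB) at which a double-on-overflow allocator
    must reallocate and copy, assuming a 1 MB initial reservation."""
    cap, points = start_mb, []
    while cap < final_mb:
        points.append(cap)
        cap *= 2
    return points

# Growing an array to 40 MB reallocates at 1, 2, 4, 8, 16 and 32 MB...
pts = realloc_points(40)
# ...and the total data copied, sum(pts) = 63 MB, is less than
# 2 * 40 MB, so the cost is amortized O(n) overall.
```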