
Moving the mouse speeds up DAQmx

Hello there, 

I'm doing some property testing with a very large project and found that when collecting data at 1500 Hz, reading 30 samples per read is fine, but reading fewer than 30 samples per read creates buffer buildup. However, I also found, by pure chance, that moving my cursor increases the speed of my VI to the point where the buffer backlog goes back to zero; it speeds up the read/iteration rate. Any idea what could create that behavior? I would gladly share my project, but it includes more than 100 subVIs/controls after 10 years of work on it.

 

Thanks!

Message 1 of 12

Somehow, I can't edit my original post... I just re-read it and found it unclear.

I'll try to clarify my situation.

I'm using a producer/consumer VI. The producer loop only collects data and pushes it into a queue.

I collect data at 1500 Hz, and DAQmx Read is set to retrieve data when 30 samples are available. This means the buffer is read at 50 Hz (every 20 ms), which works fine. However, when I reduce the number of samples to read (e.g., 20), my loop isn't fast enough, and the DAQmx buffer slowly fills up.
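
A quick back-of-the-envelope check of that loop budget, sketched in plain Python with the numbers from this post:

```python
RATE = 1500  # Hz per channel, from the post

for samples_per_read in (30, 20):
    loop_hz = RATE / samples_per_read      # required read-loop rate
    budget_ms = 1000.0 / loop_hz           # time allowed per iteration
    print(f"{samples_per_read} samples/read -> {loop_hz:.0f} Hz loop, "
          f"{budget_ms:.1f} ms per iteration")

# 30 samples/read -> 50 Hz loop, 20.0 ms per iteration
# 20 samples/read -> 75 Hz loop, 13.3 ms per iteration
```

Whenever an iteration takes longer than that budget, the surplus samples stay in the DAQmx buffer and the backlog grows.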

I understand that 75 Hz is relatively fast, especially for a large VI running in a multi-threaded environment. However, I noticed that moving my computer mouse speeds up the loop. When I move the cursor, the reading loop becomes fast enough to clear the buffer.

I'm trying to understand why moving the mouse affects DAQmx Read. One hypothesis is that mouse movement somehow reallocates more resources to that specific loop.

 

I hope this is clearer! 
Thanks 

Message 2 of 12

There is so little information in your posts.  I assume you are running LabVIEW, since (a) you post in the LabVIEW Forums, and (b) you mentioned DAQmx.  Here is what you haven't said, information that might help us to help you:

  • Are you running on Windows (and if so, Windows 7, 8, 10, 11? 32-bit or 64-bit?)
  • What version of LabVIEW (Year and # of bits) are you using?  [Note that many long-time LabVIEW developers on this Forum might not be using the latest versions, and hence might not be able to examine your code that you failed to attach].
  • What specific hardware are you using?
  • You seem to be acquiring data.  What is the nature of the data?  How many (simultaneous) channels are you acquiring?  What sampling rate do you need?  [Note -- several kHz should be trivial to acquire continuously until your disk fills up].
  • We cannot comment on code we haven't seen.  The best option is to attach an entire Project, saved for LabVIEW 2019 or 2021 (which almost all the "experts" participating on the Forum can open and view) and saved as a "Compressed (zipped) folder" (right-click the Folder containing the Project and choose "Send to").
  • As an alternative, particularly if you write neat and compact LabVIEW code, you can attach a VI Snippet or a "snipped" image of critical Block Diagrams.  

Bob Schor

Message 3 of 12

Hi,

I'm indeed running LabVIEW 2018, 32-bit.

 

As specified in my last post, I'm collecting voltage data using a producer/consumer architecture. Data is read in one loop, then sent to another loop for processing, followed by loops for logging and real-time display—both on the front panel and an additional monitor.

 

I'm acquiring data at 1500 Hz from 7 RSE channels on a USB-6001: 5 channels from a force plate and 2 from EMG. I use continuous acquisition with a voltage range of -10V to +10V. Data is read in chunks of 30 samples per channel through DAQmx Read, which runs smoothly. However, when I try to read fewer than 30 samples at a time, the DAQmx buffer starts filling up, indicating that the loop isn't keeping up.

Data is being streamed to TDMS files, and acquisition automatically stops after 60 seconds. No DAQmx properties are being used.
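
For reference, this setup maps roughly onto the following Python nidaqmx sketch; the device name, buffer size, and TDMS path are assumptions, and the DAQmx-side TDMS logging shown is optional (it requires device support; the actual project is LabVIEW G code):

```python
import nidaqmx
from nidaqmx.constants import (AcquisitionType, TerminalConfiguration,
                               LoggingMode, LoggingOperation)

with nidaqmx.Task() as task:
    # 7 RSE voltage channels, -10 V to +10 V ("Dev1" is a placeholder name)
    task.ai_channels.add_ai_voltage_chan(
        "Dev1/ai0:6", terminal_config=TerminalConfiguration.RSE,
        min_val=-10.0, max_val=10.0)
    # continuous acquisition at 1500 Hz per channel
    task.timing.cfg_samp_clk_timing(
        rate=1500, sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=15000)  # buffer size hint (assumed)
    # optional: let DAQmx itself stream the raw data to TDMS
    task.in_stream.configure_logging(
        "trial.tdms", LoggingMode.LOG_AND_READ,
        operation=LoggingOperation.CREATE_OR_REPLACE)
    task.start()
    for _ in range(50 * 60):  # ~60 s of 30-sample chunks at 50 reads/s
        chunk = task.read(number_of_samples_per_channel=30)
```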

 

The issue occurs on two different computers:

  • Laptop: Dell G5 (2020)
  • Desktop: Custom-built (32GB RAM, AMD Ryzen 3800X)

I've attached my project for review, but I want to note that this project has evolved over the past 10 years. Some parts of the code are more rudimentary, while others are optimized. While not perfect, it generally works well. However, I do plan to clean up and redesign the project after my PhD, ideally into an application where users can customize the UI without digging into multiple subVIs.

 

The issue arises when displaying real-time data on a second monitor. I understand that real-time visualization requires additional system resources. However, at 50 Hz (reading 30 samples per channel per loop iteration), it works fine. When I reduce the number of samples per read (thus increasing the read frequency), the DAQmx buffer starts filling up—until I start moving my mouse.

 

I don't need to update the second monitor faster than 25 Hz, as I currently update it every ~40 ms (25 Hz). But the fact that moving the mouse prevents buffer overflow concerns me. If cursor movement increases the reading speed enough to clear the buffer, it suggests that something is slowing down the acquisition when the mouse is idle. This likely points to an issue somewhere in my VIs, but it all seems to come back to the acquisition process.
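
One generic way to cap a display at ~25 Hz regardless of how fast data arrives is a time-gated redraw. A minimal sketch in plain Python (`draw` is a hypothetical callback standing in for the front-panel update):

```python
import time

MIN_PERIOD_S = 0.040   # ~25 Hz display ceiling
_last_draw = 0.0

def maybe_redraw(draw, data):
    """Redraw only if at least 40 ms have passed since the last redraw,
    so display work cannot starve the acquisition loop."""
    global _last_draw
    now = time.monotonic()
    if now - _last_draw >= MIN_PERIOD_S:
        draw(data)
        _last_draw = now
```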

 

I hope sharing my project helps clarify the issue, and I appreciate your patience given its complexity.

Message 4 of 12

Here is an example of how moving my mouse increases my project's speed. 

I put a probe on a shift register that measures the time of an iteration in ms. The loop speed increases when I move the mouse.
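
The equivalent iteration-time probe, sketched in Python (the LabVIEW version uses a shift register and a millisecond timer; `read_chunk` is a hypothetical stand-in for the DAQmx Read):

```python
import time

def probe_iteration_times(read_chunk, n_iterations=100):
    """Mimic the shift-register probe: time each loop iteration in ms."""
    times_ms = []
    prev = time.perf_counter()
    for _ in range(n_iterations):
        read_chunk()                          # stand-in for DAQmx Read
        now = time.perf_counter()
        times_ms.append((now - prev) * 1000.0)
        prev = now                            # shift-register equivalent
    return times_ms

# e.g. probe_iteration_times(lambda: time.sleep(0.020)) -> values near 20 ms
```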

Message 5 of 12

Hi Hunter,

 

  • in the "Acquire" VI you read 150 samples per DAQmx call, then you do some "massive" calculations on those samples: you can greatly improve speed by NOT using a formula node!
  • You can apply basic scaling directly in DAQmx, no need to scale those 5 AI channels "manually": all you need is to calcutate the Mx/COP data, which can be done directly using arrays instead of scalars…
  • Then you decimate the 150 sample arrays by a factor of 20: do you really need this step?
  • You can "stream to TDMS" directly in DAQmx, atleast the raw sampled data (not your other calculations)…
  • In the Logging loop you only read data from your "log data" queue when logging is enabled, otherwise the queue will grow (and demand memory)…
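
A minimal illustration of the DAQmx-scaling point, using the Python nidaqmx API (the device name, slope, and limits are placeholder assumptions; in LabVIEW the equivalent is a DAQmx custom scale wired into DAQmx Create Channel):

```python
import nidaqmx
from nidaqmx.constants import VoltageUnits
from nidaqmx.scale import Scale

# Hypothetical linear scale: volts -> newtons for one force-plate channel.
Scale.create_lin_scale("force_N", slope=250.0, y_intercept=0.0)

task = nidaqmx.Task()
task.ai_channels.add_ai_voltage_chan(
    "Dev1/ai0",                        # device name is an assumption
    min_val=-2500.0, max_val=2500.0,   # limits are in *scaled* units now
    units=VoltageUnits.FROM_CUSTOM_SCALE,
    custom_scale_name="force_N")
# DAQmx Read now returns newtons directly; no manual scaling in the loop.
task.close()
```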

 

Do you really run into problems when you don't move the mouse while your 60 s DAQ routine is running?

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 6 of 12

"I'm acquiring data at 1500 Hz from 7 RSE channels on a USB-6001: 5 channels from a force plate and 2 from EMG. I use continuous acquisition with a voltage range of -10V to +10V. Data is read in chunks of 30 samples per channel through DAQmx Read, which runs smoothly. However, when I try to read fewer than 30 samples at a time, the DAQmx buffer starts filling up, indicating that the loop isn't keeping up.

Data is being streamed to TDMS files, and acquisition automatically stops after 60 seconds. No DAQmx properties are being used."

 

That sounds like you have a Wait in the sample loop. Remove that. If you've told DAQmx to grab 20 samples, it'll wait until it has 20 samples.
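
To see why an extra Wait on top of a blocking read hurts, here is a small backlog estimate in plain Python (the 10 ms Wait is a made-up example value):

```python
RATE, CHUNK, WAIT_MS, TRIAL_S = 1500, 20, 10.0, 60

chunk_ms = 1000.0 * CHUNK / RATE   # ~13.3 ms for DAQmx to fill one chunk
iter_ms = chunk_ms + WAIT_MS       # actual loop period with the extra Wait
produced = TRIAL_S * RATE                        # samples/channel in 60 s
consumed = (TRIAL_S * 1000.0 / iter_ms) * CHUNK  # samples/channel read out
print(f"backlog after {TRIAL_S} s: ~{produced - consumed:.0f} samples/channel")
# prints roughly 38571 samples/channel left in the buffer
```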

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 7 of 12

Hello,

Thanks for taking the time to check my code.

I’m curious how you determined the 150 samples per call. I’m reading from 7 channels, meaning that 30 samples per channel should result in 210 samples per call (30 × 7 = 210). My code is designed to add the last AI channel if needed. However, only 5 of these channels are used for calculations, while I store the raw data for the others.

I also wonder how you arrived at a decimation factor of 20. I send only one sample to a notifier from every 30 collected samples per read to display results on the front panel (FP). This effectively simulates 50 Hz, which is more than sufficient for real-time visualization. However, all collected data is sent to the logging loop. I just want to ensure that I didn’t accidentally send the wrong project.
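
The forwarding pattern described here, sketched in Python with a queue standing in for the notifier:

```python
from queue import Queue

display_q: Queue = Queue()

def forward_chunk(chunk_per_channel):
    """Send one representative sample per 30-sample chunk to the display
    consumer: a 1500 Hz acquisition becomes a ~50 Hz display stream
    (1500 / 30), while the full chunk still goes to the logging loop."""
    display_q.put(chunk_per_channel[-1])   # latest sample of the chunk
```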

 

I didn't use Formula Nodes in the past, but I struggled to verify and debug calculations when they were scattered across different parts of the code. Using Formula Nodes made the computations clearer and more manageable. Is using a Formula Node significantly slower? The loop handling the calculations runs smoothly, but I understand that it might still require additional resources. Do you have any suggestions for improving efficiency and readability?

 

Regarding your comment:

"In the Logging loop, you only read data from your 'log data' queue when logging is enabled. Otherwise, the queue will grow (and demand memory)..."

Logging is enabled at the beginning of a trial, so streaming happens in real time. I don’t believe the queue grows indefinitely... but I could be wrong.
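
For comparison, the pattern GerdW is warning about versus the safe one, sketched in Python (`write_to_tdms` is a hypothetical writer):

```python
import queue

def write_to_tdms(item) -> None:
    """Hypothetical stand-in for the actual TDMS write."""

def logging_loop(log_q: queue.Queue, is_logging_enabled) -> None:
    """Always dequeue; write only while logging is enabled. Gating the
    dequeue itself on the enable flag would let the queue grow unbounded."""
    while True:
        item = log_q.get()     # always consume
        if item is None:       # sentinel: stop the loop
            return
        if is_logging_enabled():
            write_to_tdms(item)
        # else: item is discarded, keeping the queue drained
```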

 

I experience issues when I don’t move the mouse. Some loops slow down, which isn’t a problem as long as the acquisition loop and the data display on the second monitor remain stable. However, when I reduce the number of samples read per iteration (e.g., 20 samples per channel instead of 30), everything slows down significantly—unless I move the mouse.

  • Trial durations increase from 60s to ~85s.
  • Real-time data display on the second monitor lags significantly.
  • The DAQmx buffer fills up rapidly—but again, only if I don’t move the mouse.

This behavior is puzzling, and I’d like to understand what’s causing it.

Thanks again!

Message 8 of 12

@Yamaeda wrote:

That sounds like you have a Wait in the sample loop. Remove that. If you've told DAQmx to grab 20 samples, it'll wait until it has 20 samples.


After double-checking, there is no Wait in the DAQmx loops.

Message 9 of 12

Hi Hunter,

 


@LonelyHunter wrote:

I’m curious how you determined the 150 samples per call.


I was looking into the "Acquire" VI in the Acquisition subfolder: you explicitly read 150 samples from DAQmx Read…

 


@LonelyHunter wrote:

I also wonder how you arrived at a decimation factor of 20.


The "Decompose Notifier" VI is a subVI of Acquire.vi: here you explicitely decimate by a factor of 20.

 


@LonelyHunter wrote:

I didn’t use Formula Nodes in the past, but I struggled to verify and debug calculations when they were scattered across different parts of the code. Using Formula Nodes made the computations clearer and more manageable. Is using a Formula Node significantly slower?


Yes, they are usually slower than G code, especially when you calculate on a "per sample" basis instead of on full arrays. Example replacement from the "COP calculation" subVI:
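
A rough Python/NumPy analogue of that array-based replacement (these are the standard force-plate COP formulas, ignoring any plate-origin offset; k_force and k_moment are placeholder scale factors):

```python
import numpy as np

def cop_from_chunk(fz, mx, my, k_force=1.0, k_moment=1.0):
    """Compute centre-of-pressure for a whole chunk at once instead of
    per sample. fz/mx/my are 1-D arrays (one element per sample)."""
    fz_scaled = k_force * np.asarray(fz, dtype=float)
    cop_x = -(k_moment * np.asarray(my, dtype=float)) / fz_scaled
    cop_y = (k_moment * np.asarray(mx, dtype=float)) / fz_scaled
    return cop_x, cop_y
```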

(Set the needed scaling factors instead of my placeholders.)

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 10 of 12