Multifunction DAQ


Limitations in speed and AOs for current and future multifunction I/O


Hello everybody

 

1. Problem:

Currently we are facing a problem with our old PCI-6031E. Neither our LabVIEW nor our comedi C code reaches 100 kS/s for the analog I/O. These are just simple feedback programs: we read an input voltage and output another accordingly. We are using a new AMD 5800X on an AM4 platform. Is it generally impossible to reach the maximum kS/s rate (single-channel input / single-channel output), or do I need to code in a special way, for example using parallelization? We want to buy new cards such as the PCIe-6363. With the native drivers, would they perform close to their maximum in a similar use case?
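To give an idea of the structure, the comedi version is essentially a software-timed loop like the sketch below (a simplified sketch, not our exact code; the device path, subdevice/channel numbers, and the response calculation are placeholders):

```c
/* Simplified sketch of the software-timed feedback loop (comedilib).
 * Device path, subdevice/channel indices, and the response calculation
 * are placeholders, not the actual production code. */
#include <stdio.h>
#include <comedilib.h>

#define AI_SUBDEV 0          /* analog input subdevice (placeholder) */
#define AO_SUBDEV 1          /* analog output subdevice (placeholder) */
#define CHAN      0
#define RANGE     0
#define AREF      AREF_GROUND

int main(void)
{
    comedi_t *dev = comedi_open("/dev/comedi0");
    if (!dev) {
        comedi_perror("comedi_open");
        return 1;
    }

    lsampl_t in, out;
    for (;;) {
        /* one kernel call per sample to read the AI ... */
        if (comedi_data_read(dev, AI_SUBDEV, CHAN, RANGE, AREF, &in) < 0)
            break;
        /* ... compute the response (placeholder: pass-through) ... */
        out = in;
        /* ... and one more kernel call to update the AO */
        if (comedi_data_write(dev, AO_SUBDEV, CHAN, RANGE, AREF, out) < 0)
            break;
    }
    comedi_close(dev);
    return 0;
}
```

So every sample costs two separate kernel calls plus whatever scheduling happens in between.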

2. Problem:

The number and speed of analog outputs are always very limited. What would a modern solution look like? Can the digital I/O be used as an analog output with a converter? Would such a device be available somewhere?

Message 1 of 8

So you are trying to do software-timed single-point I/O at 100 kHz, i.e. calculate a response for the AO based on each AI reading, 100,000 times per second? That's completely unrealistic (you are not even saying what OS you are using).

 

I would recommend looking into an FPGA.

 

Quote from here:

"With current technology, you can run PID loops up to 20 kHz using NI Compact FieldPoint controllers, 40 kHz with NI PXI technology, and up to 1 MHz with NI CompactRIO hardware when using PID functions based on field-programmable gate arrays (FPGAs). You can find an FPGA-ready version of the NI LabVIEW PID Control Toolkit in the LabVIEW Real-Time Module."

Message 2 of 8

There is no timing in the program; I just read the analog input as fast as possible. Currently we reach about 60 kS/s. I am not sure how FPGAs would help with my problem, but I will check it.

Message 3 of 8

I mean, the modern cards have up to 2 MS/s. What kind of use cases can take advantage of that? Our use case is already quite simple.

 

Edit: We are using Debian 11.2 with Xfce and a PREEMPT_RT real-time kernel.

We isolated the CPU cores on which we execute the feedback program.
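Concretely, the process pins itself to one of the isolated cores and requests a real-time priority, roughly like this sketch (core index and priority are placeholders for our actual setup):

```c
/* Sketch: pin the feedback thread to an isolated core and give it a
 * real-time priority under PREEMPT_RT.  Core index and priority are
 * placeholders, not the exact configuration we use. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int setup_realtime(int isolated_core, int rt_priority)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(isolated_core, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return -1;
    }

    struct sched_param sp = { .sched_priority = rt_priority };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");  /* usually needs root or CAP_SYS_NICE */
        return -1;
    }
    return 0;
}
```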

Message 4 of 8

If there is no timing, the program will perform unpredictably, depending on the computer hardware (it might even slow down as the CPU gets hotter!). If this runs on Windows (you did not specify), there are no guarantees, even for a target of 1 kHz.

 

On an FPGA, you can easily go to 1 MHz.

Message 5 of 8

@HMay wrote:

I mean, the modern cards have up to 2 MS/s. What kind of use cases can take advantage of that? Our use case is already quite simple.


These specifications are for hardware-timed I/O, not single-point I/O. This means you cannot react to each individual sample.
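To make the distinction concrete, here is a rough sketch of hardware-timed, buffered input with the NI-DAQmx C API (device name, channel, rate, and block size are placeholders; error handling omitted). The card samples on its own clock at 2 MS/s, and the program only ever sees whole blocks:

```c
/* Sketch of hardware-timed, buffered analog input with NI-DAQmx.
 * Device/channel names, rate, and block size are placeholders;
 * error handling is omitted for brevity. */
#include <NIDAQmx.h>

#define BLOCK 2000   /* samples per read: 1 ms of data at 2 MS/s */

int main(void)
{
    TaskHandle task = 0;
    float64 data[BLOCK];
    int32 read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* hardware sample clock: 2 MS/s, continuous, driver-managed buffer */
    DAQmxCfgSampClkTiming(task, "", 2000000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, BLOCK);
    DAQmxStartTask(task);

    for (;;) {
        /* blocks until BLOCK samples are available: you react to a
           1 ms block of data, never to an individual sample */
        DAQmxReadAnalogF64(task, BLOCK, 10.0, DAQmx_Val_GroupByChannel,
                           data, BLOCK, &read, NULL);
        /* ... process the block ... */
    }

    DAQmxClearTask(task);
    return 0;
}
```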

Message 6 of 8

Ah okay. Now I can follow you. Thank you very much

Message 7 of 8
Solution
Accepted by HMay

Just one little tidbit to add: achieving 2 MHz will *also* depend on buffering, which typically gets configured implicitly when you configure your hardware clock. With buffering, data can be sent or retrieved in multi-sample blocks at a block rate that's much less than 2 MHz. You just need (blocks/sec) * (samples/block) to equal 2 MHz. But as you can see, the block itself implies latency, so this mode brings *other* problems to a control loop.
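A quick illustrative example (my numbers, just to show the trade-off): 1000 blocks/sec * 2000 samples/block gives you the full 2 MS/s, but each block then spans 1 ms of signal, so by the time a block is delivered, the first sample in it is already a millisecond old. That's your minimum reaction latency to that sample, before any driver or bus overhead.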

 

If you really need a 100 kHz control loop rate, you'd better plan on an FPGA rather than a CPU and its reliance on driver layers.

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 8 of 8