Real-Time Measurement and Control


RT Target Desktop PC Slow Measurements

Hi there,
 
I'm working with LabVIEW Real-Time 8.2.1.
I installed a Desktop PC as a Real-Time Target. The RT Target Desktop PC has a PCI-6023E DAQ card.
Everything works fine with the Real-Time Module: I have communication between host and target, and I can load programs on the target.
I'm trying to read and write a digital line to and from the DAQ card.
The annoying thing is that I can't achieve any sample frequencies higher than 1666 Hz.
 
Is this a limitation of an RT Desktop PC, or am I doing something wrong?
 
A picture of the block diagram is attached. In this VI I only read a digital line (the write will be implemented once the read is fast enough).
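For reference, here is a rough text sketch of what the loop does, written against the NI-DAQmx C API purely for illustration; the device name and line path are assumptions, not taken from the attached VI:

```c
#include <NIDAQmx.h>

/* Minimal sketch of a software-timed, single-point digital line read,
   roughly what the VI does each loop iteration. "Dev1/port0/line0" is
   an assumed name, not taken from the actual configuration. */
int main(void)
{
    TaskHandle task = 0;
    uInt8      level = 0;
    int32      sampsRead = 0, bytesPerSamp = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateDIChan(task, "Dev1/port0/line0", "", DAQmx_Val_ChanPerLine);
    DAQmxStartTask(task);

    for (int i = 0; i < 10000; i++) {
        /* One sample from one line per iteration (software-timed). */
        DAQmxReadDigitalLines(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                              &level, 1, &sampsRead, &bytesPerSamp, NULL);
        /* ...loop timing and data handling would go here... */
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}
```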
 
Best Regards,
Jens Dassen
 
 
Message 1 of 7

Dear Jens,

How many channels are you trying to read and write at the same time? Reading and writing multiple samples will decrease your maximum sampling rate, depending on how many samples you use at one time.

 

Michael Boyd

Message 2 of 7

Dear Michael,

Currently I'm trying to read one channel, one sample at a time, in order to test the highest achievable sampling rate. If this is high enough, I will implement the write function and add two more channels, so in total it will be three channels, read and write.

I'm aware that adding more samples and channels will decrease the sampling rate, but I didn't expect it to be this low (1666 Hz) with only one channel and one sample at a time.

Are you familiar with this problem? And do you perhaps have a solution or another way to work around it?

Best Regards,

Jens Dassen

Message 3 of 7

Jens,

Just a few suggestions:

1) Add code to allow for a few warm-up iterations. This will allow your machine to bring all the necessary code and data into the CPU cache and not be forced to survive the initial hit of reading all of it from system RAM (see the sketch after this list).
2) Make sure Legacy USB support is disabled in the BIOS. The BIOS support for Legacy USB will cause very large interrupts that will keep your single-point rates down.
3) Try with and without the "data" Shared Variable in the loop, just to get an idea of how much that is costing you.
4) Remove both indicators from the loop, as this can cause the app to switch to the UI thread to handle the UI updates.
5) If possible, set your Ethernet driver to work in polling mode, thereby improving determinism by getting rid of the interrupts used for Ethernet-packet handling. This can be done through MAX.
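Regarding suggestion 1, a minimal sketch of the warm-up-then-benchmark idea, in generic C with POSIX timing and a stand-in read_one_sample() function (both are illustrations, not LabVIEW RT API), might look like this:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for one software-timed digital read. */
static void read_one_sample(void)
{
    /* Real code would perform the single-point DAQ read here. */
}

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    /* Warm-up iterations: pull the relevant code and data into the CPU
       cache so the timed iterations are not slowed by cold-cache misses. */
    for (int i = 0; i < 1000; i++)
        read_one_sample();

    /* Benchmark: time a fixed number of iterations and report the rate. */
    const int n = 100000;
    double t0 = now_seconds();
    for (int i = 0; i < n; i++)
        read_one_sample();
    double t1 = now_seconds();

    printf("achieved rate: %.0f samples/s\n", n / (t1 - t0));
    return 0;
}
```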

I hope this helps,

 

Alejandro

Message 4 of 7

Dear Alejandro,

Thanks very much for the useful suggestions. I will implement them and report back with the results.

 

Best Regards,

Jens Dassen

 



 

Message 5 of 7

Dear Alejandro,

I implemented the suggestions you made. They were very helpful and improved the sampling rate, especially the warm-up iterations: that suggestion alone improved the sampling frequency from 1.6 kHz to roughly 16 kHz.

With all the suggestions applied, I now have a maximum sample rate of 19 kHz on a 733 MHz Pentium with 256 MB RAM. I'm still working on improving the speed of the system, so maybe I can get sample rates higher than 19 kHz. If you have any more suggestions, they are very welcome.

Thanks for your support,

Best Regards,

Jens Dassen

 

Message 6 of 7

Hi, Jens,

At this point you are probably approaching the point of diminishing returns, but here are some more tips to get a bit more performance:
  • Turn debugging off in your VI:
    • From the VI properties > Execution page
    • This removes the debugging hooks and should make your code slightly faster.
  • Use the new RT FIFO primitives instead of a Single-Process Shared Variable
    • The Single-Process Variable with the RT-FIFO option is fast, but using the bare RT-FIFO primitive would be even faster.
    • Internally, the Shared Variables are still using the old implementation of the RT FIFO (as of 8.5). The new RT FIFO primitives are faster than their old VI implementation and are part of the reason using an RT-FIFO primitive should be faster.
    • This should make your loop faster, but not necessarily by much, as I believe the majority of the time is actually spent in the digital read and the Timed-Loop scheduling algorithm, which are pretty efficient but definitely represent the bulk of the code.
  • Use the built-in MHz Timing source
    • From the Timed-Loop configuration dialog
    • With this approach you can still vary the period at configuration time or on the fly
    • I don't want to go into much detail here but the idea is that processing an interrupt from the CPU (MHz timing source) is faster than processing an interrupt from the board (requires several register I/O calls)
    • Based on your diagram, I think this could be a valid option, but maybe there are some other requirements I don't know about
  • Use a regular While Loop instead of a Timed-Loop
    • The Timed-Loop has many benefits and it would be the last thing I would get rid of in order to improve performance, but it does have slightly more overhead than a regular While Loop, which gets compiled to almost nothing. Using the Wait Until Next Microsecond VI you could time your loop and perform the software-timed digital I/O call at a slightly faster rate (see the sketch after this list).
  • Get a faster computer 😉
    • Sure, you might think I am cheating, but hardware prices have dropped so much in the last few years that you really have to consider the cost of a fast PC vs. the engineering time you are putting into optimizing an application. PXI is also an option where you can get a high-end processor and a large number of slots in a compact, rugged form factor. The bottom line is that you can get a fast multi-core machine for very little money these days, and if you combine that with LabVIEW RT 8.5 you can dedicate CPU cores to your I/O tasks and have them poll for data, which is a lot faster than waiting on an interrupt, while still having other cores available for processing, logging, and communication.
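To illustrate the plain While Loop idea, here is a generic C sketch of pacing a loop with a microsecond-resolution wait; the 50 µs period and the poll_io() stand-in are assumptions for illustration, not LabVIEW RT API:

```c
#include <stdio.h>
#include <time.h>

/* Stand-in for the software-timed digital I/O call. */
static void poll_io(void)
{
    /* Real code would perform the single-point read/write here. */
}

static long long now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(void)
{
    const long long period_us = 50;      /* assumed 20 kHz loop period */
    long long next = now_us() + period_us;

    for (int i = 0; i < 200000; i++) {
        poll_io();

        /* Pace the loop like a "wait until next multiple" call: spin until
           the next period boundary instead of relying on a timing structure
           with more scheduling overhead. */
        while (now_us() < next)
            ;                             /* busy-wait (polling) */
        next += period_us;
    }

    puts("done");
    return 0;
}
```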
I hope this helps,

Alejandro
Message 7 of 7