Multifunction DAQ


Legacy DAQ PCI-6025E Support Under XP / MX / LabVIEW 8.2 - 100% CPU

Hello,

This may get a bit confusing.  I'll try to lay out what I have done and why...

We started out with a VI that ran on LabVIEW 5.1 and used a PCI-1200 DAQ card; we set out to port this VI to LabVIEW 8.2.  The PCI-1200 isn't supported under LV 8.2, so we opted for the PCI-6025E with a PCI-1200 adapter.  The original 5.1 VI uses (of course) legacy DAQ VIs.

I installed Traditional NI-DAQ 6.9.3 on top of my existing MX/VISA install.  When I loaded the 5.1 VIs into 8.2 for the first time, it complained about many missing AI/AO VIs in this path:

C:\Program Files\National Instruments\LabVIEW 8.2\vi.lib\Daq\Ai.llb: AI Continuous Scan.vi, AI Read (scaled array).vi, AI Read (waveform).vi, etc.

I copied these files from the same location on the original 5.1 machine into that path in my 8.2 setup, opened the 5.1 application again, and it found all the missing VIs.  I then modified the 5.1 application: any VIs that were grayed out with the "4.1x" view were replaced with their 8.2 equivalents.  A few changes were needed to get the PCI-6025E working correctly (the acquisition channel numbers had to change).  I got it compiling, built an executable, and took the exe to the new test PC (the one with the PCI-6025E installed and hooked up to the test equipment).

The SW runs, but uses 100% CPU time in a tightly nested AI loop, and sometimes the AI data comes in with holes in it (a zero reading reported).  I installed the LabVIEW 8.2 eval version on the test PC and ran the VIs with Highlight Execution on.  That let me find the portions of code that were hogging CPU time, but the data holes are still there.  By data hole I mean that the card will report a steady DC voltage as, say, 9V, 9V, 0V, 9V, 9V, 0V, 9V, 9V, 9V, 9V, 9V, 9V, 0V, 9V, 9V, 9V, 9V, 0V... when I am absolutely certain that the voltage isn't changing.
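As a text-language illustration of what I mean by a hole (hypothetical Python, not the actual VI; `find_holes`, the tolerance, and the sample log are all made up for the example):

```python
def find_holes(samples, expected, tolerance=0.5):
    """Return indices of samples that deviate from the expected DC level.

    A steady 9 V source should never read 0 V, so any reading outside
    the tolerance band counts as a dropped/hole sample.
    """
    return [i for i, v in enumerate(samples) if abs(v - expected) > tolerance]

# A log resembling what the card reports: mostly 9 V with spurious zeros.
readings = [9.0, 9.0, 0.0, 9.0, 9.0, 0.0, 9.0, 9.0, 9.0]
holes = find_holes(readings, expected=9.0)
print(holes)  # indices of the dropped readings
```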

The sampling rate is only set to 2 samples per second, and I've tried slowing the acquisition loops even further by adding 100 ms metronomes.  Nothing helps; I still get junk data back and 100% CPU utilization.

Any help is GREATLY appreciated.

Thanks,
Adam

Message 1 of 3
I tried to scale the problem back and make a simple little VI that just does AI/AO with the card, setting a pressure and reading it back; even this simple example gives me 100% CPU.

Is there some sort of profiler that will tell me exactly what is hogging the thread?  Highlight Execution can't keep up and doesn't give me anything useful.
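For context, what I'm after is a per-function (per-VI) time report rather than step-by-step animation.  In a text language the equivalent would be something like Python's cProfile, which names the hot function directly (illustrative only; `busy` is a stand-in for the CPU-hogging loop):

```python
import cProfile
import io
import pstats

def busy():
    # A deliberately hot function standing in for the CPU-hogging AI loop.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Render the top entries of the profile into a string report.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print("busy" in report)  # the profiler names the hot function
```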
Message 2 of 3

Hi Ajckson,

There have been a couple of changes in how NI-DAQ handles its threads, so I would like to know which version of Traditional NI-DAQ you are using.  Since you are already taking the effort to migrate the code, I would advise upgrading it to the NI-DAQmx driver; this Knowledge Base article will help you: Transition from Traditional NI-DAQ to NI-DAQmx in LabVIEW.

But going back to your problem: after doing a little research in our database, I found that this is a known issue with Traditional DAQ.  If you take a look at this discussion forum thread: Problem in continuous acquisition (Traditional DAQ), you will see where I am getting this information from.  More information about this issue can be found at this link.

I hope this helps.

Jaime Hoffiz
National Instruments
Product Expert
Message 3 of 3