read analog physical channel

I'm building a test setup attached to a USB-6212 DAQ, read via LabVIEW 8.2 on Windows XP.

I read 8 analog inputs and write 24 digital outputs (to control valves, etc.).

I defined all of the ins and outs in MAX, set the scaling, etc.

When I run it, the loop takes in excess of 300 ms to work through all of the above.

I then switched to DAQmx and set up tasks to read all the analogs at once, process, and write all the digitals at once.

That's better; it's down to about 35 ms per loop.

 

But I'd like to get it down to 12 ms per loop.  I thought that if I just read the analog inputs, I could do the scaling, etc. without the DAQmx layer, and it might be faster.  But I can't seem to get the Read VI to accept the physical channel name (Dev1/ai1, Dev0/ai1, USB/ai1) to complete the read.

 

Any thoughts?

Message 1 of 12

You should be creating a single DAQmx task that contains all of the channel names.  There is no need to loop through them.  Place a DAQmx channel constant and browse to add the channels you want.
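For reference, the single-task approach looks roughly like this in the DAQmx C API (a minimal sketch; the device name Dev1 and the ±10 V range are assumptions):

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle aiTask = 0;
    float64    data[8];   /* one sample per channel */
    int32      nRead = 0;

    /* One task spanning all eight inputs: Dev1/ai0:7 (device name assumed). */
    DAQmxCreateTask("", &aiTask);
    DAQmxCreateAIVoltageChan(aiTask, "Dev1/ai0:7", "",
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxStartTask(aiTask);

    /* A single read call returns one sample from every channel. */
    DAQmxReadAnalogF64(aiTask, 1, 10.0, DAQmx_Val_GroupByChannel,
                       data, 8, &nRead, NULL);
    printf("ai0 = %f V\n", data[0]);

    DAQmxStopTask(aiTask);
    DAQmxClearTask(aiTask);
    return 0;
}

On the block diagram, the equivalent is one DAQmx Create Channel wired with Dev1/ai0:7 and one DAQmx Read returning an 8-element array.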

Message 2 of 12

Hi Raven,

 

Thanks.  I'm doing a single read of all Analog In channels (8), making decisions based on the values, and then doing a single Digital Out write (24 channels) of all the Boolean valve states.  I write the values and Boolean states to a file and loop around to do it again.  It is this loop that is taking about 37 ms.

 

Any other thoughts?

Message 3 of 12
Are you doing the file writing in the same loop as the DAQmx functions?
Message 4 of 12

 

Yes,

I need to capture the readings I used to make the decisions, so I need to write these values to a file before I lose them and get the next set of values to evaluate.

 

Jeff

Message 5 of 12

Your file writing is probably taking longer than the 12 ms you want to give it.

 

Look at the Producer/Consumer (Data) architecture.  You want to pass the data to be written to a file off to another loop by way of a queue, as sketched below.
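Outside of LabVIEW, the same pattern looks something like this (a minimal C sketch using POSIX threads; the queue depth and the Sample fields are assumptions): the acquisition loop only enqueues, and a separate logger thread absorbs the slow file writes.

#include <pthread.h>
#include <stdio.h>

#define QLEN 256  /* queue depth (assumed); real code should handle overflow */

typedef struct { double ai[8]; unsigned char dout[24]; } Sample;

static Sample          q[QLEN];
static int             head = 0, tail = 0, done = 0;
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

/* Producer side: called from the acquisition loop; never touches the disk. */
static void enqueue(const Sample *s)
{
    pthread_mutex_lock(&lock);
    q[head] = *s;
    head = (head + 1) % QLEN;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}

/* Consumer loop: drains the queue and absorbs the slow file writes. */
static void *logger(void *arg)
{
    FILE *f = (FILE *)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (tail == head && !done)
            pthread_cond_wait(&ready, &lock);
        if (tail == head) { pthread_mutex_unlock(&lock); return NULL; }
        Sample s = q[tail];
        tail = (tail + 1) % QLEN;
        pthread_mutex_unlock(&lock);
        fprintf(f, "%f\n", s.ai[0]);  /* real code would log every field */
    }
}

int main(void)
{
    FILE *f = fopen("log.txt", "w");
    pthread_t th;
    pthread_create(&th, NULL, logger, f);

    for (int i = 0; i < 100; i++) {
        Sample s = {0};
        /* ...DAQmx read, decisions, and DAQmx write would go here... */
        enqueue(&s);
    }

    pthread_mutex_lock(&lock);   /* tell the logger to finish and exit */
    done = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    pthread_join(th, NULL);
    fclose(f);
    return 0;
}

In LabVIEW, the Queue functions (Obtain Queue, Enqueue Element, Dequeue Element) take the place of the mutex and condition variable, with the two loops sitting side by side on the same block diagram.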

Message 6 of 12

 

Thank you, I'll give that a shot.

 

Jeff

Message 7 of 12

 

Before implementing the change in data storage, I tried a couple of things to test the assumption.

 

In the following, all analog reads are done as a single read followed by a parse, and all digital writes are done by combining all signals into a single group write.

The write to file includes all analog and digital channels, and that doesn't change (the data may be static, but I still write it).

 

Read analog, write digital, save to file: 36 ms loop.
Read analog, write digital, do not save to file: 36 ms loop.
Read analog, do not write digital, save to file: 26 ms loop.
Do not read analog, write digital, save to file: 11 ms loop.
Do not read analog, do not write digital, save to file: 6 or 7 loops per millisecond.

 

Based on this data, I'm assuming it isn't the file writing that is slowing me down.  It looks like DAQ access.
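One way to pin the time on the DAQ call itself is to bracket a single read with a high-resolution timer.  A sketch in C against the DAQmx API (time_one_read is a hypothetical helper; in LabVIEW the same measurement is a pair of Tick Count (ms) nodes around the DAQmx Read):

#include <windows.h>
#include <NIDAQmx.h>

/* Returns the duration of one software-timed 8-channel read, in ms. */
static double time_one_read(TaskHandle aiTask)
{
    LARGE_INTEGER f, t0, t1;
    float64 data[8];
    int32   nRead = 0;

    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t0);
    DAQmxReadAnalogF64(aiTask, 1, 10.0, DAQmx_Val_GroupByChannel,
                       data, 8, &nRead, NULL);
    QueryPerformanceCounter(&t1);
    return 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)f.QuadPart;
}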

Any other thoughts?

 

Jeff

 

 

Message 8 of 12

I'd agree with your conclusions.

 

I don't have the DAQ device you have, so I'd have no way of testing this.

 

But if you are able to post your VI, perhaps I can look at it and see if there is anything about the way it is coded that would explain it.  Anything like reconfiguring the task on each iteration, or starting and stopping it, could slow things down.  Are you reading 1 sample at a time for N channels, or N samples for N channels?
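On the N-samples question, the difference in C-API terms is a hardware-timed task that delivers a block of samples per call instead of one software-timed round trip per point (a sketch; the 1 kHz rate, block size, and channel names are assumptions):

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle t = 0;
    float64    data[8 * 100];   /* 100 samples x 8 channels per read */
    int32      nRead = 0;

    DAQmxCreateTask("", &t);
    DAQmxCreateAIVoltageChan(t, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Hardware-timed and continuous: the board clocks the samples itself,
       so each read hands back a whole block instead of one point per call. */
    DAQmxCfgSampClkTiming(t, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(t);   /* configure and start once, outside the loop */

    /* Per iteration, only the read remains: 100 samples per channel in
       one call, with no task setup or start/stop repeated in the loop. */
    DAQmxReadAnalogF64(t, 100, 10.0, DAQmx_Val_GroupByChannel,
                       data, 8 * 100, &nRead, NULL);

    DAQmxStopTask(t);
    DAQmxClearTask(t);
    return 0;
}

The anti-pattern to look for is the opposite: DAQmxCreateTask/DAQmxClearTask (or their DAQmx VI equivalents) inside the loop, repeating the configuration overhead on every iteration.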

Message 9 of 12

This may sound like an obvious question, but are you doing your task setup outside the acquisition loop?

 

If you are, then you can probably increase the loop speed further by:

1) Running in a timed loop/structure - they seem to be better at getting the processor from Windows.

2) Examining the particular case of the polymorphic VI you are using and extracting that code straight onto your block diagram - this removes the tiny bit of decision making LabVIEW has to do when it works out which subVI to call. Not recommended for normal practice, but it works for super optimisation.

 

James

CLD; LabVIEW since 8.0. Currently have LabVIEW 2015 SP1, 2018 SP1 & 2020 installed.
Message 10 of 12