How to use hardware-timed measurement with the PCI-6052

I am using a PCI-6052 for data acquisition at 5 kS/s through LabVIEW Traditional DAQ. I set the number of samples to read at a time to 2. With this configuration the results are inconsistent: the file sometimes stores fewer samples than desired and sometimes more.
Now I want to use the hardware timer to scan the channels, but I don't know how to select the hardware timer from LabVIEW or what other settings are needed.
Waiting for help.
Thanks,
Haider
Message 1 of 12
Hi Haider,

If you open the NI Example Finder you should be able to find the example Cont Acq to File (binary).vi under Help > Find Examples > Hardware Input and Output > Traditional DAQ > Analog Input > Stream to Disk.

In that example you can easily see how to acquire at 5 kS/s and log that data to disk.
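Traditional DAQ has no text-language API, but as a rough sketch of what that example does, here is the equivalent flow in Python using the modern nidaqmx (DAQmx) package. The device/channel name "Dev1/ai0" and the file name are assumptions for illustration only.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 5000   # hardware sample clock, S/s
CHUNK = 500   # scans pulled from the buffer per read -> 10 reads/s

with nidaqmx.Task() as task, open("acq.bin", "wb") as f:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # assumed channel
    # Hardware-timed continuous acquisition; samps_per_chan sizes the buffer.
    task.timing.cfg_samp_clk_timing(rate=RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=10 * CHUNK)
    task.start()
    for _ in range(100):  # 100 reads of 500 scans = 10 s of data
        data = task.read(number_of_samples_per_channel=CHUNK)
        np.asarray(data, dtype=np.float64).tofile(f)
```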

Regards,
Message 2 of 12
Dear Otis,
Yes, I went through the example you mentioned, but there is no indication of whether it uses the Windows clock or the DAQ-STC's built-in timer.
My question is: how do we know which clock a normal LabVIEW program is using?
Message 3 of 12
Hi Haider,

If you use that example and dig into it, you will see that AI Start.vi calls AI Clock Config.vi. Unless you state otherwise, any of the continuous or finite analog input examples set the analog input clock, so your measurements are hardware timed. If you ever have questions about how a particular VI works, just drill down into the sub-VIs to see what is going on; you can even probe the VI while it's running to see what values are passed.

That being said, I personally find it much more difficult to see what is actually happening in Traditional DAQ. If you use any of the DAQmx examples you will see that all of the timing information is set in DAQmx Timing.vi. With that VI you can set whatever timing source you desire. Typically I use the Sample Clock as the source and simply set it to whatever rate I need; that sets your internal sampling clock (the hardware clock on your board).
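For reference, here is roughly what DAQmx Timing.vi configures, expressed with the nidaqmx Python package; passing an empty source string selects the board's onboard sample clock. The channel name and the 10-second finite acquisition are assumptions for illustration.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # assumed channel
    task.timing.cfg_samp_clk_timing(
        rate=5000.0,                         # desired hardware rate, S/s
        source="",                           # "" = internal (onboard) sample clock
        active_edge=Edge.RISING,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=50000)                # 10 s at 5 kS/s
    data = task.read(number_of_samples_per_channel=50000, timeout=15.0)
```

Because the sample clock is generated on the board, the inter-sample spacing is exact regardless of how busy Windows is.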

In short, the example you are using IS hardware timed.  If you ever need to know what a VI does, just look into it a bit deeper and you can see what is going on.

Regards,
Message 4 of 12
Dear Otis,
Thank you for your description of the hardware clock. Now I understand about 80% of the story, but a question is still nagging me: if the hardware timer is active by default, then there should be no timing or sample-count differences between acquisitions. When I run my application at 5 kS/s for 10 s with "scans to read at a time" set to 500 or 1000, it is fine: it stores 50,000 samples.
In contrast, when I set "scans to read at a time" to 1 or 2, it becomes inconsistent: it stores, for example, 50,001 or 49,500 samples rather than 50,000. Why?
Message 5 of 12
Hi Haider,

Every sample you take is timed by the hardware clock signal when you read multiple points. However, there are limits on how often you can read samples out of the buffer. Let's use the 5 kS/s rate you mention. If I set the number of scans to read to 500, the read will wait for 500 samples to accumulate and then pull them off the buffer into your program. This happens at a rate of 10 Hz (5 kS/s / 500 samples = 10 Hz) and updates your program accordingly. However, if you try to read 1 or 2 samples at a time, you are asking your application to update at 2.5-5 kHz, which is far beyond any rate the software can support.

The rule of thumb I like to use is: do not make your refresh rate more than 10-20 times per second, and the fewer refreshes, the smoother things tend to go.

I think the real confusion here is the buffer size setting. That is simply how large a buffer you have, and you typically want a buffer at least twice the size of what you are reading. This helps to prevent buffer overruns.
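To make the arithmetic concrete, here are the numbers from this thread in a few lines of Python (the 2x factor is just the rule of thumb above):

```python
scan_rate = 5000              # S/s, paced by the hardware sample clock
scans_to_read = 500           # samples pulled from the buffer per read

update_rate = scan_rate / scans_to_read   # 10 loop iterations per second
min_buffer = 2 * scans_to_read            # rule of thumb: buffer >= 2x read size
print(update_rate, min_buffer)            # 10.0 1000

# Reading 1 or 2 scans at a time instead:
print(scan_rate / 1, scan_rate / 2)       # 5000.0 2500.0 reads/s -- far too fast
```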

I hope this helps to clear things up,
Message 6 of 12
Thanks.
Now I am getting to the bottom of the problem: you said the refresh rate has limitations. In short, my application is time critical.
If the scan rate is 5 kS/s and the time delay must be no more than 0.2 ms, then I have to set "scans to read at a time" to 1.
If I increase the scans to read at a time to 500, there will be a significant delay in my program. Do you have any idea how to get rid of this problem?
Message 7 of 12
If you have a truly time-critical application then you should use a real-time operating system (RT OS). Windows is not an RT OS, therefore you can never have deterministic behavior.

In your application you should determine the update rate that you need: Update Rate = Scan Rate / Scans to Read at a Time.

You should read the maximum number of scans possible. If you read only 1 or 2 scans per read, you will most likely cause an error because of a buffer overflow. This happens because it takes a certain amount of time to send data over the bus; eventually you will fill your buffer with samples faster than the driver can deliver them to your application. When the buffer overflows, the acquisition stops and returns an error.

If you want to also update controls on the front panel then I would recommend adjusting your Scan Rate and the Scans to Read so that the Update Rate is no more than 20 Hz.

If you read 500 samples at a rate of 5000 S/s then you will update 10 times per second. This is a pretty fast rate for most larger control systems.
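As a sketch, here is a hypothetical helper that turns that formula around: given the scan rate and a target update rate, it picks a "scans to read at a time" value (the function name and the 10 Hz default are illustrative assumptions, not from this thread):

```python
def scans_per_read(scan_rate_hz: float, target_update_hz: float = 10.0) -> int:
    """Choose 'scans to read at a time' so that
    Update Rate = Scan Rate / Scans to Read lands near the target."""
    return max(1, round(scan_rate_hz / target_update_hz))

print(scans_per_read(5000))        # 500 -> 10 updates/s
print(scans_per_read(5000, 20.0))  # 250 -> 20 updates/s
```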

Best of luck,
Message 8 of 12
Again, thanks for your help. Yes, of course, in Windows the software layers involved restrict the speed of reading single points from the buffer at high sampling rates.
Which RT OS will work well with LabVIEW? You didn't specify a name. One more thing: my application is time critical, so I cannot tolerate a delay beyond 0.2 ms at 5 kS/s.
Thanks,
Haider
Message 9 of 12
Hi Haider,

Maybe you could explain what it is that you are actually trying to do with your application.  The answer to your last question could vary greatly.  If you are controlling some time-critical system and you need point-by-point analysis then you definitely need to go for an RT system.  If you just need to know that the samples were taken with a time-resolution of 0.2 ms, then a typical PC with the DAQ board that you have should be perfectly fine.

If you can give me better insight into what you are trying to do and how you would like to do it, I can form a better idea of which option would be best for you. That way we neither settle on a solution that won't work nor pursue a more complex solution than you need.

Regards,
Message 10 of 12