Measuring period of a pulse

Hi,

I'm measuring the period of a pulse coming from a magnetic proximity sensor. There are two problems:

1. I want to monitor the data continuously in a while loop, but whenever the loop waits for a pulse it hangs for some time. How do I keep it from blocking?

2. While monitoring the data, occasional high values appear, but these values are unwanted. If I want to filter out these values and see only the true pattern, how do I do it?

Please answer these queries at the earliest.

Thanks and regards

GNS
Message 1 of 14
Hi,

To monitor data continuously without hanging:
---------------------------------------------
1. Configure your counter to do "buffered period measurement"
2. In your loop, query the counter for "available points" using "Counter Get Attribute.vi"
3. When it's 0, do nothing; when it's > 0, read exactly that number of points from the counter buffer. This way you're never asking for data that isn't already there, and your loop never has to hang and wait for it (see the sketch after this list).
4. Accumulate/store/graph your data according to the needs of your app.
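
Here's a rough sketch of that pattern in C. Everything below is hypothetical illustration -- counter_available_points() and counter_read_buffer() are made-up stand-ins for 'Counter Get Attribute.vi' and 'Counter Read Buffer.vi', with a simulated driver so the sketch compiles and runs on its own:

#include <stdio.h>

/* Simulated driver -- NOT the real NI-DAQ API, just stand-ins. */
static int sim_buffered = 0;

static int counter_available_points(void)            /* ~ Counter Get Attribute.vi */
{
    sim_buffered += 2;                 /* pretend 2 new periods just arrived */
    return sim_buffered;
}

static int counter_read_buffer(double *dest, int n)  /* ~ Counter Read Buffer.vi */
{
    for (int i = 0; i < n; i++)
        dest[i] = 1.0e-3;              /* fake 1 msec period measurements */
    sim_buffered -= n;
    return n;
}

int main(void)
{
    double periods[256];
    for (int pass = 0; pass < 5; pass++) {    /* stands in for your while loop */
        int avail = counter_available_points();
        if (avail > 0) {
            /* Only request data that's already in the buffer, so the read
               returns immediately and the loop never hangs waiting. */
            int got = counter_read_buffer(periods, avail);
            printf("pass %d: read %d points\n", pass, got);
        }
        /* a short Wait (ms) equivalent here keeps the loop from spinning */
    }
    return 0;
}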

To filter out unwanted high values:
-----------------------------------
First, I'm assuming that your "high values" are high frequencies, thus small periods. If so, these may be caused by sensor and/or electrical noise. The best thing to do first is try to reduce/eliminate the noise at its source. Failing that, there *may* be a way to make good estimates of the correct data, but it could get pretty tricky. Here's an outline of a way to think about and deal with the simplest case -- where you know the expected measured frequency and it's very nearly constant over time.

The characteristic of such a noise glitch would be a pair of transitions in rapid succession, either low-->high followed by high-->low or vice versa. Either way, you'll pick up one of those transitions in your buffered period measurement. The trouble is that it may happen anywhere within the nominal period, possibly more than once.

Let's first look at a case where the nominal period is 1 msec +/- 20% and there is one glitch of duration 0.1 microsecond, occurring at the 70% point of the real period. The measurement should show one period of duration 1.0 msec, but the noise glitch will cause you to receive instead two periods of durations 0.7 and 0.3 msec.

The simplest correction would be to simply trash all measurements outside the acceptable range of [0.8, 1.2] msec, including these two. Note however that a noise glitch occurring at the 90% point would lead you to trash the 0.1 msec measurement, but believe and keep the 0.9 msec measurement. Note also that if noise glitches are distributed randomly in time, you would end up keeping 40% of such erroneous data (glitches in either the first 20% or final 20% of the real period).
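
In C, with the example's numbers (1 msec nominal, +/-20% band) baked in as assumptions, that simplest correction is just a range filter:

#include <stdio.h>

/* Keep only periods inside the acceptance band [lo, hi]; compacts the
   kept values to the front of buf and returns how many survived. */
static int filter_periods(double *buf, int n, double lo, double hi)
{
    int kept = 0;
    for (int i = 0; i < n; i++)
        if (buf[i] >= lo && buf[i] <= hi)
            buf[kept++] = buf[i];
    return kept;
}

int main(void)
{
    /* glitch at 70%: 1.0 msec split into 0.7 + 0.3;
       glitch at 90%: 1.0 msec split into 0.9 + 0.1 */
    double p[] = { 1.00e-3, 0.70e-3, 0.30e-3, 0.90e-3, 0.10e-3, 1.02e-3 };
    int n = filter_periods(p, 6, 0.8e-3, 1.2e-3);
    for (int i = 0; i < n; i++)
        printf("%g sec\n", p[i]);   /* note: the bogus 0.9 msec survives */
    return 0;
}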

Another correction would be to estimate the period interrupted by the glitch. Start by assuming no more than one glitch per legitimate period. Since the glitch subdivides the true period into a pair, you can re-create the true interval by summing the pair of periods. The catch is to identify which pair needs summing.
The smaller of the pair will show a period <= 50% of the real period, and can be identified. However, the larger of the pair cannot always be identified. It can be anywhere from 50% to 99.999% of the real period and may be located either right before or right after the smaller of the pair. If you wish to recreate the real period, you'll need to make a mathematically educated guess about which adjacent period to consider as the "larger of the pair."
This is tricky enough as a post-processing exercise, but it's even worse when you process the data as it comes in. Then there will be times where the last element in the buffer is the "smaller of the pair" and you don't yet have the "larger of the pair" data. There will also be (rarer) times when the last element is the "larger of the pair" but you can't yet know that it needs to be summed with the next "smaller of the pair" measurement.
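
If you want to try it anyway, here's one way to code the simplest single-glitch, post-processing case in C (my own hedged sketch, not a proven algorithm): treat any period <= 50% of nominal as the "smaller of the pair" and merge it with whichever neighbor brings the sum closer to nominal. The streaming edge cases described above are deliberately not handled.

#include <stdio.h>
#include <math.h>

/* Post-processing reconstruction. Assumes: at most one glitch per true
   period, and a known, nearly constant nominal period. Returns the new
   element count after merging. */
static int reconstruct(double *p, int n, double nominal)
{
    int out = 0;
    for (int i = 0; i < n; i++) {
        if (p[i] <= 0.5 * nominal && out > 0 && i + 1 < n) {
            double with_prev = p[out - 1] + p[i];
            double with_next = p[i] + p[i + 1];
            if (fabs(with_prev - nominal) <= fabs(with_next - nominal)) {
                p[out - 1] = with_prev;   /* educated guess: merge backward */
            } else {
                p[out++] = with_next;     /* educated guess: merge forward */
                i++;                      /* ...consuming the next element */
            }
        } else {
            p[out++] = p[i];
        }
    }
    return out;
}

int main(void)
{
    double p[] = { 1.00e-3, 0.70e-3, 0.30e-3, 1.00e-3 };  /* glitch at 70% */
    int n = reconstruct(p, 4, 1.0e-3);
    for (int i = 0; i < n; i++)
        printf("%g sec\n", p[i]);   /* prints 0.001 three times */
    return 0;
}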

Now consider a case where there could be two or more glitches inside the true period. You'll need to evaluate the best choice of summing any two, three, four, etc. consecutive periods to reconstruct the real period. {Note that for n glitches and a +/-20% acceptance criterion, (2/5)^n of the glitched intervals will produce one measurement within the +/-20% bounds.} In such a scenario, I would advise working *really hard* on eliminating the glitches at the source.

Whew, that's a mouthful and a half! Reply if you'd like an outline for an alternate approach, involving buffered semi-period measurement...
Message 2 of 14
I have tried this by modifying the "Measure Buffered Period (DAQ-STC).vi" example program. I set the source specification to the internal 20 MHz timebase, the gate specification to default, buffer mode to continuous, and buffer size to 50. What I see is the "available points" count incrementing each time by the number of samples actually read from the buffer on the last loop iteration. Since I'm feeding that value into the Read Buffer VI, it always reads until it times out.

How do you get the available points to actually read what's in the buffer?
Message 3 of 14
Hmmmm, I've never seen any behavior like that from the 'available points' output so it's tough to guess. Can you post your modified example (v7.0 or earlier please)? What version of NI-DAQ are you using?

I'll try to toy with what you post in the next couple days or so. If you can describe a bit more about how you'd like to interact with this continuous stream of buffered periods, I can aim my "toying around" in that direction.

-Kevin P.
Message 4 of 14
I am attaching a very slightly modified version of the sample "Measure Buffered Period (DAQ-STC).vi". The only change is an insertion of "Get Attribute.vi" to retrieve and display the available points. On my system, the available points just increments each time by the number of points read in by "Read Buffer".

System Details:
OS: Windows 2000
LabVIEW v6.0 (6i) Pro Ed
NI-DAQ 6.9.3f5
DAQ Device: DAQCard-6036E
Message 5 of 14
A couple more details I left out of the previous message:

DAQCard-6036E is connected to a BNC-2110 terminal block. A function generator producing a low frequency (<40Hz) TTL square wave is connected to DGND and PFI9/GPCTR0_GATE.

The program also halts with a -10920 error when the function generator recalibrates due to a large frequency change.
Message 6 of 14
Oops! I left the type def's out of the previous library. This one has them. I also added code to handle the -10920 error.
Message 7 of 14
Ok, I think I've got it. I spot two things to comment on.

1. Given your default values, you're asking to read 20 periods of a <40 Hz pulsetrain. Twenty periods at 40 Hz already take 0.5 seconds, and anything slower takes longer, so you're probably exceeding the 0.5 second timeout value you're passing in to 'Counter Read Buffer.vi'. If you time out, the 'number read' output would be 0, and the 'available points' value would be higher on the next pass through the loop.
With this approach, try increasing the timeout value (see the timeout sizing sketch after this list).

2. When you call 'Counter Get Attribute.vi' to query for the # of available points, why not pass that value right into the '# to read' input of 'Counter Read Buffer.vi'? Give it a try -- you'll be finding out how many values are available, then immediately reading that exact number. It should work for you.
With this approach, you'll probably want to add a call to 'Wait (ms)' to add a brief delay to the loop. Otherwise you may just cycle through the loop faster than new values can come in. The program should still work, but you'll usually get either 0 or 1 value at a time.
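
For suggestion 1, the arithmetic is worth making explicit. A tiny C sketch (the function name is made up for illustration) of sizing the timeout from the requested count and the slowest frequency you expect:

#include <stdio.h>

/* Minimum time needed to collect n_periods of a pulsetrain whose
   frequency may drop as low as f_min_hz, scaled by a safety margin. */
static double min_timeout_s(int n_periods, double f_min_hz, double margin)
{
    return margin * n_periods / f_min_hz;
}

int main(void)
{
    /* 20 periods at 40 Hz already take 0.5 sec, so a 0.5 sec timeout
       is right on the edge -- hence the suspected timeouts. */
    printf("%.2f sec\n", min_timeout_s(20, 40.0, 1.0));  /* 0.50 */
    printf("%.2f sec\n", min_timeout_s(20, 40.0, 2.0));  /* 1.00 with 2x margin */
    return 0;
}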

-Kevin P.
Message 8 of 14
No luck. I tried both of your suggestions, and the "available points" count still keeps incrementing. Feeding that value into Read Buffer just causes it to hit its timeout sooner, since it's waiting to read an ever-increasing number of points from the buffer.

Curiously, the number I'm getting for the "available points [3]" attribute is the same as the result I get if I input:
read mark [8]
write mark [9]
read mark low snapshot [13]
write mark low snapshot [15]

Oh well, mark it down as a system quirk. I'll just have to capitalize on the timeout value.
Message 9 of 14
Hmmm. Right now I can only inspect the code, but don't have hardware to test it. I'm trying to think of scenarios that could create the symptoms you describe.

First off, any error except -10920 should terminate the loop and that doesn't seem to be happening.
If you consistently generated -10920 errors, you'd be reprogramming the counter at the end of the loop and the 'available points' value shouldn't keep growing from loop to loop. So that doesn't seem to be happening either.
So it would seem that there must not be any error being asserted, and things must be working just fine. Only that isn't what seems to be happening either.

I'm beginning to sound like Vizzini from "The Princess Bride."

I'm inclined to point out the relatively small default value of 50 for the buffer, though I'm not convinced that enlarging it would help. I changed that to 100000 in the example below. I then added a front-panel boolean switch for selectively controlling when to go ahead and call 'Read Buffer'. A big buffer will let you watch 'available points' accumulate for a while before you decide to read them. This may give you a better feel for the rate at which the input pulses are arriving.
I also used a shift register to pass the error cluster around the loop. Previously, each iteration of the loop would start by using the pre-loop value of the error cluster.
I put the delay after the call to 'Read Buffer' so you wouldn't wait 200 msec before the initial query for 'available points'. I then set up the delay as a front-panel control with a default of 25 msec.
Finally, I made a change in the case structure you use to detect an error or a non-running state for the task. Instead of reprogramming the counter, I pass the True value through and let the whole loop terminate on the first instance of any error. You'll want to change this stuff back, but failing on the first error may help debug efforts.

Give it a try. Observe what happens if you never turn on the 'Read' switch. Do you get an error when 'available points' exceeds the buffer size?
Try depressing the 'Stop' button and turning on the 'Read' switch prior to running the vi to guarantee a single pass through the loop. What'd you get for 'available points', 'number read', and 'backlog'?

Finally, try using an example for counting edges with your external signal wired up as the counter source. Add a waveform chart inside the loop to see whether the counts grow at a fairly consistent rate.
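
In C terms (counter_read_count() below is a hypothetical stand-in for querying the count, simulated so the sketch runs), that diagnostic is just a watch loop checking that the count delta stays steady:

#include <stdio.h>

/* Simulated stand-in for reading the current edge count -- not a real
   driver call. A clean input should make the count grow steadily. */
static unsigned counter_read_count(void)
{
    static unsigned count = 0;
    return count += 4;          /* pretend 4 edges arrive per poll */
}

int main(void)
{
    unsigned prev = 0;
    for (int i = 0; i < 10; i++) {
        unsigned now = counter_read_count();
        printf("iter %d: count=%u  delta=%u\n", i, now, now - prev);
        prev = now;             /* a wildly varying delta suggests noise */
    }
    return 0;
}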

Hopefully you'll get things working, but if not post back and describe the behavior of various scenarios.

-Kevin P.
Message 10 of 14