Hi,
To monitor data continuously without hanging:
---------------------------------------------
1. Configure your counter to do "buffered period measurement"
2. In your loop, query the counter for "available points" using "Counter Get Attribute.vi"
3. When it's 0, do nothing. When it's > 0, read exactly that many points from the counter buffer. This way, you're never asking for any data that isn't already there, so your loop won't ever have to hang and wait for it (see the sketch after this list).
4. Accumulate/store/graph your data according to the needs of your app.
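Here's a minimal Python-style sketch of that polling pattern. The `counter` object and its `available_points()`/`read()` methods are hypothetical stand-ins for the driver calls -- in LabVIEW you'd wire Counter Get Attribute.vi and a buffer read inside a While Loop:

    import time

    def poll_counter(counter, stop_requested, on_data):
        """Consume only data that's already buffered, so we never block."""
        while not stop_requested():
            n = counter.available_points()    # hypothetical "available points" query
            if n > 0:
                data = counter.read(n)        # safe: these points already exist
                on_data(data)                 # accumulate/store/graph per your app
            else:
                time.sleep(0.001)             # nothing yet; yield briefly, don't spin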
To filter out unwanted high values:
-----------------------------------
First, I'm assuming that your "high values" are high frequencies, thus small periods. If so, these may be caused by sensor and/or electrical noise. The best thing to do first is try to reduce/eliminate the noise at its source. Failing that, there *may* be a way to make good estimates of the correct data, but it could get pretty tricky. Here's an outline of a way to think about and deal with the simplest case -- where you know the expected measured frequency and it's very nearly constant over time.
The characteristic of such a noise glitch would be a pair of transitions in rapid succession, either low-->high followed by high-->low or vice versa. Either way, you'll pick up one of those transitions in your buffered period measurement. The trouble is that it may happen anywhere within the nominal period, possibly more than once.
Let's first look at a case where the nominal period is 1 msec +/- 20% and there is one glitch of duration 0.1 microsecond, occurring at the 70% point of the real period. The measurement should show one period of duration 1.0 msec, but the noise glitch will instead cause you to receive two periods of durations 0.7 and 0.3 msec.
The simplest correction would be to simply trash all measurements outside the acceptable range of [0.8, 1.2] msec, including these two. Note however that a noise glitch occurring at the 90% point would lead you to trash the 0.1 msec measurement, but believe and keep the 0.9 msec measurement. Note also that if noise glitches are distributed randomly in time, you would end up keeping 40% of such erroneous data (glitches in either the first 20% or final 20% of the real period).
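In code, that range filter is essentially one line. A Python sketch using the numbers from the example (1 msec nominal, +/-20% band):

    NOMINAL = 1.0e-3                          # expected period, seconds
    LO, HI = 0.8 * NOMINAL, 1.2 * NOMINAL     # +/-20% acceptance band

    def filter_periods(periods):
        """Trash everything outside [LO, HI]."""
        return [p for p in periods if LO <= p <= HI]

    # filter_periods([1.0e-3, 0.7e-3, 0.3e-3, 0.9e-3, 0.1e-3])
    #   -> [0.001, 0.0009]   # note: the bogus 0.9 msec fragment survives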
Another correction would be to estimate the period interrupted by the glitch. Start by assuming no more than one glitch per legitimate period. Since the glitch subdivides the true period into a pair, you can re-create the true interval by summing the pair of periods. The catch is to identify which pair needs summing.
The smaller of the pair will show a period <= 50% of the real period, and can be identified as such. However, the larger of the pair cannot always be identified: it can be anywhere from 50% to 99.999% of the real period and may be located either right before or right after the smaller of the pair. If you wish to recreate the real period, you'll need to make a mathematically educated guess about which adjacent period to consider as the "larger of the pair."
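Here's one Python sketch of that pair-summing repair, as post-processing on a complete list. The tie-break (pick whichever neighbor makes the sum land nearest nominal) is just one plausible heuristic, not the only defensible one:

    def repair_pairs(periods):
        """Assume at most one glitch per true period. Any reading <= 50% of
        nominal is the 'smaller of the pair'; sum it with whichever neighbor
        brings the combined interval closest to the nominal period."""
        out = []
        i = 0
        while i < len(periods):
            p = periods[i]
            if p <= 0.5 * NOMINAL:
                prev_sum = out[-1] + p if out else None
                next_sum = p + periods[i + 1] if i + 1 < len(periods) else None
                if prev_sum is not None and (next_sum is None or
                        abs(prev_sum - NOMINAL) <= abs(next_sum - NOMINAL)):
                    out[-1] = prev_sum        # merge with the period before
                elif next_sum is not None:
                    out.append(next_sum)      # merge with the period after...
                    i += 1                    # ...and consume that neighbor
                else:
                    out.append(p)             # isolated fragment; keep as-is
            else:
                out.append(p)
            i += 1
        return out

    # repair_pairs([1.0e-3, 0.7e-3, 0.3e-3, 1.0e-3]) -> [0.001, 0.001, 0.001]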
This is tricky enough as a post-processing exercise, but it's even worse when you process the data as it comes in. There will be times when the last element in the buffer is the "smaller of the pair" and you don't yet have the "larger of the pair" data. There will also be (rarer) times when the last element is the "larger of the pair" but you can't yet know that it needs to be summed with the next "smaller of the pair" measurement. One way to cope is sketched below.
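A sketch under the same assumptions (reusing NOMINAL and repair_pairs() from above): hold back "unsettled" readings at the tail of each chunk and prepend them to the next read, so a fragment is never judged before its potential partner could have arrived. Note the "larger of the pair" case stays ambiguous no matter how much you buffer:

    class StreamingRepair:
        """Chunk-by-chunk wrapper around repair_pairs()."""
        def __init__(self):
            self.carry = []

        def process(self, chunk):
            data = self.carry + list(chunk)
            # Hold back the last reading plus any trailing fragments:
            # their partners may be in data we haven't read yet.
            cut = max(len(data) - 1, 0)
            while cut > 0 and data[cut - 1] <= 0.5 * NOMINAL:
                cut -= 1
            self.carry = data[cut:]
            return repair_pairs(data[:cut])

        def flush(self):
            """Call once after acquisition stops to emit the held-back tail."""
            tail, self.carry = self.carry, []
            return repair_pairs(tail)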
Now consider a case where there could be two or more glitches inside the true period. You'll need to evaluate the best choice of summing two, three, four, etc. consecutive periods to reconstruct the real period. {Note that for n glitches and a +/-20% acceptance criterion, (2/5)^n of the glitched intervals will still produce one measurement within the +/-20% bounds.} In such a scenario, I would advise working *really hard* on eliminating the glitches at the source.
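For the multi-glitch case, here's a greedy Python sketch: keep summing consecutive readings until the running total lands inside the acceptance band, and give up on the run if it overshoots. Again, just a heuristic, and it inherits the false-keep weakness described earlier:

    def repair_runs(periods):
        """Sum consecutive readings until the total falls in [LO, HI];
        if the total overshoots HI, the run is unrecoverable -- trash it."""
        out, run = [], 0.0
        for p in periods:
            run += p
            if LO <= run <= HI:
                out.append(run)    # plausible reconstructed period
                run = 0.0
            elif run > HI:
                run = 0.0          # couldn't reconstruct; discard and restart
        return out

    # repair_runs([0.2e-3, 0.3e-3, 0.5e-3, 1.0e-3]) -> [0.001, 0.001]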
Whew, that's a mouthful and a half! Reply if you'd like an outline for an alternate approach, involving buffered semi-period measurement...