LabVIEW

False Peaks with Threshold Peak Detector

Paul,

You would likely still have quantization errors, but they would be smaller by a factor of 2^6.

I put together a simulation (LV8.2) of your setup with the linear regression to find the location of a threshold on the rising edges. The data sets are much smaller than what you are collecting, but you could adjust that if you want. I used random noise to simulate the quantization error and a clipped triangle wave to represent the encoder output (so the rise time would not be zero). It does not do any error checking and has a problem with the first edge, but it may give you some ideas. According to the profiler, the regression takes an average of 2 microseconds.

Lynn
Message 11 of 23

Tried to look at your code, but came up mostly blank since I don't have NI-SCOPE or NI-SWITCH installed, nor have I ever used either one.

Stepping up from 8-bit to 14-bit would *probably* reduce the high-freq noise that I attributed to quantization error.  It appears you now have 8 bits = 256 quantized levels covering a 10 V range.  10 V / 256 gives a quantization resolution of about 40 mV.  An extra 6 bits on the A/D converter changes the resolution by a factor of 64, putting it under 1 mV.
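The resolution arithmetic above can be checked with a quick sketch (the 10 V span and the 8- and 14-bit depths come from the posts; the function name is just for illustration):

```python
def quantization_step(span_volts, bits):
    """LSB size for an ideal ADC spanning the given voltage range."""
    return span_volts / (2 ** bits)

step_8 = quantization_step(10.0, 8)    # about 39 mV per code
step_14 = quantization_step(10.0, 14)  # about 0.6 mV per code
print(step_8, step_14, step_8 / step_14)  # the ratio is 2**6 = 64
```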

However, the real-world is probably putting some noise on that signal and it's difficult to predict how much.  At some # of bits, the noise will be the dominant concern rather than the A/D resolution.   Still, the extra A/D resolution will *definitely* be helpful to any software filtering or other post-processing you do.

You've expressed concern about software processing time -- can you quantify it?  What post-processing time would be ideal?  What is the most you'd be willing to live with, grudgingly?  How much data must be processed in that time?  From what I've gathered so far, I'd suspect it's possible to do this in single-digit seconds, if that.  It may take some work, though...

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 12 of 23
Thanks Lynn, I'll check the code out. One of the issues I have to deal with is that some encoders will have over 16,000 edges in one revolution (the quantum of my inspection) and some encoders may have only 1,000. I can't use a fixed sample rate and fixed "Region", because with the high-count encoders I may have only 125 samples to characterize the waveform, while with low counts it could be thousands.

I can program in variation in sample rate and "Region" based on the expected count, but strange things could happen during failure scenarios, and I have some bench tests that are independent of encoder part number and count, so I would have to figure that out as well. Thanks for your help.
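The fixed-rate tradeoff works out as simple division; the 2,000,000-samples-per-revolution figure below is a hypothetical capture size chosen to reproduce the 125-sample case mentioned above:

```python
def samples_per_edge(samples_per_rev, edges_per_rev):
    """Samples available to characterize each edge at a fixed sample rate."""
    return samples_per_rev // edges_per_rev

SAMPLES_PER_REV = 2_000_000  # hypothetical one-revolution capture

print(samples_per_edge(SAMPLES_PER_REV, 16_000))  # 125 -- barely enough
print(samples_per_edge(SAMPLES_PER_REV, 1_000))   # 2000 -- thousands
```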


Kevin,

Regarding test time, the shorter the better. Without using software filtering (Bessel), I can test the encoder in both directions, all 12 channels, in about 15~20 seconds. That's fine. But with filtering it easily doubles the test time and that's not good. Sorry I don't have a more succint answer. Test time has always been predicated by computation time, the aquisition only takes .03 s per pair of channels.


~~~~~~~~~~~~~~~~~~~~
Paul Johnson
Renco Encoders, Inc
Goleta, CA
~~~~~~~~~~~~~~~~~~~~
Message 13 of 23
Paul,

Can you put a control (possibly an enum) in the tester which allows the user to specify the encoder model? Then use a case structure to select the number of samples, region size, and so on.
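In a text language, that enum-plus-case-structure pattern amounts to a lookup table keyed on the encoder model; the model names and parameter values below are invented purely for illustration:

```python
# Each encoder model selects its own acquisition parameters,
# the way an enum wired to a case structure would in LabVIEW.
# All names and numbers here are hypothetical.
ENCODER_PARAMS = {
    "HIGH_COUNT_16K": {"edges_per_rev": 16_000, "sample_rate": 20e6, "region": 60},
    "LOW_COUNT_1K":   {"edges_per_rev": 1_000,  "sample_rate": 2e6,  "region": 500},
}

def params_for(model):
    """Return the acquisition parameters for a known encoder model."""
    try:
        return ENCODER_PARAMS[model]
    except KeyError:
        raise ValueError(f"unknown encoder model: {model}")
```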

Since your output waveform is nominally a trapezoidal shape, it is probably fairly easy to detect the transition regions by simply detecting the deviation from min and max and analyzing the slopes of those regions. Find the first place where the signal is above the low level by 3 times the noise (or other suitable parameter). Then find the data point where the signal is less than the high level by 3 times the noise. Run the linear regression on the segment between those two points and calculate the intersection of the line with the threshold. Repeat for successive transitions. This algorithm does a comparison or two on each data point then grabs an array subset, does the regression on only the relevant data and proceeds to the next transition. I think it can be fast enough for your tests. It would adapt to any reasonably clean data without user intervention.
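A minimal sketch of that recipe (Python with NumPy here; the 3-times-noise margins follow the description above, and the synthetic edge is illustrative, not real encoder data):

```python
import numpy as np

def edge_crossing(signal, threshold, noise, lo, hi):
    """Locate one rising-edge threshold crossing by linear regression.

    Finds the segment where the signal has left the low rail (lo + 3*noise)
    but not yet reached the high rail (hi - 3*noise), fits a line to that
    segment, and returns the fractional sample index where the fitted line
    crosses the threshold.
    """
    above_lo = signal > lo + 3 * noise
    below_hi = signal < hi - 3 * noise
    idx = np.nonzero(above_lo & below_hi)[0]
    if idx.size < 2:
        raise ValueError("no usable transition segment")
    x, y = idx.astype(float), signal[idx]
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line fit
    return (threshold - intercept) / slope  # x where the fit == threshold

# Synthetic rising edge: low rail 0 V, high rail 5 V, ramp over samples 10..20
sig = np.concatenate([np.zeros(10), np.linspace(0.0, 5.0, 11), np.full(10, 5.0)])
print(edge_crossing(sig, threshold=2.5, noise=0.05, lo=0.0, hi=5.0))  # ~15.0
```

For successive transitions, the same function would be run on array subsets, advancing past each detected edge.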

Lynn
Message 14 of 23
You've got me thinking, Lynn!

Unfortunately, I'm not a full-time LV programmer; I have an engineering group to manage. So I will continue to cook these things in the grey matter and apply them as soon as they are medium rare.

The main test program (it's so huge!) already has the part number and data count of the UUT, so I can vary parameters based on those values. As for my "bench tests," which do not have that information, I can add some enums for the various parameters, or I can have the program take a quick look at the beginning and decide for itself which parameters to use.

Ah yes... If only people would stop bothering me so I could get this all done. ;-)
~~~~~~~~~~~~~~~~~~~~
Paul Johnson
Renco Encoders, Inc
Goleta, CA
~~~~~~~~~~~~~~~~~~~~
Message 15 of 23

I'll second Lynn's suggestions re: linear regression on transition regions.  The only possible tweak is if the physics of the situation dictates fitting to something other than a straight line.  (For example, an RC charge and discharge ought to be fit to an exponential).

I can't back this up with statistical theory, but you'll probably want somewhere in the range of 25 to 100 data points for the curve fit.  Too few points make you susceptible to individual outliers, while excessive points consume CPU with very little benefit to the calculated result.

0.03 seconds of acquisition?  Is this 1 or 2 full revs at medium to high speed?   Also, I'm surprised that software filtering seems to double your computation time.  That sounds like the penalty one might pay when allocating new memory for a data copy of a very large (tens of MB) data array.  Does the raw data go straight into the filter function or do you have a wire node that might cause LabVIEW to make a copy?

-Kevin P.

Message 16 of 23
Lynn,

I've been looking at your sample program.

My original question was, "Why not have the Threshold Peak Detector 'width' parameter apply to both sides of the potential transition?" Then my problems would go away. We seem to have strayed from that a little and, believe me, I don't mind. Maybe I need to ask NI directly, but:

When I run your sample VI I get some false edge detections. Not every time, but sometimes. The frequency of false edges goes up if you increase the noise a little. Was it your intent that these would be avoided in your sample? I suppose it was just grist for the mill.


~~~~~~~~~~~~~~~~~~~~
Paul Johnson
Renco Encoders, Inc
Goleta, CA
~~~~~~~~~~~~~~~~~~~~
Message 17 of 23
Kevin,

Or what about applying the Bessel filter to the subset, instead of the linear regression? Maybe I'd end up with something strange or invalid, since it's such a small segment of data?

Actually, I misspoke regarding the acquisition time. It's actually closer to 0.01 seconds, based on one revolution of data with a motor spinning at 6000 RPM. However, I have to run the acquisition VI several times for one test, because I'm only checking two channels at a time. Obviously there are many options here, some better than others, and undoubtedly I did not settle on the absolute best, but... here I am with lots of hardware testing away regardless!

Some channels have to be acquired twice. For instance, if I want to confirm the count of a particular channel (and the test must do them all), I need to acquire the Index (I) and the channel in question (A, B, U, V, W). So there are six acquisitions right there.

If I want to characterize phase, I need to acquire A and B, A- and B-, U and V, V and W, U- and V-, and V- and W-, so there's another six. Also, I need to characterize the index, I and I-, two more. That more or less completes the CCW test; just double it to cover CW.

The filtering, when applied, occurs right at the acquisition, which you would have noted had you been able to open the VI I sent. So the raw data is filtered only once; its impact on processing time was measured by direct observation.
~~~~~~~~~~~~~~~~~~~~
Paul Johnson
Renco Encoders, Inc
Goleta, CA
~~~~~~~~~~~~~~~~~~~~
Message 18 of 23
Paul,

I know how it is when real work interferes with getting things done!

The Threshold Peak Detector.vi is implemented as a .dll or CIN, so we cannot tell what is going on inside. Since it only works for peaks greater than the threshold, and only when the number of samples above threshold is greater than the specified width, it won't handle your situation. So you have to make your own.

I know my VI finds some spurious peaks. You could look at the average frequency or period of the square wave and only allow one positive and one negative transition per period. If the transitions are usually 1000 samples apart, then ignore any additional transitions within 500 samples of the first one. Notice that when two or more transitions occur close together, the calculated values of the threshold crossing points are usually quite close together.
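The half-period lockout described above can be sketched as a one-pass filter over the detected edge indices (the sample numbers in the example are illustrative):

```python
def reject_spurious(edges, min_separation):
    """Keep an edge only if it falls at least min_separation samples
    after the last accepted edge (the half-period rule above)."""
    kept = []
    for e in edges:
        if not kept or e - kept[-1] >= min_separation:
            kept.append(e)
    return kept

# Transitions nominally 1000 samples apart; 1320 and 2040 are noise doubles
print(reject_spurious([1000, 1320, 2000, 2040, 3001], 500))  # [1000, 2000, 3001]
```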

I am not familiar with your DAQ hardware, but if you could acquire all 12 or 14 channels at once and then analyze just the ones you want, it would speed up the testing. I think you were using NI-SCOPE, which I do not have. A 16-channel DAQ board might be a better choice.

Lynn
Message 19 of 23
Lynn,

Scope card versus DAQ card is probably the oldest question I've ever asked any LV salesman who ever showed up here. It has always seemed that the best solution was a scope card.

Can the DAQ cards acquire the waveform in the same fashion as a scope, voltage as a function of sample number using an edge event as a trigger? What data rate is supported on each channel when acquiring simultaneously? Do you have a piece of hardware (part number) you would recommend I look at?

My hazy recollection is that the scope cards are "High-Speed Digitizers"; that is, the DAQ cards can't acquire at the same rate on simultaneous channels. As far as I'm concerned, it's "220, 221, whatever it takes" (see "Mr. Mom" with Michael Keaton).


~~~~~~~~~~~~~~~~~~~~
Paul Johnson
Renco Encoders, Inc
Goleta, CA
~~~~~~~~~~~~~~~~~~~~
Message 20 of 23