FFT on triangular and square waves.

This leads to the conclusion that the narrower the spacing between frequencies, the higher the required fs.
Given 1 kHz, 1.05 kHz, and 1.1 kHz, the result is fs = 1.155 MHz.
The conclusion is very sad: what is very simple in hardware is not possible in LV.
Getting a DAQ card to generate fairly slow (1 kHz) square waves requires extremely fast AO.

And most of the AO effort will be wasted (most of the time the AO will be updated with the same value).
Wouldn't it be better to use DIO clocked at 1, 1.05, and 1.1 kHz and a resistor ladder to sum the signals?
Do you also need a 1.155 MHz clock to update the DIO?

I think that what you meant about fs being an integer multiple of all the waves relates to square waves generated through AO.





Message 11 of 12


This leads to the conclusion that the narrower the spacing between frequencies, the higher the required fs.
Given 1 kHz, 1.05 kHz, and 1.1 kHz, the result is fs = 1.155 MHz.
The conclusion is very sad: what is very simple in hardware is not possible in LV.
Getting a DAQ card to generate fairly slow (1 kHz) square waves requires extremely fast AO.


I don't think I've digested all the details of this thread, but perhaps your sad conclusion isn't the end of the story? Here are some very layman-like thoughts.

1.  Shouldn't your least common multiple for 1.00, 1.05, and 1.10 kHz be only 231 kHz?  The multiples would be 231, 220, and 210.
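This arithmetic is easy to double-check. A minimal Python sketch (not LabVIEW, purely to verify the numbers) computes the least common multiple of the three tone frequencies:

```python
from math import lcm

# Tone frequencies in Hz
freqs = [1000, 1050, 1100]

# Smallest rate that is an integer multiple of every tone frequency
fs = lcm(*freqs)
print(fs)                        # 231000 -> 231 kHz, not 1.155 MHz
print([fs // f for f in freqs])  # samples per cycle: [231, 220, 210]
```

So 231 kHz already places an exact integer number of sample intervals into every cycle of every tone; the 1.155 MHz figure is 5x higher than necessary.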

2.  AO doesn't need to be generated 1 point at a time with most data acq boards.  Many of them can handle high update rates in hardware when you set up a buffered output task.

3.  The high sampling rate may be necessary for *ideal* reproduction of your waveforms, guaranteeing that you produce an exact integer # of cycles of each freq component in a specific integer # of sample intervals.  Is that something you definitely need?
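To make point 3 concrete (again just a Python arithmetic check, an illustration rather than anything LabVIEW-specific): the composite signal repeats every 1/gcd of the frequencies, and at fs = 231 kHz that repeat period contains an integer number of samples and an integer number of cycles of each tone:

```python
from math import gcd

freqs = [1000, 1050, 1100]
fs = 231_000                     # LCM of the tone frequencies, in Hz

period_hz = gcd(*freqs)          # composite signal repeats at 50 Hz (every 20 ms)
buffer_len = fs // period_hz     # 4620 samples per repeat period
cycles = [f // period_hz for f in freqs]
print(buffer_len, cycles)        # 4620 samples holding exactly 20, 21, 22 cycles
```

A 4620-sample buffer regenerated continuously would therefore loop seamlessly with no phase discontinuity.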

4.  I'm not enough of a signal processing expert to predict what'll happen in the following scenario, but it's one I'd probably try for myself to investigate.  First imagine generating an array representing the signal as generated at the high ideal sampling rate.  Now suppose that you next make a new array containing only every 5th data point and generate it at 1/5 the ideal sampling rate.  Every point you generate still falls exactly where the ideal waveform would have fallen.  You still have >40x oversampling for each frequency component.  Perhaps the result will still be pretty good?   My guess is that by subtracting information, you'll raise your noise floor, kind of like a broadband spectral leakage effect.  But you may still have a usable overall signal-to-noise ratio.
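The experiment in point 4 can be sketched numerically (a hypothetical NumPy illustration, not LabVIEW code): build the three-tone square-wave sum at the ideal 231 kHz rate, keep every 5th point, and check that each retained sample lies exactly on the ideal waveform while each tone is still oversampled by more than 40x:

```python
import numpy as np

fs_ideal = 231_000                      # LCM rate from point 1, in Hz
freqs = [1000, 1050, 1100]
t = np.arange(4620) / fs_ideal          # one full 20 ms repeat of the signal

# Sum of three unit-amplitude square waves (sign of a sine is a square wave)
ideal = sum(np.sign(np.sin(2 * np.pi * f * t)) for f in freqs)

# Keep every 5th point: equivalent to generating at fs_ideal / 5 = 46.2 kHz
fs_low = fs_ideal / 5
decimated = ideal[::5]

# The decimated samples coincide exactly with generating directly at 46.2 kHz
t_low = np.arange(924) / fs_low
direct = sum(np.sign(np.sin(2 * np.pi * f * t_low)) for f in freqs)
assert np.array_equal(decimated, direct)

# Each tone is still oversampled by more than 40x
print(fs_low / max(freqs))              # 42.0
```

What this sketch cannot show is the analog quality of the resulting edges; whether the raised noise floor matters would still need to be measured on the actual hardware.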

Just some thoughts.  Hopefully one of the better signal-processing folks will comment further.

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 12 of 12