
Tone Measurements Phase

Hello,

 

I am an undergraduate mechanical engineering student at the University of Minnesota.  I have been tasked with creating a LabVIEW program that will calculate the speed of sound and, once that is found, determine the range from the speaker to the microphone.

 

The frequency is known, and the phase needs to be extracted to determine the wavelength.

 

I have successfully completed the speed-of-sound portion of the lab, because I can determine the change in phase as I move the microphone away from the speaker.

 

However, range is giving me issues.  When I run the VI, a phase is assigned to the microphone's signal.  As long as I keep the VI running, the phase is accurate relative to neighboring points (what I mean is that the phase change over a given length is consistent).

 

As soon as I terminate the VI and re-run it, a new phase is assigned to the microphone's signal (I didn't change any parameters at all).  Again, relatively speaking, the phase change is still consistent with the first run.

 

Can anyone fill me in on why a new phase is given on every new run of the VI?  And maybe suggest any solutions?

Message 1 of 6

Hello Erik!

 

I'm not sure I understand your explanation fully, but it sounds like you're starting the microphone at a fixed distance from the speaker and moving it away as the VI runs?

 

If that's the case, are you running into issues when you run the VI with the microphone further away?

Caleb Harris

National Instruments | http://www.ni.com/support
Message 2 of 6

Yes, as the test runs I move the microphone.  This changes the phase (i.e., the offset between the generated signal, which I'm assuming is a sine wave, and what the microphone is detecting).

 

My issue is that, say, on test #1 I start the run with 6 inches between the speaker and the microphone.  I'll get a phase of 60°.  I end test #1.  I start test #2 (not changing anything) and I'll get a phase of -120°, etc.

 

One thing that is happening is that, say, I start at a 6-inch distance and I have a 60° phase.  I move the microphone 2 inches back and I'll notice a phase difference of about 16°.  This is actually a value that gets me within 6% of the theoretical speed of sound.

 

So the phase difference is being calculated correctly; just the initial phase is random.
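The speed-of-sound arithmetic Erik describes can be sketched in a few lines. Since phase accumulates at 2·π·f/c radians per metre of path, a phase change Δφ over a mic move Δx gives c = 2·π·f·Δx/Δφ. The tone frequency is never stated in the thread, so the 300 Hz below is an assumption chosen for illustration:

```python
import math

# Hypothetical numbers for illustration (the thread never gives the tone frequency).
f = 300.0                        # tone frequency in Hz (assumed)
delta_x = 0.0508                 # microphone moved 2 inches = 0.0508 m
delta_phi = math.radians(16.0)   # measured phase change over that move

# Phase accumulates at 2*pi*f/c radians per metre of path, so:
#   delta_phi = 2*pi*f*delta_x / c   ->   c = 2*pi*f*delta_x / delta_phi
c = 2 * math.pi * f * delta_x / delta_phi
print(round(c, 1))   # -> 342.9, close to the nominal 343 m/s at room temperature
```

Note that only the *change* in phase enters this formula, which is consistent with the observation that the random initial phase doesn't hurt the speed-of-sound result.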

 

I hope this helps...if more clarification is needed let me know!

 

Thank you!

Message 3 of 6

Erik:

 

I think I may have an idea of what's going on. If I interpret this correctly, the phase is being calculated under the assumption that the microphone acquisition and speaker output are starting at the same time (relative to each other).

 

Unfortunately, we're dealing with a non-deterministic operating system, which means there's no guarantee of what order the input and output will start in, or how far apart their start times will be. This basically means that the phase is correct each time we start, but we can't guarantee what the phase will be. The only way to keep our experiments consistent is to find a way to do all of the tests without restarting the VI. Would it be possible to switch off the speaker and microphone, perhaps?
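The effect described above can be illustrated with a small simulation (a sketch only; the 300 Hz tone frequency is assumed, not from the thread): each run gets a random, unknown start offset t0 between output and acquisition, which shifts the reported phase, but the phase *difference* between two mic positions is unaffected:

```python
import math
import random

f = 300.0   # tone frequency in Hz (assumed for illustration)
c = 343.0   # nominal speed of sound, m/s

def measured_phase(x, t0):
    """Phase the VI would report for a mic at distance x (metres) when
    acquisition starts t0 seconds after the output (software-timed jitter)."""
    phi = -2 * math.pi * f * (x / c + t0)
    return math.atan2(math.sin(phi), math.cos(phi))   # wrap into (-pi, pi]

x1, x2 = 0.1524, 0.2032   # 6 in and 8 in, in metres

for run in range(3):
    t0 = random.uniform(0.0, 1.0 / f)   # unknown start offset, new each run
    p1, p2 = measured_phase(x1, t0), measured_phase(x2, t0)
    diff = math.atan2(math.sin(p2 - p1), math.cos(p2 - p1))
    # p1 changes from run to run, but diff is always about -16 degrees
    print(round(math.degrees(p1), 1), round(math.degrees(diff), 1))
```

This matches the symptom exactly: the absolute phase is different on every run, while the phase change over a 2-inch move stays consistent.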

 

The other option we could pursue (depending on what DAQ device you have) is to manually program the DAQ device to pass the audio waveform through an analog output and acquire through an analog input. We could set up an external trigger to start both tasks, which would reduce the variability significantly by letting the card start the tasks in hardware (rather than relying on the operating system to start the tasks).

Caleb Harris

National Instruments | http://www.ni.com/support
Message 4 of 6

Erik,

 

Can you put a second microphone on a different input channel and leave it at a fixed distance from the speaker?  Use that as the phase reference rather than the signal driving the speaker.

 

If you position the two microphones at known distances from the speaker, you can compensate for changes in the speed of sound due to temperature and atmospheric pressure also.
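The two-microphone idea above can be sketched numerically. Assuming a 300 Hz tone and a reference mic fixed at 0.10 m (both numbers are illustrative, not from the thread), the channel-to-channel phase difference gives the moving mic's extra path length modulo one wavelength, and a coarse distance guess resolves the integer cycle count:

```python
import math

f = 300.0        # tone frequency in Hz (assumed)
c = 343.0        # speed of sound, m/s
lam = c / f      # wavelength, about 1.14 m at these values

x_ref = 0.10     # fixed reference mic distance from the speaker, m (assumed)

def range_from_phases(phi_ref, phi_mov, x_guess):
    """Estimate the moving mic's distance from the phase difference between
    the two channels, using a coarse guess to pick the right cycle."""
    # Extra path relative to the reference, known only modulo one wavelength:
    frac = ((phi_ref - phi_mov) / (2 * math.pi)) % 1.0
    n = round((x_guess - x_ref) / lam - frac)   # integer wavelength count
    return x_ref + (n + frac) * lam

# Example: mic actually at 0.45 m, phases as each channel would report them
phi_ref = -2 * math.pi * f * x_ref / c
phi_mov = -2 * math.pi * f * 0.45 / c
print(round(range_from_phases(phi_ref, phi_mov, x_guess=0.5), 3))   # -> 0.45
```

Because both channels share the same acquisition start, the unknown start-time offset cancels out of the phase difference, which is exactly why the reference channel fixes the run-to-run randomness.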

 

Lynn 

Message 5 of 6

 

Just as Johnsold suggested, you should use the second channel for a reference signal.  That could be a second mic, or a two-resistor voltage divider across the speaker voltage, and then calculate the phase difference between the channels.

(In both cases you have to do a calibration run to compensate for the individual phase lags.)  Remember, at 20 kHz a 0.1° error is 13.8 ns, which equals about 3 m of 50 Ω coax cable!! :o

 

I found that the sine approximation method (SAM), a least-mean-squared-error fit of a·sin(ωt) + b·cos(ωt) + c with a, b, c, and ω as parameters, gives better results.
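The sine-fit idea can be sketched as follows. This is the simpler three-parameter variant that treats ω as known (reasonable here, since the tone frequency is set by the experimenter); all numbers are synthetic, chosen only to show that the fit recovers amplitude and phase:

```python
import numpy as np

# Synthetic data: a 300 Hz tone (assumed frequency) with noise and a DC offset.
rng = np.random.default_rng(0)
f, fs = 300.0, 50_000.0
t = np.arange(2048) / fs
y = 1.5 * np.sin(2 * np.pi * f * t + np.radians(60.0)) + 0.2 \
    + 0.05 * rng.standard_normal(t.size)

# Three-parameter sine fit (known frequency): linear least squares for
#   y ~= a*sin(w*t) + b*cos(w*t) + c
w = 2 * np.pi * f
A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]

amplitude = np.hypot(a, b)
phase = np.degrees(np.arctan2(b, a))   # phase of sin(w*t + phase)
print(amplitude, phase)   # close to 1.5 and 60.0
```

Fitting ω as well (the four-parameter version) turns this into a nonlinear problem, usually solved by iterating the linear fit above while refining ω.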

 

 

Message Edited by Henrik Volkers on 04-20-2010 02:04 PM
Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 6 of 6