
How do you open a .wav file without playing it immediately, using the Sound Output Start and Stop VIs?

Hi,

 

I would like to use the Sound Output Start and Stop VIs to play a .wav file, since they have very little delay when they are called; they let me start and stop playback with very good timing precision. The Play Sound File VI opens the file and plays it immediately, but with a loading delay, and that delay is exactly what I'm trying to avoid.

 

What I would like to do is open the .wav file and create a task ID when the VI is first called, so that when the tone is actually played (using Sound Output Start.vi) there is no delay. Does anyone know how to do this?
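Roughly, this is the pattern I'm after, sketched in Python just to show the idea (the simpleaudio package and the "tone.wav" file name are only placeholders; what I want is the equivalent with the LabVIEW Sound Output VIs):

import simpleaudio as sa

# Setup phase: open the .wav once and keep it in memory / as a task handle.
# The file-loading delay happens here, before the trial starts.
wave_obj = sa.WaveObject.from_wave_file("tone.wav")

# ... later, when the trial actually runs:
play_obj = wave_obj.play()   # starts almost immediately; no file I/O at this point
# ... tone plays ...
play_obj.stop()              # stop on demand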

 

I don't want to use the Sound Output Read and Write VIs, because Sound Output Write.vi has a delay and I can't stop it with high precision.

 

I need this high precision because we are measuring neural responses to a tone in awake rats. We need to make sure the tone starts and stops with very precise timing, down to the millisecond.

 

Thanks!!!!

 

Please help!!! 

Message 1 of 5

Timing to millisecond precision will not be reliably possible using any desktop OS.

 

When we have done sound stimulus/response measurements (since LabVIEW 1.2), we have always used external hardware for the timing. Generate a continuous tone and gate it on and off with a hardware-timed digital output or something similar. Run a continuous measurement of the response, or start the acquisition before the stimulus. Put a marker in the measured response at the time of the stimulus. Some care with the quality of the switching may be needed to avoid spurious spikes that the rats can hear but the experimenters cannot.

 

Lynn 

Message 2 of 5

Thank you for the reply! The timing is very important, and we want to keep everything at as high a resolution as possible. I don't think we need external timing, though, since our data acquisition runs on separate hardware: we just send a TTL output to the data acquisition box when the tone starts and stops. That reduces our need for sub-millisecond resolution and precision. That said, we'd still like the tone to play for 200 ms, plus or minus a few milliseconds. Does a desktop OS have worse than millisecond resolution? I was under the impression that the clock timer was accurate to the millisecond.

 

Anyway, I think we're OK with a little error in the tone duration as long as we record the onset and offset of the tone in our data, which we do.

Message 3 of 5

The tick counter has a resolution of 1 ms. Resolution, accuracy, and the responsiveness of the OS are three different things. The issue is that OS latency can be tens of milliseconds, or occasionally longer. If the OS decides to index the hard drive between the time you read the tick count and the time you issue the start-audio command, your tone could be quite late. Sending the TTL pulse is yet another call to the OS, so each trial carries at least two of these latencies: one before the tone and one before the TTL edge. Unless you have a real-time operating system, this latency will introduce randomness into your data.

 

Try sending TTL pulses of 50 ms duration every 200 ms, software timed, for a few minutes and look at the variation in the edge timing.  Then try it again with tones thrown into the mix and see if the variation changes. 
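Here is a minimal sketch of that jitter test, written in Python rather than LabVIEW purely for illustration (in LabVIEW the equivalent would be a software-timed loop around the millisecond tick counter); it only measures how well software timing holds a 200 ms period:

import time

PERIOD_S = 0.200   # nominal 200 ms between pulse onsets
PULSE_S  = 0.050   # nominal 50 ms pulse width
N_PULSES = 300     # about a minute's worth of pulses

onsets = []
start = time.perf_counter()
for i in range(N_PULSES):
    delay = start + i * PERIOD_S - time.perf_counter()
    if delay > 0:
        time.sleep(delay)                 # software-timed wait; the OS decides when you wake up
    onsets.append(time.perf_counter())    # the moment the "pulse goes high"
    time.sleep(PULSE_S)                   # the 50 ms "high" portion

intervals_ms = [(b - a) * 1e3 for a, b in zip(onsets, onsets[1:])]
print("mean %.3f ms   min %.3f ms   max %.3f ms"
      % (sum(intervals_ms) / len(intervals_ms), min(intervals_ms), max(intervals_ms)))

The spread between the minimum and maximum intervals is the randomness that ends up in your stimulus timing.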

 

That is where hardware timing and synchronization pays off. 

 

Lynn 

Message 4 of 5

Or, assuming you need no closed-loop control of the timing (you know beforehand what the duration will be), just get a cheap NI card with analog output. Buffered AO will let you time the tone duration precisely. Use another AO channel (which is naturally synchronized with the tone) for the external DAQ enable, and you'll have the whole thing whipped.
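A rough sketch of that buffered-AO idea, using the nidaqmx Python package just for illustration (the device and channel names such as "Dev1/ao0" are placeholders for whatever hardware you have; in LabVIEW you would wire the same thing with the DAQmx VIs using finite, sample-clock-timed output):

import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE    = 100_000   # AO sample clock in Hz (hardware timed)
TONE_HZ = 8_000     # stimulus frequency
DUR_S   = 0.200     # 200 ms tone, exact to the sample clock
n = int(RATE * DUR_S)
t = np.arange(n) / RATE

tone = np.sin(2 * np.pi * TONE_HZ * t)   # ao0: the tone itself
gate = np.full(n, 5.0)                   # ao1: 5 V "enable" line for the recording system
tone[-1] = 0.0                           # AO holds its last value, so end both channels at 0 V
gate[-1] = 0.0

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.ao_channels.add_ao_voltage_chan("Dev1/ao1")
    task.timing.cfg_samp_clk_timing(RATE,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=n)
    task.write(np.vstack([tone, gate]), auto_start=False)   # preload the buffer
    task.start()            # both channels start on the same hardware clock
    task.wait_until_done()  # returns after exactly n samples have played

Because both channels share one sample clock, the gate edge and the tone onset/offset line up to within a single sample, with no dependence on the OS.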

 

Randy

Message 5 of 5