05-02-2009 12:42 AM
Hi all,
By no means am I a LabVIEW ninja, and I have tried the search feature, but I cannot find an answer to my question; perhaps I am asking the wrong question, or at least the right question worded wrongly. I would like to know how to implement the audio output VIs in the following situation:
Background:
My current VI captures data over a TCP connection. The data is received in a loop (~85 values per loop iteration) and is indexed into an array of doubles. As this data is captured, certain elements of the array are plotted on the front panel in an XY graph. Each set of 85 values captured is called a data frame. (I don't want to say they are plotted in real time; rather, the goal of the whole VI is to plot these values as quickly as possible after they are captured via TCP.) The capture loop grabs complete frames (input arrays) at a rate of approximately 100 per second.
What I want to do:
Instead of plotting these values in near-realtime on an XY graph, I would like to use them to modulate an audio tone (sine wave) that plays continuously through the soundcard, with as little latency as possible.
Furthermore:
During the data capture loop, all of the doubles in the captured array are normalized to a maximum of 1 and a minimum of 0. As an example of what I would like to do: given a pre-defined audio tone (sine wave) frequency of 450 Hz, I would like to modulate this tone between a 400 Hz minimum and a 500 Hz maximum based on the state of one of the captured values. So, upon each successful capture of a data frame, I would like to update the frequency of the continuously playing sine wave (via the soundcard) with as little latency as possible.
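To make that concrete, here is a rough sketch of the idea in Python rather than LabVIEW (not my actual code; the frequency range, sample rate, and chunk size are just the numbers from above). The detail I care about is that each frequency update keeps the sine phase continuous, so the pitch changes without clicks:

```python
import numpy as np

# Assumed constants, taken from the description above.
F_MIN, F_MAX = 400.0, 500.0   # Hz, modulation range around the 450 Hz tone
SAMPLE_RATE = 44100           # Hz, a typical soundcard rate

phase = 0.0  # running phase, carried across chunks for click-free updates

def tone_chunk(norm_value, n_samples=441):
    """Return the next ~10 ms of the sine wave, at a frequency mapped
    linearly from norm_value in [0, 1] to [F_MIN, F_MAX].  Keeping the
    running phase continuous avoids an audible click at each update."""
    global phase
    freq = F_MIN + (F_MAX - F_MIN) * norm_value
    n = np.arange(n_samples)
    chunk = np.sin(phase + 2.0 * np.pi * freq * n / SAMPLE_RATE)
    phase = (phase + 2.0 * np.pi * freq * n_samples / SAMPLE_RATE) % (2.0 * np.pi)
    return chunk
```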
To me, this problem sounds simple in theory. I have used the search function of the NI LabVIEW forums and turned up much information about using FPGAs or RIO devices. I believe that I understand the implications of those acronyms, but I would like to do all of this in software and with my soundcard.
Is this possible? Are there existing NI forum topics regarding similar VIs that I have missed?
-Brian
05-04-2009 08:08 AM
Brian,
Let me summarize to see if I understand correctly: you are acquiring about 8,500 data points per second (85 values × ~100 frames per second). You want to frequency modulate a 450 Hz "carrier" with a 50 Hz deviation using those 8,500 data points per second. That means you would be changing the "frequency" about 19 times per cycle of the carrier wave. Usually the modulating waveform is at a much lower frequency than the carrier.
What is the waveform and frequency content of the data received via TCP?
It may be possible to do what you are asking, but I am not sure how meaningful it is.
Lynn
05-04-2009 09:56 AM
Hi Lynn,
Your understanding is very close to what I'm doing but I'll try to clarify a few points and my application.
Our research lab carries out human research on sensory and motor functions. We are using a Vicon motion capture system to sample human movement tasks via passive reflective markers and a multi-IR-camera system. In the current movement task, 28 markers are attached to the arms/fingers of the subjects to calibrate the apparatus with which they are working. The movement task is very simple: it requires a subject to move a finger towards and away from a point (marker) on a table. So the only two markers I am interested in are the one on the finger and the target marker on the table.
The Vicon software allows near-realtime streaming of marker positions (in XYZ coordinates) over TCP, and the positions of all markers are transmitted in each data frame. Each frame received thus has XYZ coordinates for every marker (28 markers × 3 coordinates = 84 data points) plus one timecode (sample number) value, for a total of 85 values per frame. So, even though I have to capture all 85 values, I am still only interested in 2 markers (6 coordinates) plus the timecode. Although the motion capture system can sample at a very high rate, my VI requests a new data frame from the realtime server only when it has finished processing the current frame; it therefore requests frames about 100 times per second.
What my code currently does is calculate the Euclidean distance between these two points by subtracting the coordinates of the two markers and applying the Pythagorean theorem. This value is returned in millimeters, and, since the movement task is constrained to a known maximal/minimal distance between these two points, my code then normalizes the value to between 0 and 1.
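In text form, the per-frame calculation is essentially the following (a Python sketch for illustration only; the marker indices, frame layout, and min/max distances are placeholders for values from our setup):

```python
import math

def normalized_distance(frame, finger_marker, target_marker, d_min, d_max):
    """frame: the 85 doubles of one data frame -- assumed here to be
    28 markers x XYZ (indices 0..83) followed by the timecode at
    index 84.  The ordering is an assumption; adapt to the stream."""
    fx, fy, fz = frame[finger_marker * 3 : finger_marker * 3 + 3]
    tx, ty, tz = frame[target_marker * 3 : target_marker * 3 + 3]
    # Euclidean distance in millimeters via the Pythagorean theorem
    d = math.sqrt((fx - tx) ** 2 + (fy - ty) ** 2 + (fz - tz) ** 2)
    # Normalize to [0, 1] using the task's known range, clamped
    return min(max((d - d_min) / (d_max - d_min), 0.0), 1.0)
```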
The point of this human interface is to feed this distance information back to the research subject with as little latency as possible. Currently, my VI plots the distance between the points on an XY graph. I haven't yet measured total machine delay, but it is within acceptable limits.
In addition to feeding back the distance between these points through a visual modality, I would also like to be able to modulate an auditory tone. Given a neutral-state tone of 450 Hz, I would like to modulate the frequency between a 400 Hz minimum and a 500 Hz maximum according to the distance between the two markers. These values are arbitrarily chosen; the point is that I would like to modulate the pitch of a tone with minimal latency. This way, our subjects will be able to associate the pitch of the tone with the distance between their finger and the target point on the table. Smooth changes in pitch, made with minimal latency, are the main goals.
So far, I have not had success in making smooth changes to the pitch, and the sound buffer adds more latency to each pitch change than I can accept.
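To illustrate the trade-off I am fighting, here is the behavior I am trying to approximate, sketched outside LabVIEW (Python with the third-party sounddevice library standing in for the soundcard buffer; the block size and amplitude are assumptions). Smaller output blocks mean each frequency update reaches the speaker sooner:

```python
import numpy as np
import sounddevice as sd  # third-party library; an illustrative choice only

SAMPLE_RATE = 44100
BLOCK = 256        # ~5.8 ms per block; smaller blocks -> lower latency
freq = 450.0       # would be updated by the TCP capture loop
phase = 0.0

def callback(outdata, frames, time, status):
    """Fill each output block with a phase-continuous sine at the
    current target frequency; the soundcard pulls blocks on demand."""
    global phase
    n = np.arange(frames)
    outdata[:, 0] = 0.2 * np.sin(phase + 2.0 * np.pi * freq * n / SAMPLE_RATE)
    phase = (phase + 2.0 * np.pi * freq * frames / SAMPLE_RATE) % (2.0 * np.pi)

stream = sd.OutputStream(channels=1, samplerate=SAMPLE_RATE,
                         blocksize=BLOCK, callback=callback)
stream.start()
sd.sleep(2000)  # keep the tone playing for 2 s in this demo
```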
Do you still think this might be possible?
Thanks,
Brian
p.s. as an alternative idea:
If software modulation of the soundcard output is not possible, I can probably build a tone-generator circuit with a 555 timer and a small speaker. In that case, modulating the tone would only require changing the resistance of one resistor in the circuit. This could perhaps be accomplished with PWM via the parallel port, a resistor-capacitor low-pass filter, and a transistor. Do you think an independent, software-managed PWM could modulate an externally generated tone with less latency?
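(For reference, a 555 in the standard astable configuration oscillates at the datasheet frequency f = 1.44 / ((R1 + 2·R2) · C); here is a quick Python helper with placeholder component values, just to show the scale involved:)

```python
# Standard astable 555 oscillator frequency (datasheet formula):
#   f = 1.44 / ((R1 + 2*R2) * C)
def f555(r1_ohms, r2_ohms, c_farads):
    return 1.44 / ((r1_ohms + 2.0 * r2_ohms) * c_farads)

# Placeholder values: R1 = 10 kOhm, R2 = 100 kOhm, C = 10 nF -> ~686 Hz
print(f555(10e3, 100e3, 10e-9))
```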
05-04-2009 10:09 AM
Brian,
Interesting project. I work with several researchers who have done similar studies, although more looking at gait and balance than small scale movements.
I have not worked with sound output much, but I think it can be tricky to update the frequency while maintaining continuous output with low latency. The human auditory system is unfortunately rather sensitive to such breaks. Your idea of an external circuit modulated by a PWM output has several advantages: it guarantees a continuous output and, with appropriate filtering, smooth frequency changes. I would probably use a CMOS 4046 phase-locked loop chip for its voltage-controlled oscillator rather than a 555; it is quite easy to get frequency modulation from it. You will need to buffer the output to drive the speaker.
Lynn
05-10-2009 05:02 PM
Great... thanks for all the suggestions! I will try and work things out from here.
Best,
Brian
05-11-2009 01:22 AM
If you do not have the old LabVIEW 7.1 sound VIs (located, at least on my computer, at ...\National Instruments\LabVIEW 8.6\vi.lib\sound), you may try this: http://www.zeitnitz.de/Christian/index.php?sel=waveio. It works very well on my computer running XP and LabVIEW 8.6.
