12-03-2012 03:14 AM
Hi again Andrew,
[Apologies for the late response; I had typed most of this last Friday already, but got a bit stuck on the jitter/timing stuff and tried to turn that into a useful question today.]
Does this help explain?
I'm not sure whether to say yes-but or no-but. By now it all seems more confusing, BUT that will be because I am not knowledgeable in this specific field. But okay, that's what this thread is for.
From here on we must be careful not to miscommunicate because of my English, because I'm sure that's part of it.
The 10 s file example was only an example, meant to make clear that the THD figure is ruined when the captured stream is cut at each "memory fill" in the digitizer. So drop the 10 s, because in reality it is MINUTES, or complete music tracks. I had hoped to make this clear in post #1 and otherwise in my second post.
There is no way that any digitizer, whatever its amount of memory, will do the job here on its own. Not an 8 MB version and not a 256 MB version. In fact the 8 MB version will do just as well: the capture has to be "infinite" anyway, so the solution has to work with any amount of memory. I also didn't mention that the data is 32 bits per sample per channel, so it's already two times worse than you thought. That's no problem for me, because I knew from the start that it has to be some asynchronous solution, though still real time.
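To make the shape of what I mean concrete, here is a minimal sketch in Python (my own illustration, nothing NI-specific; fetch_block and process_block are stand-ins for whatever the driver and the analysis side actually provide): one thread only empties the digitizer memory, a second thread does the slow work, and an elastic buffer in between means the capture itself never has to stop at a "memory fill".

    import queue
    import threading
    import time

    buf_q = queue.Queue(maxsize=64)        # elastic buffer between the two sides

    def fetch_block():
        """Stand-in for the driver fetch; returns the next filled buffer."""
        time.sleep(0.01)                   # pretend the hardware takes 10 ms
        return bytes(8 * 1024 * 1024)      # e.g. one 8 MB memory fill

    def process_block(block):
        """Stand-in for the slow side: write to disk, feed the analysis, ..."""
        pass

    def producer():
        while True:
            buf_q.put(fetch_block())       # blocks only if the consumer falls behind

    def consumer():
        while True:
            process_block(buf_q.get())

    threading.Thread(target=producer, daemon=True).start()
    threading.Thread(target=consumer, daemon=True).start()
    # (in a real program the main thread would keep running here)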
A "yes-but" comprised of it now suddenly looking like nothing is possible (reading your last post). But now I won't believe that, and this is the but part.
A "no-but" comes from me not understanding why my so-normal needs are not clear, and so this is just about communication.
I think it all mixes with a "community" like this one not knowing much about audio analysis, while it is my own fault for ever diving into it this (DAQ) way. There may be a reason that nobody does. Still, it seems to have potential, and that's why I won't give up yet. "Yes-but", Andrew says now, because he can see it working too; only the effort to get there is a bit unclear.
For fun, let me try to describe the potential, or the "opportunities" if you like. Btw, notice that I may state a couple of untruths, because I am talking about equipment which is normally so expensive that "nobody" can afford it, which includes me - and because I have never had the real thing at hand, I can only guess what it may bring. So:
In the audio world, the maximum S/s on the A/D side is 384K (used by recording companies), at 24 bits, because less than 24 bits is no good anyway. More S/s is deemed not possible because the THD would be too poor to be useful. On the other side of the game is the D/A (in the consumers' homes), and this logically doesn't need to exceed 384K. An additional remark: while 384K is the maximum possible for A/D, it is actually hardly used at all; only a handful of recordings at this sampling rate exist. Today it is common to have 24/192K recordings, hence D/A converters must at least be able to cope with that. This means that analysers should be able to deal with at least 384K *if* we foresee "digital analysis" (which I do): by Nyquist, an analyser that must see everything up to the DAC's output rate of 192 kHz has to sample at 2 x 192K = 384 kS/s. Well, 384K-capable audio analysers (of thus 24 bits!) already don't exist. For the higher sampling rates (the in-analyser A/D) it's all 16-bit trickery, with smart dithering and such able to reach 19 bits, and anything more capable costs 10 times more *at least*.
This doesn't explain yet why, of all things, I want the higher sampling rates ...
Well, what we did at Phasure was create a 24-bit R2R D/A converter which can take 24/768 as input. The "R2R" part means it is 24 bits and not-SDM (Sigma Delta Modulation), and this is important because it allows "no oversampling at all" in-DAC. Notice that any common higher-sample-rate and higher-bit-depth DAC is SDM based, and this oversamples by principle (it won't work otherwise); that is in-DAC DSP/FPGA stuff. Not so with our Phasure NOS1 DAC, which doesn't do a thing. Still, the "reconstruction" of 16/44100 (CD) material is needed, which implies that the "oversampling" (which is really sheer upsampling) now has to happen in front of the DAC - in the playback software. The filtering means to do this are numerous, and one is better than another. What I created is a genuinely interpolating means, and improving on it requires looking at the results, not only listening (which would be subjective).
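To show the shape of this "upsampling in front of the DAC" idea - and only the shape, this is a standard polyphase textbook filter, NOT my own interpolation filter - a small Python sketch:

    import numpy as np
    from scipy.signal import resample_poly

    fs_in = 44_100                            # CD rate
    ratio = 16                                # 44.1K * 16 = 705.6 kS/s
    t = np.arange(fs_in) / fs_in
    x = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # 1 kHz test tone, 1 second

    # resample_poly zero-stuffs and applies a lowpass (anti-imaging) FIR.
    # The quality of exactly this filter is what there is to improve on,
    # and what I want to *measure* rather than only listen to.
    y = resample_poly(x, up=ratio, down=1)
    fs_out = fs_in * ratio                    # 705,600 S/s, ready for a NOS DAC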
And I can't look at the results, because that would need a 1.5 MHz A/D (the NOS1 takes input at up to 768 kS/s, and by Nyquist that asks for 2 x 768K ≈ 1.5 MS/s on the analysis side). Okay, 1.5 MS/s you call it.
To be hopefully clear: no DAC like this exists. All DACs have this stuff on board; they use DSP chips or SRC (Sample Rate Converter) chips with datasheets (thus with known results), and these are glued together and will work - but in a fixed fashion. Not so here.
What could be confusing in my explanation above is that we have 24/192K audio files on one side and the upsampled 16/44.1K files on the other. The former don't need anything special, because the filtering has already been applied during the (A/D) recording, or otherwise during the production (mastering engineer) process. It is the 16/44.1K files which NEED the attention, because the "reconstruction" has to be applied to them - there are too few samples in there.
Do notice that in this audio world it is this "upsampling filtering" which determines close to 100% of the sound (quality). We say that DACs sound different, but it really is the upsampling means and, in the end, how genuine the result is, THD-wise.
To be complete: this "upsampling" is not about getting a 384 kHz *frequency* out of 16/44.1 material (that frequency is just not in there), but about observing the harmonics beyond the audio band (theoretically up to infinity) and the high energy those super-high frequencies can carry when the filtering is not performed correctly (this can destroy amplifiers and loudspeakers).
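To illustrate that last point with numbers (again only a sketch of mine, taking the crudest possible "filter", namely none at all):

    import numpy as np

    fs_in, ratio = 44_100, 16
    t = np.arange(fs_in) / fs_in
    x = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # 1 kHz tone, 1 second

    naive = np.zeros(len(x) * ratio)
    naive[::ratio] = x                  # zero-stuffing, no reconstruction filter

    spec = np.abs(np.fft.rfft(naive)) ** 2
    freqs = np.fft.rfftfreq(len(naive), d=1 / (fs_in * ratio))
    inband = spec[freqs <= 22_050].sum()
    outband = spec[freqs > 22_050].sum()
    print(f"out-of-band / in-band energy: {outband / inband:.2f}")
    # With pure zero-stuffing, every image of the tone (at 44.1 kHz +/- 1 kHz,
    # 88.2 kHz +/- 1 kHz, ...) carries the same energy as the baseband tone,
    # so this ratio comes out around 15 - energy the amplifier and tweeter
    # would have to absorb if the filtering were skipped entirely.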
So here we are. I found a 24-bit A/D that goes up to 500 kS/s, and compared to the sampling speed I "require" I lose a bit here and there. Really, good enough!
I think I am now at the point where what I want can be done - then *with* the programming that goes along with it. Maybe I don't even care anymore, because of the challenge; especially when the challenge is about (throughput) speed and latency, the job should be right for me ...
The JAT natively accepts the waveform data type. You can use the NI-SCOPE driver with any NI digitizer (including the NI 5922) to return this data type (simply use this specific version of the fetch function).
I am trying to see how this can work reliably; I followed your link to "2D DBL" (is that the one?) and dug out this from it:
wfm is an array of waveforms; that is, a two-dimensional array. This output can be wired directly to the LabVIEW waveform graph, but each waveform is plotted without timing information.
But also, from the 1D WDT entry underneath it:
Retrieves waveforms the digitizer has acquired from multiple records or multiple channels. Returns a two-dimensional array of LabVIEW waveform data types that includes timing information.
... while further reading into 1D WDT reveals that this is about time stamps ...
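If I read it right, this LabVIEW waveform type is essentially (t0, dt, samples) - sketched below in Python only to check my own understanding (the field names are my guesses at the concept, not the actual API). The point: all timing inside a record comes from dt (= 1/sample rate), not from any OS timer; t0 only anchors the record absolutely.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Waveform:
        t0: float        # absolute start time of the record (timestamp)
        dt: float        # seconds between samples, e.g. 1/15e6 at 15 MS/s
        y: np.ndarray    # the sample values themselves

        def time_of_sample(self, n: int) -> float:
            return self.t0 + n * self.dt   # timing derived purely from dt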
Now, where I got stuck earlier (see the first line of this post) was my own reasoning about how 3 ps could be a real spec to work with in this "offline" analysis situation. I figured that no time data could do this, because of lacking resolution if it were about the Windows OS (where the best theoretically achievable is something like one 14-millionth of a second, while 3 ps implies something like 333 billion ticks per second). There could be a timer on the board, but then I thought that I myself wouldn't build it like that. Instead I'd assemble a super-high-frequency oscillator and count the ticks; from there, time could be derived. Does such an oscillator exist? Probably not. Anyway ...
The Jitter Analysis Toolkit is a software package that runs stand-alone and separate from the 5922 digitizer altogether. In other words, you could purchase a digitizer, OR you could purchase the JAT, OR you could purchase both.
Here you seem to be telling me that I could come up with a waveform that is allowed to originate from anywhere (I could have created a nice sine digitally in a program) and feed that to the JAT? And yes, it could still work, because the JAT can analyse distances between events and so forth. One small problem ... it will operate in the Windows environment, with timers of far from sufficient granularity ... And NOW the sampling speed comes into play, because if that were sufficiently high (those ~333 billion S/s) it would work from there (e.g. measure where the peaks ought to be and register the time-related offsets, the granularity now being 3 ps).
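If I try to guess how 3 ps could nevertheless be real without such a clock, maybe the trick is that the event *times* are interpolated between samples, so the resolution can be a tiny fraction of the sample period when the signal is clean enough. A sketch of that guess (entirely my own, not necessarily how the JAT works):

    import numpy as np

    fs = 15e6                               # digitizer rate, 66.7 ns per sample
    t = np.arange(200_000) / fs
    x = np.sin(2 * np.pi * 10e3 * t)        # 10 kHz sine

    # indices where the signal crosses zero going upward
    i = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    # linear interpolation between sample i and i+1 for the crossing time,
    # giving far finer resolution than 1/fs
    frac = -x[i] / (x[i + 1] - x[i])
    crossings = (i + frac) / fs

    periods = np.diff(crossings)            # period jitter falls out directly
    print(periods.std())                    # ~0 here; a real signal shows jitter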
You see? To me this is full of inconsistencies and impossibilities. IOW, either I don't understand much of it, or it can't work. Allow me to think the former.
How?
Regards,
Peter
12-03-2012 04:55 PM
Peter,
24 bit with an R2R ..... 48 (or 72) resistors matched to less than a ppm (1 LSB at 24 bits is 1/2^24 ≈ 0.06 ppm of full scale!) ... over temperature ... over time .... customer affordable ... max sample rate x 20 bandwidth (or was it about times 200 for 22 bit?)
Tell me more!
12-04-2012 06:06 AM
Henrik,
Your message is a bit cryptic to me. But reading into that spoiler gives me this (possibly totally off) response:
From the DAC I am talking about, I can show you a -1.5 dBFS 16-bit (!!) signal, attenuated by 141 dB, still sticking its neck out of the noise floor.
No dither ...
Inherent output noise is 6 µV (at the end of 2 m balanced interlinks).
If you are referring to 24-bit multibit D/As not being able to resolve a full 24 bits anyway (net analogue) - you must be correct. But this doesn't prevent us from getting closer than we think possible, or than what the datasheets depict.
Anyway, the article about the Josephson Voltage/Junctions/Arrays is an interesting read and resembles how DSD audio (from SACD) might have emerged (a 1-bit encoding scheme, working much like PWM (Pulse Width Modulation)). DSD generally operates at 2.8 MHz, but 5.6 MHz is also possible. The challenge there is to "shape away" the high-frequency quantization noise, which Josephson seems to address. This is fruitful not only for the 1-bit "SDM" encoding scheme, but just as well for 24-bit PCM (Pulse Code Modulation): if you look at figure 7 in there, you see the "transients" in the stepping of the wave, which imply high frequencies = harmonic distortion -> oversample that and the steps get smaller, and modern SDM D/A chips do just that.
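For whoever reads along: the "shape away" principle in its most naive form, a toy first-order sigma-delta loop of my own (real DSD/SDM converters use higher-order loops at 2.8/5.6 MHz; this only shows where the noise goes):

    import numpy as np

    def sdm_1bit(x):
        """Toy first-order sigma-delta: integrate the error, quantize to 1 bit."""
        y = np.empty_like(x)
        acc = 0.0
        prev = 0.0
        for n, s in enumerate(x):
            acc += s - prev                   # integrator of (input - feedback)
            y[n] = 1.0 if acc >= 0 else -1.0  # 1-bit quantizer
            prev = y[n]
        return y

    fs = 2_822_400                            # DSD64 rate, 64 x 44.1K
    t = np.arange(fs // 10) / fs
    bits = sdm_1bit(0.5 * np.sin(2 * np.pi * 1_000 * t))
    # An FFT of `bits` shows the quantization noise pushed up in frequency,
    # leaving the audio band clean - exactly the "shaping away".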
Regards,
Peter
PS: I'm still trying to decipher your message. It might just be that you are telling me "what the heck, get that 5922!".