04-17-2018 02:14 PM
I am using a program called High Speed Data Logger2(DAQmx).vi, which I did not design but inherited when I joined the lab. I am sampling 9 channels at 40 kHz using an NI USB-6212 multifunction DAQ device. Parameters of interest in the program:
Max scans: 300,000,000
Buffer size: 1,000,000
Around data point 16777216, a perfectly normal-looking signal transforms into this on every channel:
I don't understand why.
04-17-2018 02:20 PM
Maybe sharing the VI would help us understand what's in the diagram 🙂
04-17-2018 02:20 PM - edited 04-17-2018 02:24 PM
Hi kmgurt,
I don't understand why.
Nor do we as long as you don't show your code…
Did you check for error messages in the DAQ loop?
I am sampling 9 channels at 40kHz, using NI USB 6212 multifunction DAQ device.
This makes 360kS/s…
Parameters of interest in the program:
Max scans: 300,000,000
Buffer size: 1,000,000
Do you fiddle with DAQmx buffer size? Why?
Do you set your task to read 300M samples?
Around data point 16777216
So after almost exactly 2^24 (!) samples you get into trouble? There must be a reason for this special number…
04-17-2018 02:24 PM
The VI is posted below.
04-17-2018 02:25 PM - edited 04-17-2018 02:32 PM
No errors appeared during the recording. Is there a log to check?
As for Max Scans, I just tried to make it large enough to exceed the amount of time I wish to collect data for. The buffer size I just made large.
The program has a user interface screen that looks like this:
04-17-2018 02:42 PM
Note that in the screenshot above, the buffer size is not what it was during the recording... I was playing around with some settings just to try things out and forgot to set it back.
04-18-2018 01:04 AM
Hi kmgurt,
this VI uses some weird logic to set the required number of loop iterations!
All of this could be a limiting factor, as this logic uses the file size, including a type conversion to I32…
Another point is the file access within your DAQ loop: even though Windows buffers the data before actually writing it to disk, this may also be a problem…
Why don't you run the loop for just the required number of iterations, calculated before the loop by simply dividing "number of scans" by "buffer size" (which should NOT be wired to DAQmx Timing!)? Set a reasonable buffer size of about 1/10 of the sample rate…
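For example, a quick back-of-the-envelope check with the numbers from the original post (assuming you follow the 1/10-of-sample-rate suggestion; the variable names are just for illustration):

samples_per_read = 40000 / 10;                 % ~1/10 of the 40 kS/s per-channel rate = 4000 samples
max_scans        = 300e6;                      % "Max scans" from the original post
iterations       = ceil(max_scans / samples_per_read)   % = 75000 loop iterations, known before the loop starts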
04-18-2018 10:04 AM - edited 04-18-2018 10:11 AM
Surprisingly enough, in Continuous Sampling mode DAQmx pretty much ignores the buffer size input. Instead it calculates a size "for you" based on your sample rate. (You *can* override this default behavior with an explicit call to configure the buffer. This caveat can be found if you dig down into the detailed online help, but it's far from obvious that you should *need* to.) Anyway, your actual buffer size is 100 kSamples, which strikes me as just fine for this application.
Looking over your code, here are things I'd focus on:
1. confirm whether you want/need the 2D transpose operation.
2. read a fixed-size # of samples on *every* iteration. If you slightly exceed your Max #, so what? It was an arbitrary user input anyway.
3. consider reading a scaled 2D array of DBLs. The upside is calibrated voltages instead of uncalibrated A/D counts, and an unambiguous storage layout on disk (no endian-ness to deal with like with integers). A sketch of the corresponding Matlab read follows this list.
4. consider putting all of the file-write stuff into an independent consumer loop and feeding it data via a queue.
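Regarding #3: if you did switch to writing scaled DBLs, the Matlab read would only need a small change, something along these lines (sketch only; it assumes LabVIEW's default big-endian byte order, 9 channels, and that the header has already been skipped):

data = fread(fid, [9, Inf], 'double', 0, 'ieee-be');   % scaled voltages instead of raw int16 counts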
All that being said, my hypothesis is that your DAQ and file writing code are not the (direct) cause of the symptom you show. The binary file writes should keep up with your data rate and even if they don't, your program would terminate and inform you of an error.
Your graphs have a Matlab-y look, and I suspect there's a problem in the script you use to read, *interpret*, and plot the data from the binary file. I further suspect that part of the problem derives from one of the following:
5. data format layout on disk. There's both "endianness" and the specific layout of 2D array data that was written in chunks. Presumably your header defines the # channels, but nothing defines the # samples written per chunk. Your transpose operation *seems to* make you relatively insensitive to this fact, but this is an area I'd look at.
6. data offset on disk. Unless your header-writing subvi embeds info about the header length, how does the reader know where the header ends and the data begins?
I'd recommend you do a quick experiment to read one of these problematic binary files in LabVIEW and see what the graph looks like there. I'm expecting that once you read it the right way (correct data offset, correct endianness, hopefully known 2D chunk sizes, correct use/nonuse of transpose), the graph won't show the same anomalies.
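To illustrate what I mean by "the right way", a minimal Matlab read might look roughly like this (untested sketch; the file name, the header length, the 9-channel count, LabVIEW's default big-endian byte order, and int16 samples grouped one scan per column are all assumptions you'd need to confirm against your actual writer):

header_bytes = 100;                            % placeholder -- use the real header length
nchan        = 9;                              % placeholder -- use the channel count from the header
fid = fopen('datafile.bin', 'r', 'ieee-be');   % LabVIEW writes big-endian by default
fseek(fid, header_bytes, 'bof');               % skip past the header to the first sample
raw = fread(fid, [nchan, Inf], 'int16');       % one row per channel, one column per scan
fclose(fid);
plot(raw(1, :))                                % inspect channel 1 across the whole file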
-Kevin P
P.S. I strongly suspect that the 2^24 "magic number" that GerdW noticed in msg #3 will turn out to be relevant somehow. I don't have any specific theories about how, but coincidences like that typically turn out to be meaningful...
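One property of that number worth keeping in the back of your mind (it may or may not be the culprit): 2^24 is where single-precision floats run out of integer resolution, so if anything in the read/plot chain (a sample counter, a time axis, a cast) passes through single, it would start misbehaving right around sample 16777216. Easy to see in Matlab:

single(16777216) + 1 == single(16777216)   % true: 16777217 is not representable in single precision
single(16777215) + 1 == single(16777216)   % true: integers are still exact up to 2^24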
04-18-2018 10:48 AM
Thanks everyone for your time and input, it's very helpful. Kevin_Price is right on the nose that I'm reading these binary files in Matlab. There is actually a header-writing subvi that embeds info about the header length, which I will now include. I'm also going to include the Matlab script that I'm using to read these guys. (I tried including one of the data files, but it's just too big. Here's a Google Drive link to one: https://drive.google.com/open?id=12zXi3IebDc1xRVF3ZHnvVN626piG4Qer) If there were something wrong with the header offset, I would expect the problem to occur from the beginning of the graph. I don't understand why it occurs 2^24 samples in. Like you and GerdW suppose, I'm sure it has some meaning... I'm just still working out what that is.
I've tried reducing buffer size, lowering the sample rate, increasing and decreasing the max samples, and specifying the file extension, all to no avail.
Unfortunately, the guy who used this program in the lab before me is gone and not responding to communications. I know he had it working at some point at 25 kS/s for 9 channels in 80-minute files, but that's about it. I'll keep working at it and let you all know if I stumble across any solutions.
04-18-2018 11:29 AM - edited 04-18-2018 11:57 AM
I'm not at a LabVIEW machine to look at the header writing. Troubleshooting tip:
Cook up a VI that writes the same kind of header, but writes a 2-sample x 9-channel array with magic values. Each value is defined as: 24576*(sample_index - 1) + 3072*(channel_index), using 0-based sample and channel indices. These values will span the range from -24576 to +24576 and will be friendly to inspect in a binary hex editor.
Post that binary file and tell us the header length. Meanwhile, you can run your Matlab script in debug mode to verify that this very small example gets unpacked correctly. Also try to make a simple LabVIEW reader that you can debug until it unpacks the file correctly.
Just this much is likely to provide some further clues.
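And the Matlab-side check for that tiny file could be as simple as this (sketch only; the file name and header length are placeholders, and it assumes the same big-endian int16 layout as the real files):

header_bytes = 100;                           % placeholder -- use the real header length
fid = fopen('testfile.bin', 'r', 'ieee-be');
fseek(fid, header_bytes, 'bof');
vals = fread(fid, [9, 2], 'int16');           % expect the 2-sample x 9-channel magic pattern
fclose(fid);
disp(vals)                                    % compare against the 24576/3072 formula above
dec2hex(typecast(int16(vals(:)), 'uint16'))   % hex view, easy to cross-check in a hex editor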
-Kevin P
P.S. You've got a pretty fragile system here: a bunch of secret handshakes and magic sauce between the LabVIEW code and the Matlab script. They're very interdependent on things that are unenforceable. (For example, the script hardcodes an assumption of exactly 9 channels, while the LabVIEW code makes that choice changeable by the user.)
If this was my lab, someone would:
1. Define a written file format spec or use a known standard like TDMS.
2. Create a Matlab script to read the new format. It would be flexible about things like # channels.
3. Create a utility to convert all existing binary data files into that new format.
4. Confirm these conversions.
5. Modify the LabVIEW acquisition code to write to the new file format.
6. Confirm the new acquisition files.
Just sayin'.