03-13-2007 07:08 PM
03-14-2007 10:48 AM
Does anyone know if the block diagram I posted has decent code in it (see previous post)? I am wondering if I am doing something wrong, since I only get intermittent displays of data.
Thanks for any help you can provide.
03-14-2007 11:47 AM
Briefly: the code could be improved. Key points of concern:
I half suspect that those 2 things alone might fix your problem, but I can't honestly say I tried to follow all the code details. A little wire neatening would help readability for mods and debugging purposes...
-Kevin P.
03-14-2007 12:07 PM - edited 03-14-2007 12:07 PM
Message Edited by Erik J on 03-14-2007 12:10 PM
03-14-2007 02:40 PM
Thanks for the great advice everyone! I knew I would forget something. I have attached an image of the DAQmx task builder subVI I am using to create my task. This is sort of a datalogger application, and a fixed task in MAX would not be feasible. A setup VI allows the user to select channels for logging, and the sampling rate, and then fires off the attached code to build the DAQmx task.
I believe the fix being recommended is to wire in a "-1" to the timeout terminal of the DAQmx Read function, and to specify "Sample Rate/5" as the number of samples to read (I want to have display updates around 5 times a second). Then, I should ditch the millisecond wait function, correct?
I guess I am still puzzled by the fact that the millisecond wait function did not allow 20 or so samples (100 Hz * 200 ms) to accumulate in the buffer on every loop iteration, so that I would always have data to read when I came around for the next iteration. I wish someone could explain why the "software" method of timing the loop didn't work, because it sure seemed like it should to me. If the hardware method is the recommended way to go, though, I will do that from now on. I like sticking to best practices, and working with others who do, because it makes it easier to share and debug each other's code. But the fact remains that the "software" method didn't work...what is it that I don't understand about the millisecond multiple wait function?
Thanks again to everyone who is helping me upgrade my programming skills. I am famous around work for having the most beautiful wiring diagrams. I will keep working on this, and I will add a stop-on-error to my loops. The attached block diagram is a little prettier to look at.
03-14-2007 03:28 PM
Yeah, as you and Erik said, just specifying (Sample Rate / 5) as the '# to read' could do the trick. Then you can ditch the 'Wait (msec multiple)' function. I don't think I'd recommend the -1 = infinite timeout though. I'm pretty leery of stuff that can lead to an infinite wait or an infinite loop. Even a 1 sec timeout should easily be way more than enough. Note however that this method depends on all your processing code executing in under 200 msec on average. Otherwise, your reads will fall behind and you'll eventually get a DAQ buffer overflow error. My earlier suggestion to first query for # available samples and then read the MAX of (# available, SampleRate/5) will prevent cumulative fall-behind effects.
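Since LabVIEW diagrams can't be pasted as text, here is a rough Python analogue of the read-sizing rule Kevin describes (the names and numbers are illustrative, not DAQmx calls): read at least one display update's worth of samples, but read more whenever a backlog has built up, so the loop can never fall cumulatively behind.

```python
# Illustrative sketch of the MAX(# available, SampleRate/5) read-sizing rule.
SAMPLE_RATE = 100            # Hz (assumed, matching the thread's example)
CHUNK = SAMPLE_RATE // 5     # 20 samples ~= 5 display updates per second

def samples_to_read(available):
    """Read a full chunk normally, but drain any backlog in one read."""
    return max(available, CHUNK)

# A 150-sample backlog drains immediately instead of lingering:
print(samples_to_read(150))  # → 150
# With only 8 samples buffered, the read still asks for a full chunk
# (DAQmx Read would then block until 20 samples have arrived):
print(samples_to_read(8))    # → 20
```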
There are too many unknowns to speculate with any confidence on exactly why the software timing method didn't work. I can point out some additional things that bear greater scrutiny, though.
1. You've got some sort of function dealing with file streaming that takes a path input and produces a path output. This probably means that every loop iteration you're using the path to open a file, write data, and close the file again. This actually may consume more than 200 msec, at least some of the time. Because this runs in parallel with the msec wait function, something like the following could be happening:
A. Iteration 1 proceeds as expected. The DAQ read collects 20 samples, the file write consumes only 100 msec, and the wait msec function ends after 160 msec in order to end on a 200 msec multiple. The wait function took longest, so your whole loop ends on a 200 msec multiple.
B. On iteration 2, the wait msec function ends after 200 msec on the next 200 msec multiple. The DAQ read collects another 20 samples (because it's been pretty much exactly 200 msec since the previous loop iteration started) right away. However, Windows was busy messing with the file cache so this time your file write consumes 375 msec. The file function took longest so your whole loop ends 175 msec into the next 200 msec multiple.
C. On iteration 3, the wait msec function ends after 25 msec on the next 200 msec multiple. The DAQ read collects 37 samples that have come in since the prior call 375 msec ago. The file function consumes only 50 msec this time, ending the loop 25 msec into the next 200 msec multiple.
D. On iteration 4, the wait msec function ends after 175 msec at the next 200 msec multiple. The DAQ read collects the 5 samples that have come in since the prior call 50 msec ago. Uh oh, not enough samples...
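The four iterations above can be reproduced with a small simulation. This is a hypothetical Python model, not DAQmx code: each loop iteration reads all samples accumulated since the previous read, then lasts until the later of the next 200 msec multiple and a file write of variable duration running in parallel.

```python
# Simulation of the drift scenario: a 100 Hz acquisition read once per loop,
# where the loop period is the LONGER of "wait until the next 200 ms
# multiple" and a file write whose duration varies (Windows file caching).
RATE_HZ = 100
PERIOD_MS = 200

def simulate(write_times_ms):
    t = PERIOD_MS          # first iteration begins at the first 200 ms multiple
    last_read = 0          # timestamp (ms) of the previous DAQ read
    reads = []
    for write_ms in write_times_ms:
        # "Read all available": samples accumulated since the last read
        reads.append(int((t - last_read) * RATE_HZ / 1000))
        last_read = t
        # The msec-multiple wait ends at the next 200 ms boundary; the file
        # write runs in parallel, so the loop ends at whichever is later.
        next_multiple = ((t // PERIOD_MS) + 1) * PERIOD_MS
        t = max(next_multiple, t + write_ms)
    return reads

# File writes of 100, 375, 50, ... msec reproduce iterations A through D:
print(simulate([100, 375, 50, 100]))  # → [20, 20, 37, 5]
```

The one slow write on iteration 2 is enough to leave iteration 4 with only 5 samples, even though the average loop rate looks fine.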
One possible fix is to open the file outside the loop and leave it open until after the loop is done. Opening and closing files has quite a lot of overhead. Inside the loop, you'd be passing around the file refnum. Doing this one thing alone might also be a way to fix your timing problem. Along similar lines, you could write the data to a queue and then do your file writes in an independent loop that reads the data out of the queue. You can search here for "producer consumer" for more info.
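For readers outside LabVIEW, the producer/consumer structure described above looks roughly like this in Python (the names, the 5-sample blocks, and `log.txt` are all illustrative): the acquisition loop only enqueues data, and a separate loop owns the file, opening it once before the loop and closing it once after.

```python
# Text-language analogue of the LabVIEW producer/consumer pattern:
# the producer never touches the file; the consumer opens it ONCE,
# outside its loop, avoiding per-iteration open/close overhead.
import queue
import threading

data_q = queue.Queue()
SENTINEL = None  # enqueued last, tells the consumer to stop

def producer(n_blocks):
    """Stand-in for the acquisition loop: enqueue blocks of samples."""
    for i in range(n_blocks):
        data_q.put([i] * 5)          # stand-in for one block of DAQ samples
    data_q.put(SENTINEL)

def consumer(path):
    """File-writing loop: the file stays open for the loop's lifetime."""
    with open(path, "w") as f:       # opened once, closed once
        while True:
            block = data_q.get()
            if block is SENTINEL:
                break
            f.write(",".join(map(str, block)) + "\n")

t = threading.Thread(target=consumer, args=("log.txt",))
t.start()
producer(3)
t.join()
```

In LabVIEW the two loops would sit side by side on the diagram, connected only by the queue refnum; the same decoupling keeps a slow file write from delaying the DAQ read.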
-Kevin P.
03-14-2007 04:45 PM
D'oh!
You guessed it right on the button about the file open/close. Nice one.
I can't believe I forgot this. It makes sense now. I thought closing the file reference would free up resources, but now I remember learning something like this, even back in BASIC and Pascal in high school: open the file, but don't close it until you are absolutely finished with it. I also recall encountering this recently with Excel VBA. The recommendation there is that if you will be accessing the VBA object model repeatedly in one section of code, you should set an object variable pointing to that part of the object model, or use the With...End With structure. I will implement the file open and close outside the loop.
Also, my thinking was clouded a little by assuming the loop would always take about the same amount of time on each iteration. I've timed loops before, and thought that was true. But I forgot about Windows coming in every now and again to mess things up, and other random, unpredictable overhead. If the loop ends just past a millisecond multiple, the next iteration can come around almost immediately, and after having "read all", the buffer could be nearly empty (less than 10 ms since the last read). I must have been alternating between no data and having some.
If using "MAX(Samples Available, Sample Rate/5)" alleviates this problem, I will be glad to forget all about the attempt to software time this. What I won't forget is how helpful you guys have been. Thank you very much. I will also remember to wire better, use stop on error for loops, and use hardware timing whenever possible. Maybe I should stop letting LabVIEW decide where to plop the wires and take more control! I also need to take that course on LabVIEW design patterns (producer/consumer).
03-15-2007 08:37 AM
03-15-2007 09:00 AM
I just wanted to report back that I tried the advice offered here, and it works beautifully! I couldn't be more delighted. Using the DAQmx Read function to time the loop is indeed the way the application should be programmed. I am indebted to the fine programmers who offered their help here. I want to sincerely thank both of you. Hopefully, someone else will find this thread when needed and save themselves a lot of trial and error.
Thanks again, and best wishes!