05-11-2013 02:04 PM
I am looking at using the Continuous Measurement and Logging sample project as a starting point for my project. The template shows the use of a single data source at a single rate. In my case I have multiple data sources that will arrive at different rates and need to come together and be logged to the same file, with the fastest arrival dictating the log entry and the slower data repeating their last values. It is a slow data collection process, with sample periods ranging from 1 second to around 10 seconds. It is more of a monitoring application with logging than a data acquisition application. Since the waveform data type is base-t plus delta-t and my data arrives at varying rates, my first task would be to remove the use of it and define a typedef for my data. I would use a single logging message loop, but what approach can anyone recommend for the acquisition? I imagine separate acquisition message loop VIs for each data source, but what about placement? Placing them all at the top level vs. placing them within the existing acquisition message loop VI? Any other thoughts or recommendations are appreciated. Thanks in advance!
05-12-2013 01:32 AM
If you want to sample one channel every 10 seconds and another one every second... what is your file going to look like? Does this mean that for the slower channel you just want to repeat the same value for the other 9 rows?
The sampling speeds that you are talking about are really slow. It would certainly not hurt your hardware to sample each channel at the same speed. And if you really just need one sample every 10 seconds, you can always decimate your array of values.
For instance :
Channel 1: 1 point per second (sample rate) => 10 points in 10 seconds
Channel 2: wanted 1 point per 10 seconds => sample at the same speed, giving 10 points per 10 seconds => decimate the array (keep 1 out of every 10 values) => 1 point per 10 seconds
This is going to be the easy way. Otherwise you can start with parallel loops running, but that's "not that simple" anymore 🙂
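The decimation step above can be sketched as follows. LabVIEW is graphical, so this is just a language-agnostic illustration in Python of the idea (in LabVIEW the Decimate 1D Array function plays this role); the names are illustrative, not from the post.

```python
# Sample every channel at the fast rate, then keep only every 10th value
# for the channel that is logged once per 10 seconds.

def decimate(samples, factor=10):
    """Keep one value out of every `factor` samples."""
    return samples[::factor]

channel2 = list(range(20))      # 20 seconds of data acquired at 1 sample/s
slow = decimate(channel2, 10)   # -> [0, 10]: 1 point per 10 seconds
```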
05-12-2013 11:41 AM
Thanks Bjorn. I forgot to mention that the process which needs 1-second captures isn't always running. The 10-second capture is always running and monitoring. When the user decides to run that particular process, we capture its data every 1 second. This may occur maybe once or twice per day at most. So to answer your question, the file would typically have updates every 10 seconds. When the "faster" process is running, the other 9 values will simply hold their last known values. I like your suggestion of collecting at the faster rate. However, it will be a continuously running system that must retain the historical data for at least a week and maybe months, and I am presently using an ASCII file format. At present the system only runs for a day or two. The code creates a new time-stamped file if the size gets too big. I am definitely planning to switch to TDMS, but that brings some other issues/questions, perhaps for another post.
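The "hold last known values" logging described above can be sketched like this. This is a hypothetical Python illustration of the pattern, not LabVIEW code; the channel names and `on_new_sample` helper are assumptions for the example.

```python
# Each channel keeps its last known value; a full log row is written
# whenever ANY channel delivers a new sample, with the stale channels
# simply repeating their previous value.
import time

last_values = {"slow_monitor": None, "fast_process": None}

def on_new_sample(source, value, log):
    """Update one channel, then log a row holding every channel's last value."""
    last_values[source] = value
    row = [time.time()] + [last_values[s] for s in sorted(last_values)]
    log.append(row)

log = []
on_new_sample("slow_monitor", 3.2, log)
on_new_sample("fast_process", 7.7, log)  # slow_monitor's 3.2 is repeated
```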
I am still looking for thoughts on "folding in" the acquisition of data from multiple sources/instruments. Each data source/instrument needs to interact with only portions of the top-level UI, such as receiving commands from the user and updating graphs, etc. So there is still the question of separate acquisition loops, display loops, and separate queues. I can certainly make it work, but I'm looking for any wisdom and pitfalls. Any thoughts are appreciated. Thanks!
05-13-2013 05:45 AM
Realized it is also important to note that I have a 3rd-party DAQ as well as serial sensors and gauges. The DAQ can be configured for single-sample or free-running acquisition (a sample every N seconds), but the other devices must be polled. Thanks to anyone for sharing your thoughts!
05-13-2013 05:52 PM
I would use a producer/consumer architecture with multiple producer loops all writing to the same queue. I would also create a new typedef that contains the data you are storing as well as a value describing where the data came from. That way, when you dequeue the data, you can check which source it came from and write all of that information to the file. That should be enough stored info for you to do whatever you want with the data afterwards.
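The multiple-producer / single-consumer pattern suggested above can be sketched as follows. Since LabVIEW is graphical, this is a hedged Python illustration of the same idea: the `Sample` dataclass stands in for the suggested typedef cluster, and the source names are invented for the example.

```python
# Multiple producers tag their data with a source name and timestamp and
# put it on one shared FIFO queue; a single consumer dequeues everything
# and can log each sample along with where it came from.
import queue
import time
from dataclasses import dataclass

@dataclass
class Sample:            # plays the role of the LabVIEW typedef
    source: str          # which instrument produced the data
    timestamp: float
    value: float

q = queue.Queue()        # thread-safe, shared by all producer loops

def producer(source, value):
    q.put(Sample(source, time.time(), value))

def consumer(n):
    rows = []
    for _ in range(n):
        s = q.get()                       # single dequeue point
        rows.append((s.source, s.value))  # a real app would write to file here
    return rows

producer("DAQ", 1.0)
producer("serial_gauge", 2.0)
rows = consumer(2)
print(rows)
```

In a real application each `producer` call would live in its own acquisition loop (or thread), which is exactly why the shared queue needs to be thread-safe.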
05-14-2013 01:51 PM
Thanks Tim-C. Yes, I envisioned multiple producer loops. I like the idea of tagging the data with a source and maybe even a timestamp. Do you or anyone else have ideas on the UI and user-interaction portion? Suggestions for tying the various front panel controls (clusters or typedefs and individual controls) belonging to specific devices/data sources to their corresponding acquisition loops? Lump all UI events, regardless of device, into one event handler loop and one UI message loop? Separate UI queues?
I was thinking about a separate lvlib for each device, containing a front panel control and a core VI. For now the devices will be determined at design time, as will the location of their controls on the front panel. At startup, I could "connect" the various front panel controls (be they clusters, typedefs, or individual controls) to their respective core VIs by passing a reference.
Thanks again for all the great feedback!
05-15-2013 01:58 PM
You could have each acquisition loop that triggers on an event be a state machine, with the first state of the state machine containing an event handler for the control. It sounds like you have a good handle on the issue. I would just keep trying things using standard architectures like event-driven state machines and producer/consumer loops.
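The event-driven state machine suggested above can be sketched roughly like this. Again this is an illustrative Python sketch of the pattern, not LabVIEW: the state and event names (`idle`, `acquire`, `log`, `start`) are assumptions for the example.

```python
# The machine idles in a wait-for-event state; a "start" event kicks off
# acquire/log states before it returns to idle. In LabVIEW the idle state
# would contain the Event Structure waiting on the front panel control.
def run_state_machine(events):
    state, trace = "idle", []
    for ev in events:
        trace.append(state)
        if state == "idle" and ev == "start":
            state = "acquire"      # user pressed the control
        elif state == "acquire":
            state = "log"          # sample taken, go log it
        elif state == "log":
            state = "idle"         # row written, wait for next event
    return trace

# e.g. run_state_machine(["start", "tick", "tick"]) walks idle -> acquire -> log
```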