Multifunction DAQ

Best way to log data.

I've got a minor problem here with data logging. The code snippet shows two tasks of Analog 1D N Chan N Samp waveform capture, each reading 10 samples per iteration. I'm splitting the signals into two outputs and averaging them before displaying them on the chart during the test. The way I have it logging now, I collect the 1D array of waveform data at the loop output, convert it back to Dynamic Data, and write that to my TDMS file. The problem is that this only seems to log the last of the ten samples for each iteration. Is there a better way to do this, where I either log all the samples per loop and process the data later, or calculate the average first before writing to the file? I've tried writing to the file each loop, but that creates a worksheet tab for each iteration. I would like the file output to stay in the format of the attached file (converted to Excel).
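
To make the goal concrete, here is a rough sketch in Python of the file behavior I'm after (using the npTDMS package; group and channel names, sizes, and data are made up): every iteration's full block of samples appended to the same channels in one group, instead of a new worksheet per iteration.

```python
import numpy as np
from nptdms import TdmsWriter, ChannelObject

N_CHANNELS = 4
N_SAMPLES = 10  # samples read per loop iteration

# Keep one writer open for the whole test; segments written to the
# same group/channel names are concatenated when the file is read,
# so the Excel importer sees one worksheet, not one per iteration.
with TdmsWriter("log.tdms") as writer:
    for iteration in range(100):
        # Stand-in for the DAQmx read: one block, channels x samples.
        block = np.random.randn(N_CHANNELS, N_SAMPLES)

        # Per-channel average of this block (what the chart shows).
        averages = block.mean(axis=1)

        # Log the full raw block, not just the averages.
        writer.write_segment([
            ChannelObject("Test Data", f"Channel {ch}", block[ch])
            for ch in range(N_CHANNELS)
        ])
```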

 

As usual, there was no time to develop this test (it needed to be done before we knew we needed to do it), and I know I should be using a Producer/Consumer design (haven't quite got that figured out yet) to do the final file processing.

 

Thanks in advance.

Message 1 of 9

Hello Denslen,

 

What does the Recreate Channel names in DD.vi subVI do? What is the Data Out coming from that subVI?

 

Best Regards,

 

Alina M

Applications Engineering

National Instruments

Message 2 of 9

Thanks for replying, Alina. Since I don't log the data from the task input, I lose the channel names I created in the task when I convert to an array of data. Also, since I reordered the channels, I wanted the column headers to have the correct channel names.

Message 3 of 9

DD is shorthand for Dynamic Data.

Message 4 of 9

Hello Denslen,

 

I recreated a VI similar to yours. I used a DAQ Assistant inside a while loop and indexed the signals at the boundary. I merged them and then wired that into Select Signals.vi. I created an indicator at the Signal Out output; I can use either a graph indicator or a DBL numeric indicator. I put probes on the wires and observed that I do get an array of waveforms, which is then converted to dynamic data. There is an array of data both at the input and at the output, but when I use the DBL numeric indicator I only get one value, and it corresponds to the last value in the dynamic data. Could you double-check with probes on your VI and see whether you get the same behavior after Select Signals.vi?

 

Best Regards,

 

 Alina M

Applications Engineering

National Instruments

 

Message 5 of 9

Hi Alina, yes, you are seeing the same behaviour I am.

 

Upon further inspection with probes, it appears that when I convert the 2D array of waveforms at the loop output to Dynamic Data, I used the [2D array of scalars-columns are channels] DDT conversion (see the attached code snippet). I didn't really give it much thought until now because LabVIEW let me wire it that way, although it does show a coercion dot on the input, which should have been a clue to the problem. So the question is: why isn't there a [2D array of waveform-columns are channels] polymorphic choice in the DDT converter, and why does it pick the last value in the array instead of, say, the first value, which would almost make more sense?

 

That said, I can still work with it the way it is, although it still isn't what I truly want to do. I would also like to know why it behaves this way.

 

This still doesn't answer my original question of what might be a better way to log the data. Is it better to log the raw data using the task logging feature? I'm trying to get away from writing to the file each loop iteration, and I realize the indexing feature on the output of the loop can create problems if the size gets too large. Also, I haven't quite gotten used to queueing data to send to a consumer loop for processing and file writing, although I know that is probably the optimal way to do it. I also haven't worked with arrays of waveforms and am apprehensive about manipulating the data without corrupting it by losing the timing relationship. I guess I have a bit of practicing to do.
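
For reference, the task logging feature mentioned above streams the raw samples straight to TDMS with no file code in the loop at all. A sketch of that feature through NI's nidaqmx Python API (the device name, rate, and file path are placeholders):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")  # placeholder device
    task.timing.cfg_samp_clk_timing(
        1000.0, sample_mode=AcquisitionType.CONTINUOUS)

    # DAQmx itself streams every sample to the TDMS file; LOG_AND_READ
    # also hands the data back so the loop can still average and chart.
    task.in_stream.configure_logging(
        "raw_log.tdms",
        logging_mode=LoggingMode.LOG_AND_READ,
        operation=LoggingOperation.CREATE_OR_REPLACE)

    for _ in range(100):
        block = task.read(number_of_samples_per_channel=10)
        # ...average and display here; no TDMS write needed in the loop.
```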

 

 

Message 6 of 9

Producer/Consumer is probably the best solution. It allows you to choose suitable timing for the DAQ acquisition separately from suitable timing for file writing. With the examples available and the many posts (especially on the LabVIEW board) you can find lots of help to get started. It may take a little more time up front to learn how to use it, but over the long term you will save countless hours by knowing how to use that powerful tool.
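
The skeleton of the pattern is small. Here it is sketched in Python as a stand-in for the two LabVIEW loops (made-up sizes, and a plain text file in place of TDMS): the producer only acquires and enqueues; the consumer averages and writes at its own pace.

```python
import queue
import threading
import numpy as np

data_queue = queue.Queue()
SENTINEL = None  # enqueued last, tells the consumer to shut down

def producer(iterations):
    for _ in range(iterations):
        block = np.random.randn(4, 10)  # stand-in for a DAQmx read
        data_queue.put(block)           # hand off; never touch the file
    data_queue.put(SENTINEL)

def consumer(path):
    with open(path, "w") as f:
        while True:
            block = data_queue.get()
            if block is SENTINEL:
                break                   # producer is done
            averages = block.mean(axis=1)
            f.write("\t".join(f"{v:.6f}" for v in averages) + "\n")

writer = threading.Thread(target=consumer, args=("averages.txt",))
writer.start()
producer(100)
writer.join()
```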

 

I avoid the dynamic data type (also known in some circles as the evil DDT) as much as possible. You just have no way of knowing what is in there, how to convert it to what you want, or whether it is even possible to do so.

 

Lynn

Message 7 of 9

Thanks Lynn. As I mentioned, I was pretty sure I would have to go that route. Most of my experience with LabVIEW has been controlling discrete test instruments for semiconductor characterization, where collecting the data into an array or cluster and writing it to an Excel file wasn't a problem. This DAQ hardware is still new to me, and lately I've been working in an environment of always being told "there is not enough time to do that," with "that" being the right way to do it; somehow there always seems to be time to do it over.

 

Regarding queueing the data to send to the consumer loop, I'm guessing I could just shove the 1D array of waveform data into the queue. After I figure out how to extract the data from the array and average it, I could probably keep the DDT for writing to the file for now, without having to spend too much time on that part.
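
On the timing worry: if t0 and dt travel with the samples, the way a LabVIEW waveform carries them, averaging the Y array cannot disturb them. A sketch in Python (all values made up):

```python
import numpy as np
from datetime import datetime, timedelta

# A waveform-like record: timestamp of the first sample, sample
# interval, and the raw samples (4 channels x 10 samples, made up).
t0 = datetime.now()
dt = 1e-3  # 1 kHz sample clock
block = {"t0": t0, "dt": dt, "Y": np.random.randn(4, 10)}

# Averaging only touches Y; t0 and dt ride along untouched.
averages = block["Y"].mean(axis=1)

# If a timestamp for the average is ever needed, it is recoverable:
# the center of the block is t0 + dt * (N - 1) / 2.
n = block["Y"].shape[1]
center = t0 + timedelta(seconds=dt * (n - 1) / 2)
```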

 

I hadn't heard about the evils of DDT before, but I can see why it might get that connotation. I obviously had no idea that my data was being altered. I really wasn't paying too close attention to it, since I originally was only taking one reading per iteration, but then decided to take more readings and filter them for an average. That's when I noticed a discrepancy between what I was getting on the front panel and the log file.

 

I fell into the DDT trap by using the Express VI to get started, and I liked having the ability to output it to a waveform and log to a TDMS file without having to code it (see above). When I'm working with relatively small amounts of data, it's just too easy to open the file with the Excel importer and hand it over to someone else. No matter what I suggest as an alternative for dealing with the data, I get the "can you just give it to me in Excel" question, so I guess that is something else I'll have to work on.

 

I don't have DIAdem, but I have used a program called Spotfire, which is quite powerful, especially if you have huge amounts of data. Again, getting tuned up on SQL would be helpful there also.

 

Thanks again for your input.

 

Dan

Message 8 of 9

Dan,

 

When faced with that kind of management, you sometimes have to go into stealth mode.  Sneak a bit of time here and there to learn how to do it right and then incorporate that when you can.

 

Producer/Consumer is a high-level architecture, so you will not be able to sneak that in. But things like moving away from Express VIs and DDT are easier. You can always point to your lower-level code and emphasize that it is faster, more efficient, more robust, does exactly what is required, or whatever "more" is most likely to be pleasing to management today.

 

Before Producer/Consumer became popular or went by that name, I had created a subVI which handled all file operations. It consisted of a parallel loop containing a state machine. The states were Idle, Create, Write, Close, and Halt. The data was passed via a queue; each queue element was a cluster of two parts, one a state value and the other the data. That was before the waveform data type was available, but you could use a similar structure with arrays of waveforms. Because all the data is passed via the queue, there is no data dependency between this VI and the main VI. Just make sure the main VI sends a Halt command before attempting to shut down, or it will wait forever for the File VI to finish.
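
In text form, that handler looks roughly like this (a Python sketch; the command names are the states above, everything else is made up):

```python
import queue
import threading

cmd_queue = queue.Queue()  # carries (command, data) pairs, like the cluster

def file_handler():
    f = None
    while True:
        command, data = cmd_queue.get()
        if command == "Create":
            f = open(data, "w")    # data = file path
        elif command == "Write" and f is not None:
            f.write(data)          # data = preformatted text
        elif command == "Close" and f is not None:
            f.close()
            f = None
        elif command == "Halt":
            if f is not None:
                f.close()
            break                  # exit the loop; thread ends
        # "Idle" or anything else: do nothing, wait for the next command

handler = threading.Thread(target=file_handler)
handler.start()
cmd_queue.put(("Create", "run1.txt"))
cmd_queue.put(("Write", "1.23\t4.56\n"))
cmd_queue.put(("Close", None))
cmd_queue.put(("Halt", None))  # send before shutdown, as noted above
handler.join()
```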

 

In the past it was easy to move people away from Excel: you just gave them a dataset too large for Excel to open. When the limit was 65,536 rows and 256 columns, that was pretty easy to do. Now the limit is about 1 million rows and 16,000 columns, so it is harder to overload it.

 

Good luck.

 

Lynn

Message 9 of 9