12-06-2012 06:43 PM
Hi,
I'm trying to understand the best way to acquire a single value from a data file (or possibly an array?). I have a calculation that determines how many events before the present one to search back for. I need a way to return the value associated with that event (see the attached JPG).
Please note this is simplified. The VI I would like to create must be able to "look back" more than 10,000 events before the current event.
Could someone advise me on the best method to achieve this? Am I better off writing my data directly to an array rather than to an LVM file? Or should I read the data from the LVM file (or TDMS or TDM?) into an array and then look for the value within that array?
Also, which LabVIEW VIs are best to use to perform the query that finds the output data?
Many thanks
Kim
12-07-2012 01:05 PM
Keeping all of the information in arrays alone is not a good idea. A better way is to read a piece of the LVM file into an array, look for your value there, and then repeat the process with the next piece. If you want it to be faster, you can switch to TDMS files.
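For what it's worth, here is a rough sketch of that chunked-search idea in text code (Python) rather than LabVIEW, assuming the LVM file is tab-delimited with the event number in the first column and the value in the second; the file name, column order, and header length are placeholders only:

    # Search an LVM-style text file one chunk at a time instead of loading it all.
    def find_event_value(path, target_event, header_lines=23, chunk_size=10000):
        chunk = []
        with open(path, "r") as f:
            for _ in range(header_lines):          # skip the LVM header block
                next(f)
            for line in f:
                fields = line.split("\t")
                chunk.append((int(float(fields[0])), float(fields[1])))
                if len(chunk) == chunk_size:
                    for event, value in chunk:     # search this piece, then move on
                        if event == target_event:
                            return value
                    chunk = []
        for event, value in chunk:                 # search the final, partial piece
            if event == target_event:
                return value
        return None

    print(find_event_value("data.lvm", target_event=12345))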
12-07-2012 01:15 PM
Well, if you only have two columns like this, "Event #" and "ID", you could create a lookup table using variant attributes. However, that table is tied to the lifespan of your VI, and you wouldn't be able to reference it later unless you then wrote the variant to a file. Pulling values out of a variant lookup table is faster than searching arrays.
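In text-code terms (Python here, purely as an analogy), the variant-attribute table behaves like a key-value map, so fetching a value is a single keyed access rather than a linear search; the sample data below is made up:

    # Placeholder sample data standing in for the two columns.
    event_numbers = [1, 2, 3, 4, 5]
    ids = [17.2, 18.4, 19.1, 20.0, 20.8]

    # Build the table once: key = event number, value = ID.
    lookup = dict(zip(event_numbers, ids))

    # Fetching a single value is then a keyed lookup, not an array search.
    print(lookup.get(4))        # 20.0
    print(lookup.get(9999))     # None when that event isn't in the table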
However, have you considered using an SQLite database? You could write queries directly against your database. This method might work better if you have several columns of data, and it is fairly fast.
I have found TDMS a little clunky for pulling data back out. Since it is a binary format you can write to it very quickly, but to query the information back out you have to read the entire file, which can be slow when working with large data sets.
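Here is a minimal sketch of the SQLite idea in Python, just to show the shape of the queries; the file, table, and column names are invented for illustration (in LabVIEW you would go through an SQLite library rather than text code):

    import sqlite3

    conn = sqlite3.connect("events.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (event_num INTEGER PRIMARY KEY, value REAL)"
    )

    # Log each event as it happens; the primary key gives an index for fast lookups.
    conn.execute("INSERT OR REPLACE INTO events VALUES (?, ?)", (12345, 987.6))
    conn.commit()

    # Pull back the value for one particular event, e.g. 10,000+ events ago.
    row = conn.execute(
        "SELECT value FROM events WHERE event_num = ?", (12345,)
    ).fetchone()
    print(row[0] if row else None)
    conn.close()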
12-09-2012 07:21 PM
Hi, TDMS does need to scan the file when querying information back. However, it only reads the metadata (segment headers) and skips the raw data sets during the scan, rather than reading the entire file. So whether it is slow or not depends on the number of headers in the TDMS file rather than on its size.
03-21-2013 07:18 PM
Ok
I worked out a way to achieve the desired result using a cascading sequence of shift registers. As each new value enters the shift-register chain, it pushes the older values down the list. A simple Equal? function then takes care of working out what the lag depth is. See the attached VI.
(Set the pump stroke rate to 60 SPM and the lag strokes to 120. All you need to do then is click on the depth advance button to simulate drilling. The lag depth then changes after ~2 min)
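For anyone reading along, the shift-register cascade behaves like a fixed-length buffer: each new sample goes in one end, and the value from N events ago is available at the other. A rough Python sketch of the same idea, with placeholder names and the lag set to 120 to match the example settings above:

    from collections import deque

    lag_strokes = 120
    history = deque(maxlen=lag_strokes + 1)   # newest value plus 120 older ones

    def record_event(depth):
        history.append(depth)                 # new value pushes the oldest one out
        if len(history) == history.maxlen:
            return history[0]                 # the depth from lag_strokes events ago
        return None                           # not enough history accumulated yet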
This method works very well; however, I'm wondering whether it is the best way to achieve this. Are there any disadvantages to using the shift-register chain rather than querying a file or an array?
Cheers
Kim