datalog, Binary and Binary+Flattened formats

Hi,

My application is a combination of a control system and a data logger running on LabVIEW RT. It requires high-precision timing along with concurrent data acquisition and streaming from various communication buses.

Initially I was using a datalog format with a data cluster made up of various string controls, integer controls, etc. The file I/O works fine under normal circumstances, but when there are frequent disk writes the system slowly loses determinism and behaves a bit erratically after some time (usually 2-3 minutes into a test case).

To counteract this, I rewrote the VIs, replacing the datalogs with binary writes: I simply converted the clusters to a string (no flattening, just a custom VI that writes ASCII) and streamed that to disk. The result was slightly promising because the system was much more deterministic :) but the side effect was that the data randomly got corrupted after a few records. :(

Another version was written where, rather than converting the cluster to a string with a subVI, Flatten To String was used on the data clusters and the result was streamed with the binary write VIs. Once execution is done, the binary stream is converted back for further use with Unflatten From String and the appropriate cluster type. Data integrity and behavior were similar to the datalogs, but it did not resolve the timing issues. :(
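For readers who haven't used it: Flatten To String serializes the cluster into a defined binary layout, and Unflatten From String reverses it given the same cluster type. A rough text-language analogy of that round trip (Python's struct module with a made-up record layout, not the actual LabVIEW flattened format) would be:

import struct

# Hypothetical record standing in for the cluster: a timestamp (double),
# a channel id (int32), and an 8-byte message payload.
RECORD_FMT = ">di8s"                  # fixed size, like a flattened cluster
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def flatten(timestamp, channel, payload):
    # Pack one record into a fixed-size binary string ("Flatten To String").
    return struct.pack(RECORD_FMT, timestamp, channel, payload)

def unflatten(blob):
    # Recover the fields from the binary record ("Unflatten From String").
    timestamp, channel, payload = struct.unpack(RECORD_FMT, blob)
    return timestamp, channel, payload.rstrip(b"\x00")

record = flatten(12.5, 3, b"CAN_MSG")
assert unflatten(record) == (12.5, 3, b"CAN_MSG")

The fixed, known record layout is what lets the reader find record boundaries again, which the ad-hoc ASCII conversion did not guarantee and which may be related to the corruption seen there.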

My question is: has anyone run into similar scenarios, and how did you resolve them? I can post snippets/screenshots of the VIs if required, but I think this should be enough to give a rough idea.

Also, has anyone used datalogs extensively enough to determine their throughput and limitations?

Thanks,
Ashm01
Message 1 of 10
Just to add to the above.

Using LabVIEW RT 8.5 on a PXI-8106 dual-core RT controller.
The frequency of disk writes would be every 50-100 ms, or whenever CAN/LIN data is available in the buffer.
Message 2 of 10
Have you tried using a TDMS file? Those have very good write speeds in my experience (I believe TDMS is supposed to have deterministic timing while saving), as long as you write data in big enough chunks or use the NI_MinimumBufferSize property (see http://forums.ni.com/ni/board/message?board.id=60&message.id=6719&requireLogin=False for how to use NI_MinimumBufferSize). The problem would be efficiently converting your clusters into something you can put into a TDMS file.
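To illustrate the "big enough chunks" point outside of LabVIEW (a minimal Python sketch writing a plain binary file, not the TDMS API; the flush threshold is just an assumption to tune):

import struct

RECORD_FMT = ">di"            # e.g. timestamp + value per sample
RECORD_SIZE = struct.calcsize(RECORD_FMT)
CHUNK_RECORDS = 1000          # assumed flush threshold

class ChunkedWriter:
    # Accumulate small records in memory and push them to disk in large
    # blocks -- the same idea NI_MinimumBufferSize applies to TDMS channels.
    def __init__(self, path):
        self._fh = open(path, "ab")
        self._buf = bytearray()

    def write(self, timestamp, value):
        self._buf += struct.pack(RECORD_FMT, timestamp, value)
        if len(self._buf) >= CHUNK_RECORDS * RECORD_SIZE:
            self.flush()

    def flush(self):
        if self._buf:
            self._fh.write(self._buf)
            self._buf.clear()

    def close(self):
        self.flush()
        self._fh.close()

w = ChunkedWriter("stream.bin")
for i in range(5000):
    w.write(float(i), i)      # most calls only touch memory
w.close()                     # leftover records hit the disk here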
Message 3 of 10

Hi Matt,

I implemented TDMS, passing the converted cluster to the data input as a string. So far the result has been an improvement over all the other methods. However, it's still not ideal: the test now fails 4-5 minutes in, whereas it used to fail 2-3 minutes in.

I also introduced the buffer-allocation property, and the outcome is a bit more erratic than without it. The system loses some time in between (disk I/O, I assume) and then the timing is once again skewed.

Although the effort does not seem to be in vain, given the improved duration :), there must be some way to make the streaming transparent on the RT system.

Regards,

Ashm01 

Message 4 of 10
How is it failing? Are the writes just taking too long, or is it something worse?

If it's taking too long, are the periods growing as time goes by, or are you just getting the occasional hiccup?
Message 5 of 10

Sorry for the long hiatus. :( (I've been busy replicating this on a smaller scale.)

It has been observed that the hiccup occurs for one cycle due to the HD write. While I wasn't posting updates, we modified our code and:

  • Tried all three formats (datalog, binary, TDMS); it really didn't matter much.
  • Created a smaller VI that only polls our CAN communication channel and streams it to disk.
  • Added a timestamping / delta-measuring mechanism to check whether the API was taking longer than expected (see the sketch after this list).
  • Created two timed loops, 1) one for changing the CAN data and 2) another for polling the bus (each on its own processor core).
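For reference, the delta-measuring mechanism in the third bullet amounts to something like the sketch below (Python, with an assumed 10 ms period and tolerance, not the actual RT code):

import time

PERIOD_S = 0.010              # assumed loop period (10 ms)
JITTER_LIMIT_S = 0.002        # assumed tolerance before calling it a hiccup

def poll_bus():
    pass                      # placeholder for the CAN polling call

last = time.perf_counter()
for _ in range(1000):
    poll_bus()
    time.sleep(PERIOD_S)
    now = time.perf_counter()
    delta = now - last
    last = now
    if delta > PERIOD_S + JITTER_LIMIT_S:
        print(f"hiccup: iteration took {delta * 1000:.2f} ms")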

Conclusion: the RT system hiccups after about 30 seconds, for about 10 ms, and then resumes. Has anyone done simultaneous control and high-speed data logging on RT and encountered this behavior? As time goes on, the hiccups accumulate into a large skew.

Regards,
Ashm01

 

Message 6 of 10
Have you separated your DAQ loop from your file I/O loop? Given a big enough buffer, and an average file-loop iteration time that is less than the DAQ loop time, you should be fine.

Here's a basic example
http://zone.ni.com/devzone/cda/tut/p/id/3934
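In case the link moves, the pattern it demonstrates is a producer/consumer split: the acquisition loop only enqueues data, and a separate lower-priority loop dequeues it and does the slow disk writes. A minimal text-language sketch of the same idea (Python threads and a queue; buffer depth, rate, and file name are assumptions):

import queue
import threading
import time

data_q = queue.Queue(maxsize=10000)     # assumed buffer depth

def daq_loop(stop):
    # Time-critical side: acquire and enqueue only, never touch the disk.
    while not stop.is_set():
        sample = time.perf_counter()    # stand-in for a real acquisition call
        data_q.put(sample)
        time.sleep(0.001)               # assumed 1 kHz acquisition rate

def file_loop(stop, path):
    # Lower-priority side: drain the queue and do the file I/O.
    with open(path, "a") as fh:
        while not stop.is_set() or not data_q.empty():
            try:
                sample = data_q.get(timeout=0.1)
            except queue.Empty:
                continue
            fh.write(f"{sample}\n")

stop = threading.Event()
threads = [threading.Thread(target=daq_loop, args=(stop,)),
           threading.Thread(target=file_loop, args=(stop, "log.txt"))]
for t in threads:
    t.start()
time.sleep(1.0)
stop.set()
for t in threads:
    t.join()

On the RT side the buffer would be an RT FIFO or queue and the file loop would run at normal priority, but the division of labor is the same.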

Matt W
Message 7 of 10
Excellent advice from Matt. Keep in mind that the HD is a shared resource in your system. If your time-critical process needs to access the HD, and a lower priority process is using it, the time-critical process has to wait, resulting in priority inversion.

Matt's suggestion avoids this by allowing the time-critical process to hand the data off, in a buffered manner, to a background process that writes it to disk.
Jarrod S.
National Instruments
Message 8 of 10

Hello,

Thanks for all your inputs. As of now, I don't have any high-priority loops accessing the HD directly. As mentioned before, I am polling a few third-party communication cards, each of which has a 2 MB buffer. I try to poll them frequently so I don't lose much data to accumulation. However, the data arrives in sporadic bursts, so there is no steady rate at which to stream it to disk.

I tried the following experiments:

  • Tried the new shared variable / FIFO scenario, but that really slows down the system.
  • Modified the smart buffer example to store an array of strings (this is what the cards return).

The most interesting result: I converted my write-to-disk portion into a subVI and made it time-critical. This seems to have corrected the problem, but in theory it is just bad practice. There must be a better way than making the write thread high priority.

Any advice?


Regards,

Ashm01

Message 9 of 10
I would guess that your other time-critical sections are starving the I/O thread enough to cause problems (perhaps from polling too often, or because something in them is eating up more cycles than expected).

If you're doing string processing, that might be allocating and deallocating memory a lot, which can cause a slowdown. Since LabVIEW's memory manager is apparently single-threaded (at least according to http://zone.ni.com/devzone/cda/tut/p/id/4537), even a low-priority process could affect the higher-priority ones through priority inversion.
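To illustrate the allocation point in a text language (plain Python, not LabVIEW; the sizes are assumptions): building a record by repeated concatenation allocates a new, larger buffer every iteration, while preallocating once and filling in place keeps the memory manager out of the loop.

import struct

N = 10000
RECORD_FMT = ">d"
RECORD_SIZE = struct.calcsize(RECORD_FMT)

# Allocation-heavy: every += creates a new, larger immutable bytes object.
blob = b""
for i in range(N):
    blob += struct.pack(RECORD_FMT, float(i))

# Preallocated: one buffer up front, filled in place.
buf = bytearray(N * RECORD_SIZE)
for i in range(N):
    struct.pack_into(RECORD_FMT, buf, i * RECORD_SIZE, float(i))

The LabVIEW equivalent would be preallocating arrays (e.g. Initialize Array plus Replace Array Subset) instead of growing strings or arrays inside the time-critical loop.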

Setting the shared variables to single-process may help with the speed (assuming you didn't already try that). Did you try the low-level shared RT FIFO as well?

Matt W
Message 10 of 10