Real-Time Measurement and Control

Can TDMS streaming VIs be used in LabVIEW Real-Time?

I want to store a data acquisition in the TDMS file format using LabVIEW Real-Time 8.5 and a cRIO-9012, using the TDM Streaming palette and saving the resulting file in a folder onboard the unit. It needs to store 2 channels, 200,000 samples each, at 100 kHz. I am streaming my data from the FPGA using a DMA FIFO. I found example code on the site that used this method with the binary format to write 50 kS/s, so using it as a template I simply replaced the binary file storage VIs with TDMS file storage VIs, thinking it would be that simple, but it did not work: the file never gets created. The binary method creates a file but causes a buffer overflow. My question is basically: what am I doing wrong? Am I trying to store too much data? Am I pushing the hardware to its limits?
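(For a rough sense of the data rate involved, assuming the samples come off the FPGA as 4-byte integers: 2 channels × 100 kS/s × 4 bytes = 800 KB/s sustained, or about 1.6 MB for the whole 2-second burst.)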
Message 1 of 25
If you get a buffer overflow with the binary file I/O functions, your hard drive (or flash memory) probably isn't fast enough to write data at the speed you're acquiring it. You might need to buffer data in memory before writing it to disk (see the sketch after this list). If you want to use TDMS instead of binary files, you have two options:
  • If you use PharLap, you can use the TDM Streaming API in LabVIEW.
  • If you use VxWorks, you can use a set of VIs that create TDMS with an extremely low overhead (http://zone.ni.com/devzone/cda/tut/p/id/6471). These VIs are platform-independent.
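A minimal sketch of the in-memory buffering idea, written in C since a block diagram doesn't paste into a post. dma_read() is a hypothetical stand-in for the DMA FIFO Read VI, not a real NI API; the point is just to accumulate small reads in RAM and hit the disk with large writes:

    #include <stdio.h>

    #define CHUNK    4096    /* samples pulled per DMA read            */
    #define FLUSH_AT 65536   /* flush once this many samples buffered  */
    #define TOTAL    400000  /* 2 channels x 200,000 samples           */

    /* hypothetical stand-in for the DMA FIFO Read VI */
    static void dma_read(double *dst, int n) { (void)dst; (void)n; }

    int main(void)
    {
        static double buf[FLUSH_AT];
        int filled = 0, total = 0;
        FILE *f = fopen("data.bin", "wb");
        if (!f) return 1;

        while (total < TOTAL) {
            dma_read(buf + filled, CHUNK);   /* small, frequent reads  */
            filled += CHUNK;
            total  += CHUNK;
            if (filled == FLUSH_AT) {        /* one big write beats    */
                fwrite(buf, sizeof buf[0], filled, f);  /* many small  */
                filled = 0;
            }
        }
        if (filled)                          /* flush the remainder    */
            fwrite(buf, sizeof buf[0], filled, f);
        return fclose(f) ? 1 : 0;
    }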
Hope that helps,
Herbert
Message 2 of 25
You seem to have presumed that I'm not using Windows XP, but I am; my fault for not specifying that.
Message 3 of 25
Your original post says you are storing the results onboard the unit. Hence, I thought the file storage code was supposed to run on the unit, which for a cRIO-9012 means it runs on VxWorks. If that is what you intend to do, you will need to use the TDMS VIs linked in my first reply. If that is not what you're doing, then how does your system work? Do you stream data from the cRIO to your XP machine and then store it back to the cRIO?

Herbert
Message 4 of 25
Oh, I see. OK, I will try this tomorrow and let you know the results. Thank you.
Message 5 of 25
Thanks Herbert, good advice; it worked (but you already knew it would)! Naturally, I have another question for you. Would you happen to know offhand the maximum depth of a DMA FIFO memory block for the cRIO-9012 FPGA? I want to max it out before concluding that I have to reduce my sampling frequency.
Message 6 of 25
Nope. But I forwarded it to our RT team. I'm sure they'll know. :)
Herbert
Message 7 of 25
There are two locations where you can specify the depth of your DMA FIFO in a cRIO system. The first is the DMA memory block you allocate on the FPGA. Typically, you would allocate 1024 bytes for every 8 or so channels of information. There is not much memory on the FPGA, so take care with large DMA memory allocations there. The cRIO hardware automatically moves the data (DMA) into the RT controller's memory.
 
In the 9012 you have 64 MB of memory for the OS, the LabVIEW Run-Time Engine, and your program. You can use the LabVIEW Real-Time tools (system monitor) to see how much memory you are using, and you will also get some notification when you download your VI. Typically, you will have several MB available for the RT memory buffer. You will also want to leave some MB free so your program can allocate memory as it needs for auto-indexing and so forth. With LabVIEW 8.5 there are additional tools to execute operations in place and to manage memory.
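(As a rough sanity check, assuming the samples end up as 8-byte doubles on the RT side: 2 channels × 200,000 samples × 8 bytes ≈ 3.2 MB for the full 2-second acquisition, which should fit within those several MB of usable controller memory.)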
 
You may also want to use a queue to buffer the data between your DMA buffer read and your TDMS file write. This will give your program some flexibility.
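A rough C sketch of that queue pattern, with two threads standing in for the two LabVIEW loops. dma_read() and tdms_write() are hypothetical placeholders for the DMA FIFO Read and TDMS Write VIs; the point is that a momentarily slow disk write only grows the queue instead of backing up the DMA FIFO:

    #include <pthread.h>

    #define BLOCK 4096                 /* samples per queue element     */
    #define DEPTH 16                   /* queue depth                   */

    typedef struct { double data[BLOCK]; } block_t;

    static block_t q[DEPTH];
    static int head, tail, count;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    /* hypothetical stand-ins for DMA FIFO Read and TDMS Write */
    static void dma_read(double *dst, int n)   { (void)dst; (void)n; }
    static void tdms_write(double *src, int n) { (void)src; (void)n; }

    static void *acq_loop(void *arg)   /* producer: drains the FIFO     */
    {
        block_t b;
        for (;;) {
            dma_read(b.data, BLOCK);            /* acquire outside lock */
            pthread_mutex_lock(&lock);
            while (count == DEPTH)              /* queue full: wait     */
                pthread_cond_wait(&not_full, &lock);
            q[head] = b;
            head = (head + 1) % DEPTH; count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&lock);
        }
        return arg;
    }

    static void *file_loop(void *arg)  /* consumer: writes the file     */
    {
        block_t b;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)                  /* nothing queued yet   */
                pthread_cond_wait(&not_empty, &lock);
            b = q[tail];
            tail = (tail + 1) % DEPTH; count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&lock);
            tdms_write(b.data, BLOCK);          /* write outside lock   */
        }
        return arg;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, acq_loop,  NULL);
        pthread_create(&c, NULL, file_loop, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }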
 
Finally, note that the DMA buffer read is a blocking operation. In other words, when you call the DMA buffer read, the call blocks all other RT LabVIEW code from executing until the read completes. This means that if you are waiting for data to arrive, nothing else in LabVIEW happens. I normally do a DMA read of 0 elements and compare the returned backlog with my desired read size; once the backlog is big enough, I execute the real read. Of course, I have a sleep (metronome) in my check-the-buffer loop. This allows your other calculation and data storage functions to work while the DMA buffer is filling.
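That zero-element-read trick, sketched in C. dma_read() is again a hypothetical stand-in; assume that, like the LabVIEW node, a 0-element read returns immediately with the current backlog instead of blocking:

    #include <unistd.h>

    #define BLOCK 4096

    /* hypothetical: read n samples, return elements left in the FIFO  */
    static int dma_read(double *dst, int n) { (void)dst; (void)n; return BLOCK; }

    void read_one_block(double *dst)
    {
        for (;;) {
            int backlog = dma_read(0, 0);  /* 0-element read: just peek */
            if (backlog >= BLOCK)          /* a full block is waiting   */
                break;
            usleep(1000);                  /* the "metronome" sleep     */
            /* other loops keep running while the FIFO fills */
        }
        dma_read(dst, BLOCK);              /* now completes immediately */
    }

    int main(void)
    {
        static double block[BLOCK];
        read_one_block(block);
        return 0;
    }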
 
Hope this helps. 
Preston Johnson
Solutions Manager, Industrial IoT: Condition Monitoring and Predictive Analytics
cbt
512 431 2371
preston.johnson@cbtechinc
Message 8 of 25
OK Preston, so far all I have been doing is reading voltages into the DMA buffer and storing them in a .tdms file onboard the cRIO. My final objective is to scale those voltages to pressures before saving them to file: a pretty simple 2-channel operation, 100 kHz for 2 seconds. We do a similar operation on a PC, more than likely not optimally programmed, mainly because it uses a lot of Express VIs; it consumes massive amounts of RAM for just one acquisition. So the DMA transfer block size is only limited by what's physically available on the FPGA and the controller? You told me how much was on the controller but not what was on the FPGA. Also, are you saying that once the DMA read starts, the scaling won't occur? I stop all of my other loops during the DMA operation, so my only concern right now is whether I can scale on the RT side or should just do it on the FPGA without floating point.
Message 9 of 25

Fwalker,

I'll add a little more information to what Preston said concerning DMA. Regarding how big a FIFO you can create on the RT side, it's hard to put a firm number on it without knowing lots of other variables; it's like asking how large my application can be. The loose answer is: as much RAM as is available after the drivers, the application, and any other data have been loaded into memory.

However, I would guess the reason you're asking is to prevent the FIFO from overflowing. Increasing the buffer size will not necessarily help an application, because if the FIFO is filling up then you're just delaying the inevitable; it's better to manage the data flow in the way the code is written. By default, the host buffer is twice the size of the FPGA buffer, with a minimum of 10,000 elements. In general you should set it to at least two times the number of elements you plan to read at once; four times that size generally works well, but anything more is really unneeded.
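(To make that concrete with made-up numbers: if each DMA Read pulls 4,096 elements, a host buffer of 8,192 elements satisfies the 2× rule and 16,384 gives the comfortable 4× margin; going larger just ties up controller RAM without preventing a sustained overflow.)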

Each DMA transaction has overhead, so reading larger blocks of data is typically better. The DMA FIFO.Read function automatically waits until the number of elements you requested becomes available, minimizing processor usage. However, CPU usage may increase if the data is coming in at a slower rate. This is because the heuristics the DMA API uses to decide when to sleep and when to poll depend on the amount of data and the number of elements still coming; if it's small, the API might spin and drive up CPU usage. It's better to use some mechanism to ensure data or space is available, rather than relying on the blocking behavior of the host DMA nodes. I manage this by using interrupts, timed loops, polling by reading 0 elements, or scheduling followed by polling.

Using interrupts with DMA works really well when data is sent infrequently, as an IRQ adds little overhead. Using the Elements Remaining indicator to poll and then read that number of elements is generally not recommended because it defeats optimizations built into the API, but for simple applications it does work well, as Preston suggested. Using a shift register to pass the number of elements to the next iteration during the read is okay, but it has high overhead when the number of elements is small; it can be combined with a sleep in the loop to keep the processor burden low.

Hope that helps a little,

Bassett

Message 10 of 25