
Interleaved data saving and the TDMS Viewer VI.

Hello,

 

I'm using LabVIEW 2009 Service Pack 1 (ver 9.0.1, 32-bit).

 

I was trying to take a piece of data and save it in interleaved mode.

The save operation seems to complete correctly, but when the TDMS Viewer VI is launched it crashes, and I'm unable to browse the data.

 

I'm attaching the example I used.

 

What I want to know is:

1) Am I using the write properly? What I'm trying to accomplish is to push an array of I16 down to a TDMS Write block and receive 30 channels of data this way.

2) I ran this VI and it crashes on two PCs, one running Windows 7 32-bit and the other running XP. But let me know if it runs properly for you.
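
To make the layout concrete, here is a rough numpy sketch of what I mean by interleaved (Python used only because I can't paste a block diagram here; the sizes are just my example):

import numpy as np

n_channels = 30
raw = np.arange(n_channels * 4, dtype=np.int16)  # stand-in for the I16 buffer

# Interleaved order: point 0 of ch0..ch29, then point 1 of ch0..ch29, ...
table = raw.reshape(-1, n_channels)              # rows = samples, columns = channels

print(table.shape)                               # (4, 30): 4 points on each of 30 channels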

  

Maciej


Message 1 of 39

The viewer is not broken. You are giving it an insane number of channels and it is choking. Also, your example's number of channels does not match the number of points. I've tweaked the example you provided to make it work with a smaller data set. If your data set is actually this large, I suggest using DIAdem. It should be able to handle anything you throw at it.

Message 2 of 39

Hi thanks for the example.

 

The thing is, I really do have 10000 channels to acquire, and I'm currently evaluating the feasibility of using TDMS streaming to do that.

Interleaved saving doesn't work very well when the number of channels grows too large, so I'm thinking of shaping my data appropriately before I conduct a save operation: not saving more than, say, 100 channels per write. We'll see how that goes. I hope to conduct saves only on specially tailored pieces of data to maximize streaming performance. (I'm aiming for 250-300 MB/s; I don't know if I can get there in my particular case.)
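
Roughly what I have in mind (a small numpy sketch of the slicing only; the block size is made up, and the commented-out line stands in for the TDMS Write call):

import numpy as np

block = np.zeros((1000, 1000), dtype=np.int16)    # samples x channels, shaped before saving
max_channels = 100                                # cap per write

for start in range(0, block.shape[1], max_channels):
    chunk = block[:, start:start + max_channels]  # a view, no copy at this point
    # tdms_write(file, names[start:start + max_channels], chunk)  # one interleaved write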

 

Using a 2D array, as in your example, actually allows me to open the Viewer.

I guess you were correct: the issue was that the data chunk I passed down to TDMS Write wasn't shaped correctly, which is why the file failed to open later on.

 

Maciej

 

PS. Regarding DIAdem, we don't have a license for it, but I'm familiar with the product and I do think it's pretty good. It is good to know it can read in huge data sets.

We might consider purchasing it in order to play around with the acquired data for offline analysis, if the TDMS format proves feasible to use in our case.

 


Message 3 of 39

Are all of your channels coming in at the same data rate? What data rate do you expect?

 

I think your streaming rate to the hard drive may be higher than you think because, when using TDMS, channel information is written alongside the channel data every time you call the TDMS Write VI. I think the more times you call TDMS Write.vi, the more channel info gets written to disk. Because of this, I don't recommend splitting your channel sets up into chunks.

 

If everything is coming in at the same rate, then I suggest using the TDM Header Writer toolkit. It lets you write interleaved data directly to a binary file and then create the TDM header when you close the file. This will allow you to write to disk at the fastest possible rate with minimal overhead. https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000x4PcCAI&l=en-US
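
The raw-binary half of that idea looks roughly like this (a Python sketch for illustration only; the header itself is created by the toolkit's VIs when you close the file, and the file name is made up):

import numpy as np

block = np.zeros((10000, 30), dtype=np.int16)  # samples x channels, already interleaved in memory

with open("raw_data.bin", "ab") as f:          # append each block as it arrives
    block.tofile(f)                            # plain I16 stream, no per-write metadata

# At close time, one TDM (XML) header is written that describes the channel
# names, data type, and interleaved layout of raw_data.bin.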

 

Alternatively, is this data coming from a DAQ device? LV 2009 has some TDMS streaming-to-disk options that should be extra super fast.

Message 4 of 39

Hi Mac671,

 

I think the TDMS interleaved layout mode is suitable for your use case; the only thing you need to do is use a 2D array as the data input and a 1D string array as the channel names. The TDM Header Writer is only for TDM files; it won't work for TDMS. As bazookazuz suggested, maybe you can also take a look at the integration of TDMS with DAQmx.
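
In other words, the shape convention is just this (a Python-style sketch; the names are placeholders):

import numpy as np

data = np.zeros((10000, 30), dtype=np.int16)          # 2D array: rows = samples, columns = channels
names = ["CH%02d" % i for i in range(data.shape[1])]  # 1D string array of channel names

assert len(names) == data.shape[1]                    # one name per channel is the key requirement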

 

Please let me know if you have any questions. Thank you.

Message 5 of 39

 


bazookazuz wrote: "When using TDMS, channel information is written alongside the channel data every time you call the TDMS Write VI. I think the more times you call TDMS Write.vi, the more channel info gets written to disk."

This is incorrect. TDMS writes incremental meta information. For example, if you start writing 100 channels, each of which has 10 properties, the first write will store 100 group/channel names and 1000 properties, while subsequent writes will not repeat any of that. If you change a single property value on a single channel at some point during writing, TDMS will add the group/channel name for this particular channel to the next block of data, along with the one property that has changed. Other channels and properties remain unaffected.

 

Earlier versions of TDMS would prefix every new data block with a constant-length header tag, but that was changed in LabVIEW 2009.

 

Thanks,

Herbert

 

 

Message 6 of 39

I have a buffer that can accumulate up to 65536 I16 values.

In that buffer I get, for example, 6x 10000 values.
Every "chunk" of 10000 values is my interleaved data: one point is one channel.

My data is shaped as in the opening post:
http://forums.ni.com/t5/LabVIEW/TDMS-Streaming-in-interleaved-mode/m-p/1189201#M513943

 

Since I'm running a Windows machine, sometimes I'll get 5x 10000 values, sometimes 3x, sometimes 6x during one buffer sweep.

The data rate and the channel length are variable.

 

But let's talk about one case:
10k channels and a 10 kHz rate of I16 values.
That gives me roughly 200 MB/s of data that I need to get out of the buffer.
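
Checking the arithmetic (trivial Python; I16 is 2 bytes per sample):

channels = 10000
rate_hz = 10000
bytes_per_sample = 2                       # I16

mb_per_s = channels * rate_hz * bytes_per_sample / 1e6
print(mb_per_s)                            # 200.0 MB/s out of the buffer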

 

What I want to do on my PC is take that data and save it interleaved to a TDMS file, while safely staying above 200 MB/s.

 

I would be interested in aiming for 300 MB/s, so that if we push our card further down the line, I'm not stuck on the streaming bandwidth.

 

I have noticed that, unfortunately, if you use a channel count as high as 10000, the writing performance drops dramatically.

 

What I figured out is that I don't conduct one write of 10000 channels on a small amount of data, but rather accumulate a large amount of data (like 8000 x 10000 I16 values).

And I split the save of the 10000 channels into ten writes of 1000 channels each.

First channels 0-999, then 1000-1999, etc.

 

All is done in interleaved mode.
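
The write pattern, sketched in Python (raw binary I/O stands in for the ten interleaved TDMS Write calls, purely to show the pattern and how I measure the rate; the file name is made up):

import time
import numpy as np

block = np.zeros((8000, 10000), dtype=np.int16)  # the large accumulated chunk (~160 MB)
chans_per_write = 1000

t0 = time.time()
with open("stream_test.bin", "wb") as f:
    for start in range(0, block.shape[1], chans_per_write):
        # each 1000-channel slice is written interleaved (row-major: sample by sample)
        np.ascontiguousarray(block[:, start:start + chans_per_write]).tofile(f)

print("%.0f MB/s" % (block.nbytes / 1e6 / (time.time() - t0)))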

 

This is very, very fast indeed. I got rates of 360 MB/s. That's smoking fast!

 

Now the trick is to handle the appropriate data shaping in RAM: from my buffer to the large shaped RAM chunk that will then be cut into pieces for streaming. LV is designed for data flow and copies data a lot, so I easily run into out-of-memory problems. I'm going to have to use some tricks, like storing the data structure via a VI Server call, or perhaps queues, I don't know yet. But I'm going to do some reading on that, and it seems this should be achievable.
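
A Python/numpy analogue of the shaping I'm after (on the LabVIEW diagram this role would be played by a preallocated array with the In Place Element Structure, or by queues; sizes are the real ones from above):

import numpy as np

# Preallocate the large shaped chunk once; fill it in place as buffer sweeps
# arrive, instead of growing/copying arrays on every sweep.
big = np.empty((8000, 10000), dtype=np.int16)
row = 0

def push_sweep(sweep):
    """sweep: (n_samples, 10000) block read out of the acquisition buffer."""
    global row
    n = sweep.shape[0]
    big[row:row + n, :] = sweep      # in-place copy into preallocated storage
    row += n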

 

Thank you both for your input,

I will post if I get stuck.

Maciej

 


Message 7 of 39

LabVIEW can copy data a lot, but it doesn't have to.  Have you read Managing Large Data Sets in LabVIEW?  In addition, you should also look into the In Place Element Structure and Data Value References.

Message 8 of 39

🙂 I've read it.

 

Plus, one of your Systems Engineers visited our place just today, and he suggested using exactly those two components.

 

Thanks!

Maciej


Message 9 of 39

Hi, I have an additional question.

 

I've managed to stream a data set similar to the one I'm going to use at 350 MB/s.

 

The technique I'm using is as follows: I accumulate in RAM approximately 160 MB of data from my 10000 channels,

and then conduct ten TDMS writes on it (first channels 0-999, then 1000-1999, and so on).

Just as a reminder, this is done in interleaved mode.

 

I'm quite satisfied with the streaming result, but I do have an issue with reading.

The index file I get isn't that bad; I mean, it's quite small compared to the data file.

The index file is about 4.5 MB for a 1.5 GB file of streamed data.

 

But it does take quite a while to load: for example, extracting 8000 points from one channel with TDMS Read took about 600 ms.
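
The access pattern I'm timing, for reference (sketched with the third-party npTDMS reader for Python just to make it concrete; my actual numbers come from the TDMS Read VI, and the group/channel names here are placeholders):

import time
from nptdms import TdmsFile                 # third-party TDMS reader, illustration only

t0 = time.time()
with TdmsFile.open("streamed.tdms") as f:   # streaming open: parses the index, not the data
    points = f["Group"]["CH0042"][0:8000]   # pull 8000 points from one channel
print("read took %.0f ms" % ((time.time() - t0) * 1000))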

 

I've used the defragmentation feature, and the index file size dropped to 600 kB.

This has also greatly reduced the time the TDMS Read function takes to acquire the data.

 

To approximately 15 ms in the case mentioned above.

 

Defragmenting seems to get me where I want to be, but I just have a question: is there any hint anyone can give me on how best to save the data?

Defragmenting the 1.5 GB file mentioned above took about 35 minutes on a dual-core AMD Turion.

 

I'm going to have a lot of data, and I would like to avoid defragmentation runs that take days or weeks before the data can be worked on.

Any useful tips?

 

Cheers,

Maciej


Message 10 of 39