
Interleaved data saving and the TDMS Viewer VI.

Hi Maciej,

 

When you read a channel out of a TDMS file, the read speed depends on the data layout of that channel in the file, not necessarily on the number and size of the headers in the file. For performance reasons, TDMS will read a contiguous block of data, including the unneeded bytes between data chunks, if the gap (the stride between samples) is less than a predefined value.

In your case, since the raw data is interleaved, the distance between neighbouring samples of the same channel differs for the different streaming methods, which affects the number of disk fetches needed to read all the data of a channel.

Case #1:

The distance between samples of a channel (the stride) is 1000*2 bytes, since you write 1000 channels at a time. From my profiling results, TDMS fetches about 80 times to read all the channel data out.

Case #2:

The samples are scattered evenly in the file, but the stride is 10000*2 bytes, so TDMS uses about 80,000 disk reads (one read per sample) to read all the data out.

 

That's where the performance difference exists.
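The coalescing behaviour described above can be modelled roughly in Python. This is a sketch only: the coalescing threshold, the maximum fetch size, and the 80,000-samples-per-channel figure are assumptions inferred from the numbers in this post, not actual TDMS internals.

```python
# Rough model of read coalescing (threshold and maximum fetch size
# are assumed values, not the actual TDMS internals).
def fetch_count(n_samples, stride_bytes, sample_bytes=2,
                coalesce_threshold=4096, max_fetch=2_000_000):
    """Estimate the number of disk fetches needed to read one channel."""
    gap = stride_bytes - sample_bytes
    if gap >= coalesce_threshold:
        # Gap too large to be worth reading through: one read per sample.
        return n_samples
    # Otherwise read the whole contiguous span (wanted and unwanted
    # bytes alike) in bounded chunks and discard the rest in memory.
    total_span = n_samples * stride_bytes
    return -(-total_span // max_fetch)  # ceiling division

# Case #1: 1000 interleaved channels -> stride of 1000*2 bytes
print(fetch_count(80_000, 1000 * 2))    # -> 80
# Case #2: stride of 10000*2 bytes -> gap too large to coalesce
print(fetch_count(80_000, 10_000 * 2))  # -> 80000
```

With the assumed threshold, the small-stride layout collapses 80,000 samples into ~80 large sequential fetches, while the large-stride layout degenerates to one seek-and-read per sample.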



Message 31 of 39

Ok,


I'm currently away from the office, so I will only be able to check this next week, but my question is:

If I transpose my data before the write, and then save it in decimated mode as opposed to interleaved, would that enhance the read speed?

 

I'm not sure if in my case I have enough processing power to perform the transpositions.

 

Thanks,

Maciej


Message 32 of 39

I think so.

Message 33 of 39

Now that it is officially released - LabVIEW 2010 comes with a set of advanced TDMS functions that give you better performance (especially for high channel counts) and more control over the file structure, among other things. While performance is a major goal for the existing API, an even higher goal is safety: that API will always create valid TDMS files, regardless of input parameters and calling sequence. That safety comes at the price of a performance hit, where performance decreases with the number of channels. The new API doesn't have these safety measures; in return, it can go as fast as your hardware allows.

 

It seems like your application could be modified pretty easily to use the new functions. Basically, you would write your meta information (group/channel names, properties, etc.) first. From there, you would simply keep appending numeric arrays. LabVIEW will not perform any validity checks against your meta information, nor will it add any new headers to the file unless you specifically program it that way. The new functions should significantly increase the performance of your application for both writing and reading.
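The pattern Herbert describes — write the metadata once, then keep appending raw numeric blocks with no per-write headers or validity checks — can be sketched with plain file I/O. This is purely a conceptual model: the byte layout below is invented for illustration and is not the real TDMS format.

```python
import struct

# Conceptual sketch only -- NOT the real TDMS binary format.
# It mimics the usage pattern of the LabVIEW 2010 advanced TDMS
# functions: metadata written once up front, raw data appended after.
def write_meta(f, channel_names):
    """Write a one-time header: channel count, then length-prefixed names."""
    f.write(struct.pack("<I", len(channel_names)))
    for name in channel_names:
        data = name.encode("utf-8")
        f.write(struct.pack("<I", len(data)) + data)

def append_block(f, samples):
    """Append a raw block of I16 samples -- no header, no checks."""
    f.write(struct.pack(f"<{len(samples)}h", *samples))

with open("stream.bin", "wb") as f:
    write_meta(f, [f"ch{i}" for i in range(4)])
    for _ in range(3):                # keep appending numeric arrays
        append_block(f, [0, 1, 2, 3])
```

The point of the pattern is that the steady-state write path is a bare `write()` of the sample buffer, which is why it can run at whatever speed the disk sustains.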

 

Hope that helps,

Herbert

 

 

 

 

Message 34 of 39

I'm back at work after my holiday.


Thank you Herbert and thank you DeepSu for your posts.

 

I've just finished installing LV 2010 and am definitely going to check out the features of the Advanced TDMS API, as the implemented changes seem very interesting from my point of view.

Plus, I've already invested a considerable amount of effort and time in learning the TDMS structure, and I believe it is the way to move forward in my project.

Luckily, I'm still at a good point to migrate to LV 2010 without too much fuss.

 

Regarding saving the data transposed in decimated form to speed up reading, I plan to test this today or tomorrow and will let you know.

 

Thank you both,

Maciej


Message 35 of 39

I used the TDMS - Advanced Streaming Benchmark.vi and modified it a bit to suit my case.

The streaming speed now seems to be limited only by the speed of my hardware, at about 600 MB/s, which is a great result.

 

I saved approximately 10000 channels of data, 2048 samples each, with OS buffering switched off; this should be something I will be able to do in my real case.

I haven't used the advanced reading VIs yet, but I used the normal Read VI to see whether the file streamed this way performs better than the one I had previously.

 

Can anyone help me out?

When I try to read out a single channel, the first read attempt takes longer, even if I swap channels.

The next read attempt is pretty fast and satisfactory.

Why does the first read take longer?

 

The VI I used for reading is the TDMS Load.vi attached earlier in this thread.

 

I've saved my data in decimated mode, as I've observed that transposing the data before the write doesn't actually hurt my performance, so I guess I won't be using the interleaved saving mode.

 

Thanks,

Maciej


Message 36 of 39

Hi Maciej,

 

Following your description, I set "# Scans" to 10, "channels" to 10000 and "# Samples per Channel" to 2048 in the TDMS Advanced - Streaming Benchmark.vi, and used the TDMS Load.vi for reading.

I do observe the "jitter" you mentioned on the first read; however, it only happens the first time after launching LV. I can't reproduce the jitter on the first read after closing and relaunching LV. Besides, according to my test it only happens on the RAID array. Since I had limited time on our RAID array today, I'm not clear about the root cause so far.

Message 37 of 39

Hi, I'm attaching the two VIs I used.

 

I've modified the Streaming Benchmark VI from the Advanced TDMS examples.

I've changed the values to 10240 channels and 3072 samples per channel.

The reason is that I'm using I16 values, and if I try to save more than 64 MB in one shot on an XP operating system it won't work (this was clarified thanks to YongingYe).
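As a quick sanity check on those settings (I16 = 2 bytes per sample):

```python
# Block size for the benchmark settings above (10240 channels,
# 3072 samples per channel, I16 = 2 bytes per sample).
channels, samples, bytes_per_sample = 10240, 3072, 2
block = channels * samples * bytes_per_sample
print(block)            # 62914560 bytes
print(block / 2**20)    # 60.0 MiB -- comfortably under the 64 MB limit
```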

I'm also transposing the data and saving in decimated mode, which seems to work quite well (I mean the read speed is satisfactory).

 

I've also modified the load-data VI to keep the file open while I read out the channels via a control on the GUI.

This way, only the first read takes longer, as long as I keep reading around the same area (incrementing/decrementing the numeric control by one channel).

 

If you hop between channels that are far apart, say 1000 channels, you will see a performance decrease on that read, as if the read time depended on the proximity of the next channel: if you jump too far, it takes long the first time.

Anyway, the reading seems fast enough; I would just like to understand the inconsistency, which I believe could be related to the long jumps around the file.

 

Cheers,

Maciej

 


Message 38 of 39

One thing that comes to mind is that Advanced Write will not accept a transposed array as data input (the data wire will be broken). That's because, for performance reasons, LV uses a different internal structure to represent a transposed array or subarray instead of producing a new array. So you have to use the Always Copy.vi before wiring to Advanced Write.
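For what it's worth, NumPy behaves the same way, which may help picture what's going on. This is only an analogy — LabVIEW's subarray mechanism is its own implementation:

```python
import numpy as np

# Transposing returns a view with swapped strides, not a new
# contiguous buffer -- analogous to LabVIEW's internal
# "transposed array"/subarray representation.
a = np.zeros((1000, 2048), dtype=np.int16)
t = a.T
print(t.flags["C_CONTIGUOUS"])   # False -- still the original buffer

# Forcing a real copy (the rough analogue of Always Copy.vi) yields
# a contiguous block that a raw streaming write can consume directly.
c = np.ascontiguousarray(t)
print(c.flags["C_CONTIGUOUS"])   # True
```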

Message 39 of 39