07-26-2010 09:27 PM
Hi Maciej,
You mentioned that after defragmenting, the index file's size was reduced, which means you may have too many "TDMS headers" in the file. You can use this link for reference:
TDMS File Format Internal Structure
I'm not sure how you write the TDMS file, but I suggest the following approach: first, if you can, write all the properties for all the channels, and then write the data. Each time you write data, try to keep the same data layout: the same channel order and the same number of values. If you do this, you'll write the data with "one header only", meaning the same TDMS header is reused for every write.
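To illustrate that write pattern, here is a generic sketch in Python. This is not the actual TDMS binary format or any NI API; it only mimics the idea that channel metadata is emitted when the layout changes, so repeated writes with the same channels and sample counts reuse the previous header.

```python
# Illustrative sketch (NOT the real TDMS format): a writer that emits a
# "header" only when the channel layout changes, so identical writes
# reuse the previous segment's metadata.
import io
import struct

class ToyChannelWriter:
    def __init__(self, f):
        self.f = f
        self.last_layout = None  # (channel names, samples per channel)

    def write(self, data):
        """data: dict mapping channel name -> list of int16 samples."""
        layout = (tuple(sorted(data)), len(next(iter(data.values()))))
        if layout != self.last_layout:
            # Layout changed: emit a full header describing the channels.
            header = ",".join(sorted(data)).encode()
            self.f.write(struct.pack("<I", len(header)) + header)
            self.last_layout = layout
        else:
            # Same layout as the last write: zero-length header marker only.
            self.f.write(struct.pack("<I", 0))
        for name in sorted(data):
            self.f.write(struct.pack(f"<{len(data[name])}h", *data[name]))

buf = io.BytesIO()
w = ToyChannelWriter(buf)
w.write({"ch0": [1, 2], "ch1": [3, 4]})  # header written
w.write({"ch0": [5, 6], "ch1": [7, 8]})  # header skipped (same layout)
```

Changing the channel names or the number of values between writes would force a new header each time, which is exactly the fragmentation effect described above.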
07-27-2010 09:42 AM
If you do not want to defrag, I again suggest using the TDM Header Writer toolkit. Both your binary file and header file will be the absolute smallest size. I have written terabytes using this method with good results. Using TDMHW you can save as interleaved or end-to-end. https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000x4PcCAI&l=en-US. Note that this only works if you are streaming a fixed set of channels at the same rate.
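For readers unfamiliar with the two layouts mentioned above, here is a minimal sketch of interleaved vs. end-to-end sample ordering. The helper names are hypothetical; this is not the TDM Header Writer API, just an illustration of the two orderings.

```python
# Illustrative sketch of the two storage layouts for a fixed channel set
# (hypothetical helper names, not the TDM Header Writer toolkit API).

def end_to_end(channels):
    """Each channel's samples stored contiguously: ch0..., then ch1..., etc."""
    out = []
    for samples in channels:
        out.extend(samples)
    return out

def interleaved(channels):
    """Sample 0 of every channel, then sample 1 of every channel, and so on."""
    out = []
    for row in zip(*channels):
        out.extend(row)
    return out

chans = [[10, 11, 12], [20, 21, 22]]
print(end_to_end(chans))   # [10, 11, 12, 20, 21, 22]
print(interleaved(chans))  # [10, 20, 11, 21, 12, 22]
```

Interleaved order suits streaming acquisition (all channels advance together), while end-to-end order makes reading one whole channel back a single contiguous read.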
-a
07-27-2010 10:05 AM
Can the TDM Header Writer handle streaming data at 300 MB/s +?
07-27-2010 10:20 AM
YongqingYe, thanks for your post.
I have set all my properties before writing the data with TDMS.
I have been saving the same amount of channels with every write.
I have been saving the same amount of data points with every write.
The rule I've broken is that with every save, I have been saving to different channels. See attached VI.
The reason I've been doing this is that when I'm saving more than 3000 channels like this, I get the 2505 error - LabVIEW Failed to Write to TDMS File.
This was a Dual Core Turion AMD running the NI Raid Array ( 8264 ).
When I run this VI on my Intel Dual Core laptop, it ran OK with 10000 channels.
But obviously, because of the HDD speed, I could not exceed 60 MB/s.
Any idea why I would get this error? Perhaps I would be able to use 10k channels in one write if I had a more powerful controller unit, like the NI 8133 controller, for example?
When I save 2000 channels per write it's better, I mean the file is obviously less fragmented, but I'm still polluting the file with too much header information and making it difficult to read afterwards.
And the defragmentation takes too long for me to accept at this point. If I could cut the defrag time by 50%, I would be fine.
One thing I want to correct, as I see I've made a typo above: for a data file of 1.5 GB, the defrag took 25 min on a Dual Core AMD Turion controller.
Maciej
07-28-2010 03:15 AM
Hi Maciej,
I'm afraid that if you write different channel names with each write, the headers in the TDMS file cannot be reduced. About error 2505: do you mean you ran the attached VI on your machine and got the error? I ran it on my machine in LabVIEW 2009 and didn't get any error; my machine's speed is about 70 MB/s. Since I cannot reproduce the error, would it be convenient for you to share a VI that can reproduce it? Also, 2505 indicates a write failure, and there are a few possible causes. If you see this error again, would you please set "disable buffering" on "TDMS Open" to false and try again?
Thank you!
07-28-2010 04:11 AM
Hello Yongqing,
The VI I've sent is the one I get the error on.
But like I said the error happens only on my PXIe-computer.
It has the NI 8264 RAID Array, and the PXIe-8130 Controller.
The 2505 error always happens for me on that machine whenever I try to feed it too-large pieces of data to stream.
It is true that disabling the buffering flag stops it from crashing, but it also brings streaming down to 85 MB/s, which is unacceptable in my case.
What do you think is causing the 2505 in my case?
It would be fantastic if I could pass pieces of data of about 160 MB to the TDMS Write and tell it to interleave 10000 channels, as I think I would then be able to stream with minimal fragmentation.
Is that possible? Do you think it's something with the RAID array or the controller?
Just like you, I can run it on my laptop with no issues, but the speed is about 64 MB/s and I do not get this message.
Thanks,
Maciej
07-28-2010 04:16 AM
I see. I'll do some testing on an NI-8264 RAID array tomorrow and give you the result of running the VI.
How big is the data you feed to TDMS Write each time? I have no idea whether it could be related to the controller, but technically it shouldn't be.
07-28-2010 04:34 AM - edited 07-28-2010 04:36 AM
The data size on which it broke was, for example, an 80 MB write.
In my VI that would be:
4000 in the number of batches field and
10000 in the number of channels per save field,
as I'm writing I16 values, which are 2 bytes each.
I need to be able to save arrays of about 160 MB+ in one write
if the saved file is to remain minimally fragmented while streaming in the 350 MB/s range.
Generally, the larger the array, the better the streaming performance.
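A quick arithmetic check of the figures in this post. The first calculation uses the numbers given above; the batch count in the second is my own extrapolation for a 160 MB write, not a figure from the thread.

```python
# Quick check of the write sizes quoted in this post.
BYTES_PER_I16 = 2   # an I16 sample is 2 bytes

batches = 4000      # "number of batches" field
channels = 10000    # "number of channels per save" field
write_bytes = batches * channels * BYTES_PER_I16
print(write_bytes / 1e6)  # 80.0 (MB) -- the write size that triggered error 2505

# Extrapolated: batch count needed to reach a 160 MB write (my assumption).
batches_160 = 8000
print(batches_160 * channels * BYTES_PER_I16 / 1e6)  # 160.0 (MB)
```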
Cheers,
Maciej
07-29-2010 03:58 AM
Hi Maciej,
I reproduced the -2505 error on my 8264 RAID array today as well. It seems that if the data size is more than 64 MB, it returns this error. The 64 MB figure reminds me of a data-size limitation of the C write function in the Windows API:
http://msdn.microsoft.com/en-us/library/aa365747(VS.85).aspx
The article above discusses the data-size limitation near the bottom.
However, this doesn't explain why the same VI with a 160 MB data size works on common machines; maybe it's because of the difference between the RAID array configuration and common hard disks. We need more investigation; I have already filed a CAR for this issue so R&D can look into it.
Thank you.
07-29-2010 05:09 AM - edited 07-29-2010 05:10 AM
My Dual Core laptop was running Windows 7, and 160 MB worked on it.
The PXIe-controller ran Windows XP.
What OS was your laptop running?
Maciej