Signal Express Large TDMS File Recording Error

Hello,

 

I have the following application and I am looking for some tips on the best way to approach the problem with Signal Express:


I am attempting to use SignalExpress 2009 (Sound and Vibration Assistant) to collect random vibration data on three channels over an extended period of time -- about 20 hours total.  My sample rate is 2 kHz.  Sampling at that rate for that long involves the creation of a very large TDMS file, which is intended for various types of analysis later in SignalExpress or some other application.  One of the analysis functions to be performed is a PSD (Power Spectral Density) plot to determine the vibration levels distributed over a band of frequencies during the log.

 

 

My original solution was to collect a single large TDMS file.  I did this with the SignalExpress recording options configured to save and restart "in current log" after an hour's worth of data is collected.  I configured it this way because, if there is a crash or sudden loss of power during data collection, I wanted to ensure that at most an hour's worth of data would be lost.  I tested this option and the integrity of the file after a crash by killing the SignalExpress process in the middle of recording the large TDMS file (after a few save-log-file conditions had been met).  Unfortunately, when I restart SignalExpress and try to load the log file data in playback mode, an error indicating "TDMS Data Corrupt" (or similar) is displayed.  My TDMS file is large, so it obviously contains some data; however, SignalExpress does not index its time and I cannot view the data within the file.  The .tdms_index file is also present, but the meta data.txt file is not generated.  Is there any way to ensure that I will have at least partially valid data that can be processed from a single TDMS file in the event of a crash mid-logging?   I don't have much experience dealing with random vibration data, so are there any tips for generating vibration-level PSD curves for large files over such a long time span?

 

My solution to this problem thus far has been to log the data to separate .TDMS files, each about an hour in length.  This should result in about 20 files in my final application.  Since the PSD ends up being a statistical average over the whole time period, I plan on generating a curve for each of these files and averaging all 20 of them together to get the overall vibration PSD curve for the 20-hour period.
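
For illustration, the averaging step I have in mind looks roughly like this in Python (a sketch only, assuming the npTDMS and SciPy packages; the file, group, and channel names are placeholders rather than what SignalExpress actually writes):

import glob
import numpy as np
from nptdms import TdmsFile
from scipy.signal import welch

FS = 2000.0                     # sample rate from the acquisition (2 kHz)
psds = []

for path in sorted(glob.glob("vibration_log_*.tdms")):    # hypothetical hourly files
    tdms = TdmsFile.read(path)
    samples = tdms["Untitled"]["Voltage_0"][:]             # hypothetical group/channel names
    # Welch's method already averages overlapping segments within one file
    freqs, pxx = welch(samples, fs=FS, nperseg=8192)
    psds.append(pxx)

# Average the per-file PSDs to estimate the PSD over the full 20-hour run.
# This is valid as long as each file spans roughly the same amount of time.
overall_psd = np.mean(psds, axis=0)

Averaging the 20 per-file PSDs this way is essentially the same as letting Welch's method average over the whole 20-hour record while ignoring the file boundaries.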

Message 1 of 8

Hello,

 

I have tested your inquiry on my end, and any hard crash (e.g. the process being killed) prevents the meta file from being created. The best approach to preventing data loss in case of a power failure or system crash is to log your data periodically to separate files, or to use the Save To ASCII step (this method takes up more space per data point than a TDMS file, but it is called on every iteration of the program and can be used to write a new file). I have included a link below that shows how to implement these methods.

 

1. Logging Continuous Data to Multiple Files in LabVIEW SignalExpress

http://digital.ni.com/public.nsf/allkb/393BA6B1F74DA8AB862572910008661D?OpenDocument
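
Outside of SignalExpress, the periodic-logging idea looks roughly like the Python sketch below: a new file is started every hour, so a crash can only affect the file currently being written. This is only an illustration -- the read_samples() function and the file names are placeholders, not part of any NI API:

import time

def read_samples():
    """Placeholder for whatever produces one set of acquired samples."""
    return [0.0, 0.0, 0.0]          # three channels, as in the application above

ROTATE_SECONDS = 3600               # start a new file every hour
start = time.time()
index = 0
f = open(f"vib_log_{index:03d}.txt", "w")

try:
    while True:
        if time.time() - start >= ROTATE_SECONDS:
            f.close()               # the completed hour is now safe on disk
            index += 1
            start = time.time()
            f = open(f"vib_log_{index:03d}.txt", "w")
        f.write("\t".join(f"{x:.6f}" for x in read_samples()) + "\n")
finally:
    f.close()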

 

Regards,

 

Ali M

Applications Engineer

National Instruments

Message 2 of 8

Hi Ali,

 

Thank you for testing out my problem.  I read over the article, and I believe my current solution is pretty much the first implementation in the link that you suggested.  Basically, I configured the recording options to restart the logging every hour, which saves data to a new TDMS file.  If I configure recording to restart in a new TDMS file, there is no problem -- I would lose at most an hour of data (or a single file's worth) in the event of a crash.  The problem occurs if I choose to restart the recording in the same TDMS file.  In that event, if there is a crash, the whole file becomes corrupt, because it seems that the meta file is not created after each restart of the log.  Logging to ASCII is an option, but it is less desirable when recording 20 hours' worth of data due to the extra overhead and larger file sizes.

Message 3 of 8

JMat,

 

Based on the description of your application, I would recommend writing the data to a "new log" every hour (or more often). Based on some of my testing, if you use "current log" and S&V Assistant crashes, the entire TDMS file will be corrupted. This seems consistent with what you're seeing.

 

It would be good if you could clarify why you're hoping to use "current log" instead of "new log"; I'll assume an answer so I can provide a few more details in this response. I assume it's because you want to be able to perform the PSD over the entire logged file (all 20 hours), and the easiest way to do that is if all 20 hours are recorded in a single continuous file. If this is the case, then we can still help you accomplish the desired outcome while also ensuring that you don't lose data if the system crashes at some point during the monitoring.

 

If you use "new log" for your logging configuration, you'll end up with 20 TDMS files when the run is complete. If the system crashes, any files that have already finished writing will not be corrupted (I tested this). All you need to do is concatenate the files to make a single one. If this would work for you, we can talk about various solutions we can provide to accomplish this task. Let me know.
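
As one illustration of the concatenation step outside of SignalExpress, here is a rough Python sketch using the npTDMS package (the file, group, and channel names are placeholders):

import glob
import numpy as np
from nptdms import TdmsFile

chunks = []
for path in sorted(glob.glob("vibration_log_*.tdms")):    # the completed hourly files
    tdms = TdmsFile.read(path)
    chunks.append(tdms["Untitled"]["Voltage_0"][:])        # hypothetical group/channel names

combined = np.concatenate(chunks)    # one long record covering the whole run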

 

Now there is one thing I want to bring to your attention about logging multiple files from SignalExpress, whether you use "current log" or "new log". The Windows OS is not deterministic, meaning that it cannot guarantee how long an operation takes to complete. For your particular application, this means that between log files there will be a short gap in time during which the data is not being saved to disk. Based on my testing, this gap could be between 1 and 3 seconds, depending heavily on how many other applications Windows is running at the same time.

 

So when you concatenate the signals, you can choose to concatenate them "absolutely", meaning there will be a 1-3 second gap between the recorded waveforms, or you can concatenate them assuming there is no time gap between logs, resulting in a pseudo-continuous waveform (it looks continuous to you and to the analysis routine).
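
To make the two options concrete, here is a rough extension of the same kind of Python/npTDMS sketch, building the time axis either way. It assumes each channel carries the usual wf_start_time waveform attribute; file, group, and channel names are placeholders as before:

import glob
import numpy as np
from nptdms import TdmsFile

dt = 1.0 / 2000.0                        # sample period at 2 kHz
data, absolute_t, continuous_t = [], [], []
run_start = None
next_contiguous = 0.0

for path in sorted(glob.glob("vibration_log_*.tdms")):
    ch = TdmsFile.read(path)["Untitled"]["Voltage_0"]        # hypothetical names
    samples = ch[:]
    n = len(samples)
    t0 = np.datetime64(ch.properties["wf_start_time"])       # when this log actually started
    if run_start is None:
        run_start = t0
    offset = (t0 - run_start) / np.timedelta64(1, "s")       # seconds since the first file

    absolute_t.append(offset + dt * np.arange(n))             # keeps the 1-3 s gaps
    continuous_t.append(next_contiguous + dt * np.arange(n))  # pretends the files abut
    next_contiguous += n * dt
    data.append(samples)

data = np.concatenate(data)
absolute_t = np.concatenate(absolute_t)          # "absolute" concatenation
continuous_t = np.concatenate(continuous_t)      # pseudo-continuous concatenation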

 

If neither of these options are suitable, let me know.

 

Thanks, Jared 

Message 4 of 8

If you have a corrupted file, you may want to consider recovering it yourself.  TDMS is a chunked format with headers for each chunk.  During a power failure, the final chunk will not be complete, so you will get an error.  You may be able to recover the rest of the data either by parsing the file and deleting the incomplete chunk, or by trimming the incomplete chunk to a data boundary and correcting the header information.  Note that the operating system caches a lot of data, so you may lose more than you expect.

 

You can find the specification for TDMS files here.
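
As a rough illustration of the first option (deleting the incomplete chunk), the Python sketch below walks the 28-byte segment lead-ins described in that specification ("TDSm" tag, ToC mask, version, next-segment offset, raw-data offset) and truncates a copy of the file just before the first segment that looks incomplete. It assumes little-endian lead-in values (the common case for files written on Windows) and is not a supported NI tool, so always work on a copy and delete the old .tdms_index file afterwards so it can be regenerated:

import shutil
import struct

LEAD_IN_SIZE = 28
INCOMPLETE = 0xFFFFFFFFFFFFFFFF      # some writers store -1 for a segment that was never finished

def truncate_to_last_complete_segment(src, dst):
    """Copy src to dst, then drop any trailing incomplete TDMS segment from dst."""
    shutil.copyfile(src, dst)
    with open(dst, "r+b") as f:
        f.seek(0, 2)
        file_size = f.tell()
        pos = 0
        while True:
            f.seek(pos)
            lead_in = f.read(LEAD_IN_SIZE)
            if len(lead_in) < LEAD_IN_SIZE or lead_in[:4] != b"TDSm":
                f.truncate(pos)          # trailing bytes are not a valid segment; cut them off
                break
            next_offset, _raw_data_offset = struct.unpack("<QQ", lead_in[12:28])
            segment_end = pos + LEAD_IN_SIZE + next_offset
            if next_offset == INCOMPLETE or segment_end > file_size:
                f.truncate(pos)          # this segment was never finished; drop it
                break
            pos = segment_end
    return pos                           # size of the recovered file, in bytes

# Hypothetical usage:
# truncate_to_last_complete_segment("vibration_log.tdms", "vibration_log_recovered.tdms")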

Message 5 of 8

Hi Jared,

 

I'm a new learner of SignalExpress. I saw here that you said there are various solutions to concatenate files into a single one. Could you help me with how to do this?

 

Thanks in advance, Jack.

 

Message 6 of 8

Hi Jack,

 

I think you'll find this article helpful.  It describes the required project steps for appending your data to a single file across multiple runs.

Message 7 of 8

I have seen your reply and the link; it looks helpful.

 

Thanks.

Message 8 of 8