LabVIEW


A VI to read and split a large .lvm file?

Good morning. I was supplied a very large 700 MB .lvm data file.
 
Is there a VI out there that would allow me to open it and break it into smaller, more manageable pieces?
 
How would this best be accomplished?
 
The .lvm file is from LabVIEW 7.1.
Message 1 of 10
It would be nice if somebody would respond to this!
Message 2 of 10
LVM files, being text files, were never meant for this amount of data. For performance reasons, the LVM read functions load all of the data into memory when the file is opened, which may give you out-of-memory errors with a file this size. I don't have a single-VI solution, but I can give you a fairly simple method to write your own.

Since the LVM file is a simple text file, it can be manipulated with the standard text file tools. You can take advantage of this to split the file. The LVM file consists of a file header followed by data segments. Each data segment may or may not have a header (usually only the first segment has a header). You can get the full spec here.

Open your file in a text editor. Since it is so big, Notepad will not work. Word, OpenOffice, NoteTab, or Notepad++ will all work (OpenOffice and Notepad++ are free). Determine whether you have multiple packet headers or not. If so, your life will be a bit more difficult, but not greatly so.

The general procedure is to open the file with the normal LabVIEW file functions, find the data portion(s), and split them into multiple, simple tab-separated text files. You can then read those tab-separated text files using either the LVM block in text mode or the Read From Spreadsheet File VI.

Open the file using Open/Create/Replace File. Read about 65000 bytes from it using Read From Text File. Search this for "***End_of_Header***", which will show you the end of the segment header. The next line is the labels for the data. Find the end of it. You now have the offset in the file to your data. If you do not have multiple segment headers, at this point you can read your data line by line from the file and stream to a different file, changing files at the data sizes you feel are convenient. If you have multiple segment headers, you will need to keep track of the number of lines you stream so you can skip the segment headers.
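
Since a LabVIEW block diagram does not paste into a forum post very well, here is a rough sketch of the same flow in Python, purely to show the logic. The file name, the chunk size, and the trick of spotting data rows by a numeric first field are my own assumptions rather than anything from the LVM spec, so treat it as pseudocode for the diagram:

```python
# Rough sketch only: split the data portion of an LVM file into smaller
# tab-separated text files.  File name and chunk size are just examples.
CHUNK_ROWS = 30000


def looks_like_data(line):
    """Header lines, ***End_of_Header*** markers, and the label row all
    start with text; assume anything whose first field parses as a number
    is a data row."""
    first = line.split("\t", 1)[0].strip()
    try:
        float(first)
        return True
    except ValueError:
        return False


part, rows, out = 0, 0, None
with open("big_data.lvm", "r") as src:
    for line in src:
        if not looks_like_data(line):
            continue                      # skip file/segment headers and labels
        if out is None:                   # time to start a new chunk file
            part += 1
            out = open("big_data_part%03d.txt" % part, "w")
        out.write(line)                   # data lines are already tab-separated
        rows += 1
        if rows == CHUNK_ROWS:
            out.close()
            out, rows = None, 0
if out is not None:
    out.close()
```

The resulting files are plain tab-separated text, so they can be read back with Read From Spreadsheet File or the LVM block in text mode, as mentioned above.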

That is the general idea. If you have problems implementing it, let us know.
Message 3 of 10
I'm still having major problems getting this to work.
Message 4 of 10
Zip up a sample file and your code, post them, and we can take a look at it. Please include specific problems you are having so we don't fix things that don't need fixing.
Message 5 of 10

I'm trying to figure out how "read big Bin file" works, so I can use it to break down the large file.

I downloaded Notepad++ (which is excellent)

So I know the data file has 9 columns, and 124,736 rows.

 

I am only interested in the first three columns.

 

I would like to use the VI to read and split up the first three columns of data. I would like it to split the 124,736 rows into manageable groups of 30,000 rows in length, and not write the other six columns into the new files.

How would this be accomplished? I was thinking the VI would need to read the correct cells into a buffer, then write a new file, clear the buffer, and continue on.

I'm just not sure how to deal with the unwanted six columns.
Message 6 of 10
Do you want to put headers on the new files? If not, here is a fairly simple procedure to do what you want (there is a rough sketch of the same steps after the list). It is very inefficient (lots of unnecessary disk access), but it will get the job done.
  1. Open the file using Open/Create/Replace File.vi. This will give you a file handle.
  2. Use Read File to read the file line by line. You can do this easily by setting the count input to some number larger than you expect a line to be and setting the line mode (F) input to TRUE.
  3. At each line, search for the text string "***End_of_Header***". Use Search/Split String for the search. It is faster for this than Match Pattern.
  4. When you find the string, the next line is the column headers. Read it. If you want the information, keep it; if not, discard it.
  5. In a loop, one iteration per new file, do the following:
    1. Create the new file using Open/Create/Replace File.vi.
    2. Read in a line from your old file.
    3. Find the third TAB character in the line. The easiest way to do this is looping three times with Match Pattern, using the offset output of each loop iteration as the start index for the next. Use a shift register initialized to zero to pass the index from iteration to iteration.
    4. Write the portion of the line before the third tab into your new file using Write File. Don't forget the end-of-line character(s). Use the EOL constant from the string palette.
    5. Repeat until you have saved 30,000 lines or you run out of original file
    6. Close the new file and create another new one if needed
  6. Close the original file.
Note that this procedure does not put headers on the new files. You can get this information from the file header you read at the beginning of the procedure.
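
For reference, here is the same set of steps sketched in Python (again, just to show the flow, not as a LabVIEW solution). It assumes, as the steps above do, a single header block ending in ***End_of_Header*** right before the column-label line; the file names, chunk size, and end-of-line handling are examples to adjust:

```python
# Rough sketch of the steps above: keep only the first three columns and
# write 30,000 rows per output file.
CHUNK_ROWS = 30000
EOL = "\r\n"                                   # CR/LF, matching the source file

with open("big_data.lvm", "r") as src:
    # Steps 2-4: read line by line until the header marker, then take the
    # column-label line that follows it.
    for line in src:
        if "***End_of_Header***" in line:
            break
    column_labels = next(src, "")              # keep it or discard it

    # Steps 5-6: one output file per CHUNK_ROWS rows, first three columns only.
    part, rows, out = 0, 0, None
    for line in src:
        fields = line.rstrip("\r\n").split("\t")
        if len(fields) < 3:
            continue                           # skip blank or stray lines
        if out is None:
            part += 1
            out = open("split_%03d.txt" % part, "w", newline="")
        out.write("\t".join(fields[:3]) + EOL) # first three columns + end of line
        rows += 1
        if rows == CHUNK_ROWS:                 # roll over to the next file
            out.close()
            out, rows = None, 0
    if out is not None:
        out.close()
```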

Good luck!
Message 7 of 10

So, completely ignoring the concept of headers for a second....

The "read big bin file" VI asks for the number of points to read on each loop iteration. What constitutes a point?

There seem to be 5.025 points per number written when viewed in Excel?

I'm trying to get an even 65,000 rows to maximize the file size in Excel.

I tell it to read 2,900,000 points and I get 577,069 cells in Excel.
Message 8 of 10
One sample row of the LVM file:
 
-0.058409 0.012300 -0.138633 0.031087 2.005433E-5 -0.000227 -0.000199 -0.000276 -0.000557
Nine numbers, tab-delimited, with a CR and LF at the end of each row.
 
So: {(number, tab)} x 8 + (number, CR, LF)
 
How many "points" is one row?
Message 9 of 10
If you are using Read File, you are not reading points, but raw characters. "Number of points" does not exist; "number of characters" is what the primitive is reading. Use the setting I mentioned above to read the file line by line: set the number of characters high enough that you get the entire line, then loop over that 65,000 times to get 65,000 lines.

As an aside, why do you want to load 65,000 lines into Excel? If you have a macro to process it, great. If not, that amount of data is hard to deal with. My personal limit for Excel is about 2,000 lines. After that, I go to LabVIEW, Mathematica, MathCAD, ...

Finally, please post your code if you have further questions. I am essentially shooting in the dark with my answers, because I don't know what your real problem is. Thanks.
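
To make the character-versus-point distinction concrete, here is a quick Python check using the sample row posted above (with the tabs written out explicitly). The character count of a row depends on how wide its numbers happen to be, while the number of values per row is always nine, so a fixed character budget will never map cleanly onto whole rows:

```python
# The sample row from the previous post: nine tab-delimited numbers ending in CR/LF.
row = ("-0.058409\t0.012300\t-0.138633\t0.031087\t2.005433E-5\t"
       "-0.000227\t-0.000199\t-0.000276\t-0.000557\r\n")

print(len(row))               # characters in this one row (what Read File counts)
print(len(row.split("\t")))   # values in the row: always 9, whatever its width
```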
Message 10 of 10