LabVIEW


Reading a Spreadsheet with lines of different size characters

Salutations,

I am attempting to synchronize data files, which requires fast-forwarding through spreadsheet data.  An example of the file data is shown below:
2,59,1,247,0,202
2,68,2,0,0,188
2,68,2,4,0,197
2,56,2,8,0,216
2,47,2,11,0,252
2,39,2,26,1,22
2,22,2,34,1,59
1,250,2,50,1,76
1,219,2,47,3,255

As you can see, each line contains a different number of characters.  Thus, when I attempt to fast-forward a certain number of lines using Read From Spreadsheet File.vi (via its character-offset input), I end up at the wrong location.  Since I can't simply compute something like "16*200", where 16 would be the bytes per line and 200 the last line I want to skip, I was hoping someone had a clever approach to solving this issue.
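To make the problem concrete, here is a quick Python illustration (LabVIEW itself is graphical, so this is just a text-language sketch; a single '\n' line terminator is an assumption):

```python
# Per-line byte counts for three of the sample lines, assuming each
# line ends in a single '\n' (a '\r\n' file would add one more byte).
lines = [
    "2,59,1,247,0,202",
    "2,68,2,0,0,188",
    "1,250,2,50,1,76",
]
widths = [len(line) + 1 for line in lines]
print(widths)  # [17, 15, 16] -- there is no constant bytes-per-line stride
```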

Thank you for your time and assistance,
E. Smith.
Message 1 of 9
Your data seems to contain carriage returns or line feeds (or both) as line separators. Use those to count lines, then look for the commas to separate fields.

The Spreadsheet to Array function will do this in one step. Then just index the array to get the row you want.
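In text form, Lynn's one-step suggestion might look like the following Python sketch (the file path and comma delimiter are assumptions; in LabVIEW the parsing step is the Spreadsheet to Array function):

```python
import os
import tempfile

def read_row(path, row_index):
    """Parse the whole file into a 2-D array of ints, then index one row."""
    with open(path) as f:
        rows = [[int(v) for v in line.split(",")]
                for line in f if line.strip()]
    return rows[row_index]

# Tiny demo using three lines of the thread's sample data.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write("2,59,1,247,0,202\n2,68,2,0,0,188\n1,219,2,47,3,255\n")
row = read_row(tmp.name, 2)
os.unlink(tmp.name)
print(row)  # [1, 219, 2, 47, 3, 255]
```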

Lynn
Message 2 of 9
You could look for the 200th occurrence of the line feed character and begin from that point.  If it's spreadsheet data then you could also convert it to an array.  You could then pick out any line of the data by looking at the corresponding row of the array.
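A Python sketch of "start at the Nth line feed" (here n=2 stands in for 200, and a single-'\n' terminator is assumed):

```python
def offset_after_n_lines(text, n):
    """Character offset just past the n-th line feed, i.e. where
    line n (0-based) begins."""
    pos = 0
    for _ in range(n):
        pos = text.index("\n", pos) + 1
    return pos

data = "2,59,1,247,0,202\n2,68,2,0,0,188\n2,68,2,4,0,197\n"
start = offset_after_n_lines(data, 2)
print(start, data[start:].rstrip())  # 32 2,68,2,4,0,197
```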
Message 3 of 9

As I understand it, you need a crystal ball to tell you what value to use for the Read From Spreadsheet File.vi offset characters location.
You don't want to have to read the preceding 99,999 disk file blocks, watching the bytes stream by, when your crystal ball tells you that the line you need is in the 100,000th block. 

If you don't know where you are in the file without looking at all the data up to that point,
then, by definition, you have to read all the data from the file up to that point.

I think you are stuck.

If you cannot alter the routine that writes these files to use constant-width fields, line numbers, or an auxiliary file of pointers, then you have to resign yourself to reading these files at least once the slow way. Whether you have to read them a second, third, or fourth time in an equally slow manner depends on whether you rewrote them after the first reading in a more reader-friendly fashion.
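If one slow pass is unavoidable, it can at least produce the "auxiliary file of pointers" mentioned above, so every later read can seek straight to line N. A Python sketch (file names are hypothetical):

```python
import os
import tempfile

def build_line_index(path, index_path):
    """One sequential pass: record the byte offset where each line
    starts, so later reads can seek directly to line N."""
    offsets = []
    with open(path, "rb") as f:
        pos = 0
        for line in f:
            offsets.append(pos)
            pos += len(line)
    with open(index_path, "w") as idx:
        idx.write("\n".join(str(o) for o in offsets))
    return offsets

# Demo on three sample lines: line index 2 starts at byte 32.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write("2,59,1,247,0,202\n2,68,2,0,0,188\n2,68,2,4,0,197\n")
offsets = build_line_index(tmp.name, tmp.name + ".idx")
os.unlink(tmp.name + ".idx")
os.unlink(tmp.name)
print(offsets)  # [0, 17, 32]
```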

Message 4 of 9
"You could look for the 200th occurrence of the line feed character and begin from that point."

How exactly do I find the 200th occurrence of the line feed character?  Once it is found, how do I know how many bytes (characters) were read before that point?

All of this requires me to open the file and examine its contents.  With that approach, I could just as easily build an array of the character counts per line, take a subarray, sum its elements, and get the number of characters up to my point of interest.  Either way, I'll end up opening the file multiple times.  I presume the best approach is to force the data to be written with the same number of characters in each line?

Thank You,
E. Smith
Message 5 of 9
You posted that while I was writing mine.  Alright, it all makes sense now.  I figure I'll either get the person writing the file to change it or go through a slow process similar to the one I described.

Thanks to everyone for their assistance,
E. Smith
Message 6 of 9


@StudentSmith wrote:
All of this requires me to open the file and investigate the information, ....  In doing either, I'll end up opening the file multiple times


FWIW, if you have to do this the slow way (and I understand you will be trying to get the file writer to change the format to avoid that), then if I were you I'd read the entire file into memory once (it's amazing how much you can stuff into modern computer memory) to avoid opening the file multiple times. Perform all your line searching and synchronization on the memory image and then (if necessary) write the image back to disk.
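A sketch of that read-once approach in Python (the path here is hypothetical):

```python
import os
import tempfile

def load_lines(path):
    """Read the entire file in one disk operation; all subsequent
    line searching happens on the in-memory copy."""
    with open(path, "rb") as f:
        image = f.read()
    return image.split(b"\n")

with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("2,59,1,247,0,202\n2,68,2,0,0,188\n")
rows = load_lines(tmp.name)
os.unlink(tmp.name)
print(rows[1])  # b'2,68,2,0,0,188'
```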
Message 7 of 9
I agree with Warren.  I misunderstood the question earlier and was assuming that you would read the whole file into memory.  Is the file so large that memory is an issue?  I would avoid reading the file character by character if possible.
 
Another possible way to do this would be to read the first n lines and then use the "mark after read" output to get your offset.  I realize that this still doesn't do exactly what you want, but it is an easy way to find the character offset of a given line.
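In Python terms, the "mark after read" corresponds to the file position after reading n lines, with f.tell() playing the role of the mark (a sketch, not the LabVIEW implementation):

```python
import os
import tempfile

def mark_after_n_lines(path, n):
    """Read (and discard) the first n lines; return the byte offset
    where reading would resume -- the "mark after read"."""
    with open(path, "rb") as f:
        for _ in range(n):
            if not f.readline():
                break
        return f.tell()

with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("2,59,1,247,0,202\n2,68,2,0,0,188\n2,68,2,4,0,197\n")
mark = mark_after_n_lines(tmp.name, 2)
os.unlink(tmp.name)
print(mark)  # 32
```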
Message 8 of 9
Sadly, the file size can become rather cumbersome.  Although we haven't pinned down the exact file size (waiting on the writer), the minimum should be around 200 MB or more.  I am using the "mark after read" characters output of the VI to check where to start reading the file again.  I just need to be able to do an initial search through the files so they all start reading at the same point, thus comparing similar activity across files.

Thanks for all the help,
E. Smith
Message 9 of 9