LabVIEW


spreadsheet string to array slow?

We have an application here where we would like to save all our data as 120
files of 4 MB each and then read it back.
I've switched to zipped files now, but reading from disk does not seem to
be the problem. I used the profiler to analyze the time consumption, and I
think it is the 'Spreadsheet String To Array' VI that makes it so
slow; 1 second is a lot to parse 450k numbers. (BTW, it's a Win2000 PC from
this year.)

Any hints on how to improve it?

Best Regards
Urs Bögli

Message 1 of 11
Hi,
have you tried the 'Write To I16 File.vi' or 'Write To SGL File.vi' VIs? They use 'Write File.vi' internally, and the input data are variable.

Or, if you have LabVIEW 7.1, you can test the speed of 'Read LabVIEW Measurement File.vi' or 'Write LabVIEW Measurement File.vi'.

Have a nice day
JCC
Message 2 of 11
Hi,

we had to use double precision, so we were not able to use the standard
VIs for spreadsheet read/write, which work with single precision only.
The mentioned VI 'Spreadsheet String To Array' is part of the latter,
but can be used on its own, too.
Writing is not a problem (there is enough time), so we did not worry
about that side.
Thanks
Urs


Message 3 of 11
hi,
I made this small program. It generates, writes, and reads an array in double precision. I generated an array with 1,000,000 rows and 10 columns. Reading is fast in this program because the file is cached in Windows virtual memory.

When you close LabVIEW and try to read the file with the second VI, on my PC I need 3 seconds for this array (the file is 80 MB on disk).

The programs are in LabVIEW 7.0.
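[Editor's note: the attached VIs are LabVIEW diagrams, but the approach they take can be sketched in Python as an analogue. The function names below are hypothetical; the point is that writing the raw double-precision values with no text conversion gives a file whose size is exactly rows × columns × 8 bytes, and reading it back is a single bulk transfer.]

```python
from array import array
import os

def write_binary(path, data):
    """Write a flat array of float64 values to disk as raw binary,
    with no text conversion at all."""
    with open(path, 'wb') as f:
        data.tofile(f)

def read_binary(path):
    """Read the raw doubles back; the element count comes from the
    file length (8 bytes per float64)."""
    a = array('d')
    with open(path, 'rb') as f:
        a.fromfile(f, os.path.getsize(path) // 8)
    return a
```

With 1,000,000 rows × 10 columns of doubles, this layout gives 10,000,000 × 8 bytes = 80 MB, matching the file size JCC reports.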
Message 4 of 11
Thanks JCC,

your VIs are 10 times faster than ours.

The problem with your solution is that it was specified that the data has
to be stored in 'comma separated format' (separator: tab), so that it can
be read by spreadsheet programs like Excel.
I think the time-consuming step in my read program is the conversion
from spreadsheet string to array.

Urs


Message 5 of 11
JCC,

can you explain to me how to cache in virtual memory?
Thanks
Urs


Message 6 of 11
Is there a reason you are using text files? With that amount of data, binary is usually a much better way to go. Check out NI-HWS, available on the driver CD. It has native compression, in addition to being hierarchical and fast. If you need to access the data from another program, the underlying technology is HDF5, which is supported by many analysis programs (but not Excel).
Message 7 of 11
Hi,

The reason is that engineers will look at these data to debug the tested
units or to do re-engineering. They are used only to Excel, not to
LabVIEW (!) nor to any of the math/analysis tools.

My basic question is: why is the 'Spreadsheet String To Array' VI so slow?
Are there any better VIs for that?

Thanks anyway
Urs
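[Editor's note: a rough feel for why text-to-number conversion dominates can be had outside LabVIEW. The Python sketch below is illustrative only, not a LabVIEW timing: it builds a tab-separated string of roughly the 450k numbers mentioned above and times just the string-to-double conversion, which is the step 'Spreadsheet String To Array' must also perform for every field.]

```python
import time
from array import array

values = [i * 0.5 for i in range(450_000)]       # ~450k numbers, as in the post
text = '\t'.join(repr(v) for v in values)        # tab-separated spreadsheet string

t0 = time.perf_counter()
parsed = array('d', (float(f) for f in text.split('\t')))  # per-field text-to-double conversion
t1 = time.perf_counter()
print(f"parsed {len(parsed)} numbers in {t1 - t0:.3f} s")
```

Each field costs a scan plus a decimal-to-binary conversion, so text parsing is inherently far slower than the bulk memory copy a binary read needs.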


Message 8 of 11
Urs,

I am guessing a bit here, but it may be something to think about. 'Spreadsheet String To Array' has no way of knowing in advance how big the array will be. It probably allocates enough memory for a small array and then, as the array outgrows that, it has to reallocate. This is known to be a slow process in auto-indexing loops.

The alternative would be to preallocate a large array and use the Replace Array Subset function inside a loop that processes the string one line at a time. Preallocated arrays with Replace Array Subset are known to be faster than auto-indexing. I have not compared it against 'Spreadsheet String To Array', though.

Lynn
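[Editor's note: Lynn's preallocate-and-replace idea translates readily outside LabVIEW. A minimal Python analogue (function name hypothetical), assuming the row and column counts are known in advance, as they are in Urs's case: the output buffer is allocated once at full size and each parsed value is written into its slot, so no reallocation ever happens while parsing.]

```python
from array import array

def parse_spreadsheet(text, rows, cols):
    """Parse a tab-separated string of doubles into a preallocated
    flat row-major array, filling it one line at a time."""
    out = array('d', bytes(8 * rows * cols))  # preallocate rows*cols zeroed doubles
    i = 0
    for line in text.splitlines():            # one spreadsheet row per line
        for field in line.split('\t'):
            out[i] = float(field)             # replace in place, no growth
            i += 1
    return out
```

In LabVIEW terms, the `out` buffer corresponds to an Initialize Array before the loop and the `out[i] = ...` assignment to Replace Array Subset on a shift register.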
Message 9 of 11
John,
But as 'Spreadsheet String To Array' is a VI from the library that cannot
be altered, your idea means the way to go is to write a VI with the same
functionality for a known expected array size, which *is* the case here.
Agree?
Thanks
Urs


Message 10 of 11