LabVIEW

Long input string of +300.000 values, need syntax to match and return E values

Hi Labster,

so now you have an array of strings? But then you say you read the data from a file - so why don't you read the file as one big string and convert it as described before (thanks to Altenbach)?

Or use the "Concatenate Strings" function to convert the array of strings back into a single string...
Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 11 of 13


Labster11 wrote:
Comparing with high-low limits is fairly easily done, but it takes a hell of a long time to read in the array values I need. The largest file is about 1,200,000 data points. Yes, 1.2 million. 🙂
How do I break this long list up or how can I get the maximum speed while scanning?

Can you tell us who made that file? Where is it coming from?
 
If this is something that you generated yourself in another program, you should rethink your approach. Formatted text files are extremely inefficient once the datasets get big and you should really consider using binary files. They can be read and written with basically no processing and are thus orders of magnitude faster. If memory is an issue, you can easily process them in chunks of equal size.
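To illustrate the point about binary files (in LabVIEW you would use the Write to/Read from Binary File functions; this is only a Python sketch of the same idea, with a made-up file name and chunk size):

```python
# Sketch: reading a large binary file of float64 samples in fixed-size
# chunks, instead of parsing a formatted text file. The chunk size and
# little-endian float64 layout are illustrative assumptions.
import struct

def read_chunks(path, samples_per_chunk=100_000):
    """Yield lists of float64 values read directly from a binary file."""
    record = struct.Struct(f"<{samples_per_chunk}d")
    with open(path, "rb") as f:
        while True:
            buf = f.read(record.size)
            if not buf:
                break
            n = len(buf) // 8          # short final chunk is allowed
            yield list(struct.unpack(f"<{n}d", buf))
```

Because no text parsing happens, each chunk is essentially a single block copy from disk into memory, which is where the "orders of magnitude faster" claim comes from.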
 
Are the high-low limits a function of position or constant across the entire dataset? Is range checking all you need to do with the data? Processing 1.2 million data points should be nearly instantaneous (< 1 second) unless your code is inefficient and generates many extra data copies, for example. Right now, you probably spend most of the CPU time parsing the text file.
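A constant-limit range check really is a single pass over the data. As a sketch (the limit values here are placeholders, not from the thread):

```python
# Single-pass high/low limit check over a list of samples.
# The default limits are made-up placeholders for illustration.
def out_of_range(values, low=-50.0, high=50.0):
    """Return (index, value) pairs for samples outside [low, high]."""
    return [(i, v) for i, v in enumerate(values) if not (low <= v <= high)]
```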
 
Can you show us your code?
Message 12 of 13
Hi,

Thanks again for the response, the answer was staring me in the face but I didn't see it until you told me to look at the data coming from the machine. 
-The machine does very low-strain testing on monofilament fibres. The length of the file depends on how many samples are to be tested in one batch and on the precision needed- The software designed for the machine wasn't capable of this low-strain testing, so another company wrote a new software program to comply with our requirements. The result was thus a very long string of all sorts of elements.
The eventual answer lay in the negative numbers, which I had chucked away since they told me they had no value or were false data. Every (approx.) 1200 data points the machine returns a -100 and a -2000 value, which marks the start of another fibre and sampling set.
So I just checked for these values and could break up the file into separate workable chunks.
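In pseudocode terms (this is a Python sketch of the splitting idea, not the actual LabVIEW code; the assumption that -100 and -2000 appear as an adjacent pair is mine, based on the description above):

```python
# Split a flat data stream into per-fibre sets, starting a new set
# wherever the -100 / -2000 marker pair appears. The adjacency of the
# two markers is an assumption about the file layout.
def split_at_markers(values, markers=(-100.0, -2000.0)):
    """Split values into sublists at each marker pair; markers are dropped."""
    sets, current = [], []
    i = 0
    while i < len(values):
        if (i + 1 < len(values)
                and values[i] == markers[0]
                and values[i + 1] == markers[1]):
            if current:
                sets.append(current)
            current = []
            i += 2                      # skip the marker pair itself
        else:
            current.append(values[i])
            i += 1
    if current:
        sets.append(current)
    return sets
```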

Works like a charm now.

Thanks again for pointing this out to me. Like you stated, solutions often lie in the things not looked for.

Labster11

Message 13 of 13