05-30-2019 12:02 AM
To be clear, that's not the "first data for each second".
However, you can use the solution I provided above with the modification I mentioned - that is, multiply the value of "i" as the input to the Threshold VI by 0.0001, and increase the value of N by the inverse (reciprocal) factor.
As altenbach mentioned, if you can use LabVIEW 2019, you can use Maps to remove duplicate elements, with the time values as the key and the data values as the value, but take care to keep the expected value (i.e. the first occurrence, not the last).
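In text form, the Map idea boils down to something like the following Python sketch (not LabVIEW code, just the equivalent logic; in LabVIEW you would build it from the Collection/Map nodes). The point to notice is that only inserting when the key is new keeps the first value rather than the last:

# Sketch of the Map/dictionary approach: keep the FIRST y value seen
# for each time stamp (inserting unconditionally would keep the last).
def first_per_time(times, values):
    seen = {}                      # time -> first value at that time
    for t, v in zip(times, values):
        if t not in seen:          # only insert if the key is new
            seen[t] = v
    # unique times (in first-seen order) and their first values
    return list(seen.keys()), list(seen.values())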
If the interval is known (always exactly every 4th point), then using the Decimate Array node with 4 outputs and wiring only the first output will be more direct.
Here's an expanded example showing the Decimate Array functionality (for a known number of values per point, i.e. if you have 4 sensors and that won't change).
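For reference, Decimate Array with 4 outputs splits the array into four interleaved streams, so wiring only the first output is the same as taking every 4th element starting at index 0. A rough Python sketch of that behavior, assuming exactly 4 readings per time stamp:

# Decimate-Array-style selection: keep elements 0, 4, 8, ...
# Only valid when every time stamp really has exactly 4 rows.
def keep_first_of_every_four(data):
    return data[::4]

samples = [1.0, 1.0, 1.0, 1.0, 2.5, 2.5, 2.5, 2.5]
print(keep_first_of_every_four(samples))   # [1.0, 2.5]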
05-30-2019 10:15 AM - edited 05-30-2019 10:37 AM
Most of the above "solutions" assume that there are exactly four duplicates in a row, which is certainly not true for your attached datafile.
Here is something that will only keep the rows for the first instance of each unique x value.
(... and of course you want to get the array using Read Delimited Spreadsheet).
(BTW, this is what I meant by "All you need is a FOR loop, a shift register, and a conditional indexing output tunnel." in message #5 above)
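In text form, that FOR loop / shift register / conditional indexing tunnel pattern corresponds to roughly this Python sketch (the variable prev_x plays the role of the shift register, and the if plays the role of the conditional tunnel; it assumes the duplicate rows are adjacent, as they are in a time-ordered file):

# Keep only the first row of each run of identical x values.
def first_row_per_x(rows):
    kept = []
    prev_x = None                  # shift register initial value
    for row in rows:
        x = row[0]
        if x != prev_x:            # new x value -> keep this row
            kept.append(row)
        prev_x = x                 # update the "shift register"
    return kept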
05-30-2019 03:53 PM
... and yes, averaging all y values for each unique time value would not be much harder. Try it!
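For comparison, a minimal Python sketch of the averaging variant (group the y values for each run of equal time values, then take the mean of each group; in LabVIEW the same idea is a second shift register accumulating the sum and count for the current run):

# Average all y values that share the same (consecutive) time value.
def average_per_time(times, ys):
    out_t, out_y = [], []
    i = 0
    while i < len(times):
        j = i
        total = 0.0
        while j < len(times) and times[j] == times[i]:
            total += ys[j]
            j += 1
        out_t.append(times[i])
        out_y.append(total / (j - i))   # mean of this run of y values
        i = j
    return out_t, out_y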