LabVIEW Idea Exchange

FermWorks

DSC / Citadel: smart, memory-saving decimation and/or interpolation

Status: Declined

Any idea that has received fewer than 2 kudos within 2 years after posting will be automatically declined.

It would be great to have the option to retrieve decimated (and/or interpolated) data from Citadel, up to a specified maximum number of points over a time interval, under these conditions:

1)  Buffer only the decimated/interpolated data, instead of reading all the raw data and then decimating. The goal is to retrieve data from a large time period without filling up memory! (That's why this is best done down in the Citadel code, not up in LabVIEW.)

2)  Use the same kind of smart decimation that the LV graph uses (maybe borrow that code). For example, if you display 100,000 values on an XY graph and they are all equal to 10 except for one single value of 100, the graph will show that spike no matter how small the graph is, i.e. no matter how much it has to decimate the data to fit it into relatively few pixels. It would be important to preserve min/max and NaN (break) values; see the sketch after this list.
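Here is a minimal sketch of what conditions 1) and 2) ask for together, written in Python purely for illustration (the real implementation would live inside Citadel, and `decimate_minmax` and its `bucket` parameter are made-up names, not an NI API): stream the raw samples in fixed-size buckets, keep only the min and max of each bucket so memory stays bounded regardless of the time span, and pass NaN values through so trace breaks survive.

```python
import math
from itertools import islice

def decimate_minmax(samples, bucket):
    """Stream `samples` (any iterable of floats), yielding at most
    2 values per `bucket` raw points, so memory stays O(bucket)
    no matter how large the time span read. Spikes survive because
    both the min and the max of each bucket are kept; a NaN (break)
    value is passed through so the trace break survives."""
    it = iter(samples)
    while True:
        chunk = list(islice(it, bucket))  # one bucket at a time
        if not chunk:
            return
        if any(math.isnan(v) for v in chunk):
            yield float("nan")            # preserve the break
        finite = [v for v in chunk if not math.isnan(v)]
        if finite:
            lo, hi = min(finite), max(finite)
            # emit min/max in the order they occurred, to avoid
            # distorting the shape of the trace
            if chunk.index(lo) <= chunk.index(hi):
                yield lo
                yield hi
            else:
                yield hi
                yield lo
```

For example, decimating 100,000 samples that are all 10.0 except a single 100.0 with `bucket=1000` yields about 200 points, and the lone spike is still one of them.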

 

2 Comments
FermWorks
Member

Also, when reading from and writing to Citadel entirely through the LabVIEW API, the current "interpolation" option on DSC Read Traces.vi is actually decimation: because numeric traces are "discrete" rather than "continuous", the VI returns the value recorded prior to each interpolation interval, not an interpolated value. That happens to be what I want, but it would be nice to have both options, and it would be good to name the options correctly.
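To make the distinction concrete, here is a Python sketch of the two semantics; the function names are hypothetical, and this models only the behavior described above, not the actual implementation of DSC Read Traces.vi. "Previous value" returns the last sample recorded at or before each query time (what the comment says the VI really does), while true interpolation blends the two bracketing samples.

```python
import bisect

def resample_previous(times, values, query_times):
    """'Discrete' semantics (sample-and-hold): for each query time,
    return the last recorded value at or before it."""
    out = []
    for qt in query_times:
        i = bisect.bisect_right(times, qt) - 1
        out.append(values[i] if i >= 0 else float("nan"))
    return out

def resample_linear(times, values, query_times):
    """'Continuous' semantics: linearly interpolate between the two
    samples that bracket each query time."""
    out = []
    for qt in query_times:
        i = bisect.bisect_right(times, qt) - 1
        if i < 0:
            out.append(float("nan"))      # before the first sample
        elif i >= len(times) - 1:
            out.append(values[-1])        # at or past the last sample
        else:
            t0, t1 = times[i], times[i + 1]
            f = (qt - t0) / (t1 - t0)
            out.append(values[i] + f * (values[i + 1] - values[i]))
    return out
```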

Darren
Proven Zealot
Status changed to: Declined

Any idea that has received fewer than 2 kudos within 2 years after posting will be automatically declined.