12-22-2020 06:03 AM
I have some 2D lookup tables (approx. dimensions 25x25). I am hoping to perform linear interpolations using these tables ~4100 times per second (four data channels being acquired at 1024 Hz).
I have butchered the NI_AAL_Interpolation/Interpolate 2D Scattered VI and removed the array manipulation that is carried out on the X, Y and Z data (which never changes and is stored as constants in my code) so that it is only carried out once, on the first call of my code.
This has improved the speed by about 90 times: 65 ms down to 0.72 ms per iteration of four interpolations (one per channel), averaged over 1000 iterations, roughly one second's worth of data.
Even so, 0.72 ms per iteration at 1024 Hz is ~737 ms of compute per second of data, so I would be using almost 75% of a CPU core. I am already struggling for CPU resources, so I can't really accept this.
Has anyone got a more efficient 2D interpolation implementation?
(I can post some simplified code of what I have done soon; I need to strip out the confidential lookup tables and make it a bit more readable first.)
In the meantime, I'm just hoping someone can point me in the direction of a more efficient method.
12-22-2020 08:28 AM
Not a solution here, just checking on what ground's already been covered.
1. Have you explored several levels down into the built-in interpolation functions to find the best place to "tap-in" and modify? At a quick glance, there seems to be quite a bit of potentially redundant "helpful" checking and verification going on. It basically looks structured for a one-time call rather than repeated calls.
2. Have you tried any of your own brute-force methods for comparison? It "feels like" there's room to speed things up for a mere 25x25 lookup / linear interpolation. 2D interpolation isn't something I find myself doing, but it seems like two "Threshold 1D Array" operations on the spanning X and Y vectors would give you the fractional indices that identify the 2x2 subset needed for the actual interpolation, and that would be a pretty cheap calculation (see the sketch after this list).
3. This kind of computation seems like it lends itself to parallelization -- perhaps that's a way to spread the CPU burden?
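To sketch point 2 in text form, since LabVIEW diagrams don't paste well into a post: Python/NumPy below purely as pseudocode for the dataflow. All the names are mine, and np.searchsorted stands in for the fractional-index behaviour of Threshold 1D Array.

```python
import numpy as np

def frac_index(breakpoints, v):
    # Fractional index of v in a monotonically increasing vector,
    # analogous to LabVIEW's Threshold 1D Array.
    i = int(np.clip(np.searchsorted(breakpoints, v) - 1, 0, len(breakpoints) - 2))
    return i + (v - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])

def interp2d_cell(xs, ys, z, x, y):
    # Bilinear interpolation using only the 2x2 cell that brackets (x, y).
    # xs, ys: the 25-element spanning vectors; z: the 25x25 table (rows = y).
    fx, fy = frac_index(xs, x), frac_index(ys, y)
    ix = min(int(fx), len(xs) - 2)
    iy = min(int(fy), len(ys) - 2)
    tx, ty = fx - ix, fy - iy
    return ((1 - tx) * (1 - ty) * z[iy, ix]
            + tx * (1 - ty) * z[iy, ix + 1]
            + (1 - tx) * ty * z[iy + 1, ix]
            + tx * ty * z[iy + 1, ix + 1])
```

Per sample that's two binary searches on 25-element vectors plus a handful of multiplies, so it should be very cheap.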
-Kevin P
12-22-2020 09:29 AM - edited 12-22-2020 09:30 AM
1. Have you explored several levels down into the built-in interpolation functions to find the best place to "tap-in" and modify? At a quick glance, there seems to be quite a bit of potentially redundant "helpful" checking and verification going on. It basically looks structured for a one-time call rather than repeated calls.
Yeah, the only bit I am calling repeatedly is this VI
Since my last post I have also stripped out the code in this VI that allows you to interpolate multiple points, which has increased the speed by another 10 times. I am down to about 0.05 ms per iteration, which is probably OK now... just.
2. Have you tried any of your own brute-force methods for comparison? It "feels like" there's room to speed things up for a mere 25x25 lookup / linear interpolation. 2D interpolation isn't something I find myself doing, but it seems like two "Threshold 1D Array" operations on the spanning X and Y vectors would give you the fractional indices that identify the 2x2 subset needed for the actual interpolation, and that would be a pretty cheap calculation.
This is what I was going to try next. I don't do a lot of interpolation either, and I have already spent longer than I intended on this project, so I was hoping someone could point me to something I had missed.
3. This kind of computation seems like it lends itself to parallelization -- perhaps that's a way to spread the CPU burden?
Yeah, everything is reentrant, so I am calculating the four interpolations simultaneously.
-----------------------------
As I mentioned, now that I have stripped a little more out of it, I am down to an acceptable execution time. I was just panicking because, for my first few attempts, I couldn't get it quicker than 20 ms an iteration.
12-22-2020 10:53 AM - edited 12-22-2020 12:20 PM
I am not sure why you are using the "scattered" version of the interpolation tool. If you say 25x25, I would imagine a regular 2D array where you can apply simple bilinear interpolation. Even if the data is scattered, you can convert it to a regular 2D array once before the loop.
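A text-form sketch of that one-time conversion (Python/SciPy as stand-in pseudocode again; pts and vals are hypothetical names for the scattered sample coordinates and values):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered samples: pts is (N, 2) coordinates, vals the Z values.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(625, 2))
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])

# One-time regridding, done once before the acquisition loop starts.
xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 25)
ys = np.linspace(pts[:, 1].min(), pts[:, 1].max(), 25)
gx, gy = np.meshgrid(xs, ys)
z_regular = griddata(pts, vals, (gx, gy), method="linear")
# z_regular is now a plain 25x25 array, so the per-sample work in the loop
# reduces to simple bilinear interpolation.
```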
Depending on the required resolution, you could even remap your data to a finer grid (e.g. 250x250) once, up front, and just index into it at run time (i.e. a nearest-neighbor lookup).
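Sketched the same way, reusing the names from the sketches above (FINE = 250 is an assumed resolution, not a recommendation):

```python
import numpy as np

# Offline: expand the 25x25 table to a finer grid by bilinear interpolation.
# Size FINE to the accuracy you actually need.
FINE = 250
fine_x = np.linspace(xs[0], xs[-1], FINE)
fine_y = np.linspace(ys[0], ys[-1], FINE)
z_fine = np.array([[interp2d_cell(xs, ys, z_regular, x, y) for x in fine_x]
                   for y in fine_y])

def lookup_nearest(x, y):
    # Real-time path: scale, round, index -- no interpolation at all.
    ix = int(round((x - fine_x[0]) / (fine_x[-1] - fine_x[0]) * (FINE - 1)))
    iy = int(round((y - fine_y[0]) / (fine_y[-1] - fine_y[0]) * (FINE - 1)))
    return z_fine[iy, ix]
```

The trade is memory for time: a 250x250 table of doubles is only ~500 kB.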
It would really help if you could attach your benchmark harness and some typical raw data so we can play around. Currently, there are simply too many unknowns.
12-22-2020 11:12 AM
I really like altenbach's suggestion (big surprise, eh?) that you could do some offline pre-interpolation to expand your lookup table from 25x25 to 250x250 (maybe even as much as 2500x2500) so that in real time you can merely look up the nearest indices without further interpolation. Use memory to save time.
-Kevin P
12-22-2020 11:20 AM
Using this tool, I get a loop time of about 300 ns on my ancient laptop (including two random numbers, two multiplications, the timing code, the bilinear interpolation subVI, and a chart). Probably fast enough for your purpose.
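For anyone without LabVIEW handy, a rough text-language analogue of that harness (Python again, reusing interp2d_cell from the sketch earlier in the thread; absolute timings won't match a compiled LabVIEW diagram):

```python
import timeit
import numpy as np

# Self-contained demo table; interp2d_cell is the bilinear sketch from above.
xs = ys = np.linspace(0.0, 1.0, 25)
z = np.sin(xs)[None, :] * np.cos(ys)[:, None]
rng = np.random.default_rng(1)

def one_iteration():
    # Two random coordinates plus one bilinear interpolation per loop,
    # as in the harness described above (minus the chart).
    x, y = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
    return interp2d_cell(xs, ys, z, x, y)

n = 100_000
t = timeit.timeit(one_iteration, number=n) / n
print(f"{t * 1e9:.0f} ns per iteration")
```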
12-22-2020 11:20 AM
I agree, altenbach's idea is much better than what I have done. I will give it a go tomorrow.
12-22-2020 11:38 AM
OK, why didn't I think of doing it like this? Now that I have seen it, it is so obvious.
I saw some interpolation VIs in the base LabVIEW install and got blinkered from then on.
That is exactly the sort of simpler solution I was after.
12-22-2020 11:41 AM
Even when adding a larger intensity graph to validate the interpolation results by mapping them back into a larger 2D array, the loop time is ~30 microseconds (same subVI as above, not included here).
12-22-2020 11:47 AM
To wrap this up, two points: