07-22-2011 04:30 PM
Sure, here are some screenshots from a while ago.
This VI generates some output signals and uses a counter to divide an external clock by 2 for the sampling clock. The data comes out as digital waveforms in this case. (I have tried all of the different options here with varying degrees of success.)
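(For anyone trying to follow along without LabVIEW, here is a rough nidaqmx-python sketch of the same divide-by-2 counter trick. The device and terminal names (Dev1, PFI0, port0) are placeholders, not what our actual VI uses.)

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, LineGrouping

    with nidaqmx.Task() as ctr, nidaqmx.Task() as di:
        # The counter spends 1 external tick low and 1 tick high, so its
        # output toggles at (external clock) / 2.
        ctr.co_channels.add_co_pulse_chan_ticks(
            "Dev1/ctr0", source_terminal="/Dev1/PFI0",
            low_ticks=1, high_ticks=1)
        ctr.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

        # Clock the digital acquisition off the counter's output terminal.
        di.di_channels.add_di_chan(
            "Dev1/port0/line0:7", line_grouping=LineGrouping.CHAN_PER_LINE)
        di.timing.cfg_samp_clk_timing(
            rate=1e6, source="/Dev1/Ctr0InternalOutput",
            sample_mode=AcquisitionType.FINITE, samps_per_chan=1000)

        ctr.start()
        data = di.read(number_of_samples_per_channel=1000)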
The digital waveforms then go into a remapping VI. This VI just rearranges the order of the array in order to map it back to the way that we visually see the sensor array. (We collect it in a completely different manner.)
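(In NumPy terms, the remap is just one precomputed fancy-index. The channel count and permutation below are made up for illustration, since the real map depends on our sensor geometry.)

    import numpy as np

    n_ch = 128                                # illustrative channel count
    layout_map = np.random.permutation(n_ch)  # stand-in for the real acquisition->layout map

    data = np.random.rand(n_ch, 1024)         # rows = channels in acquisition order
    remapped = data[layout_map]               # one indexing pass puts rows in sensor order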
Lastly, we perform the signal processing. This is an older version of the VI where we just used the Subset FFT.
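(Roughly, in NumPy terms, computing the full FFT and slicing is the simple analogue of that step; the bin range here is illustrative, and the real VI also returns phase.)

    import numpy as np

    def fft_subset_mag(x, start=1000, length=8000):
        # Simple analogue of the FFT-subset step: full FFT, then slice.
        X = np.fft.rfft(x)
        return np.abs(X[start:start + length])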
Let me know if anything isn't clear or if you would prefer the actual VIs.
07-22-2011 04:57 PM
It is difficult to really inspect a code image, because many things are hidden or ambiguous. Sorry, I am not really familiar with digital waveforms.
Some comments:
top picture:
...
07-22-2011 05:00 PM
The local variables and controls should all stay constant during the 8 loop iterations. The waveform->dynamic data conversion is just on the signal generation side; it was done out of convenience. I can't remember the specifics of the rationale, though.
07-22-2011 07:32 PM - edited 07-22-2011 07:33 PM
I think you do a lot of unnecessary memory reallocations, going from 1D to 2D to 3D to 1D arrays while transposing, remerging, and reshaping.
Let's have a look at your code structure (corresponding to the lower left of your bottom image). Your code involves 9 (!) buffer allocations, while my code does the same with a single buffer allocation. (See the image below: your code is on top, while my two alternatives are at the bottom.)
Similarly, you could build the 1D array of digital waveforms right inside the loop where it is generated. You know the final size and the pattern of arrangement, so why do all that autoindexing, reshaping, etc.?
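In text form, the difference looks roughly like this (NumPy sketch with illustrative sizes; acquire() is a stand-in for one iteration's acquisition, not anything from the actual VI):

    import numpy as np

    n_loops, n_ch, n_pts = 8, 16, 1024      # illustrative sizes only

    def acquire(i):
        return np.random.rand(n_ch, n_pts)  # stand-in for one iteration's data

    # Copy-heavy pattern: autoindex into a 3D array, then transpose and
    # reshape back down -- each step can allocate another temporary buffer.
    stacked = np.stack([acquire(i) for i in range(n_loops)])
    flat = stacked.transpose(0, 2, 1).reshape(-1)

    # Single-buffer pattern: the final size and layout are known up front,
    # so allocate once and write each iteration straight into place.
    out = np.empty(n_loops * n_pts * n_ch)
    view = out.reshape(n_loops, n_pts, n_ch)  # a view, not a copy
    for i in range(n_loops):
        view[i] = acquire(i).T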
07-25-2011 08:56 AM
Altenbach hits on a key point in dealing with large data sets.
NO extra buffer allocations of big data!
In Drew's code, the buffer allocations are of small matrices, so they are not a big issue. When scrubbing G code, though, especially to make it work for BIG data, the savvy developer will avoid unnecessary data copies.
07-26-2011 05:48 PM
On another look, one area of Drew's code that does make copies of big data is the indexing and rearranging of the computed spectra. Even though the subset from 1000 to 9000 is a fraction of the total bandwidth, each copy of the 128 spectra is equivalent to another 9.2 MB. After the For Loop, the array manipulation (three reshapes, a concatenation, and a transpose) allocates 5 buffers of big data for an additional 46 MB. Altenbach's in-place implementation of the spectrum packing used a grand total of ~10 MB of space for packing the FFT magnitudes, which is near the absolute minimum needed.
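A minimal NumPy sketch of that in-place packing idea (my numbers, not altenbach's actual diagram: assuming a 9000-bin subset of doubles per spectrum, 128 × 9000 × 8 B ≈ 9.2 MB per copy, matching the figure above; get_channel() is a hypothetical stand-in):

    import numpy as np

    n_spectra, n_bins, start = 128, 9000, 1000

    def get_channel(i):
        return np.random.rand(32768)            # stand-in for one channel's samples

    packed = np.empty((n_spectra, n_bins))      # one up-front allocation
    for i in range(n_spectra):
        X = np.fft.rfft(get_channel(i))         # 16385 bins for 32768 samples
        packed[i] = np.abs(X[start:start + n_bins])  # subset written in place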
There is a difference in memory usage between {FFT Subset (mag and phase)}, {SVPO Power Spectrum (full bandwidth) followed by Spectrum Subset}, and {Power Spectrum Subset}. Power Spectrum Subset is the most memory efficient. Even there, I think there is room to make sure that no unnecessary copies of the data are made.
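(Sketched in NumPy, the reason the subset variant wins: slice the complex spectrum before squaring, so the full-bandwidth power spectrum is never allocated. The bin range is illustrative.)

    import numpy as np

    X = np.fft.rfft(np.random.rand(32768))
    sub = X[1000:10000]                      # keep 9000 complex bins
    ps_subset = sub.real**2 + sub.imag**2    # power of only the kept bins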
07-28-2011 10:19 AM
Doug, yes, it is clear that the big savings will be in dealing with the arrays of digital waveforms. I made alternate code for that too (it needs to be slightly different from the above), but I had some questions about conceptualizing how these look in memory. A digital waveform is basically an array with some extra properties, and thus an array of digital waveforms is basically an array of arrays. Is there a document describing how arrays of digital waveforms are stored in memory? What happens if one waveform element grows or shrinks in size? Does everything get reallocated?