02-06-2012 03:15 PM
Hello,
The slides and an example from the Feb. 7, 2012 meeting have been posted in the documents section, available here:
Slides:
https://decibel.ni.com/content/docs/DOC-20714
Example:
https://decibel.ni.com/content/docs/DOC-20715
If you have any questions or comments, let me know. Enjoy!
02-08-2012 09:33 AM
The latest version of the presentation, as it was actually delivered, has been uploaded. A solution to the Programming Challenge issued during the presentation has also been posted in the documents section.
A note on performance optimization of arrays for advanced users. I commented during the presentation that if you are working with 100+ MB arrays, you should consider other options. Here is an example: in the software I demonstrated at the end of the presentation, there was one large graph and, at certain points in the demonstration, 4 small graphs beneath it. The large graph showed an ultrasonic signal which, due to the digitization rate and the long record length, is 200,000 elements by default. When I clicked on Channel Configuration to view all 4 channels, I could easily have 800,000 points being sent to a graph. If that same data, or some permutation of it, were also sent to the 4 smaller displays beneath, I would suddenly be working with a lot of data, taking up a lot of memory and slowing down my program.
Data display is one of the areas where it is most tempting to use massive arrays, since you want every point to be displayed on a graph. However, the way I implement the display on the 4 small graphs in my code is much more efficient. Since the data is stored in TDMS format, instead of asking for all 200,000 points and graphing them, I only ask for as many points as there are pixels available to display them. At its default size, I believe each of the 4 graphs occupies somewhere around 200 pixels. So I read the TDMS file in a loop, requesting a single data point from each large chunk, and only the 200 points of data I actually need are placed into active memory. If I oversample that to 400 points, the graph might look a little more accurate, but the visual effect is nearly identical to loading all the data, since, remember, your computer can only draw one color in a single pixel.
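For anyone who wants to see the idea outside of LabVIEW, here is a rough Python sketch of the decimation math. The signal is simulated in memory just so the sketch runs on its own; the numbers and names are illustrative assumptions, not the code from the demo. In the actual program, each offset would be a small TDMS read (using the offset and count inputs), so only the roughly 200 displayed points would ever be loaded.

import numpy as np

total_points = 200_000   # full ultrasonic record length from the demo
pixel_width = 200        # approximate width of one small graph, in pixels

# Simulated stand-in for the TDMS channel. In the real program this data
# stays on disk and is never loaded all at once.
signal = np.sin(np.linspace(0.0, 50.0 * np.pi, total_points))

chunk_size = total_points // pixel_width        # points skipped per displayed point
offsets = np.arange(pixel_width) * chunk_size   # one read position per pixel

# One point per chunk: 200 values for a roughly 200-pixel-wide graph.
display_points = signal[offsets]
print(display_points.shape)   # (200,)

In LabVIEW terms, each offset in that loop roughly corresponds to a TDMS Read with its offset and count inputs set so that only one value comes back per iteration.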
This is one example of how there are alternatives to massive arrays. Even if I optimized the 200,000-point approach as best I could, my code would still never compete with even a poorly optimized 200-point array being displayed. When you are faced with a massive array, consider whether you have the option of working with only a summary set of the data, or with subsets or chunks of it, at any given time. The TDMS format is very helpful in enabling this, because its indexing lets you load only the data you need rather than pulling everything into memory at once.
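If a straight one-point-per-pixel read drops peaks you care about, one possible summary set is the minimum and maximum of each chunk instead of a single point. Here is a minimal sketch of that idea using the same simulated setup as above; the interleaved min/max ordering is just a display convenience and is not the method used in the presentation.

import numpy as np

total_points = 200_000
pixel_width = 200
chunk_size = total_points // pixel_width   # 1,000 points per chunk

signal = np.sin(np.linspace(0.0, 50.0 * np.pi, total_points))

# Reduce each 1,000-point chunk to its min and max so the displayed trace
# keeps the signal envelope while using only 2 points per pixel.
chunks = signal[: pixel_width * chunk_size].reshape(pixel_width, chunk_size)
summary = np.empty(2 * pixel_width)
summary[0::2] = chunks.min(axis=1)
summary[1::2] = chunks.max(axis=1)
print(summary.shape)   # (400,)

Whether that extra detail is worth it depends on the signal; in the demo described above, a single point per pixel was visually indistinguishable from loading the full record.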
Also, a correction: I said that arrays can be built out of nearly any front panel object, including graphs. In fact, the only way to get an array of graphs is to place a graph in a cluster and place that cluster into an array. I apologize for the mistake. Likewise, for most advanced front panel objects, you will need to place them in clusters and then build arrays from those clusters, as demonstrated in the example code linked in the post above.
02-09-2012 07:36 AM
Joseph,
Thank you for your presentation on arrays. I'm a beginner in LV with no experience, and I'm teaching myself to program in it. I have managed to write several programs that we use on a daily basis at work. You have made some things clearer to me.
Thank you,
Steve Kofoed