03-19-2015 10:31 AM
I'm attempting to use the performance profiler on my 9030 target, but when I start profiling I immediately lose connection to the target. This is what I've found:
1) Disconnect happens regardless of target CPU load. This can be recreated with extremely simple VIs that occupy a percent or two of the CPU.
2) Disabling the embedded UI allows me to profile the target... but I need that UI enabled to actually run my program.
3) I've tried this on two 9030s, two dev computers, and both the 2.0.0f and 2.2.0f firmware versions.
Anyone have any luck profiling with the embedded UI enabled?
Thanks
-Eric
03-19-2015 05:48 PM
We've reproduced this here and are investigating; the LabVIEW process on the target crashes when profiling is started (you don't even need to run a VI). Will keep you posted on resolution. Thanks for the heads up and sorry for the inconvenience!
03-19-2015 06:07 PM
Thanks, Scot
I wrote a test harness version of my code that doesn't require the embedded UI. This was successfully profiled with the Performance and Memory tool. I found a naively grown string in some of my code that was causing the original issue, fixed it, and shaved 39.5 seconds off a 40-second process.
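For anyone searching later, here's roughly what I'd been doing, written as a hypothetical C analogue (the actual code is G, so this is only a sketch of the pattern). Concatenating onto the string inside the loop means every iteration reallocates and recopies everything accumulated so far, so the total work is quadratic in the number of chunks:

#include <stdlib.h>
#include <string.h>

/* Hypothetical C analogue of the slow G pattern: a shift register
 * carrying a string that gets concatenated onto every iteration.
 * Each pass copies the entire accumulated buffer again, so n
 * chunks cost O(n^2) bytes of copying overall.
 * (Error handling omitted for brevity.) */
char *grow_naive(const char **chunks, size_t n)
{
    char *out = calloc(1, 1);                /* start with "" */
    size_t len = 0;
    for (size_t i = 0; i < n; i++) {
        size_t add = strlen(chunks[i]);
        out = realloc(out, len + add + 1);   /* new buffer every pass */
        memcpy(out + len, chunks[i], add + 1);
        len += add;
    }
    return out;
}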
I wanted to look for more buffer nonsense like this, and found that the Profile Buffer Allocations tool doesn't pick up any activity from my target. Is this expected behavior?
03-23-2015 08:50 AM
Profile>>Show Buffer Allocations is a static/offline analysis tool: it doesn't require running the code or being connected to a target, and it doesn't "pick up" activity from the target. I just did a quick check (with a for loop auto-indexing tunnel, and then with a Build Array) to make sure this is working on my cRIO, and it is, although I did have to hit Refresh once after opening the dialog. The small black dots that show where allocations are happening can be hard to see (see http://labviewwiki.org/Buffer_Allocation for an example). Is it possible you just missed them, or that the area of code you're looking at doesn't have any?
03-23-2015 10:25 AM
Ack, I misspoke. Sorry!
"Show Buffer Alocations" works fine. I was having trouble with "Profile Buffer Allocations".
There are a few parallel threads to this conversation.
1) I had a specific bug in my code: it executed very slowly because I did a bad job of managing buffers. Specifically, I was growing a string in a loop with a shift register and concatenation. Now I use an auto-indexing tunnel and flatten the resulting array after the loop finishes (see the C sketch after this list).
2) While trying to diagnose this issue, I found that the performance profiler didn't work on my target with the embedded UI enabled. You've reproduced it, so I'm happy.
3) I'm still trying to learn more about the type of mistake I made in (1), so I ran through the rest of the profiling options. When I got to "Profile Buffer Allocations" I noticed that it doesn't appear to work on the RT target. The tool will report buffer sizes if I set the Application Instance of the VI to Main Application Instance, but it will not report anything if it is targeting the cRIO.
4) For completeness, I ran through the rest of the Tools -> Profile menu. Find Parallelizable Loops does not work on RT-targeted VIs unless I set them to Main Application Instance first; they instead report "Could not locate the VI to analyze. Load the VI and select "Find Parallelizable Loops..." again." Show Buffer Allocations and VI Metrics work well.
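And here's the hypothetical C analogue of the fix from (1): the chunks array stands in for what the auto-indexing tunnel collects, and the two passes afterward stand in for the flatten after the loop. Each byte gets copied exactly once, so the cost is linear instead of quadratic:

#include <stdlib.h>
#include <string.h>

/* Hypothetical C analogue of the fixed G pattern: collect the
 * pieces during the loop (the auto-indexing tunnel), then size
 * the result once and copy each piece once (the flatten).
 * n chunks now cost O(n) instead of O(n^2).
 * (Error handling omitted for brevity.) */
char *grow_flat(const char **chunks, size_t n)
{
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += strlen(chunks[i]);          /* one pass to size */

    char *out = malloc(total + 1);
    size_t len = 0;
    for (size_t i = 0; i < n; i++) {         /* one pass to copy */
        size_t add = strlen(chunks[i]);
        memcpy(out + len, chunks[i], add);
        len += add;
    }
    out[len] = '\0';
    return out;
}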
Now that (1) is resolved I'm good to move forward again. I just want to be sure that any info I found while hunting (1) makes it to the right people.
Thanks
- Eric
03-24-2015 12:36 PM
Thanks, Eric. I reproduced issue #4 with Find Parallelizable Loops and filed CAR 520215.
03-25-2015 08:59 AM
Regarding #3, you didn't misspeak, I misread, sorry!
Profile>>Profile Buffer Allocations isn't really supported on RT targets. I've asked the team responsible to clarify the RT support. I believe they are going to create a KnowledgeBase article about it at some point, and we should put something in the online help about it as well. Thanks for bringing the oversight to my attention.
03-25-2015 10:59 AM
Thanks! Is there a CAR on #3?
Can you recommend any resources (KB articles, for example) for improving my LabVIEW writing style for performance? I've written plenty of performance-critical C and assembly, and I've written plenty of non-RT LabVIEW... this is the first time I've had to write performant G.
Phrased another way - If you were to inherit my code tomorrow, how would you start improving it?
- Eric
03-27-2015 08:59 AM - edited 03-27-2015 09:00 AM
My understanding is that the team that owns the feature is following up, but just to make sure, I filed CAR 520712 and referenced this thread. As I said, my understanding is that there won't be a fix, just improved documentation, unfortunately.
As far as general advice goes, I'm probably not the best person to ask, but a quick Google search turned up documents like ftp://ftp.ni.com/pub/events/labview_dev_ed/2008/improving_performance.pdf (which, despite being a few years old, still looks reasonable to me) and http://digital.ni.com/public.nsf/allkb/D58C6375BC58A16586257194004950B8. There's also paid training on this, apparently; see http://www.ni.com/white-paper/51900/en/