08-29-2012 09:56 AM
@User002 wrote:
Did that yesterday.
The performance went down even further (by some 20%); I presume that's because we are forcing the scheduler to run between each iteration?
Br,
/Roger
@Ben wrote:
@User002 wrote:
The performance drop is most noticeable on more computationally constrained RT targets, such as the sbRIO 9606.
My "real" c/sbRIO customer projects/programs, which previously used about 30-40% of the CPU, now flatline at 100%.
Br,
/Roger
Please try putting "0 ms wait"s in those loops.
Please.
The "0 ms wait" only marks the end of the loop so the task scheduler can do its job without preempting, etc.
Ben
Ah, I see Ben had put that text in bold.
Yes, I already have "Wait for next ms" waits (though usually larger than 0) in all of my customer's code subsystem loops (feedback control, alarms, logging, etc.). As a rule, I try to keep CPU utilization below 40% on these systems, which wasn't a problem until LV2012...
Br,
/Roger
08-29-2012 11:35 AM
Wait Until Next ms Multiple is something I rarely use because its only useful purpose (set sarcasm mode) is to arrange for multiple loops to wake up at the same time and fight for the CPU (set sarcasm mode false). The only way to avoid that collision with "wait for multiple" is to use a unique prime-number wait for every "wait for multiple".
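To make the collision argument concrete, here is a small arithmetic sketch (in Python, since a LabVIEW diagram can't be pasted as text): loops with equal periods wake together on every iteration, while coprime (e.g. prime) periods only coincide at the least common multiple of the two.

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple: the first instant both loops wake together."""
    return a * b // gcd(a, b)

# Two loops on the same 100 ms "ms multiple" wake simultaneously
# every 100 ms and fight for the CPU on every single iteration.
print(lcm(100, 100))  # -> 100

# With prime periods of 97 ms and 101 ms they only coincide every
# 9797 ms, so the scheduler almost never sees both ready at once.
print(lcm(97, 101))   # -> 9797
```

The periods 97 and 101 here are just illustrative picks; any pairwise-coprime set of waits spreads the wake-ups the same way.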
What I was suggesting was a "0 ms wait" (note: not the ms multiple) inside the loop doing the incrementing. The "0" does not introduce a delay, but marks the end of the loop iteration (speculating now) by moving the thread temporarily out of the "compute queue" (the queue of threads waiting to use the CPU). Since it is a quick switch, it introduces very little delay into the loop but lets the process scheduler slip other threads into the CPU.
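For what it's worth, the "0 ms wait" behaves much like a cooperative yield in a text language. A minimal Python sketch of the idea (illustrative only, not LabVIEW API): `time.sleep(0)` adds no programmed delay, but hands the CPU back to the scheduler at the end of each iteration, which is what the 0 ms wait is meant to do at the end of a loop.

```python
import threading
import time

def worker(iterations: int, result: list) -> None:
    """Tight loop that yields once per iteration.

    time.sleep(0) is the cooperative-yield analogue of a "0 ms wait":
    it introduces no programmed delay, but marks the end of the
    iteration so the scheduler can slip other threads onto the CPU.
    """
    count = 0
    for _ in range(iterations):
        count += 1
        time.sleep(0)  # yield, not a delay

    result.append(count)

result: list = []
t = threading.Thread(target=worker, args=(10_000, result))
t.start()
t.join()
print(result[0])  # -> 10000
```

The loop still completes its full iteration count; the yield only changes when other threads get a turn, not what the loop computes.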
Take care,
Ben
08-29-2012 12:07 PM
Hi again, I followed your recipe, Ben.
This time I was running the program on my Win7 Parallels virtual machine.
Here is what I noticed (in LV 2011):
And for LV2012:
Br,
/Roger
08-29-2012 12:15 PM
Thank you for trying it!
I have a request in to support to look at this. It is not a simple issue, and I suspect they will look, rub their heads, and hob-nob a bit before they get back to us.
I'll watch for what they say.
Ben
08-29-2012 12:19 PM - edited 08-29-2012 12:20 PM
Thanks Ben
Let's see what the enlightened have to say to us mere mortals!
Br,
/Roger
Edit: added the latest code
08-29-2012 05:07 PM
Hi Roger and Ben,
I'm looking into this now. I'll try recreating this to confirm that I'm seeing the same behavior that you're seeing, Roger. Assuming I get the same behavior, it may be a few days before I hear back from the LabVIEW tribunal.
David A
08-29-2012 07:33 PM
@David-A wrote:
Hi Roger and Ben,
I'm looking into this now. I'll try recreating this to confirm that I'm seeing the same behavior that you're seeing Roger. Assuming I get the same behavior it may be a few days before I hear back from the LabVIEW tribunal.
David A
Excellent!
Giving this some thought, this could be really messy to evaluate.
If debug is on and a window to the RT app is open, the numbers could be weirded out.
Changes in the execution order of when the "Wait Until" starts could play a part in the numbers, if the compiler change between 2011 and 2012 decides to flip-flop the order. What if the new version starts the Wait Until earlier and then does the rest? If the parallel threads run at the same rate but start later...
The RT Trace Execution Toolkit, run in both versions and compared, would help give us some insight, I suspect. It would make an excellent addendum to Aristos Queue's post here.
Thank you for looking into this!
Ben
08-29-2012 11:24 PM
@David-A wrote:
Hi Roger and Ben,
I'm looking into this now. I'll try recreating this to confirm that I'm seeing the same behavior that you're seeing Roger. Assuming I get the same behavior it may be a few days before I hear back from the LabVIEW tribunal.
David A
Thanks David,
Let's see if my findings are reproducible....
I'll keep eyeing this thread.
Br,
/Roger
08-30-2012 12:34 AM
@Ben wrote:
@David-A wrote:
Hi Roger and Ben,
I'm looking into this now. I'll try recreating this to confirm that I'm seeing the same behavior that you're seeing Roger. Assuming I get the same behavior it may be a few days before I hear back from the LabVIEW tribunal.
David A
Excellent!
Giving this some thought, this could be really messy to evaluate.
If debug is on and a window to the RT app is open, the numbers could be weirded out.
Changes in the execution order of when the "Wait Until" starts could play a part in the numbers, if the compiler change between 2011 and 2012 decides to flip-flop the order. What if the new version starts the Wait Until earlier and then does the rest? If the parallel threads run at the same rate but start later...
The RT Trace Execution Toolkit, run in both versions and compared, would help give us some insight, I suspect. It would make an excellent addendum to Aristos Queue's post here.
Thank you for looking into this!
Ben
Actually, there is no need to run it on RT targets. Windows experiences the same performance drop, though thanks to more powerful hardware it is less noticeable in those applications.
Which would perhaps make debugging easier?
Though, it would be interesting to learn the debugging techniques for the LabVIEW runtime and compiler (if needed)!
Br,
/Roger
09-03-2012 03:03 PM
Hi Roger,
It's been a few days since my last post, so I thought I'd at least let you know that I've been able to replicate the behavior with more or less the same results that you've already posted. I'll be following up with a few folks here in the office over the next few days to see what their thoughts are on this one.
As a side note, when I try to use the VI Profiler to examine the performance, the Lock/Unlock counts drop to zero... curious. I'll post back later this week with further tests and possible words of wisdom from my colleagues.
David A