01-27-2015 11:38 AM
The way I think of and use OO, at least in our products, I don't see the need to constantly do dynamic dispatch (DD) in tight loops. I mostly see DD in (factory) creation, initialization cases, or similar. So even if I did have a bunch of classes and relied heavily on DD, what (common?) use case am I missing that makes the DD happen inside a (tight) loop? Why doesn't the DD happen only on occasion (on entry, in an init case)?
HUGE CAVEAT to the above comment/question: my self-admitted lack of advanced (LV)OOP experience. I'm not saying that your (RogerIsaksson) use case isn't proper, smart, and legit AND still has this problem. I am curious (for curiosity's sake) whether there is a simple-ish answer to why you end up with DDs in such huge numbers.
01-27-2015 11:53 AM - edited 01-27-2015 11:55 AM
@User002 wrote:
You are oversimplifying the impact.
A large application has more than one class.
A 3X performance drop in dyndispatch over a large number of classes is most noticeable in any reasonably complex LVOOP application.
I do not believe that I am oversimplifying. My statement is based on looking at the performance of many finished OO applications. The number of things that cause execution to pause/wait in most apps is quite high, and the computational load of many algorithms is quite large, such that thousands and thousands of dynamic dispatch calls are small potatoes.
In short, the code that is inside a dynamic dispatch subVI almost always dominates by many orders of magnitude the call itself... dispatch measured in nanoseconds, operations measured in microseconds and milliseconds.
I do acknowledge that there are customers for whom this is significant, but empirical testing shows that they are few and far between once we get down to that scale.
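To put rough numbers on that claim, here is a back-of-the-envelope sketch (plain C++ rather than LabVIEW, and the 300 ns and 200 µs figures are simply the numbers quoted in this thread, not fresh measurements):

```cpp
// Back-of-the-envelope check using only figures quoted in this thread:
// ~300 ns of dynamic dispatch overhead per call vs. a method body
// measured in hundreds of microseconds.
#include <cstdio>

int main() {
    const double dispatch_ns    = 300.0;   // assumed per-call dispatch overhead
    const double method_body_us = 200.0;   // assumed work done inside the method
    const double dispatch_us = dispatch_ns / 1000.0;
    const double share = dispatch_us / (dispatch_us + method_body_us);
    std::printf("dispatch is %.2f%% of each call\n", 100.0 * share);
    // One million calls: 1e6 * 300 ns = 0.3 s of dispatch against 200 s of real work.
    return 0;
}
```

At that ratio, the dispatch mechanism is noise next to the work done inside the method body.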
01-27-2015 11:59 AM
@QFang wrote:
The way I think of and use OO, at least in our products, I don't see the need to constantly do dynamic dispatch (DD) in tight loops. I mostly see DD in (factory) creation, initialization cases, or similar. So even if I did have a bunch of classes and relied heavily on DD, what (common?) use case am I missing that makes the DD happen inside a (tight) loop? Why doesn't the DD happen only on occasion (on entry, in an init case)?
HUGE CAVEAT to the above comment/question: my self-admitted lack of advanced (LV)OOP experience. I'm not saying that your (RogerIsaksson) use case isn't proper, smart, and legit AND still has this problem. I am curious (for curiosity's sake) whether there is a simple-ish answer to why you end up with DDs in such huge numbers.
For example check out my feedback control (LV)OOP framework:
https://decibel.ni.com/content/docs/DOC-25242
From control theory we learn that there exist PID regulators, state-space regulators, fuzzy logic, etc.
All low-hanging fruit for overrides and dynamic dispatch. In my case I can have several hundred regulators (base class) in an array that I for-loop over, calling the respective dynamic dispatch methods on each. Imagine the impact if you run this on an sbRIO and then *blam* AQ commits and pushes an "insignificant" change that triples the execution time of any DD method. Just wow!
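To make the pattern concrete for non-LVOOP readers, here is a rough C++ analogue of that loop (the class names are invented and LabVIEW is graphical, so treat this only as a sketch of the shape of the pattern, not of the actual framework):

```cpp
// Rough C++ analogue of the regulator pattern described above: an array of
// regulators held by their base class, updated every control cycle through
// dynamic dispatch. Names and gains are made up for illustration.
#include <memory>
#include <vector>

struct Regulator {
    virtual double Update(double setpoint, double measurement) = 0;  // dynamic dispatch point
    virtual ~Regulator() = default;
};

struct PidRegulator : Regulator {
    double kp = 1.0, ki = 0.1, kd = 0.0, integral = 0.0, prev_error = 0.0;
    double Update(double setpoint, double measurement) override {
        const double error = setpoint - measurement;
        integral += error;
        const double derivative = error - prev_error;
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

struct StateSpaceRegulator : Regulator {
    double gain = 2.0;
    double Update(double setpoint, double measurement) override {
        return gain * (setpoint - measurement);  // stand-in for a real state-space law
    }
};

int main() {
    std::vector<std::unique_ptr<Regulator>> loops;
    for (int i = 0; i < 300; ++i) {             // several hundred regulators, as above
        if (i % 2) loops.push_back(std::make_unique<PidRegulator>());
        else       loops.push_back(std::make_unique<StateSpaceRegulator>());
    }
    // Every control cycle touches every regulator, so each added nanosecond of
    // dispatch overhead is paid loops.size() times per cycle.
    for (auto& r : loops) {
        (void)r->Update(/*setpoint=*/1.0, /*measurement=*/0.8);
    }
    return 0;
}
```

Every nanosecond added to the dispatch is paid once per regulator per control cycle, which is why it shows up here rather than only in factory creation or initialization.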
I also have a quite elaborate SCADA application example project that utilizes many LVOOP frameworks:
https://decibel.ni.com/content/docs/DOC-41077
But hey, I am perhaps not providing any "net positive" for NI..
/Roger
01-27-2015 12:00 PM
AristosQueue wrote:
Keep in mind, please, that we are talking about the difference between 300 nanoseconds and 100 nanoseconds per dynamic dispatch call. Most users will never have this show up as a performance hot spot in their code.
I have asked a moderator to change that sentence to instead read:
Keep in mind, please, that we are talking about the difference between 300 nanoseconds for dynamic dispatching and 100 nanoseconds for megacluster case structure.
I want to clarify that this is the distinction I am talking about in that sentence.
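For readers who haven't seen the alternative being compared, here is a loose C++ sketch of the two dispatch styles (invented names, and not a statement about how LabVIEW implements either one): dynamic dispatch picks the implementation through the class hierarchy at run time, while the "megacluster" approach keeps every field in one big cluster and branches on a type tag in a case structure.

```cpp
// Loose C++ sketch of the two dispatch styles being compared (not LabVIEW
// internals; names are invented). The first path picks the implementation via
// a virtual call, the second via a case structure (switch) over a type tag.
#include <cstdio>

// Style 1: dynamic dispatch through a class hierarchy.
struct Device {
    virtual void Read() = 0;
    virtual ~Device() = default;
};
struct ThermocoupleDevice : Device {
    void Read() override { std::puts("read thermocouple"); }
};

// Style 2: one "megacluster" plus a case structure keyed on an enum.
enum class DeviceKind { Thermocouple, Voltage };
struct DeviceData {                 // the megacluster: fields for every kind live here
    DeviceKind kind;
    double last_reading;
};
void Read(DeviceData& d) {
    switch (d.kind) {               // the case structure
        case DeviceKind::Thermocouple: std::puts("read thermocouple"); break;
        case DeviceKind::Voltage:      std::puts("read voltage");      break;
    }
}

int main() {
    ThermocoupleDevice dev;
    Device* base = &dev;
    base->Read();                   // dynamic dispatch: resolved at run time per object
    DeviceData data{DeviceKind::Thermocouple, 0.0};
    Read(data);                     // case structure: one function, branch on the tag
    return 0;
}
```

The case-structure version avoids the per-call dispatch lookup but trades away extensibility: adding a new type means editing the cluster and every case structure rather than adding a class.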
01-27-2015 12:04 PM - last edited on 01-28-2015 09:05 AM by dcarva
@User002 wrote:
then *blam* AQ commits and pushes an "insignificant" change that triples the execution time of any DD method. Just wow!
Actually, the right way to state it is "AQ commits and pushes an 'insignificant' change that adds 200 nanoseconds to the call overhead of any DD method." The execution time of the method was 200 microseconds, so the new time is 200.2 microseconds. And in such cases, you would be hard-pressed to notice the change.
01-27-2015 12:08 PM
In danger of speaking out of turn, I think in a way you are both somewhat agreeing with one another?
RogerIsaksson had CPU usage around ~40% with 2011 (I think that's what you stated earlier), and AristosQueue is saying "I do acknowledge that there are customers for whom this is significant, but empirical testing shows that they are few and far between once we get down to that scale." I believe the context/scale he is making that statement in is the performance seen in the 2015 alpha/beta build, not the 2014 performance level. So if performance returns to on par with 2011, at least that means it should return to being usable for Roger again?
At any rate, I've been badly procrastinating on other things and speaking out of turn about stuff I have no grounds to speak about, so I'll try to leave this thread alone for now.
Thank you all for indulging my questions and comments; sorry if anyone felt I put words in their mouths. And as far as your posts go, RogerIsaksson, based on this thread you're a net positive in my book; raising concerns isn't by itself negative, imho. 🙂
01-27-2015 12:12 PM - last edited on 01-28-2015 09:05 AM by dcarva
@AristosQueue (NI) wrote:
@User002 wrote:
then *blam* AQ commits and pushes an "insignificant" change that triples the execution time of any DD method. Just wow!
Actually, the right way to state it is "AQ commits and pushes an 'insignificant' change that adds 200 nanoseconds to the call overhead of any DD method." The execution time of the method was 200 microseconds, so the new time is 200.2 microseconds. And in such cases, you would be hard-pressed to notice the change.
Well then, fantastic.
It seems like my and others' benchmarks and real-usage experience disagree with the arcane timing details that you so eloquently provide.
Come on, just admit that you botched up OOP performance real good with LV2012.
/Roger
01-27-2015 12:32 PM
@QFang wrote:
In danger of speaking out of turn, I think in a way you are both somewhat agreeing with one another?
RogerIsaksson had CPU usage around ~40% with 2011 (I think that's what you stated earlier), and AristosQueue is saying "I do acknowledge that there are customers for whom this is significant, but empirical testing shows that they are few and far between once we get down to that scale." I believe the context/scale he is making that statement in is the performance seen in the 2015 alpha/beta build, not the 2014 performance level. So if performance returns to on par with 2011, at least that means it should return to being usable for Roger again?
At any rate, I've been badly procrastinating on other things and speaking out of turn about stuff I have no grounds to speak about, so I'll try to leave this thread alone for now.
Thank you all for indulging my questions and comments; sorry if anyone felt I put words in their mouths. And as far as your posts go, RogerIsaksson, based on this thread you're a net positive in my book; raising concerns isn't by itself negative, imho. 🙂
Thanks!
Check out my community docs, maybe you can get some LVOOP inspiration from there if you decide to go down the override (DD) route (which I indeed think you should learn and try out):
https://decibel.ni.com/content/people/RogerIsaksson?view=documents
/Roger
01-27-2015 01:04 PM
RogerIsaksson wrote:
It seems like my and others' benchmarks and real-usage experience disagree with the arcane timing details that you so eloquently provide. Come on, just admit that you botched up OOP performance real good with LV2012.
I already did admit it, and I admitted it was a serious issue. Nonetheless, you are blowing the impact of that seriousness way out of proportion. It was serious for some applications. It was not significant for many others. The remaining timing differences affect even fewer apps.
01-27-2015 01:54 PM - edited 01-27-2015 01:57 PM
@AristosQueue (NI) wrote:
RogerIsaksson wrote:
It seems like my and others' benchmarks and real-usage experience disagree with the arcane timing details that you so eloquently provide. Come on, just admit that you botched up OOP performance real good with LV2012.
I already did admit it, and I admitted it was a serious issue. Nonetheless, you are blowing the impact of that seriousness way out of proportion. It was serious for some applications. It was not significant for many others. The remaining timing differences affect even fewer apps.
The masters of LabVIEW silently wonder to themselves whether the ivory tower has become too comfortable for the prima donna. And why shouldn't they, if everyone sings her tune and all dissidents are purged and ridiculed.
LV2012 -> 2015 to fix this regression? Wow. I want to work at NI; it seems like a nice place for a semi-decent programmer like me to slack off the final years!
I know what: why not have your buddy the "banmeister" over here lock this thread and kickban me? That _would_ make my day.
/Roger