
Compiler is Too Smart for My Own Good


@paul.r wrote:

That's not an optimization; that breaks the fundamental paradigm of the language.


I agree it breaks the fundamental feature of dataflow, but I am leery of suggesting fundamental changes to the compiler for what may be an extreme edge case, one that may have surfaced only now because no one seems to have seen this effect before.

0 Kudos
Message 31 of 55
(3,239 Views)

That no one has seen it before (and that I can't replicate it) leads me to believe the thread starter is wrong and this wasn't actually happening the way he thinks it was.

 

 

But assuming he's correct, you're suggesting the compiler disregard the fundamental language paradigm because it could potentially optimize it? If the language and compiler actually operated that way LabVIEW would be completely useless - you would have no idea how your program is going to actually execute.

0 Kudos
Message 32 of 55
(3,232 Views)

@Mark_Yedinak wrote:

@mcduff wrote:

@Mark_Yedinak wrote:

@Intaris wrote:

I'm referring to the original post. And this is just my opinion, but I would have thought we'd all be on the same page, to be honest.

 

Bypassing a structure that a wire passes through, because of compiler decisions, is in my opinion not acceptable without, at the very least, clear visual feedback so that the programmer sees what's going on.

 

I would prefer the compiler to NOT override my specific decisions with regard to dataflow. If compiler optimisations change, this could have severe implications for the execution of the code.

 

There may well be some corner cases which require care, such as individual frames of a sequence structure, but focussing on any single frame, the dataflow should be maintained as programmed.


I am in complete agreement with you. This completely violates data flow. The code syntax very clearly indicates that FP.Close should be invoked after the case has completed execution. Any optimization that changes the explicit data flow should not be occurring. Optimizations could change the order of operations of parallel nodes that have no data dependency between them, but they should never override explicitly defined data flow.


Maybe the optimization should apply only to "non-UI type" wires, i.e., references to property nodes. Yes, I agree it violates dataflow and "breaks" every rule LabVIEW stands for, but the optimization may be useful for other use cases that are not considered here. I would hate to lose optimizations when the workaround is only necessary for a few specific cases.


Completely disagree. If I as the programmer ran the wire through a structure (case, loop, frame, whatever) and that wire is untouched within the structure, the node connected to it should NEVER execute before that structure is complete. It violates every rule of data flow. Maybe I had very explicit reasons for wanting that node executed after the structure, and that is why I ran the wire through. I, as the developer, imposed data flow on the code. If the compiler is free to violate that, then what else will it "optimize" on me? I can very easily solve this particular optimization myself by NOT running the wire through the structure. Leave the decision to me. If it really doesn't matter when it executes and I created some inefficient code, that is on me. This is fundamental syntax of the language. The compiler should not change my data flow. NEVER!


That's why this one really freaked me out.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Kudos
Message 33 of 55
(3,228 Views)

@mcduff wrote:

@Mark_Yedinak wrote:


If the compiler is free to violate that, then what else will it "optimize" on me? I can very easily solve this particular optimization myself by NOT running the wire through the structure. Leave the decision to me. If it really doesn't matter when it executes and I created some inefficient code, that is on me. This is fundamental syntax of the language. The compiler should not change my data flow. NEVER!


It optimizes multiple things: memory, thread management, constant folding, etc. (That is one reason why we like LabVIEW over low-level languages like C.) Obviously, in the example provided, the optimization does not work correctly. I do not know how long the optimization has been around, maybe it's new, maybe not, but only now has an example been brought to light that shows problems with it. What will happen to existing code/performance if the optimization is no longer there?

 

Below is a screenshot from deep within the General Error Handler. I am not sure whether this optimization still applies, but it is an example of another optimization imposed on the programmer.

 

Snap27.png

 

 

 


What you are saying the compiler can do in the name of optimization is equivalent to letting a C compiler ignore braces in the name of optimization.
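For readers who don't write C, here is a rough sketch of that analogy (the file name, messages, and structure are invented for illustration; no conforming C compiler actually reorders this way):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = fopen("log.txt", "w");   /* the "reference" on the wire */
    if (fp == NULL)
        return EXIT_FAILURE;

    {   /* the "case structure": work that still needs fp */
        fprintf(fp, "measurement complete\n");
    }

    /* Written after the braces, so it must run after them. A compiler that
       "ignored braces" and hoisted this call above the fprintf would have
       the write land on a closed file - the same surprise as FP.Close
       firing before the case structure has finished. */
    fclose(fp);
    return 0;
}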



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
0 Kudos
Message 34 of 55
(3,220 Views)

@paul.r wrote:

That no one has seen it before (and that I can't replicate it) leads me to believe the thread starter is wrong and this wasn't actually happening the way he thinks it was.

 

 

But assuming he's correct, you're suggesting the compiler disregard the fundamental language paradigm because it could potentially optimize it? If the language and compiler actually operated that way LabVIEW would be completely useless - you would have no idea how your program is going to actually execute.


No, I said if it is a bug, then it seems like an edge case since it has not been brought up before. Obviously we know how our programs execute, except maybe when the runtime library changes versions and the box "Allow future versions of LabVIEW to run this program" is checked. 🙂 If we are going to change the compiler for a specific edge case then I want to know what the benefits of the change will be; if the change only imposes language purity for an edge case and messes up other code, then I would be against it. If it has no effect on other code, then I have no problem with it. But I think it is drastic to impose a measure that may affect other things for a possible edge case that nobody has seen before.

 

For the case below, should the compiler make any optimizations? Should the compiler be smart enough to know the last value is always 999 and there is no reason for a for loop? Is the programmer always right in every case, or should the compiler protect us from ourselves at times? (Isn't that the reason for the Rust language? The Rust compiler protects us from making bad security decisions.) To reiterate, this seems like a bug to me, but I am against wholesale changes to the compiler for an edge case without knowing how those changes will affect code that I depend on. If the bug were widespread, we would all know by now.

Snap28.png
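For anyone who can't open the screenshot, the text equivalent of that example (a sketch only, with invented names) is a loop whose only output is its final index, which an optimizer can legally fold to a constant:

/* Nothing observable depends on the individual iterations, so an
   optimizing compiler may replace the whole loop with "return 999;"
   without violating any ordering the programmer expressed. */
int last_index(void)
{
    int i = 0;
    for (int k = 0; k < 1000; k++)
        i = k;                 /* final value written is 999 */
    return i;
}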

 

 

0 Kudos
Message 35 of 55
(3,217 Views)

@paul.r wrote:

That no one has seen it before (and that I can't replicate it) leads me to believe the thread starter is wrong and this wasn't actually happening the way he thinks it was.

 

 

But assuming he's correct, you're suggesting the compiler disregard the fundamental language paradigm because it could potentially optimize it? If the language and compiler actually operated that way LabVIEW would be completely useless - you would have no idea how your program is going to actually execute.


A reminder that just because you couldn't replicate it doesn't mean it didn't happen.  This optimization creates a race condition that may not work the same way on your computer as it does on his.

 

And no, I wouldn't expect the optimizer to optimize something against its own paradigm.  Not without a warning - and I've already posted this - such as the hashed line it shows you for constant folding.  In fact, constant folding is a lot less dangerous than this one, and it gets a warning.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Kudos
Message 36 of 55
(3,210 Views)

@mcduff wrote:

@paul.r wrote:

That no one has seen it before (and that I can't replicate it) leads me to believe the thread starter is wrong and this wasn't actually happening the way he thinks it was.

 

 

But assuming he's correct, you're suggesting the compiler disregard the fundamental language paradigm because it could potentially optimize it? If the language and compiler actually operated that way LabVIEW would be completely useless - you would have no idea how your program is going to actually execute.


No, I said if it is a bug, then it seems like an edge case since it has not been brought up before. Obviously we know how our programs execute, except maybe when the runtime library changes versions and the box "Allow future versions of LabVIEW to run this program" is checked. 🙂 If we are going to change the compiler for a specific edge case then I want to know what the benefits of the change will be; if the change only imposes language purity for an edge case and messes up other code, then I would be against it. If it has no effect on other code, then I have no problem with it. But I think it is drastic to impose a measure that may affect other things for a possible edge case that nobody has seen before.

 

For the case below, should the compiler make any optimizations? Should the compiler be smart enough to know the last value is always 999 and there is no reason for a for loop? Is the programmer always right in every case, or should the compiler protect us from ourselves at times? (Isn't that the reason for the Rust language? The Rust compiler protects us from making bad security decisions.) To reiterate, this seems like a bug to me, but I am against wholesale changes to the compiler for an edge case without knowing how those changes will affect code that I depend on. If the bug were widespread, we would all know by now.

Snap28.png

 

 


This example and what the OP originally posted are not the same. Clearly the optimizer can remove the for loop and simply return the constant. There is no other code in the loop. Nothing is fundamentally changed. The compiler cannot definitively know the intent of the code in the original post, and in cases like that it MUST defer to the language syntax. If the optimizer is free to alter data flow, then how can we ever know how our code will work?

 

In your example above there is no violation of data flow. In the code the OP posted there is.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 37 of 55
(3,207 Views)

@Mark_Yedinak wrote:

@mcduff wrote:

@paul.r wrote:

That no one has seen it before (and that I can't replicate it) leads me to believe the thread starter is wrong and this wasn't actually happening the way he thinks it was.

 

 

But assuming he's correct, you're suggesting the compiler disregard the fundamental language paradigm because it could potentially optimize it? If the language and compiler actually operated that way LabVIEW would be completely useless - you would have no idea how your program is going to actually execute.


No, I said if it is a bug, then it seems like an edge case since it has not been brought up before. Obviously we know how our programs execute, except maybe when the runtime library changes versions and the box "Allow future versions of LabVIEW to run this program" is checked. 🙂 If we are going to change the compiler for a specific edge case then I want to know what the benefits of the change will be; if the change only imposes language purity for an edge case and messes up other code, then I would be against it. If it has no effect on other code, then I have no problem with it. But I think it is drastic to impose a measure that may affect other things for a possible edge case that nobody has seen before.

 

For the case below, should the compiler make any optimizations? Should the compiler be smart enough to know the last value is always 999 and there is no reason for a for loop? Is the programmer always right in every case, or should the compiler protect us from ourselves at times? (Isn't that the reason for the Rust language? The Rust compiler protects us from making bad security decisions.) To reiterate, this seems like a bug to me, but I am against wholesale changes to the compiler for an edge case without knowing how those changes will affect code that I depend on. If the bug were widespread, we would all know by now.

Snap28.png

 

 


This example and what the OP originally posted are not the same. Clearly the optimizer can remove the for loop and simply return the constant. There is no other code in the loop. Nothing is fundamentally changed. The compiler cannot definitively know the intent of the code in the original post, and in cases like that it MUST defer to the language syntax. If the optimizer is free to alter data flow, then how can we ever know how our code will work?

 

In your example above there is no violation of data flow. In the code the OP posted there is.


Agree to disagree. The optimizer as we know it has never violated data flow; all of your programs work as intended, otherwise you would not be in business. The only person who has seemingly violated data flow is Paul, who reaches deep into the bowels of LabVIEW to make some pretty cool stuff. (99.999% of the people here have not seen this issue, or at least have not reported it affecting their code.) If he has found a real edge case of data flow violation, then it should be examined in detail by NI, and they should make a decision with respect to their compiler. I would rather not take a performance hit for purity's sake unless it affects the majority of cases, which is not the present case.

 

NI has released compilers that "don't do what we believe they are doing." See this

As AQ said, the "smartness" of the compiler is still developing.

0 Kudos
Message 38 of 55
(3,201 Views)

@mcduff wrote:

@Mark_Yedinak wrote:

@mcduff wrote:

@paul.r wrote:

That no one has seen it before (and that I can't replicate it) leads me to believe the thread starter is wrong and this wasn't actually happening the way he thinks it was.

 

 

But assuming he's correct, you're suggesting the compiler disregard the fundamental language paradigm because it could potentially optimize it? If the language and compiler actually operated that way LabVIEW would be completely useless - you would have no idea how your program is going to actually execute.


No, I said if it is a bug, then it seems like an edge case since it has not been brought up before. Obviously we know how our programs execute, except maybe when the runtime library changes versions and the box "Allow future versions of LabVIEW to run this program" is checked. 🙂 If we are going to change the compiler for a specific edge case then I want to know what the benefits of the change will be; if the change only imposes language purity for an edge case and messes up other code, then I would be against it. If it has no effect on other code, then I have no problem with it. But I think it is drastic to impose a measure that may affect other things for a possible edge case that nobody has seen before.

 

For the case below, should the compiler make any optimizations? Should the compiler be smart enough to know the last value is always 999 and there is no reason for a for loop? Is the programmer always right in every case, or should the compiler protect us from ourselves at times? (Isn't that the reason for the Rust language? The Rust compiler protects us from making bad security decisions.) To reiterate, this seems like a bug to me, but I am against wholesale changes to the compiler for an edge case without knowing how those changes will affect code that I depend on. If the bug were widespread, we would all know by now.

Snap28.png

 

 


This example and what the OP originally posted are not the same. Clearly the optimizer can remove the for loop and simply return the constant. There is no other code in the loop. Nothing is fundamentally changed. The compiler cannot definitively know the intent of the code in the original post, and in cases like that it MUST defer to the language syntax. If the optimizer is free to alter data flow, then how can we ever know how our code will work?

 

In your example above there is no violation of data flow. In the code the OP posted there is.


Agree to disagree. The optimizer as we know it has never violated data flow; all of your programs work as intended, otherwise you would not be in business. The only person who has seemingly violated data flow is Paul, who reaches deep into the bowels of LabVIEW to make some pretty cool stuff. (99.999% of the people here have not seen this issue, or at least have not reported it affecting their code.) If he has found a real edge case of data flow violation, then it should be examined in detail by NI, and they should make a decision with respect to their compiler. I would rather not take a performance hit for purity's sake unless it affects the majority of cases, which is not the present case.

 

NI has released compilers that "don't do what we believe they are doing." See this

As AQ said, the "smartness" of the compiler is still developing.


It's a bug.  The problems it creates are much worse than the problem it is solving.  I think an actual compromise, rather than "agree to disagree," would be to make the developer aware of what is actually going to happen, for example by highlighting the affected code.  The compiler stays the same, but the developer can reprogram if needed.

 

As an aside, if you take the case structure from the original code and make it a subVI, the dataflow is preserved - i.e., the reference goes into and out of the subVI.
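In the same C analogy used earlier in the thread (again just a sketch with invented names), the subVI workaround amounts to routing the reference through a function, which makes the dependency explicit:

#include <stdio.h>
#include <stdlib.h>

/* The "subVI": the reference goes in and comes back out, so anything
   wired to its output has to wait for the call to return. */
static FILE *do_work(FILE *fp)
{
    fprintf(fp, "measurement complete\n");
    return fp;
}

int main(void)
{
    FILE *fp = fopen("log.txt", "w");
    if (fp == NULL)
        return EXIT_FAILURE;

    FILE *out = do_work(fp);   /* dataflow is explicit through the call  */
    fclose(out);               /* cannot be scheduled ahead of do_work() */
    return 0;
}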

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Kudos
Message 39 of 55
(3,191 Views)

@billko wrote:


It's a bug.  The problems it creates are much worse than the problem it is solving.  I think an actual compromise, rather than "agree to disagree," would be to make the developer aware of what is actually going to happen, for example by highlighting the affected code.  The compiler stays the same, but the developer can reprogram if needed.


Consider the following hypothetical optimization; I have no idea whether it exists.

 

Look at the loop below: Array A and Array B have NO data dependencies on each other. Both arrays go into a case structure, where subVI A takes 300 ms to execute and subVI B takes 3 ms to execute. After the loop, each array goes into the other subVI.

 

For pure dataflow the VI will take 600 ms to run: 300 ms in the loop and 300 ms outside the loop. Now assume there is some compiler optimization that says the data is preserved on the wire, the wires don't interact, and whether they exit the case structure at the same time is irrelevant because the data will not change. The optimizer then allows each wire to leave as soon as it is finished, so the execution time of the VI could be as low as 303 ms: the 3 ms branch exits early and starts its 300 ms follow-up while the other branch is still running.

 

This seems analogous to the speculative execution bug: it seems efficient to have the optimization, but it may bite you in the butt sometimes, as in Paul's case. Once again, for some cases I may like the optimization; for other cases, maybe not.

 

Snap126.png
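To make the timing concrete, here is a minimal threaded C sketch of the hypothetical (sub_a, sub_b, and the barrier are stand-ins for the subVIs and the structure border, not anything LabVIEW actually does):

#include <pthread.h>
#include <unistd.h>

static void sub_a(void) { usleep(300000); }   /* ~300 ms, like subVI A */
static void sub_b(void) { usleep(3000); }     /* ~3 ms,   like subVI B */

/* Models the border of the case structure under strict dataflow:
   neither wire leaves until both branches inside have finished. */
static pthread_barrier_t structure_exit;

static void *wire_a(void *arg)
{
    (void)arg;
    sub_a();                                  /* inside the structure   */
    pthread_barrier_wait(&structure_exit);    /* strict dataflow wait   */
    sub_b();                                  /* after the structure    */
    return NULL;
}

static void *wire_b(void *arg)
{
    (void)arg;
    sub_b();                                  /* done after ~3 ms       */
    pthread_barrier_wait(&structure_exit);    /* stalls until A is done */
    sub_a();                                  /* after the structure    */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_barrier_init(&structure_exit, NULL, 2);
    pthread_create(&a, NULL, wire_a, NULL);
    pthread_create(&b, NULL, wire_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    pthread_barrier_destroy(&structure_exit);
    /* With the barrier the run takes ~600 ms (300 inside + 300 after).
       Remove the two barrier waits - the hypothetical "let the wire leave
       when finished" optimization - and each path becomes 3 + 300 ms or
       300 + 3 ms, so the whole VI finishes in roughly 303 ms. */
    return 0;
}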

0 Kudos
Message 40 of 55
(3,164 Views)