LabVIEW


Compiled Code Complexity analyzer tool for LV 2012

I think those numbers in the VI properties are the same as the ones from VIA.  I suspect that what calculates this complexity is part of NI's secret sauce.  I'm also curious what factors go into calculating it.

Message 11 of 18

Yup, if it's central to some compiler decisions, I reckon it's also a very interesting design metric.

I think I'll do some comparisons with VIA and see what we get.

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Message 12 of 18

Hi Steve -

 

There's no white paper about this specifically, but I'd be happy to share some information about it.

 

For a little background on the compiler, you can read about how the LabVIEW compiler works. The code complexity calculation happens after the DFIR transforms are done and the IL instructions are produced. We count the number of basic blocks in the IL stream and turn that number into a more intuitive scale. As the documentation states, we reserve the right to change how we do this in the future, but right now, to give you an idea:

1100 basic blocks = code complexity 1.0

8500 basic blocks = code complexity 5.0

48000 basic blocks = code complexity 10.0
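
NI doesn't publish the exact mapping between those anchor points, so as a rough illustration only, here is a hypothetical Python sketch that interpolates between the three published values on a log scale. The anchor numbers come from the post above; the piecewise log-linear interpolation scheme is purely an assumption that happens to pass through them.

import math

# Anchor points from the post above: (basic blocks, complexity score).
# NI's actual formula is unpublished; this piecewise log-linear
# interpolation is only a guess that happens to pass through them.
ANCHORS = [(1100, 1.0), (8500, 5.0), (48000, 10.0)]

def estimate_complexity(basic_blocks):
    """Rough complexity estimate by interpolating on a log scale."""
    if basic_blocks <= ANCHORS[0][0]:
        # Below the first anchor, scale proportionally toward zero.
        return ANCHORS[0][1] * basic_blocks / ANCHORS[0][0]
    for (b0, c0), (b1, c1) in zip(ANCHORS, ANCHORS[1:]):
        if basic_blocks <= b1:
            break  # interpolate within this segment
    # If no segment matched, the loop leaves the last pair bound,
    # which extrapolates past the 48000-block anchor.
    t = (math.log(basic_blocks) - math.log(b0)) / (math.log(b1) - math.log(b0))
    return c0 + t * (c1 - c0)

print(estimate_complexity(8500))   # 5.0 by construction
print(estimate_complexity(20000))  # ~7.5, between the 5.0 and 10.0 anchors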

 

We use the code complexity internally to limit the optimizations that LLVM runs. (This is what the "Partial compiler optimizations" vs. "Full compiler optimizations" values mean.)
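
The post doesn't say where the cutoff sits or which passes are dropped, but conceptually the gating might look like the sketch below. The threshold value and the pass lists are invented for illustration; only the idea that high-complexity VIs get a reduced optimization set comes from the post.

def choose_optimization_level(complexity, threshold=5.0):
    """Hypothetical sketch: gate LLVM passes on code complexity.

    The threshold and the pass lists are invented; only the idea that
    high-complexity VIs get a reduced pass set comes from the post.
    """
    if complexity < threshold:
        # "Full compiler optimizations": run the whole pipeline.
        return ["mem2reg", "instcombine", "gvn", "licm", "loop-unroll"]
    # "Partial compiler optimizations": skip the expensive passes.
    return ["mem2reg", "instcombine"]

print(choose_optimization_level(2.3))  # full pass list
print(choose_optimization_level(8.1))  # reduced pass list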

 

This is different from the metric that VI Analyzer uses - I believe VI Analyzer looks at the nodes and nesting level, or something like that, to determine complexity. So the VI Analyzer metric measures the complexity of your source code, while the compiler metric measures the complexity of the compiled code.

 

I hope this helps! If you have more questions, I'll do my best to answer them.

Greg Stoll
LabVIEW R&D
Message 13 of 18

That's brilliant, Greg, really appreciate it.

I'll take a look at the compiler link, then I fancy benchmarking some different VIs.

My interest is in being able to predict whether certain design choices are going to affect the IDE, so having a single number at the compiler level is pretty attractive.

Steve



Message 14 of 18

@gregstoll wrote:

We then count the number of basic blocks in the IL stream, and then turn that number into a more intuitive scale.


How different is that from measures of cyclomatic code complexity?

Message 15 of 18

It is similar; it looks like it would be equal to N (the number of nodes in the control flow graph).  The reason we picked that is that there are some LLVM optimizations that seem to run in proportion to the number of basic blocks squared (or something superlinear like that).
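
For anyone comparing the two measures side by side: McCabe's cyclomatic complexity is E - N + 2P over the control flow graph (edges, nodes, connected components), while the measure described above is essentially N alone. A minimal sketch, with an invented toy graph:

def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe's cyclomatic complexity: M = E - N + 2P."""
    return len(edges) - num_nodes + 2 * num_components

def basic_block_measure(num_nodes):
    """The measure described above: just N, the basic-block count."""
    return num_nodes

# Toy control flow graph: a single if/else diamond (4 blocks, 4 edges).
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(cyclomatic_complexity(edges, num_nodes=4))  # 2: two independent paths
print(basic_block_measure(4))                     # 4 basic blocks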

Greg Stoll
LabVIEW R&D
Message 16 of 18

It's the differences that piqued my interest, and I wonder how inlined or dynamic code affects things.

Steve



Message 17 of 18

If a callee is inlined into a caller, that will add to the caller's code complexity.  (If there are multiple calls to it, we generate its code multiple times, so marking big VIs as inlined is a good way to give a caller a large code complexity!)

 

Dynamic calls (like using a Call-by-Ref) will add a tiny bit of code complexity because of the extra code LabVIEW has to generate to make the call, but I doubt this would have much of an effect.
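
Putting those two points together, a back-of-the-envelope estimate of a caller's basic-block count might look like the sketch below. The accounting model and the 3-block stub constant are assumptions; only the "inlined code is duplicated per call site" and "dynamic calls add a small fixed cost" behaviors come from the posts above.

def estimate_caller_blocks(own_blocks, inlined_calls, dynamic_calls,
                           stub_blocks=3):
    """Hypothetical accounting of a caller's basic-block count.

    inlined_calls: list of (callee_block_count, call_site_count) pairs.
    Each inlined call site duplicates the callee's code, per the post
    above; the per-dynamic-call stub size of 3 blocks is invented.
    """
    total = own_blocks
    for callee_blocks, call_sites in inlined_calls:
        total += callee_blocks * call_sites  # code regenerated per call site
    total += dynamic_calls * stub_blocks     # small fixed cost per dynamic call
    return total

# One big inlined subVI called 5 times dominates the total:
print(estimate_caller_blocks(200, [(1500, 5)], dynamic_calls=2))  # 7706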

Greg Stoll
LabVIEW R&D
Message 18 of 18