LabVIEW


How can subVI timing from outside and inside not coincide?

Hello everybody,

 

I'm trying to optimize a project, and there is a subVI that gives me different timing when I measure it from outside (OpenG Tick Count -> error wire -> subVI -> error wire -> OpenG Tick Count) than when I measure it from inside (same as before, but inside the subVI, with all its contents in a flat sequence structure: tick -> FSS -> tick).

 

From the inside measurement I get a reasonably steady timing of 30 ms. From the outside one it ranges from 50 to 150 ms, with a lot of jitter. What can make a subVI behave like this? It seems very strange to me. Inside it there are some call-by-reference VIs, but no asynchronous calls or anything like that. Any ideas?
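
For anyone who thinks better in a text language, here is a minimal sketch of the same two measurement placements in Python (the function sub_vi and its workload are placeholders, not the actual code from this thread). The point is that the outside timer also captures everything around the call: parameter handling, debugging hooks, and time the caller spends waiting to be scheduled.

    import time

    def sub_vi(data):
        # "inside" timing: wraps only the body of the subVI
        t0 = time.perf_counter()
        result = sorted(data)            # placeholder workload
        t1 = time.perf_counter()
        print(f"inside : {(t1 - t0) * 1000:.1f} ms")
        return result

    data = list(range(1_000_000, 0, -1))

    # "outside" timing: also captures call overhead, data copies,
    # debugging hooks, and scheduler delays around the call
    t0 = time.perf_counter()
    sub_vi(data)
    t1 = time.perf_counter()
    print(f"outside: {(t1 - t0) * 1000:.1f} ms")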

 

Thanks for your time,

 

 

EMCCi

Message 1 of 10

From looking at the picture of the VI that you posted, I can "imagine" where your error is. Post your VI itself (not a picture of it -- I might need to actually run it) and I'll give you a more specific "guess" based on better information about what you are doing ...

 

Bob Schor

Message 2 of 10

Hi, I understand your complaint, but it isn't always easy or possible for me to upload the code. Sorry for the inconvenience.

 

I have discovered that if I build an application, the timing goes down to 3 ms (from "outside")... Wow. Also, the CPU consumption goes up from 10% to 30%. (I make all the FOR loops I can run in parallel, so I don't know why it can't go higher.)

 

The subVI is a "buffer manager", called by reference and based on the primitive LabVIEW queue API. It populates or flushes the buffers depending on some conditions, and it can also do some math operations on them. The maximum buffer size is a 2D array of roughly 43k x 100, and there are approximately 10 buffers, so there can be "heavy" memory movement.
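
As a rough text-language analogue of such a buffer manager, here is a Python sketch with a made-up populate/flush/compute interface (the real API is not shown in this thread):

    from collections import deque

    class BufferManager:
        # Rough analogue of a queue-backed buffer manager: each buffer
        # holds rows (1D arrays) that can be appended, flushed, or reduced.
        def __init__(self, n_buffers=10):
            self.buffers = [deque() for _ in range(n_buffers)]

        def populate(self, idx, row):
            self.buffers[idx].append(row)        # enqueue one row

        def flush(self, idx):
            rows = list(self.buffers[idx])       # drain the queue
            self.buffers[idx].clear()
            return rows

        def column_means(self, idx):
            # example "math operation" over the buffered rows
            rows = list(self.buffers[idx])
            if not rows:
                return []
            return [sum(col) / len(rows) for col in zip(*rows)]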

 

Sorry again, I know it's very tricky to try to imagine what goes wrong like this, but it's all I can do for the moment. I only hope all this makes sense to someone who has experienced it in the past and knows what the cause was.

 

Best regards,

 

 

EMCCi

 

Edit: Also, I had already increased the "Limit compiler optimizations" setting to 10 before all this.

Message 3 of 10

@EMCCi wrote:

I have discovered that if I build an application, the timing goes down to 3 ms (from "outside")


Debugging code went away when you built an application. That is also likely the source of your difference between the "outside" and "inside" measurements.

 


@EMCCi wrote:

Edit: Also, I had already increased the "Limit compiler optimizations" setting to 10 before all this.


That really only matters if your code complexity is very high. Good use of subVIs will typically bring it way down.

 


@EMCCi wrote:

(I make all the FOR loops I can run in parallel, so I don't know why it can't go higher.)


Do you have non-reentrant VIs being called in your FOR loops?  If so, you will have threads waiting in an idle state.
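
For anyone unfamiliar with why that hurts, here is a loose Python analogy (the lock plays the role of a non-reentrant subVI; the rest is made up): four workers run "in parallel", but the locked section admits one caller at a time, so the other threads just wait.

    import threading
    import time
    from concurrent.futures import ThreadPoolExecutor

    lock = threading.Lock()            # stands in for a non-reentrant subVI

    def call_shared_subvi(i):
        with lock:                     # only one caller may be inside at a time
            time.sleep(0.05)           # stand-in for the subVI's work
        return i

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(call_shared_subvi, range(8)))
    print(f"{time.perf_counter() - start:.2f} s")   # ~0.4 s: the 8 calls serialize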


Message 4 of 10

You are measuring different things, so the results will differ too. Why are you using OpenG tools instead of just taking the difference between two readings of High Resolution Relative Seconds?

Is the front panel of the subVI open or closed? Is debugging enabled or disabled? What else is running in parallel? What are the execution settings of the subVI?

 

Reliable benchmarking is very tricky and we will not be able to give further advice without seeing a simplified version of your code. Maybe you are making a simple mistake.
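
The usual discipline, shown here as a Python sketch (the same idea applies in any language: warm up first, repeat many times, and report the median rather than a single reading):

    import statistics
    import time

    def benchmark(fn, *args, warmup=3, repeats=50):
        # Time fn(*args) many times; the median suppresses one-off jitter.
        for _ in range(warmup):            # discard warm-up runs (caches, etc.)
            fn(*args)
        samples = []
        for _ in range(repeats):
            t0 = time.perf_counter()       # difference of two high-res readings
            fn(*args)
            samples.append(time.perf_counter() - t0)
        return statistics.median(samples) * 1000   # milliseconds

    print(f"{benchmark(sorted, list(range(100_000, 0, -1))):.2f} ms")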

 

You talk about FOR loops and "parallel". Is that a single parallelized FOR loop or multiple independent FOR loops? What do they do? Are these loops in the caller or in the subVI?

 

Even 30 ms is a very, very long time. What kind of data sizes and algorithms are involved? What is the time-limiting step? Sometimes rewriting the algorithm for better in-placeness etc. can give you orders of magnitude of speedup.
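
As a small illustration of what in-placeness buys, in Python/NumPy terms (an analogy only; the array size is borrowed from the buffers described above, everything else is made up):

    import time
    import numpy as np

    a = np.random.rand(4300, 100)      # roughly one of the buffers above

    t0 = time.perf_counter()
    for _ in range(100):
        a = a * 1.0001                 # allocates a fresh array every pass
    t_copy = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(100):
        a *= 1.0001                    # modifies the array in place
    t_inplace = time.perf_counter() - t0

    print(f"copying: {t_copy * 1000:.1f} ms   in place: {t_inplace * 1000:.1f} ms")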

Message 5 of 10
Debugging code went away when you built an application. That is also likely the source of your difference between the "outside" and "inside" measurements.

 

 

By "debugging code", do you mean the tick count VIs? Can some of them make this much of a performance difference?

 

Do you have non-reentrant VIs being called in your FOR loops? If so, you will have threads waiting in an idle state.


Mostly preallocated clones called by reference.

 

Why are you using OpenG tools instead of just taking the difference between two readings of High Resolution Relative Seconds?


Because it is a standard Tick Count (ms) VI but with error in and error out included, which makes it easier to integrate with the code and to control when it executes. I'm not using the high-resolution one because the execution time is far above the millisecond barrier.

 

Is the front panel of the subVI open or closed?


This made little/zero difference.

 

Is debugging enabled or disabled?


Enabled by default in most of the VIs in the development environment; disabled in the executable. Does this make the difference?

 

What else is running in parallel?


Theoretically, nothing...

 

What are the execution settings of the subVI?


Mostly preallocated clones, called by reference with debug on.

 

You talk about FOR loops and "parallel". Is that a single parallelized FOR loops or multiple independent FOR loops. What do they do? Are these loops in the caller or in the subVI?


Multiple independent FOR loops. They are inside the subVI and do basically two things: 1) call pre-initialized clones of the "buffer manager API" by reference, and 2) perform arithmetic operations on the buffer elements.

 

Reliable benchmarking is very tricky


Is there any general advice?

 

We will not be able to give further advice without seeing a simplified version of your code. Maybe you are making a simple mistake.


I know. Thank you for the answers anyway.

Message 6 of 10

@EMCCi wrote:

Multiple independent FOR loops. They are inside the subVI and do basically two things: 1) call pre-initialized clones of the "buffer manager API" by reference, and 2) perform arithmetic operations on the buffer elements.


I am not familiar with these functions. Does it involve external code? Do you have a link?

 


@EMCCi wrote:

Is there any general advice?


Start here.

 

Message 7 of 10

No, it is simple and "silly" code. It is similar to an action engine, but for buffers of 1D arrays that are stored in a queue. A previous version worked with 2D arrays and was more flexible, but memory allocation was also slower when appending new 1D arrays to the stored 2D one. I can try to upload a version if you are interested in it.
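
That allocation difference is easy to reproduce in a text language. Here is a Python/NumPy sketch (sizes loosely modeled on the buffers described above; not the actual code): appending rows to one growing 2D array copies the whole block on every append, while a queue or list of 1D arrays just stores references.

    import time
    import numpy as np

    rows, cols = 2000, 100

    # Growing one 2D array: every append reallocates and copies everything so far
    t0 = time.perf_counter()
    buf2d = np.empty((0, cols))
    for _ in range(rows):
        buf2d = np.vstack([buf2d, np.zeros((1, cols))])
    t_2d = time.perf_counter() - t0

    # Queue of 1D arrays: each append is O(1), with no large copy
    t0 = time.perf_counter()
    buf1d = []
    for _ in range(rows):
        buf1d.append(np.zeros(cols))
    t_1d = time.perf_counter() - t0

    print(f"append to 2D: {t_2d:.3f} s   queue of 1D: {t_1d:.4f} s")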

 

But back to the original question: if the code is able to run fast as a built application, I don't know why it has to be up to 100x slower in development mode. Is having debugging enabled what makes the difference? Is the compiler different between the built application and the development environment?

 

Best regards.

Message 8 of 10

@EMCCi wrote:
Debugging code went away when you built an application. That is also likely the source of your difference between the "outside" and "inside" measurements.

By "debugging code", do you mean the tick count VIs? Can some of them make this much of a performance difference?

 

Why are you using OpenG tools instead of just taking the difference between two readings of High Resolution Relative Seconds?


Because it is a standard Tick Count (ms) VI but with error in and error out included, which makes it easier to integrate with the code and to control when it executes. I'm not using the high-resolution one because the execution time is far above the millisecond barrier.


For the first question, they're talking about things like probes, highlight execution, etc.  Whether you're explicitly using them or not, LabVIEW has hooks built in so you can debug your code. When you build the executable, these hooks are removed.  

 

Your second answer doesn't make sense. Is your code execution taking enough milliseconds that the high-resolution component is likely to be noise? Sure. But that doesn't mean it makes sense to do something that's more difficult yet less precise. When you start forming bad habits, you'll continue them. You don't need error wires on an instantaneous time measurement. Get yourself into better habits so they feel natural when your code isn't so inefficient that the extra precision is lost in the noise.

Message 9 of 10

@natasftw wrote:

@EMCCi wrote:

I'm not using the high-resolution one because the execution time is far above the millisecond barrier.


Is your code execution taking enough milliseconds that the high-resolution component is likely to be noise? Sure. But that doesn't mean it makes sense to do something that's more difficult yet less precise.


I suspect the detail missing here is that, because High Resolution Relative Seconds returns time in seconds, the OP believed it would not give as much information as the Tick Count (ms) functions.


Message 10 of 10