
varied execution time in image processing

Solved!

Hello,

I designed a VI to process eye images and calculate the eye's center. However, I'm seeing different execution times: sometimes around 100 ms, other times more than 200 ms. I used a flat sequence structure and timers to measure the execution time of the VI. I have some parallel processes and have forced sequential data flow using the error terminals, but the problem remains. I thought the issue might be due to different input images, but running the same image over and over shows the same behaviour. Can anyone tell me what's causing this?
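
(For reference, since a block diagram can't be pasted as text, the measurement pattern is roughly the following Python sketch; process_image and the image data are only placeholders for illustration, not the actual VI code:)

    import time
    import statistics

    def process_image(image):
        # placeholder for the real eye-center calculation
        return sum(image) / len(image)

    image = list(range(100_000))        # stand-in for one eye image
    timings_ms = []
    for _ in range(50):
        t0 = time.perf_counter()
        process_image(image)
        timings_ms.append((time.perf_counter() - t0) * 1000)

    print("min %.2f ms, median %.2f ms, max %.2f ms"
          % (min(timings_ms), statistics.median(timings_ms), max(timings_ms)))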

Message 1 of 11
(4,063 Views)

I assume you are running the application on a Windows system. You are aware that Windows, as a general-purpose operating system (GPOS), does NOT inherently provide any determinism? To be precise: getting a certain level of determinism on a Windows machine requires you to know exactly what you are doing and a deep "engagement" with the system itself.

LV does not provide that level of "engagement", so jitter of several tens of ms up to over 1 s is... "usual".

 

That being said, there are some performance parameters you can try to switch to improve the performance. Still, jitter will be there and can literally go up to any value.

As you haven't provided code, we cannot give feedback on how to improve the code itself. Using a flat sequence structure for benchmarking purposes only sounds reasonable, but if it is used otherwise it could result in unwanted effects.

Also, you always have to keep in mind that processes running in parallel to your application can meddle significantly with performance, so you might want to tweak your system (no virus scan, no firewall, no HD indexing, ...).

As a last point, I want to add that you have to make sure the VIs are configured for "no debugging". Only then can LV optimize the code as well as it can; otherwise it may have to leave time-consuming debugging code in place...

 

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 2 of 11
(4,029 Views)

How do I disable debugging of VIs? Will there be any drawbacks?

Message 3 of 11
(4,024 Views)

You can do this under VI Properties >> Execution.

 

Drawbacks are obviously that there are no debugging options like

- Highlight execution mode

- setting probes

- setting breakpoints and single stepping.

 

Apart from these, there are no drawbacks for most applications. If there are, there is a more significant issue in the application...

 

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 4 of 11
(4,016 Views)

Thank you. I'm also using the MATLAB script node for some of the processing steps, but its execution time varies. When I first open and run the VI it takes a lot of time, but after the first run the execution time drops back to normal for the same or different images. The support team tells me that this delay is due to LabVIEW communicating with the ActiveX server. Is there any way I can overcome this?

Message 5 of 11
(4,003 Views)

Depending on the interface, ActiveX might be involved and creating a delay.

If you work with e.g. the MathScript RT Module for LV (which uses MathScript nodes for m-code), there is no ActiveX involved.

 

Note that ALL VIs are "slow" and have jitter on the first iteration, as memory has to be allocated (requested from the OS). So even if you implement memory reuse and remove as many re-allocations as possible, the FIRST iteration/execution will ALWAYS be slower and less deterministic than the following ones.

 

Hence, even when using a REAL-TIME (read: deterministic) system, the first iteration is treated as a "warm-up iteration". It is broadly accepted that it breaks determinism; later iterations (if the code is done properly) will execute deterministically.
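
To illustrate the warm-up idea in text form, here is a Python stand-in (the G code cannot be pasted here; the buffer, sizes and function names are made up for illustration):

    import time

    _buffer = None                          # persistent working memory

    def process(image_size):
        global _buffer
        if _buffer is None:                 # first call: allocate the buffer
            _buffer = [0.0] * image_size
        for i in range(image_size):         # later calls: reuse the same buffer
            _buffer[i] = i * 0.5
        return sum(_buffer)

    for run in range(3):
        t0 = time.perf_counter()
        process(1_000_000)
        print("run %d: %.1f ms" % (run, (time.perf_counter() - t0) * 1000))

    # run 0 is typically the slowest; one throwaway call during initialization
    # ("warm-up") keeps that cost out of the time-critical part of the program.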

 

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 6 of 11
(4,000 Views)

Yes, after the first execution of the VI, the execution time of the node is consistent. I'm planning to use this program as a subVI inside a larger main VI, where the subVI will be called based on events at different times. Will a warm-up call to the subVI at the beginning of the main VI be sufficient so that the later calls behave in real time and don't cause delays every time the subVI is called?

Message 7 of 11
(3,983 Views)
Solution
Accepted by topic author Prathiksha

Running code on a Windows system is NEVER deterministic.

That being said, designing the code in a way that all memory can be preallocated will be an important step to improve performance.

Playing with priorities can also improve things, but you can easily make matters worse if you use priorities in the wrong way.

 

Additionally, applications running in parallel will lead to unpredictable allocation of system resources (CPU, memory, interfaces, ...) and therefore to increased jitter.

 

If you really have a hard requirement on jitter, e.g. for loop iterations (say, the jitter must not exceed 1 µs), the only way to address it is to move to RT or maybe even FPGA (depending on the complexity vs. interfaces vs. knowledge/skill).

If determinism is required for hardware I/O that is not single-point (as in control loops), hardware-clocked I/O combined with handling the data in packets in the application is sufficient for 99.99999% of such applications.

 

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 8 of 11
(3,969 Views)

Hello,

I don't have the hardware or software for FPGA, and as I'm using the MATLAB script node I guess the RT module is also not an option. However, I like the idea of pre-allocating memory. How can I do this?

Message 9 of 11
(3,965 Views)

Well, the simple way: pre-define all data types and limit "growing" data (e.g. arrays, strings) to a maximum size.

Essentially, all arrays and strings are built at their maximum size up front (e.g. with Initialize Array), and elements are later replaced per index.

 

Often you cannot rule out dynamic re-allocation 100%, but if you design and plan the code properly, you can make sure that dynamic re-allocation happens only rarely and in a limited way.
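
A rough Python/NumPy sketch of the same idea (the array size here is made up; in LabVIEW the equivalent would be done on the block diagram):

    import numpy as np

    MAX_SAMPLES = 10_000

    # growing the array each iteration forces repeated re-allocation (avoid this)
    grown = np.empty(0)
    for i in range(MAX_SAMPLES):
        grown = np.append(grown, i * 0.5)

    # preallocating once and replacing elements per index allocates only once
    pre = np.empty(MAX_SAMPLES)
    for i in range(MAX_SAMPLES):
        pre[i] = i * 0.5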

 

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 10 of 11
(3,956 Views)