More deterministic execution with Windows

Solved!

Does anyone have any hints on getting more deterministic behaviour out of LabVIEW code on Windows? The core of the program I am writing must grab an image on a hardware trigger every 100 ms and process it. The image processing takes 60 to 70 ms on average and I don’t think I can optimise it much more. The trouble is that I regularly miss a frame. I have found that dragging windows around while the program is running is particularly likely to cause serious delays.

 

I have played around with the VI execution options, increasing priority and setting a different execution thread, but these settings don’t appear to change the behaviour much, if at all. Perhaps I am missing something here.

 

I have come across some interesting finds that could help but I haven’t looked deeply into them yet.

 

First, I noticed that adding a wait to a while loop can cause the occasional longer-than-normal delay. I have found it to be more deterministic to poll the hardware than to relinquish the CPU for any amount of time with a wait.
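Roughly speaking, and in illustrative text-language pseudocode only (Python here, not my actual LabVIEW code), the two timing strategies look like this:

```python
import time

PERIOD = 0.1  # 100 ms loop

def do_iteration():
    """Made-up stand-in for the real per-iteration work."""
    pass

def loop_with_wait():
    # Relinquishes the CPU; the OS decides when the thread wakes again,
    # which is where the occasional longer-than-normal delay creeps in.
    while True:
        do_iteration()
        time.sleep(PERIOD)

def loop_with_polling():
    # Spins on a monotonic deadline: it burns a core, but it wakes
    # within microseconds of the target instead of at the scheduler's
    # mercy, and the deadline never drifts.
    deadline = time.monotonic()
    while True:
        do_iteration()
        deadline += PERIOD
        while time.monotonic() < deadline:
            pass
```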

 

Secondly, I found that running two independent projects side by side in the development environment is slower than building one of them into an executable. It is as if the executable gets its own core that is not shared with the development system.

 

There must be some more information out there about getting the most performance out of Windows and minimising the random pauses. Is anyone able to point me in the right direction? BTW, I have thought about LabVIEW RT but am a bit in the dark about setting up a performance workstation with it.

 

Message 1 of 21

Your description seems to hint at code where image capture & analysis are part of the same 100 msec loop. If so, the very first thing to do is set up a producer-consumer arrangement so they can run in independent loops.

 

 

-Kevin P

 

 

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 2 of 21

@Kevin_Price wrote:

Your description seems to hint at code where image capture & analysis are part of the same 100 msec loop. If so, the very first thing to do is set up a producer-consumer arrangement so they can run in independent loops.


Just to add some more information: Producer/Consumer

 

The general idea is to have one loop acquire the image and pass it on to another loop that does the processing.  This passing of the data is generally done with a Queue.  You could also look at using a Channel Wire.
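Since LabVIEW is graphical, here is a rough text-language sketch of the pattern (Python, purely illustrative; the grab and process calls are made-up stand-ins):

```python
import queue
import threading
import time

frame_queue = queue.Queue()          # plays the role of the LabVIEW queue refnum

def grab_frame_on_trigger():
    """Made-up stand-in for the triggered camera grab."""
    time.sleep(0.1)                  # pretend a hardware trigger fires every 100 ms
    return b"frame"

def process_frame(frame):
    """Made-up stand-in for the 60-70 ms of image processing."""
    time.sleep(0.065)

def producer():
    # Acquisition loop: grab on trigger, enqueue, and immediately go
    # back to waiting for the next trigger -- it never waits on the
    # processing loop.
    while True:
        frame_queue.put(grab_frame_on_trigger())

def consumer():
    # Processing loop: dequeue and process at its own pace.  A brief
    # slowdown here just lets the queue grow instead of losing frames.
    while True:
        process_frame(frame_queue.get())

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(1)                        # let the demo run briefly
```

The key point is that the enqueue returns immediately, so the acquisition loop's timing is decoupled from how long processing happens to take on any given iteration.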



There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 3 of 21

@rowdyr wrote:

Does anyone have any hints on getting more deterministic behaviour out of LabVIEW code on Windows? The core of the program I am writing must grab an image on a hardware trigger every 100 ms and process it. The image processing takes 60 to 70 ms on average and I don’t think I can optimise it much more. The trouble is that I regularly miss a frame. I have found that dragging windows around while the program is running is particularly likely to cause serious delays.

 

We need to see your code, but as the others have said, this should not be an issue if you are using a Producer/Consumer architecture.

 


@rowdyr wrote:

 

First, I noticed that adding a wait to a while loop can cause the occasional longer-than-normal delay. I have found it to be more deterministic to poll the hardware than to relinquish the CPU for any amount of time with a wait.

 

Secondly, I found that running two independent projects side by side in the development environment is slower than building one of them into an executable. It is as if the executable gets its own core that is not shared with the development system.

 

There must be some more information out there about getting the most performance out of Windows and minimising the random pauses. Is anyone able to point me in the right direction? BTW, I have thought about LabVIEW RT but am a bit in the dark about setting up a performance workstation with it.

 


1. Windows is NOT a real-time operating system, so there will always be some jitter in any timing.

 

2. Well, yeah... When you run in the development environment, all of your code runs inside the LabVIEW.exe process. When you compile and build an executable, it runs in its own process just like any other Windows program.

 

3. You might want to let us see if we can improve your overall architecture before you go down the RT rabbit hole...

========================
=== Engineer Ambiguously ===
========================
Message 4 of 21
Solution
Accepted by topic author rowdyr

You should certainly be able to do this in Windows with a producer-consumer architecture, without needing RT. It sounds like you have a hard requirement to grab images every 100 ms, but maybe the processing can be a little delayed (200 ms, say) as long as you grab ALL of the images.

 

I'd initially recommend producer/consumer. If you find the consumers regularly get backed up, you can launch multiple independent consumers that run in parallel. That's definitely a more difficult setup, but it would work.
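For illustration only, a rough Python sketch of the multi-consumer variant (the processing call is a made-up stand-in):

```python
import queue
import threading
import time

frame_queue = queue.Queue()

def process_frame(index, frame):
    """Made-up stand-in for processing that exceeds the 100 ms budget."""
    time.sleep(0.12)
    print(f"frame {index} processed")

def consumer():
    # Identical consumers share one queue; whichever worker is idle
    # takes the next frame, so sustained throughput scales with the
    # number of workers.  Frames can finish out of order, so each one
    # is tagged with its index for any downstream reordering.
    while True:
        index, frame = frame_queue.get()
        process_frame(index, frame)

for _ in range(3):
    threading.Thread(target=consumer, daemon=True).start()

for i in range(10):                  # pretend a trigger fires every 100 ms
    frame_queue.put((i, b"frame"))
    time.sleep(0.1)
time.sleep(0.5)                      # let the workers drain the queue
```

With three workers at 120 ms each, sustained throughput is about 25 frames/sec, comfortably above the 10 frames/sec arrival rate.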

 

I'd recommend posting your code. Lagging during window drags isn't an unheard-of problem, but it could point to some other issue. For example, overusing Property Nodes can slow things down, since they require the UI thread.

 

And just FYI- LabVIEW RT basically only runs on cRIO or PXI* hardware. You can't just grab a license of LabVIEW RT and throw it on a random PC to make it realtime. There are (or at least were) some ways to get a desktop to do it, but it's a really involved process IIRC since the OS is designed around some very specific hardware. If you go RT you'll have to start developing all of your code to run on a special cRIO or PXI* chassis.

 

*I haven't worked with much PXI, and the hardware I used just ran regular Windows. I think they offer RT versions.

Message 5 of 21

If it takes 70 ms to process an image, that leaves 30 ms to sense the hardware trigger, grab the image, and send it to post-processing, saving, etc. That seems very tight! Windows might check for updates or initiate an AV scan, etc., and things fall apart. And no, getting a processor that is twice as fast will not get you out of the woods! How exactly does the code interact with the hardware trigger? If the drivers are DLLs, make sure they don't run in the UI thread. How big are the images (dimensions, bit depth, etc.)? How is the camera connected (USB, frame-grabber card, etc.)?

 

Are you using built-in image processing tools or did you write your own? Are you sure it is optimized (inplaceness, parallelization, etc.)?

 

I am sure whatever you are trying to do is doable; it just needs to be architected right, as others have already said. I recommend not messing with the VI priority settings.

 

It is hard to give targeted advice without seeing the actual code.

Message 6 of 21

This is certainly "do-able", unless the camera you are using is really slow, or you are taking a really large image.  Using multiple Axis cameras (I forget the model number, but they were acquired 7-8 years ago, and were able to take 640 x 480 color videos at the rate of at least 10 frames/sec), I was able to construct a "behavioral monitoring" system for up to 24 "subjects" who would perform a "behavior of interest" every 10-20 minutes.  We were able to handle up to 24 "stations" (camera, subject, "behavior detector") running simultaneously on a 7-10 year old Dell Windows PC (Xeon processor), with the images coming from the cameras via TCP/IP (that's how Axis made them, worked well).  Our typical recording session ran 2-3 hours, and we rarely had a camera "go bad".

 

I've also had some experience taking single frames on a "demand" trigger.  That's a bit more hardware (and hassle) than letting the Camera do the timing and saving data as an AVI (as opposed to an array of PNGs).  Note, however, that LabVIEW Vision is considerably more complex than, say, VISA or DAQmx ...

 

Bob Schor

Message 7 of 21

Sorry, I misled you in my original statement. It takes 60 to 70 ms on average to perform an iteration of the loop, not just the image processing. This includes image capture, processing, and sending output (position information), all of which needs to happen before the next trigger.

 

Images are captured from a USB camera at about 1500 x 1500 using DLL calls. There is some image processing done with Vision, such as convolution and morphological filters, but there is also a fair bit of regular number crunching done after that with large arrays. I've used parallel loops in places to speed this up and played around with replacing loops with array operators to get a bit more speed. The code will run reliably with a 150 ms trigger interval, but at 100 ms it will not get there. I'm sure it's doable too, but it is getting progressively harder for every small time reduction. I'm using 32-bit LabVIEW, so potentially I could move to the 64-bit platform with some minor changes.

 

Thanks for all those ideas recommending producer-consumer loops. I haven't tried that yet. My application is mostly sequential due to the data flow, and I didn't see how producer-consumer loops could help in such a case: image processing can't happen until the image is acquired, and data output can't occur until image processing is finished. Would producer-consumer loops speed that up? Sorry, I can't really post code due to our company's policy, but it is a rather large event handler and state machine within a loop.

Message 8 of 21

Thank you BertMcMahan for the tip about property nodes. I do have a number of instances where I access property nodes, so I will have to go through and do some cleaning up. Some of these I should be able to get rid of, but some I will still need for correct UI operation. Will I have to redesign my UI so that no property node access is required? Or is it feasible to split the UI from the main program and pass messages between the two?

Message 9 of 21

Another thought to consider:

 

What is the consequence of missing an occasional 100 msec deadline for updating the output signal?  What if the previous update remains there for 1 more interval?

 

You're right that a producer-consumer pattern will not guarantee that the entire control process (capture->process->output) will complete every 100 msec.  It can help your image capture run with more reliable 100 msec timing though, at the possible expense of an occasional missed output update.
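To make that trade-off concrete, here is a rough illustrative sketch (Python, not LabVIEW; the output call is a made-up stand-in) of an output loop that fires every 100 msec and simply repeats the last value when processing hasn't caught up:

```python
import queue
import threading
import time

result_queue = queue.Queue()

def write_output(value):
    """Made-up stand-in for sending the position update downstream."""
    print(f"{time.monotonic():.3f}  output = {value}")

def output_loop():
    # Fires on a fixed 100 ms cadence.  If processing hasn't delivered a
    # new result in time, the previous update simply stays in effect for
    # one more interval -- the timing stays regular even when the
    # content lags.
    last_result = None
    deadline = time.monotonic()
    for _ in range(15):
        deadline += 0.1
        try:
            while True:
                last_result = result_queue.get_nowait()  # drain to the newest
        except queue.Empty:
            pass                                         # nothing new: reuse the old value
        write_output(last_result)
        time.sleep(max(0.0, deadline - time.monotonic()))

threading.Thread(target=output_loop).start()
for i in range(10):                  # processing that occasionally runs long
    time.sleep(0.13)
    result_queue.put(i)
```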

 

The basic idea is that sometimes you have to make these kinds of trade-offs in the real world.  What's more important, faster capture speed with reliable timing or slower capture with certainty that each capture results in an output update?

 

Meanwhile, any UI stuff involving property nodes should indeed *also* be split off into its own loop and controlled via messaging.
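Schematically (again an illustrative Python sketch with made-up names), that split looks like this:

```python
import queue
import threading
import time

ui_queue = queue.Queue()

def update_status_indicator(text):
    """Made-up stand-in for the property-node work on the front panel."""
    print("UI:", text)

def ui_loop():
    # The only loop that ever touches the front panel, so property-node
    # calls (which need the UI thread) can never stall the acquisition.
    while True:
        msg, payload = ui_queue.get()
        if msg == "stop":
            break
        if msg == "status":
            update_status_indicator(payload)

def main_loop():
    # Time-critical work posts UI updates as messages and moves on; the
    # enqueue returns immediately, so the 100 ms budget is untouched.
    for i in range(5):
        time.sleep(0.1)                          # pretend 100 ms of real work
        ui_queue.put(("status", f"iteration {i}"))
    ui_queue.put(("stop", None))

threading.Thread(target=ui_loop, daemon=True).start()
main_loop()
```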

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 10 of 21