
(How) Can I change vi execution priority at runtime

Solved!

Hi all,

 

I am using Daemons (free-running VIs) and I communicate with them through Queues.

 

They are part of my device driver architecture and use either a producer architecture (for acquisition) or a consumer architecture (for control).

 

I have a single Daemon VI to which I deploy a "Device Object" using a polymorphic class implementation.

 

This implementation has one subtle shortfall:

I am not able to set the execution priority at launch.

 

App Launch.png

 

There is a property node that taunts that it is possible, but the help (and the run-time error message) says it is not available at run time.

 

Does anyone know of an alternative method?

 

Here is what I have thought of to date:

1. Have five different daemons, each with a different priority [distasteful for code maintenance]

2. Make the daemon's priority low and ensure that at least one VI in the driver has a high priority [not sure if it works; obscure implementation]
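
For anyone following along outside LabVIEW, the goal looks roughly like this; a hypothetical C++/POSIX sketch, not LabVIEW code, and every name in it is invented: one daemon body, launched several times, with each instance handed its scheduling priority at launch.

// Hypothetical C++/POSIX analogue of "one daemon body, priority chosen at launch".
// Not LabVIEW: std::thread stands in for a dynamically launched VI, and the
// SCHED_FIFO priority stands in for the VI execution priority.
#include <pthread.h>
#include <sched.h>
#include <thread>

void daemon_body(const char* device_role) {
    // ... service the command queue for this device ...
    (void)device_role;
}

std::thread launch_daemon(const char* device_role, int rt_priority) {
    std::thread worker(daemon_body, device_role);
    sched_param sp{};
    sp.sched_priority = rt_priority;  // e.g. 1..99 under SCHED_FIFO
    // Priority is applied from the outside at launch (needs RT privileges).
    pthread_setschedparam(worker.native_handle(), SCHED_FIFO, &sp);
    return worker;
}

int main() {
    auto acquisition = launch_daemon("acquisition", 80);  // high priority
    auto reporting   = launch_daemon("reporting",   10);  // low priority
    acquisition.join();
    reporting.join();
}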

 

Kind Regards,

 

Tim L.

 

 

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 1 of 20

Hi Timmar,

 

Seldom, rarely, almost never did I ever have to set priorities.

 

What type of situation do you find yourself in that requires changing the priority?

 

Curious,

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 2 of 20

Come on Ben! Just last week you were posting about using "subroutine" priority so you can use the subVI setup option "skip if busy".

 

Yes, I forgot about subroutine priority.

 

But otherwise, in a non-RT app changes to priority are rare.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 3 of 20
Solution
Accepted by topic author Timmar

You might think about putting a Timed Loop or Timed Sequence in your daemon and then passing in a numeric priority value to your daemon. That's about the best solution I can think of.
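
Sketched in textual terms for non-LabVIEW readers (a hypothetical C++/POSIX analogue, not LabVIEW; on the RT target the Timed Loop's priority input plays this role, and all names below are invented): the priority arrives in the daemon as ordinary data, and the daemon applies it to itself once it is running.

// Hypothetical sketch: the daemon receives its priority as plain data and
// applies it to itself after launch, so one body can serve every priority tier.
#include <pthread.h>
#include <sched.h>
#include <chrono>
#include <thread>

void daemon_body(int rt_priority) {
    sched_param sp{};
    sp.sched_priority = rt_priority;
    // Self-adjust the running thread's priority from the passed-in value.
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    for (;;) {
        // ... wait on the command queue, run the deployed device object's methods ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread high(daemon_body, 80);  // acquisition tier
    std::thread low(daemon_body, 10);   // reporting tier
    high.join();
    low.join();
}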

Jarrod S.
National Instruments
Message 4 of 20

Ben,

 

I debated whether or not to put more information in my post; I didn't want to bore my potential support senseis.

 

Here goes:

 

As I hinted in my initial post, I am developing a set of drivers for my large application.

 

I am using a Compact FieldPoint and am under some fairly aggressive resource pressures:

heavy RS-422/485 serial comms at 115,200 baud, digital acquisition, and some heavy data processing.

 

My understanding of this type of system (Jump in if you have any improvement suggestions):

 

Priority #1 [High Priority] (Producer) is to get data out of the acquisition buffers and, in my case, perform writes/outputs/control activities as demanded.

So any "hardware" device driver daemons that I launch need to run at high priority. The drivers should do as little as necessary so as not to hog this thread. An event-based architecture is preferred over a polled one.

These drivers tend to be launched as daemons so that they can run at a different priority and are not affected by other activities; they are inherently asynchronous.

 

Priority #2 [Above Normal Priority] (Consumer -> Producer): protocol/state interpretation, data filtering.

Now that the data is out of the buffers, what does it mean? Is it a valid communications message, was a button pushed, is there an object in the transducer field?

These functions may take a bit longer (but not too much) to execute, but as they run at a lower priority, the buffers can continue to be emptied.

These "filters" also tend to be launched as daemons so that they can run at a different priority and are not affected by other activities; they are inherently asynchronous.

 

Priority #3 [Normal Priority] (Consumer): number crunching, heavy lifting, control determination.

An event has occurred and some calculation, interpretation, and potentially control needs to be performed; do it.

These tend to be event based, and as there are multiple stimuli they are best managed by an event structure. This also allows for interaction with the front panel (should the need arise).

 

Priority #4 [Low Priority] (Slip-Scheduled): user interface, user data, report generation.

Who cares if it is a bit late; work away in the background. In the case of user-interface updates, they can slip later and later with no need to catch up.
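
As a rough textual illustration of tiers #1 and #2 (a hypothetical C++ sketch, not LabVIEW; the queue and threads stand in for LabVIEW queues and daemons, and every name is invented): a high-priority producer does nothing but drain the acquisition buffer into a queue, while a lower-priority filter consumes and interprets it.

// Hypothetical C++ sketch of tiers #1/#2: drain fast at high priority,
// interpret at leisure at a lower priority. Stubs stand in for the hardware
// read and the protocol analysis.
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::vector<std::uint8_t> read_serial_buffer() { return {0x02, 0x10, 0x03}; } // stub
void parse_and_dispatch(const std::vector<std::uint8_t>&) {}                  // stub

std::queue<std::vector<std::uint8_t>> raw_queue;  // plays the role of the LabVIEW queue
std::mutex              queue_mutex;
std::condition_variable queue_cv;

// Priority #1 analogue: empty the buffers quickly, do nothing else.
void acquisition_daemon() {
    for (;;) {
        auto chunk = read_serial_buffer();
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            raw_queue.push(std::move(chunk));
        }
        queue_cv.notify_one();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));  // pretend to wait for the next burst
    }
}

// Priority #2 analogue: interpret the data; it may lag a little without losing bytes.
void protocol_filter_daemon() {
    for (;;) {
        std::unique_lock<std::mutex> lock(queue_mutex);
        queue_cv.wait(lock, [] { return !raw_queue.empty(); });
        auto chunk = std::move(raw_queue.front());
        raw_queue.pop();
        lock.unlock();
        parse_and_dispatch(chunk);
    }
}

int main() {
    std::thread producer(acquisition_daemon);    // would be given the high priority
    std::thread filter(protocol_filter_daemon);  // would be given the above-normal priority
    producer.join();
    filter.join();
}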

 

For others reading this thread, I found This Module Help invaluable in understanding how LabVIEW manages execution priority.

 

-----------------

 

Theory done. I am essentially a lazy programmer and don't like to write and maintain too many different VIs if I can help it, <rant> ESPECIALLY IF THEY ARE THE SAME VI WITH A DIFFERENT PRIORITY! </rant>.

 

So I have written one Daemon and one Daemon manager (to rule them all). For each driver I require, I launch a Daemon, passing the "Base" class object to the Daemon prior to run, and I rely on the override capability of polymorphism to choose the correct methods.

My "Base" driver class contains all of the functions required for operation. I use it as an enforceable template for future driver development (it also auto-fills my icons, which is great for a lazy programmer like me).

 

I have established that my protocol driver (Priority #2) is compatible with the same daemon architecture as above; instead of looking at hardware, it monitors a queue/notifier/user events from the lower-level hardware driver.

I was very smug when I figured that one out, until... I realised that they would need to be launched at different priorities to respect the RTOS, and thus my question.
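
For non-LabVIEW readers, the dispatch idea is roughly this (a hypothetical C++ sketch; in my code it is the LabVIEW class override mechanism, and every name below is made up): the daemon body only knows the base driver class, and the object deployed at launch supplies the behaviour.

// Hypothetical C++ sketch of "one daemon, many drivers via override".
#include <iostream>
#include <memory>

// The "Base" driver class: the enforceable template every driver must follow.
class BaseDriver {
public:
    virtual ~BaseDriver() = default;
    virtual void initialise()   { std::cout << "generic init\n"; }
    virtual void service_once() { std::cout << "generic service\n"; }
};

// One concrete driver; it overrides only what it needs to.
class SerialDriver : public BaseDriver {
public:
    void initialise()   override { std::cout << "open RS-485 port\n"; }
    void service_once() override { std::cout << "drain serial buffer\n"; }
};

// The single reusable daemon body: it never names a concrete driver,
// so the object deployed at launch decides the behaviour.
void daemon_body(std::unique_ptr<BaseDriver> driver) {
    driver->initialise();
    for (int i = 0; i < 3; ++i)   // stand-in for "run until told to shut down"
        driver->service_once();
}

int main() {
    daemon_body(std::make_unique<SerialDriver>());  // deploy the device object
}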

 

I am not trying to change priority per se, but to choose one prior to launch and then leave it.

 

Cat Killed? (Curiosity Cured?)

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 5 of 20

Jarrod, your suggestion is insightful and will give me an unsigned integer's worth (65,535) of granularity for prioritising.

 

I do not have a clear view of how to apply this structure to mine (see the attached screenshots), as I do not use a polled architecture.

 

Daemon Command.png
Daemon Exec.png

 

I need a while loop without timing.

I could use a sequence and drop the entire code into it.

I like the idea of a while loop, as it can dynamically change priority depending on what state it is in.

 

 

I will proceed with caution of course.

I abandoned timed loops in LabVIEW 8.6.1 when I found what I believe to be a latent bug where a timed loop in a daemon would not work when compiled into an .exe.

This bug took me the better part of two weeks to find and, to my knowledge, was never fixed. I do not want to go back down that rabbit hole if I can help it.

 

And now for the bonus-point round:

A colleague of mine told me that the event structure executes in the user interface thread (normal priority), forcing everything around it into that priority.

Is this true?

Is this true if no front panel controls are used? I am using user-generated events (for commands).

 

Thanks,

 

Tim L.

 

 

 

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 6 of 20

Jarrod.

 

Is this what you meant?

 

Daemon Controll With Timed Loop.png

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 7 of 20

I am sorry to reply to such a well written and thought out plan this way, but...

 

What you said, and your methodology, apply to a CPU-bound machine where there is not enough CPU to do the work. I made a living doing performance tuning back in the 1980s, implementing just such rules for mainframes and the like.

 

With modern CPUs we have a lot more resources available. Combine that with a well-implemented application coded with cooperative multitasking in mind, and the need to explicitly pick winners and losers (in the CPU battle) goes away.

 

So...

 

What kind of load does your application exhibit that has you wanting to use priorities to manage it?

 

Just trying to help (in my own crude manner),

 

Ben

 

 

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 8 of 20

@Ben wrote:

What kind of load does your application exhibit that has you wanting to use priorities to manage it?
 

Ben,

 

Right now I am using a cFP-2220, and with the 400 MHz PowerPC processor it uses, I may as well be working on an '80s mainframe.

At the risk of insulting Wind River and National Instruments, the resources gobbled up by the RTOS task scheduler, the LabVIEW VI manager, and assorted services are not negligible.

 

By the time I merge two applications with an architecture that I inherited from my predecessor (40 threads in the one VI, all at normal priority), it is no wonder that my communications buffer overflows and event-triggered threads slow to a crawl. They are straining under the weight of round-robin scheduling of low-priority tasks that I just don't care about at that point in time.

 

Don't get me wrong, I am grateful that I don't have to code in C on an RTOS, custom or otherwise, but its power is useless if you don't tell it what is important.

In this case the priority is to get serial data at 115.2 kbaud out of the buffer and into a proprietary protocol analyser, and then use it to decide whether or not to dump 20 tonnes of ore in the correct place.

I can't afford to miss messages or take 3 seconds to make a decision while the processor grinds away compiling and FTPing an hourly manager's report over the Ethernet; that report can wait 5 seconds.

 

So I know that the CPU is periodically maxed out; that's fine, BRING IT ON! But I need to take the time to tell VxWorks what is important during those times; it may be good, but it ain't psychic. I should at least give it a fighting chance.

 

I guess I have brought some of the drama onto myself by using common code and daemons, but it makes everything so much neater, more scalable, and more maintainable, and most likely less buggy.

 

I know I may sound like a dinosaur, and it may be because I was applying the same logic earlier this week on its embedded counterpart: an 8-bit Atmel processor that was having a similar set of communications losses due to ISR thread locking.

 

The logic is sound and scalable, even with today's technology; just have a look at a gaming engine and what sort of tasks it drops during high loading. Texture rendering and frame rate are the first to go when you start moving at speed.

(PS: I am not a gamer, but I do appreciate the coding and technology behind it. 128 processing cells running in parallel - now that is a resource-rich environment.)

 

 

 

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 9 of 20

Yes, back in the day of the 400 MHz PC it was quite challenging. Code optimization was actually an appreciated art form. And yes, I agree that under those conditions tweaking may be called for.

 

I am working a similar issue, but in my case it's coordinating all of the pumps at a well-fracking site that has me challenged.

 

You can back down the update rate in MAX, and that will reduce the amount of CPU used to gather performance specs.

 

Can you off-load any disk writing? That puts demands on the kernel, and all else waits.

 

We do use timed loops to control the priority but try to keep all of the VIs at normal priority.

 

Analyze all of your code to see what can be optimized.

 

Bottom line:

 

Tweaking the priority of the VIs

 

The classic RT model suggested a single TC (time-critical) loop and everything else at normal priority. Put your TC stuff in that loop.

 

Interaction with the RT loop can be done via AEs (action engines) set to subroutine priority; in the TC loop, configure the subVI call for "skip if busy".
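
In rough textual terms, "skip if busy" amounts to this (a hypothetical C++ sketch, not the LabVIEW subVI setting itself; names invented): the time-critical loop only touches the shared resource if it can do so without waiting; otherwise it moves on.

// Hypothetical sketch of "skip if busy": the time-critical loop never blocks
// on the resource it shares with lower-priority code.
#include <mutex>

std::mutex shared_state;  // stands in for the action engine / shared data

void tc_loop_iteration() {
    if (shared_state.try_lock()) {   // skip if busy: never wait here
        // ... exchange the latest data with the lower-priority code ...
        shared_state.unlock();
    }
    // ... deterministic time-critical work continues either way ...
}

int main() {
    for (int i = 0; i < 1000; ++i)
        tc_loop_iteration();
}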

 

I am departing for a road trip soon, so please post what you have found.

 

I hope you figure this out!

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 10 of 20