Do the handles passed to a DLL have to be locked down to ensure other threads don't move the data?

> Maybe Dr.vi would consider composing a mini-series on the LabVIEW
> execution model.
>
> Anyway, a problem hit my desk a couple days ago. Apparently on one of
> our systems, my boss was trying to print 30 pages in about a minute
> and found, despite my hopeful threading model, it was interfering with
> DAQ. I thought I was a stress tester, but he really gets into it.
> Anyway, I think my printing must be executing on my DAQ thread since
> it has top priority?
>

Perhaps Dr. VI will do something like this; you should suggest it on
that site. In the meantime, maybe I can give a quick summary.

VIs have a setting for execution system and priority. This becomes
their preference, and with no external influences, this is where they
will begin and end their execution. The primary external influence is
their caller. A low-priority subVI called by a high-priority VI will
inherit its caller's priority and become high priority. In addition,
most VIs are set to run in the "Same as Caller" execution system, meaning
they inherit everything. This keeps the overhead of context switches
from overwhelming your computations.
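
If it helps to see that rule written down, here is a rough sketch in plain C (not LabVIEW internals; the enum values and numeric priorities are made up) of how a subVI's effective settings would follow from its own configuration and its caller's:

    /* Sketch only: "Same as Caller" inherits the caller's execution
     * system, and a low-priority subVI called from a higher-priority
     * VI runs at the caller's priority. */
    #include <stdio.h>

    enum ExecSystem { SAME_AS_CALLER, STANDARD, OTHER, UI };

    typedef struct {
        enum ExecSystem system;   /* configured execution system               */
        int priority;             /* configured priority, higher = more urgent */
    } VISettings;

    static VISettings effective(VISettings sub, VISettings caller)
    {
        VISettings eff = sub;
        if (sub.system == SAME_AS_CALLER)
            eff.system = caller.system;        /* inherit the caller's system */
        if (caller.priority > sub.priority)
            eff.priority = caller.priority;    /* inherit the higher priority */
        return eff;
    }

    int main(void)
    {
        VISettings caller = { OTHER, 3 };            /* high-priority caller */
        VISettings sub    = { SAME_AS_CALLER, 1 };   /* low-priority subVI   */
        VISettings eff    = effective(sub, caller);
        printf("subVI runs in system %d at priority %d\n", eff.system, eff.priority);
        return 0;
    }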

So, a VI controls where it starts and stops execution, but since it can
call other subVIs, these subVIs may switch to others. They could use a
separate execution system, or a higher priority. Additionally, there are
certain tasks that can only be carried out by the UI thread. I'll try to
list them in a second, but these nodes will switch to the UI thread. Since
some diagrams may loop continuously doing nothing but tasks that require
the UI, the execution system switches back lazily.

For example, suppose a VI is set to run in Other and it calls VI Server
to query a panel location. The VI Server property node must run in the
UI thread, so a context switch takes place. Upon return from VI Server,
the choices are to switch back to Other right away, or to stay in the UI
thread until something forces a switch, in case the next node needs the
UI too. A diagram that runs in Other and makes repeated calls to a VI
Server property node runs faster with the amortized/lazy method of switching.
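
Here is a toy model, in plain C rather than LabVIEW, of why the lazy choice wins for a loop like that; the node list and the switch counting are made up purely for illustration:

    #include <stdio.h>

    enum { OTHER_SYS, UI_SYS };

    /* Count context switches for a sequence of "nodes", where a node
     * either must run in the UI thread or is happy in the VI's own
     * execution system. */
    static int count_switches(const int *needs_ui, int n, int lazy)
    {
        int thread = OTHER_SYS, switches = 0;
        for (int i = 0; i < n; i++) {
            int want = needs_ui[i] ? UI_SYS
                                   : (lazy ? thread : OTHER_SYS);
            if (want != thread) { thread = want; switches++; }
        }
        return switches;
    }

    int main(void)
    {
        /* loop body: property read (UI-only), a little computation, repeat */
        int body[] = { 1, 0, 1, 0, 1, 0, 1, 0 };
        int n = (int)(sizeof body / sizeof body[0]);
        printf("eager switch-back: %d context switches\n", count_switches(body, n, 0));
        printf("lazy/amortized:    %d context switches\n", count_switches(body, n, 1));
        return 0;
    }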

So, what has to run in the UI thread?

Orange CINs and DLLs -- they aren't thread safe and must be serialized
to the same thread each time.

VI Server property, method, open, and close nodes -- they modify or
inspect data structures that must be protected. They also do things
like open a panel, which can only be done from the UI thread.

Control/indicator property nodes -- same as VI Server, but just to be clear.

Menu, help, and most other nodes in the Application Control palette.

And that is all that I can think of off the top of my head. Things like
reading a terminal, a local, or the event structure do not switch to the
UI thread, but they do have to mutex with it, so they can be affected
slightly by its execution; then again, every node that allocates memory is
mutexed, so that isn't so special.

To get to your printing issue: printing always takes place in the UI
thread. It will not run in the DAQ thread. It can affect your DAQ operation
at some mutex point, such as the memory manager, or it could affect it
where it relies upon the UI thread. Ideally, if your DAQ operation is
high priority, it will not allow the UI to affect its operation, so it will
decouple the UI from the acquisition. Your printout could also be
affecting DAQ because it is spooling out to disk while DAQ is trying to
log to disk.
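
For what it's worth, here is a plain C/pthreads sketch of that kind of decoupling: the acquisition loop only moves samples into a pre-allocated ring buffer, and a lower-priority loop does the slow UI/disk work. It is an illustration of the pattern only, not LabVIEW code, and every name in it is made up:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <unistd.h>

    #define RING 65536
    static double ring[RING];
    static atomic_uint head, tail;      /* single producer, single consumer */

    static void *acquire(void *arg)     /* "time critical" side */
    {
        (void)arg;
        for (;;) {
            double sample = 0.0;        /* read the hardware here */
            unsigned h = atomic_load(&head), next = (h + 1) % RING;
            if (next != atomic_load(&tail)) {   /* never block the acquisition */
                ring[h] = sample;
                atomic_store(&head, next);
            }
        }
        return NULL;
    }

    static void *consume(void *arg)     /* "normal" side: UI, disk, printing */
    {
        (void)arg;
        for (;;) {
            unsigned t = atomic_load(&tail);
            if (t != atomic_load(&head)) {
                double sample = ring[t];
                (void)sample;           /* log it, graph it, print it... */
                atomic_store(&tail, (t + 1) % RING);
            } else {
                usleep(50000);          /* idle; the ring absorbs any stall */
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, c;
        pthread_create(&a, NULL, acquire, NULL);
        pthread_create(&c, NULL, consume, NULL);
        pthread_join(a, NULL);          /* both loops run forever in this sketch */
        return 0;
    }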

Finally, keep in mind that no matter what you do to your diagram, the
underlying OS does what it wants. You mentioned the site that talks
about hi-res timers and high-priority tasks. I was looking at the
measurements with the author of those papers and saw high-priority tasks
suspended for hundreds of milliseconds so that the OS could do
something as simple as unminimizing another application's window. So in
the end, you are only giving the OS hints and suggestions, and it can
all be derailed.

Now that I've given you the gloomy view: much of the time the MS OSes are
manageable, provided you keep the high-priority systems sufficiently
decoupled from the rest of the app and you test lots of other
interactions, such as printing.

Greg McKaskle
Message 11 of 18
Greg,

I have to study this, and I'm still looking at my problem...

Thanks Greg,
Kind Regards,
Eric
Message 12 of 18
Greg,

You break me up. You said:

>maybe I can give a quick summary.

Judging by the length of your answer, I do not think you "CAN" give a quick summary.

Despite the length, I have to say that this is probably the best explanation of this topic I have read to date.

I will print this out and re-read it every couple of days. I will get this stuff yet.

Thank you,

Ben
Message 13 of 18
Ben,

I'm not sure if you mean Greg is being verbose (unlikely), or whether there simply is not a short answer (likely).

I'm going back to the top with the following question: "Is this roughly how the LabVIEW execution systems work?" In previous versions of LabVIEW, I didn't really care. But with multithreading, it now becomes an issue, as we have raised the complexity by 2^x. The short answers might be adequate if I had a better overview.

Eric
Message 14 of 18
I thought it funny that Greg introduced his statement the way he did.

I printed out what he wrote and read it a couple of times.

I am willing at this point to venture a guess regarding your printing while doing acq.

It sounds like if you get your DAQ functions to run in the DAQ thread (with plenty of buffer set aside, i.e. enough to handle the duration of the print job), you may be able to span the gap produced by the printing. This is based on the assumption that the hardware can DMA your results into a waiting buffer.
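
As a quick back-of-the-envelope for sizing that buffer (plain C, and the numbers are placeholders rather than anything from this application):

    /* Big enough to ride out the whole print job while the hardware
     * DMAs samples in; all values below are examples only. */
    #include <stdio.h>

    int main(void)
    {
        double scan_rate_hz  = 8000.0;   /* scans per second (example)       */
        double print_stall_s = 60.0;     /* longest expected print-job stall */
        double safety_factor = 2.0;      /* margin for spooling, disk, etc.  */

        double scans = scan_rate_hz * print_stall_s * safety_factor;
        printf("buffer should hold at least %.0f scans\n", scans);
        return 0;
    }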

Ben
Message 15 of 18
Ben,

In LV4.1 I did exactly that and created a large buffer for the board. Unfortunately, the acquisition mode I'm using is IRQ-based, so acquisition has to rely on the OS to reasonably service the IRQ request. If the CPU gets overloaded, the IRQ will not be serviced in a timely fashion and my board buffer overflows. The buffer that I assign only gives LabVIEW more time to service it.

I found out later that before my boss began printing like a madman he had also maxed out the acquisition scan rate. I chose a max based on the machine on which I program. If a client supplies a slower machine, then they can't max out the scan rate. (I suppose I could test the machine to make some determination as to how fast it is and infer a reasonable upper limit to the scan rate myself...)
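
If I ever do try that "test the machine" idea, one crude approach (sketched in plain C with made-up constants; nothing from the actual app) would be to time a fixed dummy workload and scale the dev-machine limit by the ratio:

    #include <stdio.h>
    #include <time.h>

    #define DEV_MACHINE_SECONDS  0.25     /* same workload timed on the dev box */
    #define DEV_MACHINE_MAX_RATE 8192.0   /* max scan rate chosen on the dev box */

    static double benchmark(void)
    {
        clock_t start = clock();
        volatile double x = 0.0;
        for (long i = 0; i < 50000000L; i++)   /* fixed dummy workload */
            x += (double)i * 1e-9;
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        double t = benchmark();
        if (t <= 0.0)
            t = DEV_MACHINE_SECONDS;           /* guard against a useless timing */
        double max_rate = DEV_MACHINE_MAX_RATE * (DEV_MACHINE_SECONDS / t);
        printf("suggested max scan rate: %.0f scans/s\n", max_rate);
        return 0;
    }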

A frame print requires loading data from the drive, composing a couple of graphs, running some stats, and printing the frame. Once the frame is printed, it is spooled to the drive, and the spool server then sends the page to the network printer. So the OS is servicing the drive for LabVIEW (both logging and retrieving data) and for the print spooler (both saving what is being printed and sending what goes to the network printer), servicing the network card, servicing the DAQ card, and servicing the video display, all at once. I don't know, I wasn't there, but I suggested that perhaps he might consider reducing the DAQ scan rate if we're overloading the CPU.

I'm still looking for a reason in my software. I'm still trying to figure out if there is a VI shared between my "normal" thread and my "time critical" thread that might get hung during the print. I only share a couple of VIs between threads, but these VIs are called by other shared VIs, so you get a kind of nasty weave. At the moment I'm using a shared VI to call the print server to print the front panel, but to be safe, I'm going to isolate the printing. I don't know that this is a problem, but if I move it, I'll know it isn't.

Thanks Ben,

Kind Regards,
Eric
Message 16 of 18
Could a lack of memory be involved in this scheme somewhere?

If he is maxing out the scan rate, then the buffers that go with it would be larger.

Ben
Message 17 of 18
No, I've got tons. Back when this app was in LV4.0 I did put in a buffer-size/scan-rate relationship. I think, maxed out, my buffer is 64K and I'm only scanning 8K/second. If I can't service the buffer in less than 8 seconds, something's definitely amiss. My VI is trying to service the buffer every 50 ms.
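
Just to sanity-check those numbers (plain arithmetic wrapped in C, assuming the 64K buffer and the 8K/s rate both count scans):

    #include <stdio.h>

    int main(void)
    {
        double buffer_scans   = 64 * 1024;   /* 64K buffer           */
        double scan_rate      = 8 * 1024;    /* 8K scans per second  */
        double service_period = 0.050;       /* serviced every 50 ms */

        printf("headroom if servicing stops: %.1f s\n",
               buffer_scans / scan_rate);                           /* 8.0 s     */
        printf("scans accumulated per service: %.0f (%.1f%% of buffer)\n",
               scan_rate * service_period,
               100.0 * scan_rate * service_period / buffer_scans);  /* about 0.6% */
        return 0;
    }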

... actually it's kind of odd, because it used to be that if I didn't service my acquisition buffer fast enough, I got a buffer overflow error. But in this case, apparently, there is no DAQ error, just a blip in the trend showing an obvious span of time where there were no readings. I guess I need a signal generator on my desk to reproduce it.

I'm only using about 3-5% of the CPU while my app is just scanning, and I'm only using about 15% of the board's capacity. ...but of course while I print, the CPU is pegged.

Playing with the problem, I found out that printing from LV to my HP inkjet like a madman can screw up the communications on the USB bus. I get a print error and have to turn the printer off and back on and then restart the document. One thing seems to lead to another...

HP has some issues anyway, so it doesn't surprise me. There is a USB update on their website, so maybe it's their problem...

...I'm still looking at it...

Thanks Ben,
Kind Regards,
Eric
Message 18 of 18