liblvrtdark threading

I don't know if this is related, or whether this is a bug or an intentional feature, but in the dark library v8.2:

The "wait (ms)" block appears to ONLY yield control of the CPU if a 0 is wired to the input, not if any other number is used.  So a 10 second wait is done as a busy-wait rather than a thread sleep.  I have resorted to putting a "wait 0ms" in parallel with each "wait Xms" in order to yield control to another thread during the wait.

I know the documentation does say that it will yield if a zero is wired, but it seems a bit odd that ONLY a zero has this effect.  I haven't tested this on v8.5.
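For anyone unfamiliar with the distinction, the difference at the OS level is roughly the one between the two C sketches below (purely an illustration of busy-waiting versus sleeping; the function names are made up and this is not the runtime's actual code):

#include <time.h>

/* Busy-wait: what the dark library appears to do for a non-zero wait -
   spin on the clock without ever giving the CPU back to another thread. */
static void busy_wait_ms(long ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000L
             + (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

/* Sleeping wait: what one would expect - block the thread so the
   scheduler can run something else until the time is up. */
static void sleeping_wait_ms(long ms)
{
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}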
Message 11 of 18
There has always been a difference between a wait with a zero and anything else wired to it. On Windows (and Linux too, I assume) a zero causes a thread switch, and a non-zero value a sleep.
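In POSIX terms that distinction maps roughly onto sched_yield() for the zero case and nanosleep() for everything else (a minimal sketch of the idea, not the runtime's actual implementation):

#include <sched.h>
#include <time.h>

/* Rough model of the documented behaviour: 0 ms yields the rest of the
   time slice, any other value puts the calling thread to sleep. */
static void wait_ms(long ms)
{
    if (ms == 0) {
        sched_yield();                    /* let another ready thread run */
    } else {
        struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);             /* block until the time elapses */
    }
}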

What you are describing is probably related...

Regards,

Wiebe.


Message 12 of 18
Hello,

I tested a little with success.vi. With the normal runtime engine it takes only 10 seconds to finish; with the embedded engine it takes 60 seconds.
After I set the preferred execution system in the VI settings from "same as caller" to "standard" and compiled the application again, everything runs normally with the embedded runtime engine.
Maybe this is also a working solution for you.

Michael
Message 13 of 18
I compiled both VIs (success.vi and failure.vi) and made two new VIs (success1.vi and failure1.vi).
Success.vi and Success1.vi run fine (10 seconds).
Failure.vi and Failure1.vi do not run well (60 seconds).
I only changed the Preferred Execution System in the VI execution settings. For a test you can run the test_main program.

Message 14 of 18
Hi Tom,

Unfortunately it appears that you have come across an inherent error within the liblvrtdark runtime engine. A corrective action request has been opened with our Research & Development team in Austin, Texas, so that this issue may be addressed in future releases/patches of the engine in question. I understand that this is not terribly helpful for you personally, so I will make sure that you are notified of any progress made in the near future, and I wish you the best of luck with your project.

Best Regards,

Ian Colman
Applications Engineer
National Instruments UK & Ireland
Message 15 of 18
Thanks very much everyone for looking into this.  I now have a solution that seems to work well enough:
- use version 8.5 of LabVIEW
- tick the "embedded runtime" box and link against the dark library
- set the execution system to "standard" in the top level VI (This makes a big difference - thanks michhoefer!)
- put a "wait 0ms" into each loop to ensure that the thread yields control at some point during each iteration.

Of course it would be great to fix the underlying problem, but this work-around seems to be good enough to let me start deploying compiled applications. Thanks guys!
Message 16 of 18

I get an exception (segmentation fault) while executing a C program on Ubuntu Linux (Intel chipset) which links to a LabVIEW-generated shared library, libpid.so (attached). This library depends on liblvrtdark.so (the LabVIEW runtime library). Under strace we found that the segmentation fault happens when the program tries to load liblvrtdark.so.

The crash is observed roughly once in every 5 or 6 runs, not every time. We have attached the strace output (working.log and crash.log); working.log is the strace output from a run with no crash.

From our analysis, the following strace output is seen when there is no crash:

lstat64("/localhome/slxadmin/test/libpid.so", {st_mode=S_IFREG|0755, st_size=38357, ...}) = 0
access("/usr/local/lib/liblvrt.so.9.0", R_OK) = 0

and the following output is seen when there is a crash:

lstat64("/localhome/slxadmin/test/libpid.so", {st_mode=S_IFREG|0755, st_size=38357, ...}) = 0
access("", R_OK)                        = -1 ENOENT (No such file or directory)
--- SIGSEGV (Segmentation fault) @ 0 (0) ---

Please help us to resolve this.
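For reference, a minimal sketch of loading the two libraries explicitly with dlopen() and printing dlerror(), which turns a failed load into a readable message instead of a fault inside the loader (the paths are placeholders and this is only an illustration, not the code of the attached program):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the runtime first with RTLD_GLOBAL so its symbols are visible
       to libpid.so, then load the generated library.  The paths below are
       examples only, not the real deployment paths. */
    void *rt = dlopen("/usr/local/lib/liblvrtdark.so", RTLD_NOW | RTLD_GLOBAL);
    if (!rt) {
        fprintf(stderr, "liblvrtdark.so: %s\n", dlerror());
        return 1;
    }

    void *pid = dlopen("./libpid.so", RTLD_NOW);
    if (!pid) {
        fprintf(stderr, "libpid.so: %s\n", dlerror());
        return 1;
    }

    /* ... look up the exported PID entry point with dlsym() and call it ... */

    dlclose(pid);
    dlclose(rt);
    return 0;
}

Built with something like: gcc -o loadtest loadtest.c -ldl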
Message 17 of 18

Surfacing an old thread but this is still a problem in LabVIEW 2019.

I found just setting the top-level VI to standard wasn't enough. I had to assign execution systems to the loops that were blocking each other as well. I set them to different ones - not sure if that is critical or not. But now a network hold-up in one doesn't block the other. (This is without the wait 0 ms.)

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 18 of 18