11-26-2018 08:07 AM
I'm currently using NXG 2.1, and I've put together a front panel with a string indicator called "Messages" which I need to change periodically with a Case Structure. I've found two ways to do this, and I'm just wondering which one is "better" (i.e., which is the preferred, or perhaps more efficient method).
Method one involves creating a Reference to my Front Panel object "Messages" and attaching a Property Node, then writing my new string into that Node. The advantage here (FWIW) is that I can keep the "Messages" Reference outside of my Case Structure, so there only needs to be one, and the Property Nodes / string constants can be placed inside the individual cases.
Reference & Property Node
Method two involves right-clicking on the terminal for my "Messages" indicator and selecting "Create duplicate terminal". Using this method, each Case gets its own instance of the duplicate terminal and a new string constant.
Duplicated "Messages" Terminal
Is there any significant difference between these methods, or any advantage to using one over the other? Thanks for the help!
11-26-2018 09:21 AM
Okay, after some experimentation and benchmarking (with my somewhat rough-and-ready Benchmarking VI) I've determined that writing a constant directly to a duplicated terminal (method two) is significantly (about 17x over 200k iterations) faster / more efficient than the Reference + Property Node method. Hopefully this helps other people!
11-26-2018 09:32 AM
@JRiggles wrote:
Okay, after some experimentation and benchmarking (with my somewhat rough-and-ready Benchmarking VI) I've determined that writing a constant directly to a duplicated terminal (method two) is significantly (about 17x over 200k iterations) faster / more efficient than the Reference + Property Node method. Hopefully this helps other people!
In current gen, I have measured the property node method being thousands of times slower than a local variable, so I am a little surprised it was only 17x slower in NXG. Maybe there are some debug settings floating around to turn off (I do not have NXG, so I am not sure what settings are available).
11-26-2018 09:43 AM
The Property Node method being slower is certainly not surprising to me, but the speed difference between LabVIEW 2018 (I assume) and NXG does strike me as odd! That said, since I've yet to see an "official" benchmarking suite for NXG, I've just put together a VI with a Sequence Structure inside a For Loop. The sequence is as follows:
1: Get current "Tick Count" when the loop starts
2: This frame contains the VI I want to benchmark, and the Tick Count is passed directly through to frame 3
3: Get the current tick count and subtract the starting tick count from it to compute the elapsed time
This is repeated N times in the For Loop, and the results are averaged - some extra steps are done to remove outliers / glitches, get statistics, etc. My point being that I don't know if this is an acceptable way to benchmark things, but I figure consistency validates the method (if each VI is benchmarked using this same process) since I'm really only interested in relative differences in speed between VIs / methods.
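For readers outside LabVIEW, the tick-count scheme described above can be sketched in Python. This is a rough analogue, not the actual Benchmarking VI; the function name and the trimming fraction are illustrative choices:

```python
import statistics
import time

def benchmark(func, iterations=200_000, trim=0.05):
    """Time func() repeatedly, mimicking the three-frame sequence:
    get tick count, run the code under test, get tick count again."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()                  # frame 1: starting "tick count"
        func()                                       # frame 2: code under test
        samples.append(time.perf_counter() - start)  # frame 3: elapsed time
    # Drop the fastest and slowest 5% to remove outliers/glitches, then average.
    samples.sort()
    cut = int(len(samples) * trim)
    trimmed = samples[cut:len(samples) - cut] or samples
    return statistics.mean(trimmed)
```

As noted in the thread, absolute numbers from a harness like this are not very meaningful; it is the relative difference between two methods, measured the same way, that matters.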
11-27-2018 08:56 AM
Using property nodes in current gen LabVIEW (2018) generally involves a thread swap, which causes extra overhead. When you are just writing a value to a control, you can use local variables (duplicate terminals), but if you had something like a subVI and you wanted to change the value of a control from another VI, then local variables (duplicate terminals) wouldn't be an option.
11-27-2018 09:08 AM
That's great info - I'll keep that in mind! Thank you
11-27-2018 10:34 AM - edited 11-27-2018 10:35 AM
This thread prompts a new question.
As mentioned previously the current gen involves a thread swap for the property nodes to use the UI thread and prevent race conditions blah blah blah. I had understood that as being a hack. So on to the question...
Has NextGen fixed that issue and is capable of invoking property nodes without requiring use of the UI thread?
Just thinking,
Ben
11-28-2018 02:10 AM
@Ben wrote:
This thread prompts a new question.
As mentioned previously the current gen involves a thread swap for the property nodes to use the UI thread and prevent race conditions blah blah blah. I had understood that as being a hack. So on to the question...
I wouldn't say this is a hack. It's a fairly cheap way to synchronize multithreaded code with a single-threaded UI. The alternative would have been to add a mutex to every possible UI object in LabVIEW and rewrite every part of LabVIEW to make sure it goes through this mutex when accessing the object. That approach would have been a lot more work (several hundred sites in the code would have had to be visited, revised, tested, and counter-tested) and very error-prone, as it is extremely easy to miss some instance where the mutex needs to be used. One single missed mutex access can make the difference between an application that simply runs and one that crashes more or less frequently at seemingly random moments.
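The trade-off described here can be sketched in Python. Everything below is illustrative, not LabVIEW internals: instead of a mutex per UI object, all UI work is funneled through a single queue serviced by one "UI thread", and a cross-thread write blocks until that thread has run the job, much like the thread swap a Property Node write performs:

```python
import queue
import threading

class FakeUIThread:
    """Single-UI-thread model: widgets need no locks because only
    this one thread ever touches them; other threads submit jobs."""

    def __init__(self):
        self._q = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            job, done = self._q.get()
            if job is None:      # shutdown sentinel
                done.set()
                return
            job()                # runs on the "UI thread" only
            done.set()

    def invoke(self, job):
        """Called from any worker thread; blocks until the UI thread
        has executed the job (the 'thread swap' overhead)."""
        done = threading.Event()
        self._q.put((job, done))
        done.wait()

    def shutdown(self):
        done = threading.Event()
        self._q.put((None, done))
        done.wait()
```

The blocking round trip in `invoke` is exactly where the extra cost of a Property Node write comes from in this model, while a local variable write is more like touching the data directly with no queue in between.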
As LabVIEW NXG is pretty much a complete rewrite of the entire UI part of LabVIEW, which may well never run on platforms other than Windows aside from headless operation on realtime targets, I'm sure they changed that part in the process too.
11-28-2018 08:00 AM
@rolfk wrote:
@Ben wrote:
This thread prompts a new question.
As mentioned previously the current gen involves a thread swap for the property nodes to use the UI thread and prevent race conditions blah blah blah. I had understood that as being a hack. So on to the question...
I wouldn't say this is a hack. It's a fairly cheap way to synchronize multithreaded code with a single-threaded UI. ...
As LabVIEW NXG is pretty much a complete rewrite for the entire UI part of LabVIEW, with a high potential to never work on other platforms than Windows aside from the headless operation on realtime targets, I'm sure they changed that part in the process too.
OK, "hack" may be a little too far. "Kludge" maybe?
6i came out right after 5.1, when LV went multithreaded, so there may have been a scramble implementing the change from attribute nodes to property nodes.
Were you still with NI at that time?
Just throwing out words, don't pay me much mind.
Ben
11-29-2018 02:05 AM
@Ben wrote:
6i came out right after 5.1 when LV went multithreaded so there may have been a scramble implementing the change from attribute nodes to property nodes.
Were you still with NI at that time?
Just throwing out words, don't pay me much mind.
Ben
I believe it was actually LabVIEW 5.0 that came out with multithreading support. And no, I left NI in the fall of 1996 to move to the Netherlands, so that would be after the release of 4.0, as seen here.
I think the release of multithreading in 5.0, on top of such different threading models as WinThreads, POSIX threads / pthreads (Linux), Unix International Threads (Sun Solaris), and even an attempt to support Copland threads (a canceled version of Mac OS 8), was a real feat, and it worked very well right from the start.
At that time, the decision was made to leave the controls operating in the UI thread, since on Windows at least, calling the corresponding APIs from other threads was sooner or later a sure way to get into trouble. The typical multithreaded application back then consisted of a main thread that ran the application message loop and handled all user interaction, plus other threads created only for specific background tasks that did not usually attempt to access the UI directly. I'm not sure whether the X Window interface LabVIEW was/is using on Linux platforms was fully multithreading-safe back then, but Windows wasn't, and the message loop couldn't be run in any thread other than the main thread created when the process started.
Whether it's Attribute Nodes or Property Nodes doesn't really matter; the decision about how the entire UI handling had to operate in multithreaded LabVIEW had already been made for 5.0, and it was working so well that any attempt to change it afterwards would only have meant lots of work for a potentially much less stable solution across several LabVIEW releases.