LabVIEW Idea Exchange


As LabVIEW evolves, the compiler takes over more and more code optimisation for us.  This leads to situations where relatively large and important pieces of code can be evaluated at compile time and constant folded, which can greatly aid execution speed.  This is good.

 

Constant folding can be a great aid when programming, but at the moment its usage is a bit "hit and miss" due to the opaqueness of the process.  We already have constant-folding highlighting, which really helps (even if the feedback is sometimes very hard to understand), but it doesn't always give us enough feedback.

 

What I would like is the option to declare a portion of code as "Requires constant folding" (like a "Precompile" structure).  In this way, I can, as a programmer, designate some code which is meant to be evaluated at compile time.  If the compiler is unable to constant fold this code, then the VI should be broken.  My motivations are three-fold.

  1. Sometimes we want to specifically make use of the constant folding capabilities of the compiler, but a small change can result in the code no longer being constant folded.  I would like explicit feedback when code I want constant folded is not constant foldable.
  2. I have no idea whether code complexity has an effect on the ability to constant fold.  Other compiler optimisations (like unbundle-unbundle inplaceness) are dependent on code complexity.  Explicit declaration is not dependent on code complexity.
  3. When looking at FPGA designs, the ability to perform constant folding of data otherwise requiring resources or affecting performance is very powerful.  In such a "Constant folding" code, we could also allow mathematical functions to be used which are otherwise not supported on the target (max/min of an array in a timed loop for example), or creating default data for an array (to be used as Block RAM) based on an existing equation where constants are defined as DBL.

 

One example of FPGA code is automatic latency balancing of several parameter pathways into a process, where the code accepts abstract parameter objects whose latency is queried via a dynamic dispatch VI that simply returns a constant.  I use dependency injection to tell the sub-VIs which communication pathways they are being given; they can then query the latency and do some static timing calculations for the delays required on different pathways.  Tests have shown that this is constant folded, and that it is thus possible to write very robust FPGA code which auto-adjusts request indices for parameters in multiplexed code.  At the moment things seem to work, but the ability to specifically designate such code as constant folded would be welcome, to make sure I don't accidentally produce a version which doesn't actually return a constant (and my compiles fail, I get timing errors, or I just over-use resources).  In the code below, all of the code circled in blue is constant folded when compiling the FPGA code.  In the sub-VIs I have to do some awkward calculations because certain functionality is not available on FPGA.  By defining this code as requiring explicit constant folding, I could theoretically utilise the full palette of LV functions and also be guaranteed a compile error (a LabVIEW error, not a Xilinx one) if the designated code cannot be constant folded.

 

2016-09-15 10_13_00.png

 

So in a way it's similar to the In Place Element structure, which, when all goes well, should not be needed; but there are cases (I've run into some myself) where either small changes in code can make the desired operation impossible, or the code complexity can cause the optimisation not to be performed.  As such, it is still necessary at times to explicitly designate some code paths as in-place.  I would like to have the same functionality for constant folding.
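For comparison, text-based languages already offer exactly this guarantee: in C++, marking a function consteval forces the compiler to evaluate it at compile time and breaks the build if it cannot. A minimal sketch of the semantics the proposed structure would give a VI (pipeline_latency is an invented stand-in for the latency query described above):

```cpp
#include <array>

// consteval means "must be evaluated at compile time, or the build breaks" --
// the guarantee the proposed "Precompile" structure would give a VI.
consteval int pipeline_latency(int stages, int delay_per_stage) {
    return stages * delay_per_stage;
}

int main() {
    // Folded to the constant 24 at compile time. If the arguments were not
    // compile-time constants, this would be a compile error, not a silent
    // fall-back to run-time evaluation.
    constexpr int latency = pipeline_latency(8, 3);
    std::array<double, latency> delay_line{};  // e.g. sized like an FPGA Block RAM
    return static_cast<int>(delay_line.size());
}
```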

I was searching for occurrences of references to a Graph in one VI and, as I was interrupted, came back to the search results after the interruption, only to discover that the Search Results window did not actually show ANY kind of useful information about the object whose references I was searching for:

 

Screen Shot 2014-08-19 at 18.03.18.png

 

I know I have outrageous expectations as a LabVIEW user, but this seems to me an odd omission:

 

- From this window, I have absolutely no clue what I am searching for, in particular if I have in the meantime jumped from window to window...

- ...there is no way to go back to the object these references are linked to (unless I go to one of the references and then look for the Control or Indicator they are associated with).

 

Of course, asking for the VI's information when it is provided in the list below may be unnecessary.

But consider this global variable whose references I was looking for:

 

Screen Shot 2014-08-20 at 10.12.34.png

 

Same thing here:

- I do not know the type of the global.

- I do not know which VI it is part of (Globals are saved in a VI).

- I do not know where I started my search from (but that's more of a back-to-source-button issue).

 

Suggestion: provide as much information as possible about the starting point of the search, when said starting point is an object (as opposed to a text search).

 

Tested in LV 2013 SP1, 64-bit.

The number of parallel instances is currently capped at 64, independent of hardware. This limit should be raised.

 

First Reason: Since even 64-bit Windows 7 supports up to 256 cores, it would be reasonable to raise that limit to 256.

 

(Even the next version of Windows Mobile (8) will support 64 cores. Mobile! On a phone! 🐵 Obviously the upcoming hardware is moving fast in that direction.)

 

Second Reason: Sometimes it is useful to generate many instances even if we have fewer cores available, for example to maintain individual data in a large number of identical reentrant subVIs. (A usage example where we want many instances even on a single-core machine can be found here.)
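For reference, text-based languages place no hardware-tied cap on logical parallelism; a hypothetical C++ sketch of 256 workers, each holding its own data, which runs fine even on a machine with far fewer cores:

```cpp
#include <numeric>
#include <thread>
#include <vector>

int main() {
    constexpr int kInstances = 256;         // far more instances than cores is fine
    std::vector<long> results(kInstances);  // individual data for each instance

    std::vector<std::thread> workers;
    workers.reserve(kInstances);
    for (int i = 0; i < kInstances; ++i) {
        // Each "instance" owns slot i -- the per-instance state described above.
        workers.emplace_back([i, &results] { results[i] = static_cast<long>(i) * i; });
    }
    for (auto& w : workers) w.join();

    long total = std::accumulate(results.begin(), results.end(), 0L);
    return total > 0 ? 0 : 1;  // trivially consume the results
}
```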

 

Idea: Raise the max number of parallel Instances of a parallel FOR loop to 256.

Since LabVIEW 2017, it has been possible to build applications that are compatible with future versions of the run-time engine.

This option is set by default but can be disabled.

 

I just discovered that this option is set for real-time applications and cannot be unset. That is, if you build your application with LabVIEW Real-Time 2017, it will run on a system with a newer version of LabVIEW Real-Time installed.

 

This can be a good idea, but I'm a little surprised that I can find no information about this option for real-time applications and that I can't control it.

 

Here is a way to test it (tested on a real-time desktop target with Pharlap):

1. Install the RT target with LabVIEW 2017.
2. Build an application and set it to run as startup. A simple application writing something to the console is enough.
3. Make sure your application is running at startup.
4. Update your system by installing only LabVIEW Real-Time 2019.
5. Restart your system: your application is still running!

 

Because I faced an issue where LabVIEW 2020 broke my application built in LabVIEW 2017, I ask myself how NI can guarantee that a real-time system will work in every case if the system is upgraded to a higher version of LabVIEW Real-Time without recompiling the application.

 

Real-time systems can be used to control safety-critical systems. If a user updates a system by mistake, I want my system to remain safe for the user.

 

So my idea is to remove this option, or give the user the ability to deselect it, to avoid any bad behavior.

 

Best regards

There are already a couple of ideas about retaining wire values on a hierarchy (here and here). This is a request to disable (or toggle) the 'retain wire values' option on all VIs of a hierarchy, in the same vein as 'disable breakpoints on hierarchy'. This request was spurred by an investigation into VI performance when resizing a very large 2D array of strings: several memory allocation strategies and lookup strategies (arrays, maps, variants) had been exhausted, only to discover that a subVI (whose front panel and diagram had been closed) was set to retain wire values. Merely toggling the retain-wire-values setting off on this particular subVI improved performance by approximately one order of magnitude for this particular project.

 

Investigating and accurately measuring run-time LabVIEW performance can be challenging with so many debugging tools available, the impacts of which are not immediately obvious. For this reason, I would like to see a menu option that recurses on VI hierarchy and toggles the 'retain wire values' option, in the same vein as the option which removes all breakpoints from VI hierarchy. The option would be very helpful with run-time performance investigation or optimization.

It's been great to be able to find Items with No Callers in Project. How about Find Broken VIs as well?

 

FindBroken.png

Currently there are no officially supported frameworks for Unit Testing in LabVIEW for Linux.

 

The lack of a unit testing framework in LabVIEW for Linux reduces LabVIEW's usability for widely recognized, industry-standard software engineering practices.

 

A Unit Test Framework created by NI already exists, as well as a free third-party tool, VI Tester by JKI. However, neither of these is available for desktop Linux (or Macintosh).

 

NI LabVIEW Unit Test Framework Toolkit

 

VI Tester - JKI

https://github.com/JKISoftware/JKI-VI-Tester/wiki 

When building installers for an updated version of an executable, you can't select whether or not to overwrite the current source-files directory. This is a problem when distributing to computers that need different configuration files: the files are installed with the original version but have since been customized to the specific computer. If you try to include only certain source files and not others, the installer still overwrites the entire directory and you lose the files you were trying to save. If there were an "Overwrite" option in the Source File Settings window, it could be checked by default for the same default functionality, but you could uncheck it to allow writing into existing directories.

 

Source File Settings.png

 

If you like this idea then give me some Kudos and hopefully we can get it in the next version of LabVIEW!

 

Peter W.

Applications Engineering

National Instruments

www.ni.com/support

I want to be able to work on STABLE versions of LV.

 

The last great stable version I remember was 6.1 (I never had 7.1).

 

2009 and 8.5.1 were not bad, but please give us a feature-fixed, long-term-support version of LabVIEW.

 

For anyone unfamiliar with the idea, many Linux distributions offer the same:  Here's a link to the Ubuntu webpage outlining THEIR LTS strategy.

 

Shane.

I really think this is a bug, but I'm going to file this as a feature request, just to add more weight to the issue...

 

The VI Server Application method Library.Get File LV Version claims to be able to tell you the version of a Library (or XControl, XNode, LVClass) on disk, without loading it into memory.

 

noname.gif

 

Here are the docs:

 

12-1-2010 4-18-09 PM.png

 

But, when you try to call this method on a Library saved in a version of LabVIEW newer than the version of LabVIEW in which the method is executing, you get the following error:

 

Error 1125 (LabVIEW: File version is later than the current LabVIEW version.)

 

Why an error?  It had to read the version in order to generate the error.  Why not just return the version?
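Presumably the internals look something like the following sketch (entirely hypothetical; the struct, function, and parsing step are invented for illustration and are not LabVIEW's actual code): the version field has to be parsed before it can be compared, so returning it instead of erroring should cost nothing.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical sketch of the presumed internal logic -- names invented.
struct Version { int major_ver; int minor_ver; };

Version GetFileLVVersion(const std::string& path, Version current) {
    Version file_ver{20, 0};  // stand-in for "parse the version field from the file header"
    if (file_ver.major_ver > current.major_ver) {
        // The version is already known at this point...
        throw std::runtime_error("Error 1125: file version is later than current LabVIEW");
    }
    return file_ver;  // ...so it could simply be returned instead of raising the error
}
```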

 

Thanks in advance for your kudos 🙂

When I installed LabVIEW, I realized that a really large number of automatically starting programs is installed. Even when LabVIEW is running, not all of them are really needed, yet they still sit in the PC's memory and take power that other applications need; and that is the case even when LabVIEW is not running at all. A possibility should be implemented to deactivate these automatic applications:

- Reduce the number of autostart applications to those that are really needed.
- Give the possibility to switch them all off when there is no intention to run LabVIEW. A reboot to re-activate these autostart applications would not be a problem.
- As a minimum, give a list of the autostart applications that are needed for each LabVIEW application. This would help to deactivate the autostart programs manually.

The Desktop Execution Trace Toolkit (DETT) can trace enqueue/dequeue operations, so I would think this is feasible:

 

Add a mechanism, similar to the Profile > Performance & Memory view, to display all active queues in memory by name (for "unnamed" ones, use whatever unique refnum/identifier is available), as well as the max size and current number of items in each queue. You could use the same "Snapshot" functionality as the Profiler tool.

 

A particular use case:

We were tracking a memory leak in a large application that resulted from an unbounded queue whose consumer was disabled. The standard DETT and Profile tools weren't showing where the excess memory consumption was coming from, since queue data does not "belong" to a single VI. Granted, you can see the individual enqueue/dequeue operations in DETT, and even highlight pairs, but that's a little cumbersome in a large application.
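Pending such a tool, the mechanism could be as thin as an instrumented wrapper around the queue operations. A hypothetical sketch (all names invented) of the kind of registry a debug snapshot could read, in C++ terms:

```cpp
#include <cstddef>
#include <map>
#include <mutex>
#include <string>
#include <utility>

// Hypothetical registry the enqueue/dequeue wrappers report into, so a debug
// "snapshot" can list every queue by name with current and max observed depth.
class QueueRegistry {
public:
    void OnEnqueue(const std::string& name) {
        std::lock_guard<std::mutex> lock(mu_);
        Stats& s = stats_[name];
        ++s.current;
        if (s.current > s.max) s.max = s.current;
    }
    void OnDequeue(const std::string& name) {
        std::lock_guard<std::mutex> lock(mu_);
        --stats_[name].current;
    }
    // Returns {queue name -> (current depth, max depth)} at this instant.
    std::map<std::string, std::pair<std::size_t, std::size_t>> Snapshot() {
        std::lock_guard<std::mutex> lock(mu_);
        std::map<std::string, std::pair<std::size_t, std::size_t>> out;
        for (const auto& [name, s] : stats_)
            out[name] = {s.current, s.max};
        return out;  // a leaking queue shows up as a depth that only ever climbs
    }

private:
    struct Stats { std::size_t current = 0; std::size_t max = 0; };
    std::mutex mu_;
    std::map<std::string, Stats> stats_;
};
```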

Hi, I wanted to suggest the creation of a separate utility that would convert a VI from any version to any other version. This would save people a lot of time, since they would no longer have to wait for conversions in their respective threads. It would also let more people reply on the forum (me included, since I am using LabVIEW 8.6 and most posts contain VIs made in 2009 and 2010, even though the same functions are usually available in 8.6 😞).

 

 

PS: Sorry, got no pictures

 

I would like to see the Join and Split Numbers functions become expandable and polymorphic. I'm not arguing big vs. little; just accept that there are two Endian worlds and work with them. Have you ever joined two or four numbers from a data stream in Little Endian? You have to change the order and cross wires, as shown in figure 1.

Join Numbers Figure 1.GIF
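The text-language equivalent of figure 1's crossed wires, as a short C++ sketch (assuming the bytes arrive in stream order):

```cpp
#include <cstdint>

// Join two bytes from a stream into a 16-bit word, in both Endian worlds.
uint16_t join_big_endian(uint8_t first, uint8_t second) {
    return static_cast<uint16_t>((first << 8) | second);  // first byte is the high byte
}

uint16_t join_little_endian(uint8_t first, uint8_t second) {
    return static_cast<uint16_t>((second << 8) | first);  // operands swapped: the "crossed wires"
}
```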

This is not that clean and it gets worse when you need to split numbers and send them back to a device. Because I join and split a lot of numbers I created a library of vi’s that are clean on the diagram and visually indicate Big vs. Little Endian. A simple arrow works for me to indicate Big vs. Little Endian shown in figure (2). This Library is also attached.

Join Numbers Figure 2.GIF

I know that joining and splitting two numbers in Big Endian is the same as the native LabVIEW function; the wrappers simply provide visual consistency on my block diagrams. An example of block diagram code that shows the difference between the Big and Little Endian forms is shown below in figure 3.

Join Numbers Figure 3.GIF

Here is what I would like to see National Instruments create: make the Join and Split Numbers functions expandable and polymorphic. Words are only going to get bigger, and there will always be two Endian worlds. Make the Join and Split Numbers VIs expand like the Build Array function: click and drag, possibly in groups of two, i.e. 2, 4, 6, and 8 inputs (or outputs for the Split Numbers VI). Of course, the output data type would correspond to the number of input connections. The polymorphic examples are shown below in figure 4.

Join Numbers Figure 4.GIF

To take the polymorphic function one step further, it could include the data type. There are times when I need to join numbers and convert to a signed integer or a double-precision float. A demonstration of the polymorphic data type is shown in figures 5 and 6, with before and after examples.

Join Numbers Figure 5.GIF

Expanding the functionality of the Join and Split Numbers VIs would reduce block diagram clutter, increase coding speed, and maintain visual readability. What do you think?

 

I've encountered a programming situation where I may need to call 'Match Regular Expression' with a regex selected at runtime, where that regex may have a variable number of submatches to return.  Unfortunately, right now the submatch count is a compile-time decision based on how far out I grow the node.  I can grow the node to the maximum number of submatches ever expected, and thankfully the node doesn't throw a runtime error if there are fewer, or more, submatch expressions in the regex.  I'm building the individual returns into a string array for further processing, but it would be much more versatile if the node could return the submatches as a properly sized string array.
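For comparison, this is what a dynamically sized result looks like with C++'s std::regex, where the match object simply reports however many capture groups the runtime-selected pattern happens to have:

```cpp
#include <iostream>
#include <regex>
#include <string>
#include <vector>

int main() {
    std::string pattern = R"((\d+)-(\d+)-(\d+))";  // chosen at runtime in the real use case
    std::string input   = "date: 2016-09-15";

    std::smatch m;
    std::vector<std::string> submatches;
    if (std::regex_search(input, m, std::regex(pattern))) {
        // m.size() is 1 + number of capture groups -- sized by the pattern, not the caller
        for (std::size_t i = 1; i < m.size(); ++i)
            submatches.push_back(m[i].str());
    }
    for (const auto& s : submatches) std::cout << s << '\n';  // 2016, 09, 15
}
```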

Hi,

 

Currently, if a VI is set to subroutine priority, you can only call subVIs within it that are also set to subroutine priority (to prevent priority inversion, I guess).

 

It would be great if it were also possible to use inlined subVIs inside subroutine VIs.

 

As inlining basically defeats a VI's priority setting, an inlined subVI would just "inherit" the subroutine priority of its caller. I configure many of my very small reuse VIs as inlined (most of those in the GPower Error & Warning toolset from v2.1 onwards, for instance), since they typically perform much better that way than as subroutines. But since they are configured as inlined, this effectively prevents them from being (re)used inside subroutine VIs.

 

Cheers,

Steen

Hi all,

 

After spending a week getting an embedded HTTP server working in LV for a single application, I have some remarks that might motivate a more flexible, simple, and open HTTP configuration. The current HTTP server implementation is, in my opinion, quite limited and outdated.

First, the NI Web Server. This is a nice feature, and NI recommends using it rather than the outdated Application Web Server. The problem is that it is a single server running on a single port (shared by every application executable). That is good enough for a single web server on a host accessed from a browser, but what about implementing a LV HTTP server for each application (e.g. an RPC server)? To my knowledge, most other programming languages (e.g. Python) or their common libraries (e.g. for C++) provide an implementation for this.
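As a point of comparison, here is roughly what "one tiny HTTP endpoint per application, on its own port" costs in a text language: a minimal, single-threaded POSIX-sockets sketch (the port number and response are placeholders, and error handling is omitted):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <string>

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(8090);  // each application executable picks its own port
    bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(srv, 4);

    for (;;) {  // one request at a time -- enough to illustrate a per-app RPC endpoint
        int client = accept(srv, nullptr, nullptr);
        char buf[4096];
        read(client, buf, sizeof(buf));  // request body is ignored in this sketch
        std::string reply =
            "HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok";
        write(client, reply.data(), reply.size());
        close(client);
    }
}
```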

I have spent a lot of time looking for the best way to implement an HTTP server belonging to a single application executable in LV. Such an executable is typically an application GUI or a backend service in our projects, and we have a lot of them. Every application needs its own RPC server (running on a different port) running its own RPC methods. I ended up implementing a Web Service using a LV Application Web Server; at this moment I can't see another way using core LV functionality without installing additional packages.

I also miss the ability to enable and disable the HTTP server at runtime. In our project applications we also have other transport layers for implementing RPC, such as ZeroMQ (thanks to Martijn Jasperse's library on VIPM) and TCP (native to LV). I would like to run only one of these transport services, chosen by configuration, but here is the second problem: once the application is running, the HTTP Web Service automatically registers itself, and there is no controlled way to disable it at runtime. This gives me headaches, since I have to change the port number because another transport layer cannot use the same port as the HTTP server. One might say to build another (actor-based) exe and implement the Web Service there in a different actor, but it is a pain to have two exes for each single application. Why can't the HTTP Web Service be switched OFF and ON again, both in development and at runtime? I found a property node to disable the server, but it apparently doesn't work (it seems related to the native panel web server).

One of the major disadvantages I encountered is that the HTTP methods are programmed in single VIs, and there is no way to pass data to these method VIs (as you would with the Actor Framework or even classes in general). It seems we have to use FGVs (functional global variables) to share data between my main application actor and the HTTP method VIs. Even then, the HTTP Service Request refnum is only valid within the HTTP method VI itself: once the VI finishes executing, the refnum is flushed and no longer valid, so there is no way to pass this refnum to my application actors via Actor Framework messages. That is quite frustrating, since instead I have to use notifiers inside the HTTP method VIs as a plan-B solution, signalling that the method VI can proceed and finish once its Wait on Notification function completes (because I want to send the answer from my application actors, not from the HTTP method itself)!

Another issue I observed is that I can't "Start" the HTTP Web Service from the right-click menu in the project explorer; it simply fails with the dubious error that the 'system is currently in an invalid state for the current message'. What does this mean? No clue from the NI help docs.

 

Arrowin_0-1715670098941.png

 

I can only right-click and select "Start (Debug Server)" to make it work (but on the debug port, 8001 by default). All other options just fail; the same goes for "Publish", which simply doesn't work in my LV 2020 SP1 (32-bit) version, and I have no clue why, as there is not a single error message at all!

Also, why must we use MS Silverlight to control application web servers from LV? Silverlight is deprecated, and I ended up using MS Edge in "Internet Explorer" mode to get the config page working (after spending another two hours finding that out). Even then, some config panes just show error dialogs, and there is no way to see the active services registered by the application HTTP web server. In the end I used TCPView to see the active services. It is always frustrating to need third-party apps to do simple things.

 

As you might notice from this message, I spent a lot of days figuring out how to implement HTTP in a simple, decent way using LV's core HTTP functionality. I wonder if this will be better in LV 2024 Q1?
If anyone has ideas on how to properly configure multiple application HTTP servers, one per application on different ports and each controlled by its own application, please share them with me. I am open to any ideas and wonder if there are other solutions for HTTP implementation (not using third-party packages). In my opinion, HTTP should be easy and open to configure properly in LV, without all the current non-working Web Server issues.
Please note that I tried reinstalling the NI Web Server and other web-service-related components using NI Package Manager, to no avail.

 

Best Regards,

Davy Anthonissen

In LabVIEW it is not possible to have an array of arrays. Something that gets you close is a 2D array, but there each row must be the same size. You can work around this limitation with an array of clusters of arrays. Many languages do support arrays of arrays.
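For instance, in C++ a jagged array is just a vector of vectors:

```cpp
#include <vector>

int main() {
    // An array of arrays whose rows have different lengths -- the jagged
    // structure that LabVIEW approximates with an array of clusters of arrays.
    std::vector<std::vector<int>> jagged = {
        {1},
        {2, 3, 4},
        {5, 6},
    };
    return static_cast<int>(jagged[1].size());  // each row keeps its own size (here: 3)
}
```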

Hi,

I want to start a discussion about how to enhance loop conditional terminals in LabVIEW. Generally, my idea is to have an easy way to monitor the conditional terminal of a user-defined "primary" loop. Under the hood there could be, for instance, a notification triggered from the "primary" loop, and one or more "slave" loops equipped with "Wait on Notification" (with timeout = 0) and a predefined logical operation on the terminal input.
This allows you to have one STOP source loop and one or more listeners.
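In text-language terms the mechanism is a shared stop signal with one writer and many readers — a minimal C++ sketch, assuming an atomic flag stands in for the notifier:

```cpp
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

int main() {
    std::atomic<bool> stop{false};  // plays the role of the notifier

    // Two "slave" loops listening for the stop condition (timeout = 0 poll).
    std::vector<std::thread> listeners;
    for (int i = 0; i < 2; ++i)
        listeners.emplace_back([&stop] {
            while (!stop.load()) {  // check the shared terminal each iteration
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
        });

    // The "primary" loop decides when everyone stops.
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    stop.store(true);

    for (auto& t : listeners) t.join();
}
```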

 

sopping loops.png


Anyone want to expand on this idea?

Create an XY Graph and feed it a time-stamped XY plot with a few hundred thousand points... and you have yourself a very sluggish and possibly crash-prone application. The regular graph can take a bit more data, but still has its limits. Having 100k points to display is quite common (in my case it's most often months of 1-second data).

 

The idea could be formulated as just "improve how graphs handle large data sets"... but how that would be done depends a bit on what optimizations the graph code is open to. The most effective solution, however, would probably be to do what you currently have to write yourself: surrounding decimation logic.

 

So my suggestion is to add a built-in decimation feature, where you can choose to have it operate automatically when needed (or when you say it is needed), possibly with a few different decimation methods (min-max, every Nth point, etc.). The automatic mode should be on by default, making the problem virtually invisible to the novice user.
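For reference, the min-max variant is cheap: each horizontal bucket keeps only its extremes, so spikes survive while the point count drops to roughly twice the pixel width. A C++ sketch, assuming evenly spaced samples:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Min-max decimation: reduce `data` to at most 2*buckets points while
// preserving spikes -- each bucket contributes its minimum and maximum.
std::vector<double> decimate_min_max(const std::vector<double>& data, std::size_t buckets) {
    if (data.size() <= 2 * buckets) return data;  // nothing to gain
    std::vector<double> out;
    out.reserve(2 * buckets);
    for (std::size_t b = 0; b < buckets; ++b) {
        std::size_t lo = b * data.size() / buckets;
        std::size_t hi = (b + 1) * data.size() / buckets;
        auto [mn, mx] = std::minmax_element(data.begin() + lo, data.begin() + hi);
        out.push_back(*mn);
        out.push_back(*mx);
    }
    return out;
}
```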

 

A big advantage of doing it within the graph is that it will (or should) integrate fully with the graph's other features, like zooming, cursors, etc.