LabVIEW Idea Exchange


This is a simple suggestion: return all variant attributes from a variant in the correct data type when the name terminal is left unwired but the default value is wired.

 

As you probably know, if you leave the name and value terminals unwired, the Get Variant Attribute function returns arrays of the names and values (as variants) of all the variant attributes. If you wire the name (and optionally a default value), the node adapts and returns only a single value along with the 'found?' output.

 

Here is a diagram to show you what I'm talking about:

2015-02-18_11-55-39.png

 

Currently, if you leave the name unwired but wire the default value, this results in a broken block diagram.
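For readers more at home in text languages, here is a rough Python analogy of what is being asked for, treating the variant attribute table as a dict (the attribute names and values are invented for illustration):

```python
# All attributes happen to share one data type (float), as the idea assumes.
attrs = {"gain": 1.5, "offset": 0.2, "scale": 3.0}

# Name wired (plus a default): a single typed lookup, like today's behavior.
value = attrs.get("gain", 0.0)

# Name unwired today: you get names plus untyped values ("variants"),
# and each value needs its own conversion step afterwards.
names = list(attrs.keys())

# The proposal: name unwired but default wired -> all values come back
# already in the default's type, no per-element Variant To Data needed.
typed_values = [float(v) for v in attrs.values()]
```

The point of the analogy is the last line: the conversion happens once, inside the lookup node, instead of once per element on the diagram.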

 

The reasons for this suggestion are as follows:

  • Cleaner block diagrams - if you know all of the variant attributes share one data type, you save the extra constant/Variant To Data node
  • Possible performance improvements - maybe NI does (or could do) something to improve performance/memory allocations when the data type is known inside the node
  • I can't see a case where it would break compatibility - the only change is that a diagram with the default value wired but not the name would no longer be broken, while runtime functionality stays the same
  • As variant attributes are a very efficient and recommended way of implementing key/value lookup tables, this minor change would tidy things up nicely - and if there are performance gains to be had under the hood, all the more reason to do it!

Thanks for reading and hope you'll +1 my idea!

 

 

A pretty common use case of LVCompare in my workflow is using it as a diff tool in SVN to compare different versions of a VI. When I do that, the previous version is downloaded into a temp directory, and then there is a fair amount of load time because dependency paths have to be resolved differently for the version in the temp directory, and some recompiling happens. For top-level VIs in large applications, it seems like the whole dependency tree gets loaded, which takes a long time. But really, for comparing VIs, there is no need to load the contents of lower-level subVIs (and their dependencies, and dependencies of dependencies, etc.). As long as the connector pane, the typedefs on the connector pane, and the icon of a subVI are loaded, that should be enough information for a visual diff of the top-level VI.

Hi all,

 

After spending a week getting an embedded HTTP server working in LabVIEW for a single application, I have some remarks that might prompt a more flexible, simple, and open HTTP configuration. The current implementation of an HTTP server is, in my opinion, quite limited and outdated.

First, the NI Web Server. This is a nice feature, and NI recommends using it rather than the outdated Application Web Server, but the problem is that it is a single server running on a single port, shared by every application executable. That is good enough for one web server on a host accessed from a browser, but what about implementing a LabVIEW HTTP server for each application (e.g., an RPC server)? To my knowledge, every other programming language (e.g., Python, C++, ...) has a core implementation for this.
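For comparison, here is roughly what that per-application pattern looks like in Python's standard library: each process opens its own HTTP server on its own port and can shut it down at runtime (the `RpcHandler` name and `/ping` route are made up for illustration):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class RpcHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A trivial RPC-style endpoint: every GET answers "pong".
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):
        pass  # silence per-request console logging

# Port 0 asks the OS for any free port, so every application instance
# can own its own server without colliding with others.
server = ThreadingHTTPServer(("127.0.0.1", 0), RpcHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/ping").read()

# The transport can be switched off (and later recreated) at runtime:
server.shutdown()
server.server_close()
```

The key points are that the server belongs to the process that created it, the OS hands out a free port per instance, and `shutdown()` gives exactly the controlled runtime disable asked for below.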

I have spent a lot of time looking for the best way to implement an HTTP server belonging to a single application executable in LabVIEW. Such an executable is typically an application GUI or a backend service in our projects, and we have a lot of them. Every application needs its own RPC server (running on a different port) exposing its own RPC methods. I ended up implementing a Web Service using the LabVIEW Application Web Server; I can't see another way at the moment using core LabVIEW functionality without installing additional packages.

I also miss the ability to enable and disable the HTTP server at runtime. In our project applications we also have other transport layers for RPC, such as ZeroMQ (thanks to Martijn Jasperse's library on VIPM) and TCP (native to LabVIEW), and I would like to run only one of these transports, selected by configuration. But here is a second problem: once the application is running, the HTTP Web Service registers automatically, and there is no controlled way to disable it at runtime. That gives me headaches, since I have to change the port number because another transport layer cannot use the same port as the HTTP server. One might suggest building a second (actor-based) executable and implementing the Web Service there in a different actor, but it is a pain in the *** to ship two exes for each application. Why can't the HTTP Web Service be switched OFF and ON again, both in development and at runtime? I found a property node to disable the server, but it apparently doesn't work (it seems related to the native panel web server).

Another major disadvantage I encountered is that HTTP methods are each programmed in a single VI, and there is no way to pass data into these method VIs (as you would with the Actor Framework, or even classes in general). It seems we have to use FGVs (Functional Global Variables) to share data between my main application actor and the HTTP method VIs. Even then, the HTTP Service Request refnum is only valid inside the HTTP method VI itself; once the VI finishes executing, the refnum is flushed and no longer valid, so there is no way to pass it via Actor Framework messages to my application actors. That is quite frustrating: as a plan B, I have to use notifiers inside the HTTP method VIs, so the method VI can only proceed and finish once its Wait on Notifier completes (since I want to send the answer from my application actors, not from the HTTP method itself)!

Another issue I observed is that I can't "Start" the HTTP Web Service from the right-click menu in the project explorer; it simply crashes with the dubious error that the 'system is currently in an invalid state for the current message'. What this means is anyone's guess; the NI help docs give no clue.

 

Arrowin_0-1715670098941.png

 

I can only right-click and select "Start (Debug Server)" to make it work (on debug port 8001 by default). All other options just fail; the same goes for "Publish". It simply doesn't work in my LabVIEW 2020 SP1 (32-bit) version, and I have no clue why, as there is not a single error message at all!

Also, why must we use MS Silverlight to control application web servers from LabVIEW? Silverlight is deprecated, and I ended up using MS Edge in "Internet Explorer" mode to get the configuration page working (after spending another two hours figuring that out). Even then, some configuration panes just show error dialogs, and there is no way to see which services are registered by the application HTTP web server. In the end I used TCPView to see the active services. It is always frustrating to need third-party apps to do simple things.

 

As you might notice from this message, I spent many days figuring out how to implement HTTP in a simple, decent way using LabVIEW's core HTTP functionality. I wonder if this is any better in LabVIEW 2024 Q1?
If anyone has ideas on how to properly configure a separate HTTP server per application, each on its own port and controlled by the application itself, please share them with me. I am open to any ideas and wonder whether there are other solutions for HTTP (not using third-party packages). In my opinion, HTTP should be easy and open to configure properly in LabVIEW, without all of the current non-working Web Server issues.
Please note that I tried reinstalling the NI Web Server and other web-service-related components using NI Package Manager, but to no avail.

 

Best Regards,

Davy Anthonissen

I would like to have an XY Chart that has the same functionality as the Waveform Chart, but with x and y inputs instead of just y. The XY Graph is not efficient to use inside a loop, mainly because it redraws the plots each iteration, and I've had a hard time making a buffer as efficient as the Waveform Chart's. The Waveform Chart does not allow you to define the x-axis as anything other than a fixed interval. For example, I might want to plot pressure vs. flow rate. Many customers also request the ability to change the sampling rate during an experiment; this would be much easier to handle with an XY Chart.
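A text-language sketch of the kind of buffer such a chart would maintain internally: a fixed-capacity history of (x, y) pairs with cheap appends, oldest points silently dropped. Python's `deque` is used here purely as an analogy, and the sample values are invented:

```python
from collections import deque

# Chart-style history: capacity 1000, O(1) appends, oldest samples evicted
# automatically -- what a Waveform Chart does for y-only data, but for
# arbitrary (x, y) pairs such as pressure vs. flow rate.
history = deque(maxlen=1000)

for flow_rate, pressure in [(0.1, 1.2), (0.2, 2.3), (0.3, 3.1)]:
    history.append((flow_rate, pressure))

# Split back into plottable axes whenever a redraw is needed.
xs, ys = zip(*history)
```

The design point is that appending never triggers a full redraw or reallocation; the redraw only reads whatever is currently in the buffer.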

It would also be nice to have the buffer size included in a property node.

It would also be nice to have the ability to change the size of the graph palette.

It would also be nice to have the nearest plot coordinates (x and y values), or interpolated values, tracking mouse movements over the plot area as a "visible item" in the shortcut menu (this should show a dot over the plot's trace)... I've done this with an XControl, but I'd like the ability built into the new XY Chart at a lower level to improve efficiency.

 

It would be better if we could probe the auto-indexed tunnel while the For Loop is still executing.

 

If the For Loop runs for a huge number of iterations and the auto-indexed tunnel has a condition attached, then checking the values in the conditional auto-indexed array should be possible even while the loop is still running, for debugging purposes.

 

If something like the below can be done, it will be helpful for debugging.

For Loop Tunnel Probe.png

Currently, if you use the stock one- and two-button dialog VIs in your code, they will block the root loop while being displayed.

For those of you who don't know what the 'root loop' is, this is the core process of LabVIEW that many UI functions must execute under.  One of the key functions that executes in the root loop is VI Server calls to open a new dynamic VI.  So, if your code has multiple threads all performing operations that involve dynamic calls to other 'plug-in' VIs and one part of your code calls one of these stock dialog VIs, then all the other threads will be blocked until the user clears the dialog.

As a result of this, I have had to write my own 'root loop safe' versions of these dialogs to prevent this from happening.

 

As a side note, dropping down a menu and leaving it down without selecting anything also blocks the root loop.  It would be great if they could fix this too!

 

 

In the Context Help for the Prompt User for Input Express VI (Functions palette » Programming » Dialog & User Interface), it says that you can use the VI to prompt the user for a password.  However, there is not an option for a "password" data type when configuring the VI, and thus any curious onlooker would be able to read your password if this VI was used!  Why not add a "password" type to the configuration options (see picture)?  Sure, you can build your own VI to do this already, no problem, but it still kind of makes sense to have the password data type as an option.

 

passwordinputdatatype.png

This is sort of two features bundled together, but they make sense to do them at the same time.

 

First, add an easy way to temporarily disable the "Allow debugging" VI property across an entire project.  This would be step 1 in making an easy "Release Mode" option.

 

Second, add some conditional disable symbols to all projects.  If the project is in debug mode, add "DEBUG_MODE" to the project, and if it's in release mode, add "RELEASE_MODE".  While it is possible to do this manually now, each user could choose a different name for their symbol.  If LabVIEW does this for everyone, then it allows better library interoperability.  The main use case for these symbols is to add debugging traces and breakpoints that are undesirable in shipping code.
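As a point of comparison, Python standardizes exactly this pattern with its interpreter-defined `__debug__` flag (True normally, False under `python -O`), so every library can agree on one symbol instead of inventing its own. A hypothetical sketch:

```python
# Project-wide debug/release switch, analogous to the proposed
# DEBUG_MODE / RELEASE_MODE conditional disable symbols.
MODE = "DEBUG_MODE" if __debug__ else "RELEASE_MODE"

if __debug__:
    def trace(msg):
        print("DEBUG:", msg)    # extra tracing, present only in debug runs
else:
    def trace(msg):
        pass                    # no-op in release mode

trace("loading plugins")
```

Because the symbol is defined by the platform rather than by each user, library code and application code never disagree on its name, which is the interoperability argument made above.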

 

This should work!! I know there are workarounds, and I have used them, but this would be much easier.

atsrefdsfads.png

 

On Windows, you can define environment variables that auto-expand to known directories. Some variables are already defined by the system. For example, %TEMP% automatically expands to c:\Users\<username>\AppData\Local\Temp OR WHEREVER THE USER MOVED TEMP DURING INSTALLATION. That's the important part: it makes it possible to write %TEMP%\abc as a symbolic path that works regardless of how the system gets reconfigured.

 

Users can define their own environment variables, and those get expanded when used in a path in the command line or Windows Explorer (the text entry region at the top of an Explorer window). On Linux and Mac, it is the equivalent of using $VARIABLENAME/abc, where VARIABLENAME is some user-chosen name.
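A small illustration of that expansion using Python's standard library (`MY_DATA_DIR` is a made-up variable name for the example):

```python
import os

# A user-defined environment variable, as described above.
os.environ["MY_DATA_DIR"] = "/opt/acme/data"

# POSIX-style $VAR syntax; on Windows, os.path.expandvars also
# understands the %MY_DATA_DIR%\abc form.
expanded = os.path.expandvars("$MY_DATA_DIR/abc")
```

The symbolic path stays valid even if the directory is later moved, as long as the variable is updated, which is the behavior being requested for LabVIEW paths.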

 

[admin edit] Added background information on environment variables, and updated title to use the word "Environment" instead of "Environmental".

I rediscovered a rather annoying behavior with multiplot XY-Graphs which appears to have existed since LabVIEW's emergence. Say I want to use the XY-Graph like a curve tracer, adding XY data pairs to an existing multiplot XY-Graph by passing the array of plot clusters around in loop shift registers and filling in new data as it is produced. This all works well, BUT when adding multiple plots in different colors, it turns out that consecutive plots are always drawn underneath the already existing ones instead of the other way around. I find that new plots should be drawn over existing ones, which also matches the sequence shown in the plot legend.
 
If your new plot has the same or similar values as the previous one, the second plot will be hidden underneath the first. You can only see the second one where the values differ. I find this counterintuitive and wonder why it has always been like that.
 
I plead for an XY-Graph option, along with a property node, to change this so that new plots cover old ones where they intersect. The legend sequence should not be changed, because the top-to-bottom order is correct.
 
A demo VI of the misbehavior is attached.
 
I hope I get enough support for this to be done. 

Urs

Urs Lauterburg
Physics demonstrator
Physikalisches Institut
University of Bern
Switzerland

There has been a lot of discussion on this but no LabVIEW Idea (that I was able to find). Please move the Stacked Sequences from the Programming palette to a "retro" or "classic" palette. This is to dissuade novices from overusing them.

 


Previous wording:

 

There has been a lot of discussion on this but no LabVIEW Idea (that I was able to find).

Retain them for legacy code but please remove them from the Programming Palette

 

Let's vote this in and get rid of Stacked Sequences forever.

I would go so far as to release a patch to remove them from all Installed LabVIEW Versions!

Message Edited by Laura F. on 09-30-2009 03:49 PM

In LabVIEW 2009, the programmatic API for shared variables and I/O variables was introduced. This allows you to reference a variable by name, rather than dropping a static node on the diagram.

 

Some of the benefits are: iterating over many variables with looping structures, creating modular code, and dynamically accessing variables based on names in a config file for example. 

 

Programmatic access to single-process shared variables would also be useful.

 

(Single-process variables are effectively global variables (not network published), but use the same static node as the shared variable and are contained in project libraries.) 

 

I've recently been experiencing memory issues running LabVIEW 32-bit under Windows XP 32-bit. One potential solution was to swap to LabVIEW 64-bit running under Windows 7 64-bit, but unfortunately I need either NI-CAN or NI-XNET driver support, which isn't currently available. Luckily, it looks like running the 32-bit application under Windows 7 64-bit overcomes the immediate problem, but I worry that this involves compatibility interfaces (not that I don't trust Microsoft, of course).

 

LabVIEW 64-bit has now been available for approaching 4 years, and it would be nice to see a full suite of modules for the 64-bit version in line with the 32-bit one. Failing that, a timeline of when they're likely to be delivered would be useful; the only answer at the moment seems to be "don't know", which doesn't seem very professional!

 

Could I ask the LabVIEW R&D team to at least consider publishing a plan?

Problem: 

When handling larger data blocks I get "not enough memory" error messages, with the only option being to kill the executable (from time to time, but usually unwanted ;-)).

Most of the time this involves array functions (Initialize Array, Build Array, etc.). As this problem also depends on the PC in use, it cannot (easily) be tested on a developer machine before deploying the executable to a wide range of computers...

 

I propose the idea to enable some error handling in such cases. This could be done in two ways:

1) Add an error output to array functions. This could be optional via the right-click menu so older code doesn't break. The output would be an error stating "Not enough memory for operation ..."

 

2) Add a new application event, "Application.MemoryAllocationError". This way the program can at least catch the problem. (Inspired by the "OnError" constructs of text-based programming languages...)
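For reference, this is what such an "OnError" construct looks like in a text language: Python raises a catchable `MemoryError` instead of terminating the program. The `allocate` helper below is invented for illustration:

```python
# Hypothetical sketch: a guarded allocation that reports failure
# instead of aborting the whole application.
def allocate(n):
    try:
        return [0] * n          # attempt to allocate a block of n elements
    except MemoryError:
        return None             # "Not enough memory for operation ..."

small = allocate(10)            # succeeds
huge = allocate(10**18)         # fails gracefully rather than killing the app
```

Either proposal above (an error output on array functions, or an application event) would give LabVIEW code an equivalent escape hatch.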

I looked for a similar idea but didn't find anything. I find that hard to believe, so I may have missed it somehow.

 

LabVIEW has arrays that dynamically grow. This is great. Love it. It makes it very easy to do things like Loop A below.

 

But that can lead to performance issues, as LabVIEW constantly has to allocate new memory as the array grows. However, and this is important, LV is smart enough to over-allocate so that not every append results in a new memory allocation. Somewhere in the LV code, NI is keeping track of the size allocated and the size used.

 

If performance is a concern, we've always been told to do something like Loop B: preallocate a big array and use Replace. But there are problems here. First, a lot of the built-in array functions will return 'wrong' values (Array Size, etc.). Second, what if the initial size is just a best guess? If we need something larger (say 5050 elements), we have to add a special case to append instead of replace once we hit the initial allocation. BUT, LV is already doing all this internally; it just isn't exposed to us in a way that we can take advantage of.

 

So, I propose Loop C (sorry, too lazy to create a good icon). This new primitive would return an empty array (like Loop A), but with memory already allocated for a much larger array (like Loop B). LabVIEW already has the code to handle this. So, if we know the array will grow to ~5000 elements, we can preallocate that amount, have all the built-in functions work, and not have to worry about going over 5000.

Allocate Array.PNG
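For what it's worth, the same size/capacity bookkeeping can be observed in Python lists: appends over-allocate, so only a small fraction of appends trigger an actual reallocation. A quick sketch that counts the real reallocations during 5000 appends:

```python
import sys

xs = []
allocations = 0
last_size = sys.getsizeof(xs)
for i in range(5000):
    xs.append(i)
    size = sys.getsizeof(xs)
    if size != last_size:       # the list's reserved block actually grew
        allocations += 1
        last_size = size

# 5000 appends, but only a few dozen real reallocations -- the runtime
# tracks "size allocated" vs. "size used" internally, exactly as described.
```

The proposed primitive would simply let the programmer seed that internal "size allocated" figure up front, driving the reallocation count toward zero for a known workload.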

When designing large applications, it is normal for the top-level VI to spawn background tasks to perform specific jobs. It would be nice if the top-level VI could receive an event when a subVI exits. I realize this is possible if you specifically add code to the subVI to post a message to a queue or fire a user-defined event. However, it would be nice if the event were available without the need to add code to the subVI. This would make it easier to design top-level applications that spawn tasks without placing any additional coding requirements on the subVI. The "Panel Close" event can be used with a callback, but it only triggers when the subVI exits via the window close button (the X in the upper right corner). This is too restrictive and will not allow the top-level VI to be informed when the subtask exits programmatically.

Dear NI,

 

Please fix LabVIEW so that when you right-click a wire on the block diagram, the menu pops up immediately, not after a one-second delay. Also, while you are at it, it would be great if you could fix the time it takes to open a control/indicator properties configuration dialogue.

 

As far as I can recall, these were fine in LV 7, but sometime after that it all started to get a bit sluggish.

 

Thanks!

 

 

 

 

A simple Idea, though I'm not sure how simple it will be to execute.

 

Idea:

Make run-time environments (RTEs) backwards compatible.

 

Example:

Allow a LV2010 executable to run on a computer that only has the LV2012 RTE installed.

 

Benefit:

Saves hard drive space and install time. The LV RTEs are 600 MB or so. Yes, hard drive space is far from expensive, but it's annoying having to sit through 5 installs just so you can run programs from 8.5, 2009, 2010, 2011, and 2012.

 

 

 

 

 

Currently, any VIs including .NET code are shown as broken in LabVIEW for OS X.

This is particularly annoying for people working in a cross-platform environment, especially considering that the Mono framework covers a lot of Microsoft's .NET implementation, with the added bonus that it works on Linux, Windows, and OS X.

 

Please allow users to choose which .NET implementation to use.

I always use "Find All Instances" for searching where a subvi / control is used.

 

When looking into code that is locked (read-only because it is under source control), this option is missing!

  • Now I always have to press Ctrl+M to unlock, then press Change on the next dialog
    ctrl+m.PNG
    before "Find All Instances" becomes active
  • Or open the VI Hierarchy (which can take a while on a large project),
    where the normal right-click "Find All Instances" is not blocked!!

 

I don't think it should be a big problem to enable the right-click menu on a locked VI...