LabVIEW Idea Exchange


I just realized that when creating an interface you cannot create property node folders.

 

You can see from the pictures below that this option is missing for interfaces.

 

(Screenshots: first.png, second.png)

 

You can also see from that screenshot that it is possible to have property folders in interfaces, and they work just fine. You have to edit the XML to do that, but it works. So the feature is implemented; it is just not exposed in the IDE.

 

Now, I talked to Darren, and he seemed to think the original reasoning was: "Well, property nodes are for storing things in the class private data, and there is no private data with an interface, so you don't need them." I can't really argue with that logic; however, there are times when an existing class uses a property node and you want to create an interface that includes that method. For example, you may have multiple instruments that each have a VISA ref property. You currently can't create an interface with that "write VISA ref" VI (without editing the XML). If you create a method with the same name/connector pane and it is not in a property folder, the compiler complains. You could go back, edit the original class, remove the property node, and just use a regular method, but then you break every piece of calling code that is using a property node.
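To illustrate the underlying point in a text language (Python here, purely as an analogy for the LabVIEW pattern; the class and property names are hypothetical), an interface can declare a property without storing any data itself - only the concrete classes hold the value:

```python
from abc import ABC, abstractmethod


class Instrument(ABC):
    """Interface: declares a VISA-ref property but holds no private data."""

    @property
    @abstractmethod
    def visa_ref(self) -> str:
        ...

    @visa_ref.setter
    @abstractmethod
    def visa_ref(self, value: str) -> None:
        ...


class Dmm(Instrument):
    """Concrete class: the private data lives here, not in the interface."""

    def __init__(self) -> None:
        self._visa_ref = ""

    @property
    def visa_ref(self) -> str:
        return self._visa_ref

    @visa_ref.setter
    def visa_ref(self, value: str) -> None:
        self._visa_ref = value


dmm = Dmm()
dmm.visa_ref = "GPIB0::22::INSTR"  # the equivalent of a "write VISA ref" accessor
```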

 

Here is a use case, which I think is fairly common - it happens to me a lot:


I inherit some code. It is using some particular instrument (Oscope, DMM, doesn't matter). They want to support another similar instrument (maybe a newer version of the DMM).

 

The instrument code is wrapped in a class. Great. As a first step, I can refactor: I can create an interface that has all the same methods and make the code rely on the interface. If it is a class wrapped in a DQMH module, all I have to do is replace the object in the shift register with the interface and set the concrete class somewhere in the initialization. It all works exactly the same as before, but now I have an interface.

 

Then I create another class that implements that interface and add some logic to pick which one - some kind of factory. Done. I've made very minimal changes to the existing code and it now supports a different instrument. This is the holy grail of OOP. I create a new class and just inject it and everything works.
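To make the factory step concrete, here is a minimal sketch in a text language (Python, purely as an analogy for the LabVIEW pattern; the class names and return values are hypothetical). The calling code only ever sees the interface, and the factory decides which concrete instrument class gets injected:

```python
from abc import ABC, abstractmethod


class Instrument(ABC):
    """Interface the calling code depends on."""

    @abstractmethod
    def measure(self) -> float:
        ...


class OldDmm(Instrument):
    def measure(self) -> float:
        return 1.23  # stand-in for the original driver call


class NewDmm(Instrument):
    def measure(self) -> float:
        return 4.56  # stand-in for the newly supported instrument


def make_instrument(model: str) -> Instrument:
    """Factory: picks the concrete class; callers only ever see Instrument."""
    return {"old": OldDmm, "new": NewDmm}[model]()


# The existing code keeps working unchanged because it only uses the interface.
dmm = make_instrument("new")
print(dmm.measure())
```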

 

Not so fast. NI has decided I shouldn't be able to do this if the class uses a property node (oh no!). Why? I should be able to have two classes that both have the same property. Sure, the data isn't getting stored in the interface, but what does that matter?

 

It does matter to the compiler. If I want to do what I proposed above and the original developer used property nodes anywhere, it doesn't work directly: I either have to do the XML hack on the interface, or I have to replace all the property nodes in the calling code with subVI calls and then go edit the class and remove the property folders. Why?

 

It seems like all that is needed is enabling the right-click menu option, because if you manually edit the XML, it all works. That is already implemented for classes, so I imagine the fix would be rather simple.

It would be helpful to add an option like "Digital Value" to the DAQmx Trigger.vi (comparable to Analog Window).
Background: when the edge of a digital trigger signal is missed (due to being late in code execution, e.g. after waveform reconfiguration), the trigger event does not happen. With this new option, the trigger event could be executed as soon as possible after the edge has been missed.

I currently have a project with an auto-populating folder for documentation. That documentation is included in my Installer build specification.

 

If I regenerate that documentation while the project is open, the auto-populating folder sees it disappear from disk and removes it from my installer! No notification, nothing. Just completely silent. This has resulted in shipped installers with missing documentation.

 

I propose this should work the same way any other dependency does if it disappears from disk: it should stop the installer from building and show a warning that the dependency is missing.

Having sold applications that use functionality from the OPC UA Toolkit, we run into an issue if we upgrade our LabVIEW version and continue to develop those applications, because the OPC UA deployment license will stop working once we upgrade the software we have delivered to a version built in newer LabVIEW. So even though the customer has an OPC UA deployment license and we have an updated developer license, that is not enough, because the deployment licenses have to be updated as well (and it does not help that deployment licenses are not something we can bundle with the upgrade of the software either). From what I understand, new deployment licenses will not actually cost anything - they are provided by NI as long as you already had a deployment license for the previous version - but the logistics of this are a nightmare for us. Instead of just delivering a new installer with an updated version of our software, we have to get involved in upgrading the deployment license on all of their systems each time we have moved to a new version of LabVIEW.

If you develop an application using functionality from the OPC UA Toolkit on a machine with a developer license covering the OPC UA Toolkit, you cannot run the built application to test whether it works without having to buy a deployment license for that machine.

 

Having a developer license on a machine should allow us to run built applications on that same machine as well, to verify the functionality after a build (alternatively, the developer seat should always be accompanied by a deployment license).

It would be great to be able to configure LVMerge options via the command line, as described here:

 

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q0000019c3VCAQ&l=en-US

 

There was a previous proposal to fix this that was closed:

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/LVMerge-Configurable-Load-Options/idi-p/1282252

 

 

Please kudos this so that it is considered 😉

The last decade has seen fantastic development in using neural networks to analyze data. Neural networks nowadays make feats possible, at tremendous speed, that were unimaginable a decade ago. The latest release of LabVIEW only allows using old TensorFlow versions (<= 1.14). About four years ago, TensorFlow switched to TensorFlow 2.0, which resulted in a significant transformation in how models are organized, and models developed in TensorFlow < 2.0 are incompatible with current standards. If National Instruments aims to maintain its position in the computer vision market, developing tools to use current TensorFlow versions is a MUST.
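For context, here is a minimal sketch of how models are exported in TensorFlow 2.x (standard TF 2.x API; the toy model is just a placeholder). TF 2.x produces a SavedModel directory rather than the single frozen-graph .pb file that TF <= 1.14 workflows consume, which is why the old loading path breaks:

```python
import tensorflow as tf  # TensorFlow 2.x

# Toy model standing in for a real vision network.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Native TF 2.x export: a SavedModel directory (saved_model.pb + variables/),
# not the single frozen-graph .pb that TF <= 1.14 tooling expects.
tf.saved_model.save(model, "exported_model")

# Reloading also goes through the TF 2.x SavedModel API.
reloaded = tf.saved_model.load("exported_model")
```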

 

Clusters give you access to the decorations inside the cluster, but you can also drop decorations on a custom .ctl file, and there is no way to programmatically access those.

acaracciolo_0-1691284677389.png

 

Increase the priority of Corrective Action Request (CAR) #109501, created in ~2008, about sorting events alphabetically. Use cases include finding events more quickly and simplifying comparison results.

 

cf. https://forums.ni.com/t5/LabVIEW/Case-order-of-EVENT-structure/m-p/782114

Many times, at least in my code, it's necessary to act only on a rising, falling, or changed edge of a Boolean signal.

Probably this option could be added for Case structures with a Boolean-type case selector, of course.
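What we do today, sketched in a text language (Python, purely as an analogy for the shift-register/Feedback Node pattern on a LabVIEW diagram): keep the previous Boolean value and compare it with the current one on every iteration.

```python
def detect_edge(previous: bool, current: bool) -> str:
    """Classify the transition between two successive Boolean samples."""
    if current and not previous:
        return "rising"
    if previous and not current:
        return "falling"
    return "none"


# 'previous' plays the role of the shift register / Feedback Node.
previous = False
for current in [False, True, True, False, True]:
    edge = detect_edge(previous, current)
    if edge != "none":  # act only when the signal actually changed
        print(f"{edge} edge detected")
    previous = current
```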

Scrolling all the way to find a particular case in a Case structure is painful when you have "n" number of cases. Sometimes we know the case that we want to sneak into, but finding it among all the cases displayed is a difficult and frustrating task. Having a search bar at the top of the case list would really help developers get to the desired case in seconds.

When you open a .lvproj,

and then open the "Dependencies" folder,

and then right-click on an item,

and then select "Why is this item in Dependencies?"

LabVIEW always crashes.  It's been like this for many years.

My idea is to change the "Why is this item in Dependencies?" option so that it works instead of crashing.

Currently, if you create a new VI for override, whether or not the terminals are displayed as icons is determined by the VI being overridden (e.g. overriding Actor Core.vi will always give you terminals as icons). Instead, I propose that it be determined by the user's preference in the Tools >> Options menu. If we've said we don't want terminal icons, shouldn't all newly created VIs respect that?

I find myself replacing controls with the preferred style and reorganizing the labels (through the Quick Drop shortcuts) for almost every accessor VI. For this purpose I ended up modifying the templates of these VIs, but I would prefer to have this in the script that creates these VIs, so that they automatically match the settings made in the LabVIEW options. This should include the preferred control style, label placement of the controls on the block diagram, and also front panel cleanup.

It would be better if we could probe the auto-indexed tunnel while the For Loop is still executing.

 

If the For Loop is running for a huge number of iterations and the auto-indexed tunnel has a condition, then it should be possible to check the values already in the conditional auto-indexed array even while the For Loop is still running, for debugging purposes.
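As a text-language analogy (Python, purely illustrative of what a conditional auto-indexing tunnel does), the request amounts to being able to look at the partially built output array while the loop is still running:

```python
results = []  # plays the role of the conditional tunnel's output array
for i in range(1_000_000):
    value = i * i
    if value % 7 == 0:  # the tunnel's condition terminal
        results.append(value)
    if i % 100_000 == 0:
        # In a text language you can simply inspect the partial array here;
        # the idea asks for an equivalent probe on the tunnel while the loop runs.
        print(f"iteration {i}: {len(results)} values collected so far")
```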

 

If something like the image below could be done, it would be helpful for debugging.

For Loop Tunnel Probe.png

This happens to me quite often.

 

In scenarios such as replacing "Equal?" with "Equal to 0?", Quick Drop should be able to drop the now-unneeded comparison constant, or at least prevent broken wires such as this:

JimChretz_0-1686144056731.png

Currently, the only way to verify the time format of a Time Stamp indicator is to go to Properties of the Time Stamp >> Display Format >> select Advanced editing mode and look at the Format String display.

MMargaryan_0-1686132778472.png

The idea is to add the Format String display to the Display Format's Default editing mode, to make it easier for users to identify whether the time is local, UTC, or something else.

 

On a block diagram string constant, there are shortcut menus to change the display style. Currently, we can change the style without making the selector visible. That leads to bugs when later programmers do not realize that the string they are editing is in a different mode. Currently, we have to choose "Visible Items >> Display Style" first, which makes the shortcut menu items useless (because then we just use the now-visible selector ring). In the future, when we change the style through one of the shortcuts, I would prefer that the selector automatically becomes visible. 

srlm_0-1685481071737.png

 

Hi

While working with LabVIEW, the run arrow is broken, showing that the code is not complete, if we have left any input or output unwired.

I would suggest that there should be an option on the block diagram to select an active region (it could be by drawing a box), so that LabVIEW only considers the code inside the box, and any unwired input or output outside the box is ignored.

 

I hope  I made my idea clear.

 

Thanks

Asif 

Add another option to the Search Scope to Ignore VIs in user.lib

That's it!!  😁

Search Scope - user.lib