LabVIEW Idea Exchange


Currently, the only way to add a web service to a project is by right-clicking a Target and selecting New -> Web Service.

There's no way to add existing Web Service definitions to a project, other than manually performing the copy operation at the XML file level.

This prevents proper re-use of IP!

 

Webservice in project.png

 

I would propose extending the capabilities of a Library to also support Web Service definitions as Child objects.

That way, the library can act as a container for the Web Service definition, and can easily be reused in other projects:

 

Webservice in library.png

I tried using an XML editor to manually put a Web Service definition underneath a Library; LabVIEW didn't like that!

When you want to look for a specific VI in your project using Ctrl+F, there is a handy option to search VIs by name. But it is quite difficult to find the VI you are looking for in this window: all VIs are listed without structure, and there is no filter. Something similar to the event selection window would be much better (it has a filter, and VIs are grouped by library and class).

 

This "Find all instance" could also be added directly in the project explorer, in the Right click -> "Find" menu

 

 


The LabVIEW functions "Set Waveform Attribute" and "Get Waveform Attribute" can write/read any attribute name/value pair on a waveform. According to the help (http://zone.ni.com/reference/en-XX/help/371361N-01/lvwave/set_waveform_attribute/), there are default attributes such as device number and channel name, which can be set by NI-DAQ and Express VIs. But sometimes they need to be set manually (for example, while reading data from an FPGA and building a waveform from it).
It would be handy to include in the Waveform palette a simple API with a polymorphic VI to set and get those default attributes. Then one would not need to remember the exact attribute name, but could simply drop the function, select which attribute is needed, and use it.
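In text-based terms, the proposed API is just a lookup keyed by an enum rather than by a free-form string. Below is a minimal Python sketch of the idea, assuming a stand-in `Waveform` class; attribute names like `NI_ChannelName` are drawn from the linked help page, but the exact set shown here is illustrative:

```python
from enum import Enum


class DefaultAttribute(Enum):
    """Default waveform attribute names (illustrative subset from the help)."""
    CHANNEL_NAME = "NI_ChannelName"
    DEVICE_NUMBER = "NI_DeviceNumber"
    UNIT_DESCRIPTION = "NI_UnitDescription"


class Waveform:
    """Stand-in for a LabVIEW waveform: just a dict of named attributes."""

    def __init__(self):
        self.attributes = {}

    def set_default_attribute(self, attr: DefaultAttribute, value):
        # The caller picks the attribute from the enum; no need to
        # remember the exact string name.
        self.attributes[attr.value] = value

    def get_default_attribute(self, attr: DefaultAttribute):
        return self.attributes.get(attr.value)


wf = Waveform()
wf.set_default_attribute(DefaultAttribute.CHANNEL_NAME, "Dev1/ai0")
print(wf.get_default_attribute(DefaultAttribute.CHANNEL_NAME))  # Dev1/ai0
```

The polymorphic VI in the idea plays the role of the enum here: one drop-down of known names instead of a free-text attribute string.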

 

Sincerely, kosist90.

 

Refer to http://zone.ni.com/reference/en-XX/help/371361P-01/lvpict/draw_line/

 

Under the Pen -> Style, there is a "Dot" option.

 

But the output is not really a dot; it is a short dash. Could LabVIEW make the output a true single-pixel dot?

 

The same applies to the Draw Circle VI.

 

It would be useful to insert a VI from a project or from a class directly into a wire.

 

Sometimes (often, actually) I add an existing VI to an existing class or library.

In this case, the icon of the added VI stays as it is and does not change to the library's icon. I could use the library option "Apply Icon to VIs", but that changes all my VIs, which is not what I want!

 

What I suggest is a right-click option on the VI's icon, "Set Library Icon" (see attached picture).

 

Hi, I'm an EE currently working on a project with a group of others. Some in the group have significant LabVIEW expertise and are developing VIs to control NI-device-interfaced hardware that others and I have developed. Those in our group who "speak LabVIEW" are fairly comfortable reviewing and debugging each other's designs by reviewing the .vi files. Unfortunately, others of us, such as myself, have minimal LabVIEW expertise and will remain that way for the foreseeable future. To accurately and readily review VI code, we would like more traditional formats such as timing diagrams. In my experience, such formats not only serve a broader range of reviewers (thereby being more appealing in the marketplace as well?), but in many cases they can be more illuminating for reviewing, debugging, communicating, and so forth.

 

One of our software developers offered to develop such a tool, but as with most projects, our staffing and time resources are scarce, and software is currently on the critical path.

 

I suspected that one or more tools to generate timing diagrams from a VI already existed from NI and would be in routine use by some developers. Or if not from NI, then from a third party, or redundantly developed by thousands of project labs around the world. NI Technical Support Engineering suggested that I submit this entry here, so here I am, a first-time NI submitter. The suggestion isn't tied to our project's particulars, but in case the above isn't sufficient, the info below about our application may help. Thanks!

 

About our application

 

Our medical-therapeutic research device will use a myRIO-1900, interfacing with a user via wireless to a laptop. The laptop will use code being developed with LabVIEW. The RIO code is being developed with LabVIEW Real-Time for the processor and LabVIEW FPGA for the RIO's FPGA portion. Interfacing to the RIO's I/O, much of our custom circuitry is SPI-based, but with particulars that preclude using the RIO's native SPI, so we've developed a lot of SPI in the FPGA. Among the external circuitry there are about 100 SPI-interfaced ADCs, DACs, and GPIO-interfacing chips. Sixteen battery packs, each recharged by the instrument under software control, provide >500 W of power to various regions of circuitry spanning 32 voltage-isolation barriers.

 

Here's one example of how the proposed tool would have helped us in the recent past. (This would have been easier without COVID-19, when we could all have been co-located with the prototype, but....) During some debugging, we eventually found that some bits were being generated or sampled on SPI clock (SCLK) edges that weren't compatible with some of the peripheral chips. This would have been readily apparent if we had had timing diagrams showing the VI outputs nCS, SCLK, and MOSI. Those by themselves would have been highly helpful. In addition, it would have helped to see some nodes internal to the VI, if possible, so that we could know when the incoming MISO line was sampled relative to SCLK. We didn't need to see propagation delays and the like; we just needed to see the logic manifestations on a diagram, or something equivalently easy for a VI-illiterate human to digest.

 

Hope that's helpful.

 

       -- Bruce P.  8/10/20

 

This is just a small quality-of-life thing, but I feel it also wouldn't be that hard to do. When right-clicking a wire whose data type is a type-def enum or cluster, add the Open Type Def. option to that menu. The current way of doing this is to right-click the wire and select Create -> Constant, right-click the constant and select Open Type Def., then delete the constant.

When working on VIs that require large arrays or images for input, I often need to run a calling VI rather than the VI I'm currently working on, which involves a lot of tabbing between windows and clutter from having more open windows. I realize it is possible to set default values for controls, but I would rather avoid that as I often use optional connections to my SubVIs and therefore don't want to change default values.
In my opinion it would be useful to have the option to configure the run button to run a different VI than the currently open one, when working on VIs specifically developed to run as subVIs in the context of a higher-level caller.

When using VI Scripting to create UI objects, please add the new "NXG Style" objects as an option.

Add a right click option to navigate to the method in the project window when you right click on a send message VI. Bonus if it gave you an option to go to the base class method or any overriding methods. 

 

NavToMethod.jpg

I often run into the situation that I have one event case that needs to execute for multiple controls. Most of the time I simply read out the terminals, and most of the time I get the "NewVal" from the terminal. Sometimes, however, that's not the case. I'm currently having the issue that I use a property node of a control to trigger the event, and every other time the terminal gives the "OldVal".

 

I believe the proper way to solve this would be to go by the reference, but I find this blows up the code more than it should. Looking around in the community, I've found two other ideas suggesting different solutions for this issue:

 

The first has multiple Boolean controls triggering the same event. SectorEffector's solution was to ask for a reference case structure, which might be a great idea, especially when the event is triggered by different data types. When I have multiple Booleans triggering one event, I almost always build an array from the terminals and convert the array to an integer, so that a case structure or other code can deal with the value. But again, I'd have to deal with fiddling the new value from the triggering control together with the other control values.
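The array-to-integer step mentioned above is LabVIEW's Boolean Array To Number, where element 0 becomes the least significant bit. A quick Python sketch of that conversion, for readers unfamiliar with the primitive:

```python
def bool_array_to_number(bits):
    """LabVIEW-style Boolean Array To Number: element 0 is the least
    significant bit of the result."""
    value = 0
    for i, b in enumerate(bits):
        if b:
            value |= 1 << i
    return value


# Three controls: first and third are TRUE -> bit pattern 101 -> 5.
print(bool_array_to_number([True, False, True]))  # 5
```

A case structure on the resulting integer then gives one case per combination of control states.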

 

The second actually mentions two ideas. rcpacini's main idea is to have an enum indicating which control triggered the event. This is a similar approach to what SectorEffector suggested; they're both aiming at a case structure with a different case for each event source. rcpacini's alternative idea was to have event data elements for each control. This is more similar to my idea, but I would like to have the values of all registered event sources clustered together.

 

Here is a side-by-side of the current solution and what I'd suggest:

New value by reference; New value cluster

 

The order of the events would determine the order of the values in the cluster, so a way to arrange the events like a move up and move down button would be helpful:

Event source order up/down buttons

Often I need to change a property of something in LabVIEW that seems to be available only through a property node. And often I don't want to do it at run time, just once during development, but I find it really hard to get references to some objects, and I want them in another VI that I can run quickly to set the property and then be done. If you have the option enabled that shows LabVIEW scripting property nodes and invoke nodes, could the Create submenu offer a "Create reference in a new VI" entry? Clicking it would give you a new VI with just a reference, which you could then do whatever you need with. And please make this work for anything that has a property or method related to it. I have some tools I wrote that can sort of do this, but I'm sure it's not a good way of doing it (it's just the best I have).

On the Plot Attribute Change event, the cluster structure should match the group of properties so that we could pass it as-is and reuse it when we need to re-apply all properties. Inside the property node there should be a section, Plot.Attributes.All Elements, identical to the cluster returned by the event.

 

Also worth mentioning: Plot.Visible is not part of the cluster, yet the event is triggered when the user toggles plot visibility. Strangely, Plot.Name can also be edited by the user, but that doesn't trigger the event.

 

I wonder if it's a missing feature or more like a design flaw.

 

JICR_0-1581470137676.png

 

enum.png

Fast selection of an enum value/member instead of opening the enum element or constant.

Flattened String To Variant is so close to letting us define the layout of arbitrary external data. However, LabVIEW's insistence on all strings and arrays being prepended with their lengths in the data string means that when working with binary data from non-LabVIEW sources, the array or string lengths might have to be calculated, or are somewhere not immediately adjacent to the data. I understand it's far too late to modify the base Unflatten From String or Read from Binary File as requested in this idea, but this feature would be incredibly useful for reading binary data directly into clusters without requiring either multiple copies of very large (100,000+ element) arrays floating around, or incredibly tedious and complex sequences of Read from Binary File nodes.
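For comparison, text languages handle this by letting the caller supply the element count from wherever it actually lives in the external layout. A sketch in Python using the standard `struct` module, with a hypothetical record format invented for illustration (a uint32 count in a header, followed by that many float64 samples with no length prefix adjacent to the data):

```python
import struct


def parse_external_record(data: bytes):
    """Parse a hypothetical non-LabVIEW binary layout: the sample count
    lives in a header field, not prepended to the sample array itself."""
    (count,) = struct.unpack_from("<I", data, 0)       # count from the header
    samples = struct.unpack_from(f"<{count}d", data, 4)  # raw samples, no prefix
    return list(samples)


raw = struct.pack("<I3d", 3, 1.0, 2.0, 3.0)
print(parse_external_record(raw))  # [1.0, 2.0, 3.0]
```

The requested feature would play the same role: the cluster definition describes the layout, and the length comes from wherever the external format put it.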

In OOP, interfaces define a contract: when a method/function expects an interface type as an argument, any type implementing that interface can be passed. A well-known and much-used concept. I imagine something like this could be implemented for clusters in LabVIEW. Imagine if you could have a subVI define a cluster/cluster array, mark it as an interface, and be able to wire any cluster/cluster array that adheres to this interface (adhering would basically mean that an Unbundle By Name node accessing any or all members of the interface would also work on any "derived" cluster). Both the cluster on the front panel and the wire should clearly indicate that this is an interface, meaning that the cluster/wire actually contains more members, but you only have access to the ones defined by the interface. I definitely see some use cases for this. Does it sound useful/feasible?
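Text languages call this structural typing. A rough Python sketch of the idea using `typing.Protocol` (the `Named` and `Motor` names are invented for illustration): a "cluster" carrying extra members still satisfies the interface, and the consumer can only touch the interface's fields:

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable


@runtime_checkable
class Named(Protocol):
    """The 'interface cluster': any cluster exposing these members conforms."""
    name: str
    id: int


@dataclass
class Motor:
    name: str
    id: int
    rpm: float  # extra member, invisible through the interface


def describe(item: Named) -> str:
    # Only interface members are accessed, like an Unbundle By Name
    # restricted to the interface's fields.
    return f"{item.name} (#{item.id})"


print(describe(Motor("axis-1", 7, 3000.0)))  # axis-1 (#7)
```

No inheritance relationship is declared anywhere; conformance follows purely from member names and types, which is exactly the "unbundle by name still works" rule proposed above.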

This subject has been touched on briefly, but only in a post that started out just suggesting an indication of whether or not a case structure is case insensitive. This has been implemented since then, of course.

 

What I'd like to bring to the surface again is the case sensitivity. The mention of this in the previous post was suggesting that case insensitivity be made the default. I don't agree with this - especially with the repercussions it could have on converting older code. 

 

However, I think it would certainly be useful and acceptable to add a checkbox to the "Block Diagram >> General" options for "Case structures case insensitive by default" (see screenshot). This would not break previous code, since turning it on would only affect case structures created afterwards. Additionally, the "default default" would still be case sensitive, but programmers who do use string-based case selection could leverage this to their advantage (and not have to remember to turn it on via the right-click option).

Can we have an option for loading only user-specified rows from a delimited spreadsheet instead of all rows, perhaps allowing the user to specify which and how many rows to read? This is potentially useful when the user wants to read a selection, or just the latest few rows (typically the last few, due to write-append), without having to load an unnecessarily large amount of data into memory.

 

Implementation-wise, this could use optional row index and length input terminals, defaulting to index:0 and length:-1 to yield all rows on the rows output terminal (note: terminal name changed). index:-1 would specify counting from the end of the file, with a non-zero length:L giving how many rows from the end (-L for reversed order, last row on top). With index:-1, the data read would replace elements within a fixed-size array of L elements until EOF returns true. For example, index:-1 and length:5 would yield the last 5 rows of the file.
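A Python sketch of the proposed terminal semantics, assuming tab-delimited data; the `read_delimited_rows` helper name is invented here, and `index=-1` keeps at most |length| rows in memory, mirroring the fixed-size buffer described above:

```python
import csv
import io
from collections import deque
from itertools import islice


def read_delimited_rows(f, index=0, length=-1):
    """Sketch of the proposed terminals: index=0, length=-1 reads all rows;
    index=-1 with length=L keeps only the last L rows (negative L reverses
    the order, last row on top)."""
    reader = csv.reader(f, delimiter="\t")
    if index == -1:
        # deque with maxlen holds only |L| rows, however long the file is.
        rows = list(deque(reader, maxlen=abs(length)))
        return rows[::-1] if length < 0 else rows
    if length == -1:
        return list(islice(reader, index, None))
    return list(islice(reader, index, index + length))


data = "a\t1\nb\t2\nc\t3\nd\t4\n"
print(read_delimited_rows(io.StringIO(data), index=-1, length=2))
# [['c', '3'], ['d', '4']]
```

The key memory property is in the `deque(..., maxlen=...)` line: the file is streamed row by row, so only the requested tail ever lives in memory at once.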

 

read delimited spreadsheet.png

Thanks for reading, and have a great day.

There are already several Ideas on the exchange regarding custom error-code files. A subset are listed below:

Project error code files (request to locate error file near lvproj file, or via Project properties)

Error txt file (allow specifying a path when calling General/Simple Error Handler, store the custom file somewhere not in <National Instruments>/{Shared,<lv-version>}/...)

Make custom error code files part of a project (allow project-level error codes, not stored in user.lib)

Error ring without file (request for an error-ring typedef that could be stored in the project's source code collection)

 

My suggestion here is most like the last - I'd like to suggest a file type like a .ctl (or perhaps via typedef, if technically better/more implementable) that can be added to a project, library or class and contains a list of errors.

 

Ideally, these errors should be only possible to throw from inside the library, but must of course be visible from outside of the library.

Implementing these at the project level restricts the possibility of use with PPLs, reusable libraries, etc, because the project file is a dev-only construct (as far as I understand), unlike classes and libraries (which are part of what you distribute in whatever form you choose).

 

The ability to store these files in a library or class also allows the possibility of scoping the error code, such that the handler can more easily identify the source of the error.

 

LabVIEW already categorizes its errors into broad sections (LabVIEW, MAX, NI Platform Services, etc.), and perhaps these could be used as the scope of the existing built-in error codes, although I'm not sure whether this would require significant work... I suspect many things seem easier when you're not the person who will implement the changes!