LabVIEW Idea Exchange

Currently the Clipboard.Write method in the Application class Invoke Node only accepts text as a data type. I think it should accept any data type. For example, wiring an array of doubles to it would let you paste that data into a spreadsheet, and wiring a picture data type would let you paste that picture into an image editing program.
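
 

In the meantime, the usual workaround is to flatten the data to tab-delimited text yourself before writing it to the clipboard. Here is a minimal sketch of that formatting step, written in Python purely for illustration (in G it is a couple of string functions ahead of Clipboard.Write):

    # Format a 2D array of doubles as tab-delimited text so that a plain-text
    # clipboard write pastes cleanly into a spreadsheet: one row per line,
    # columns separated by tabs.
    def array_to_clipboard_text(data):
        return "\n".join("\t".join(str(x) for x in row) for row in data)

    print(array_to_clipboard_text([[1.0, 2.5], [3.25, 4.0]]))
    # prints "1.0<TAB>2.5" then "3.25<TAB>4.0"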

Back in the NI-CAN days, there was a handy development tool: two virtual CAN ports, CAN256 and CAN257.  If you wrote a frame on one, it would be read on the other, and vice versa.  Other CAN vendors like Vector and Kvaser support virtual CAN hardware that does something similar, so that initial development can be tested before having access to the hardware.

 

This idea is to add virtual hardware support to XNET with this same feature.  It has been talked about in a thread here several years ago, but nothing ever came of it.  Adding virtual hardware support for CAN, LIN, FlexRay, and any other XNET hardware would be a great development tool, enabling the expected handshaking of software to be tested against simulated communications.
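
 

To pin down the loopback semantics being asked for, here is a minimal sketch in Python (purely illustrative; the class and names are made up, not an NI API):

    import queue

    class VirtualCanPort:
        """One end of a loopback pair: frames written here are read on the peer."""
        def __init__(self):
            self._rx = queue.Queue()
            self.peer = None

        def write(self, frame):
            self.peer._rx.put(frame)   # deliver to the other virtual port

        def read(self, timeout=1.0):
            return self._rx.get(timeout=timeout)

    # Emulate the old CAN256/CAN257 pair.
    can256, can257 = VirtualCanPort(), VirtualCanPort()
    can256.peer, can257.peer = can257, can256

    can256.write({"id": 0x123, "data": b"\x01\x02"})
    print(can257.read())   # the frame written on CAN256 arrives on CAN257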

Although new folders can be created in application and installer build specifications, they are not actually created unless a file is put in them. An empty folder is desirable for data output. It would be better for the folder to be created before the application runs, so that the person doing the installation can set security access rights on it in cases where administrator privileges would otherwise be needed to create new files there. It leaves quite a bad impression on those who waste time finding out by trial and error that the folders defined in the build specifications are not created. The forum also documents complex schemes to work around this limitation, written by people who surely would rather have been doing something productive instead.

There are currently two NI toolkits which add a software layer on top of the automotive CAN bus.  

 

The Automotive Diagnostic Command Set (ADCS) adds a couple of protocol standards like KWP2000 (ISO 14230), Diagnostics on CAN (ISO 15765, OBD-II), and Diagnostics over IP (ISO 13400).  This is a pretty handy API when all you need is one of these protocols.  But often when you need to communicate with an ECU, you also want a normal Frame or Signal/Channel API, where you get raw frame data or engineering units on signals.

 

The ECU Measurement and Calibration (ECU M&C) toolkit adds XCP and CCP capabilities on top of CAN, allowing you to read and write parameters based on predefined A2L files.  This again is super handy, if the A2L file you provide can be parsed and if all you need to do is talk XCP or CCP to some hardware.  But often you also need to talk over a Frame or Signal/Channel API.  And what if you need to talk over some other protocol as well?

 

Every time I've had to use one of these toolkits, it has failed me in real-world use, because you generally don't want just one API, you want several.  And to get that kind of capability you often end up closing one API session type and reopening another, then closing that and reopening the first.  This gets even more difficult when different hardware types, either NI-CAN or NI-XNET, are used.  The application ends up being a tightly coupled mess.

 

This idea is to rewrite these two toolkits as pure G implementations, as far as possible.  I think this would be a good idea because you could then debug issues with file parsing, which at the moment is a DLL call you just have to hope works, and it would allow these toolkits to be used independently of the hardware: just give them some raw frames and read data back.  NI-XNET already has some of this type of functionality in the form of its Frame/Signal Conversion mode, which can convert from frames to signals without needing hardware.  If these toolkits supported a similar raw mode, it would allow for more flexible and robust applications, and potentially let these toolkits be used on other hardware types, or on simulated hardware.
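
 

To make the frame-to-signal conversion concrete, here is a minimal sketch in Python (for illustration only; the function and the example scaling are made up, not taken from any NI API):

    # Convert a raw CAN frame payload into an engineering-unit signal value,
    # the job NI-XNET's Frame/Signal Conversion mode does without hardware.
    def decode_signal(payload, start_bit, length, scale, offset):
        raw = int.from_bytes(payload, "little")            # payload as one integer
        field = (raw >> start_bit) & ((1 << length) - 1)   # pull out the bit field
        return field * scale + offset                      # linear scaling from the database

    # Hypothetical 16-bit signal: 0.1 units per count, -40.0 offset.
    frame = bytes([0x10, 0x27, 0, 0, 0, 0, 0, 0])   # raw counts = 0x2710 = 10000
    print(decode_signal(frame, 0, 16, 0.1, -40.0))  # -> 960.0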

I have an application with multiple Waveform Charts, each with a visible Plot Legend, Scale Legend, and Graph Palette.  I'd like to give each Chart a distinct color (a sea of gray is boring).  There is a Frame Color property that does the Frame, and I figured out how to color "underneath" the X Scale numbers, but I can't programmatically color the Plot Legend, Scale Legend, or Graph Palette.  To do so, I have to break out the Tools Palette and wield the Paint Brush (boring and tedious, especially when compared to wiring a single Color Box to a set of Properties).  Can we get the "missing Color properties" made available?

 

The figure shows the original grey Chart, the result after wiring Yellow into the Frame Color property (note that there's still a lot of grey, including underneath the X Scale values -- this is fixable), and third, what I'd like to be able to accomplish with additional Property Nodes made available.  It is frustrating to be able to color the Frame, but not the "Extras".

 

[Image: Plot Coloring Idea.png]

Bob Schor

 

         in "debug mode" (retain wire values), be able to choose the "radix"

 

                something like this:

 

                toto.png

 

           yes, I know the custom probes, but I'm not talking about that. .
It would be nice and useful to have this in the native behavior of LabVIEW
                                        sorry for my poor english, i do my best

The In Range and Coerce function is frequently used to determine whether a value is within the range set by upper and lower limit values.

 

But when the value is out of range, you often also want to know whether it is out of range too high, or out of range too low.  It is easy enough to add a comparison function alongside and compare the original value to the upper limit, but that's another primitive and two more wire branches.  Since comparison is one of the primary purposes of In Range and Coerce, why shouldn't it be built in?

 

The use case that made me think of this, and that has come up in my code a few times over the years, is any kind of thermostat-style control, particularly one with hysteresis built in.  If a temperature is within range (True), you would often do nothing.  If it is lower than the lower limit, you'd want to turn on a heater.  If it is higher than the upper limit, you'd turn off the heater.  (Or the opposite if you are dealing with a chiller.)

 

Request: Add an additional output to In Range and Coerce that tells whether the out-of-range condition is above the upper limit or below the lower limit.

 

(That does leave the question of what the value of this extra output should be when the input value is within range.  Perhaps the output should not be a Boolean but a numeric value of 1, 0, and -1, or perhaps an enum.)
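
 

A minimal sketch of the proposed behavior, in Python purely to illustrate the semantics (using the 1/0/-1 encoding, one of the options mentioned above):

    # Extended In Range and Coerce: besides the coerced value and the
    # in-range flag, report which side of the range was violated.
    def in_range_and_coerce(x, lower, upper):
        if x < lower:
            return lower, False, -1   # below range
        if x > upper:
            return upper, False, +1   # above range
        return x, True, 0             # in range

    # Thermostat-with-hysteresis use case from above (heater control).
    heater_on = False
    coerced, ok, side = in_range_and_coerce(17.5, lower=18.0, upper=22.0)
    if side == -1:
        heater_on = True    # too cold: turn the heater on
    elif side == +1:
        heater_on = False   # too hot: turn the heater off
    # side == 0: within the band, leave the heater as it is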

 

Below, here is the "old" File Dialog:

 

[Image: toto3.png]

 

But NI advises this:

 

[Image: toto1.png]

 

The problem is that this "new" File Dialog isn't aligned like the "old" one:

 

[Image: toto2.png]

 

Please... align this "new" File Dialog Express VI.

 

(Thank you!)

It would be quite helpful if LabVIEW would automatically grow a function (Build Array, Concatenate Strings, Build Cluster, Interleave Arrays, Build Matrix, Compound Arithmetic) when you bring a new wire to it, allowing the programmer to make a new connection when the function currently does not have room for it.

 

Even better would be if it grew/inserted the input where the wire approaches the function, below an existing connection, similar to 'Add Input'.

[Image: new.PNG]

 

In the Find and Replace dialog, the "Type in the word(s) to search for" field should offer recently searched words, something like this:

 

[Image: toto.png]

I hate right-clicking.  Currently (for me, LV2014) double-clicking on a block diagram class constant does nothing.  Many times class constants are connected to launching Actors or other class VIs in the class library.  But right-clicking to pop up a menu and clicking "Show Class Library" wastes a lot of time (around 2 seconds).  Double-clicking a class constant should open the class library directly.

 

To quote an example from Steve Jobs about wasted time:

“If it could save a person’s life, would you find a way to shave ten seconds off the boot time?” he asked. Kenyon allowed that he probably could. Jobs went to a whiteboard and showed that if there were five million people using the Mac, and it took ten seconds extra to turn it on every day, that added up to three hundred million or so hours per year that people would save, which was the equivalent of at least one hundred lifetimes saved per year.

 

If I have this:

 

"before"

I want to be able to select the objects along the wire outside the frame, then right-click (inside the frame or out of it) and move them in, without a copy-paste-rewire, to end up with this:

 

"after"

 

The key point: I'm not looking for the wires to change, but I want the visual structure to change to meet my needs.  This is about making the VI better, more quickly, with fewer errors and less manual input.

 

Note: this is the simplest, most reproducible option.  I am dealing with things that are substantially more complex than this. 

 

I have found these "single-pane flat sequences" to be amazing for organizing a VI, enhancing readability and utility without hurting performance.  I use them to stage code for Convert to SubVI, and moving things into and out of the box is a decent (and manually intensive) part of that.

 

Ergonomics is also important.  

GitHub is amazing.

 

Figured that deserved to be on its own line, just to let the importance of this idea sink in. Version control is no longer something used only by professionally trained software engineers. It's here for the masses: for graphic design artists, animators, document writers. GitHub has a really cool web front-end to version control and brought sanity to DVCS (maybe Bitbucket deserves some credit as well). It has championed a simplistic workflow via pull requests: GitHub Flow. It supports Issues for reporting bugs or other problems, and has an assortment of other collaboration features that make GitHub amazing.

 

GitHub (I don't mean git) integration with LabVIEW would demonstrate the power of Graphical Programming to the world.

 

Again, let that one sink in. There are 3 tenets of GitHub integration that LabVIEW must achieve:

1) Display the image of the block diagram, and possibly the front panel, on GitHub when viewing a repository. Isn't it embarrassing that JKI State Machine Objects has to resort to putting images of VIs in the repo because GitHub can't render a VI's block diagram source?

2) NI must give away graphical diff and convince GitHub to run it. GitHub shows diffs on the web; SourceTree shows diffs in its tool. NI's decision not to give Diff and Merge away with the basic version of LabVIEW dates to circa 1990. To think of diff/merge as advanced software engineering tools is a thought stuck in the past. LabVIEW needs its graphical diff shown within GitHub on a source file's history.

As a side note, vote for a version-agnostic Diff/Merge idea here.

3) VI merge needs outstanding auto-merge capability that is built into pull-request merges. When creating a pull request on GitHub, you'll (possibly) see this statement: "Able to merge. These branches can be automatically merged." To work well in a multi-contributor, possibly open-source DVCS environment, the language needs superior auto-merge capability. Pretty much all other languages get it for free because they are text.

 

Keep in mind that NI will need a partnership with GitHub to accomplish this; however, this type of thing is not unprecedented for a file type (maybe unprecedented for a proprietary language...). Just take a look at GitHub's ability to Render and Diff Images and GitHub's ability to Display Jupyter Notebooks (formerly IPython Notebooks).

OpenOffice's last version was released five years ago. One of the best suites continuing that work is LibreOffice, developed by The Document Foundation. NI has an add-in for OpenOffice, but not yet for LibreOffice.

 

Please update the Integrating TDMS in Third-Party Products page, adding support for LibreOffice, Apache OpenOffice, Calligra Suite, NeoOffice, etc.

There should be an option to fully enable optimization when building an application, so as to automatically remove performance impacts caused by diagram elements that shouldn't cause any.

 

As summarily declined by NI, this idea

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Improve-Diagram-Disable-performance/idc-p/3253939#M34372

 

shows that unless you manually go over ALL your VIs disabling debugging, Diagram Disable structures (which are supposed to prevent the disabled code from executing at all!) will cause a performance impact.

 

It is preposterous to expect users to manually disable and re-enable debugging on every single VI when building an application.

 

Please add an option to enable full optimization.

 

[Image: Untitled.png]

So at the moment, LabVIEW stores the icon editor glyphs in the following location:

 

<username>\Documents\LabVIEW Data\Glyphs

 

This is fine, until you log in as another user.  Then you realize all of your icons are gone, because LabVIEW installed the icons only for the user who was logged in when LabVIEW was installed.

 

A similar problem exists if I am distributing icon editor glyphs of my own.  I have a reuse package with a bunch of useful icons, and I have VIPM install them to that user's folder.  The problem is that if I log in to a machine remotely and perform the update as my user, those icons are installed only for that user name, not for all users.

 

This idea is to have a shared folder for icon editor glyphs that is used by all users on the PC.  Some place like

 

<LabVIEW Install>\Resources\Icon Editor Glyphs

 

would work, or...

 

<ProgramData>\National Instruments\Icon Editor Glyphs

 

Then icons could be installed in one folder and shared by all users.

FPGA code allows I/O references to be used so that code can be reused/abstracted, like this:

[Image: FPGA-SubVI.PNG]

This could/should be extended to cRIO C Series modules so that code can be reused across multiple modules. This would require the 'cRIO Device' property node to return an array of I/O refnums that could then be passed to subVIs or other modular code.

 

For example, if I have code for an analogue input module (say, an NI 9205) which filters and processes the data, and I wish to reuse that code in multiple experiments, I have to manually change the module name in each I/O constant or node (Mod1/AI0 to Mod3/AI0) whenever the module moves or I need more than one of the same type.

 

Conversely, if the cRIO Device property could provide the I/O references, the code could accept a cRIO Device constant and the reusable code could extract the I/O references from it. The code would then adapt from Mod1/AI0 to Mod3/AI0 as the cRIO Device constant was changed from Mod1 to Mod3.
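
 

To sketch the abstraction being requested, here is a small Python illustration (the class and names are hypothetical, not an actual NI API; on a real target the list would hold the I/O refnums returned by the 'cRIO Device' property node):

    # Reusable code takes channel references from a module object instead of
    # hard-coding "Mod1/AI0" etc., so moving the module needs no code edits.
    class CrioModule:
        def __init__(self, slot_name, n_channels):
            # Stand-ins for the I/O refnums the property node would return.
            self.io_refs = [f"{slot_name}/AI{i}" for i in range(n_channels)]

    def filter_and_process(io_refs):
        for ref in io_refs:
            print("acquiring from", ref)   # reusable code sees only references

    filter_and_process(CrioModule("Mod1", 4).io_refs)  # original slot
    filter_and_process(CrioModule("Mod3", 4).io_refs)  # module moved: no edits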

 

It would be added to these properties:

[Image: FPGA-cRIO-Prop.PNG]

 

Thanks,

Richard.

 

When searching for text in LV 2014 (from 2015 DS1), it appears that the Window Title text isn't picked up by the search. 

 

To verify this, I created a new VI and modified only the Window Title (File -> VI Properties -> Window Appearance -> Window Title, click OK, save the VI).  Then I pressed Ctrl+F, selected Text, and typed in a word from the Window Title.  Not found.

 

We should be able to search for any text saved anywhere in the VI (including VI properties like the Window Title).  Please expand the Find capability to search the Window Title and any other relevant fields.

I really could do with the Ethernet/IP toolkit being able to act as a scanner rather than just an adapter. As an adapter, you can write code that allows a PLC to scan and request data from you, which is fine if you only want a PLC to request data, but I have the requirement for my LabVIEW app to act as the scanner and request data from several different adapters.

[Image: Capture.PNG]

User Colors 1-16 are now pre-defined by the default LabVIEW.ini file, which leaves us developers with only two.  Why only 18 total?  This needs some adjustment, IMHO!  It should be easy to implement, but it can't be done without hacking the INI file.