LabVIEW Idea Exchange


Format Into Text is very useful but can become hard to edit when it has a lot of inputs. I propose that, instead of one huge format string, the programmer be allowed to put the required format next to the corresponding input. Also, the user should be allowed to enter constant strings, e.g. \n, \t, or "Comment", and have the corresponding input field automatically grayed out.

 

FORMAT INTO TEXT IDEA
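To make the proposal concrete, here is a rough textual analogue (a Python sketch; the variable names are hypothetical): today everything funnels through one monolithic format string, while the idea amounts to each input carrying its own specifier, with pure-constant pieces needing no input at all.

    name, count, voltage = "DUT-7", 3, 1.234   # hypothetical inputs

    # Today: one monolithic format string, hard to match up with its inputs
    line = "%s\t%d\t%.3f\n" % (name, count, voltage)

    # Proposed (sketch): per-input specifiers; constant pieces need no input
    pieces = [("%s", name), ("\t", None), ("%d", count),
              ("\t", None), ("%.3f", voltage), ("\n", None)]
    line = "".join(fmt % val if val is not None else fmt
                   for fmt, val in pieces)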

The current behavior of LabVIEW is as follows:

 

[Image: UserEventsWithClusters.png]

 

The pick list of event data strips the cluster, allowing access only to the elements inside it. In order to manipulate the native datatype, you must reconstitute the cluster inside the event structure, which is a performance hit on large data structures firing at quick event rates, not to mention a messy FP.

 

I suggest that User Events preserve the clustered event datatype, allowing the programmer access to the native datatype on the consumer end. The option to select only the element "Clustered Typedef.Array of I32s" of course still remains, but is not forced upon you. (Note that the block diagram below is NOT a direct LabVIEW screenshot... it is brought to you by trickery and a paint program. I clustered the cluster on the event datatype [not shown] so that the event structure could eat the outer layer.)
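For readers more at home in text languages, a loose Python analogue of the two behaviors (all names are made up):

    from dataclasses import dataclass

    @dataclass
    class ClusteredTypedef:              # the clustered event datatype
        array_of_i32s: list

    def consume(payload):                # stand-in for the consumer code
        pass

    def handler_today(array_of_i32s):    # the event strips the cluster...
        payload = ClusteredTypedef(array_of_i32s)  # ...so you rebuild it (a copy)
        consume(payload)

    def handler_proposed(payload: ClusteredTypedef):  # native datatype preserved
        consume(payload)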

 

[Image: NewUserEventsWithClusters.png]

 

My motivation with this suggestion is directly linked with my other post on the Big Typedef Issue.

LabVIEW loops are quite flexible and allow me to do just about anything I want, but I often find myself writing the same code over and over again trying to iterate across different data types and data structures. There are several situations where smarter looping constructs could greatly simplify my code. One simple example is stepping by a delta between a minimum and maximum value. Currently, I have to calculate the number of iterations required ahead of time (for a for loop) or do a comparison with the maximum (with a while loop) and use shift registers to maintain the intermediate value. I'd like to be able to wire the max, min, and delta to my loop and have LabVIEW do the required calculations for me. The iteration terminal could also adapt to the proper data type given the input parameters. Perhaps the iteration terminal would have two outputs, one with the current iteration count and the other with the proper iteration value.

 

[Image: stepped-loop.PNG]
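A minimal sketch of what the proposed loop would compute for you (Python; the two yielded values correspond to the two iteration-terminal outputs described above):

    def stepped_loop(minimum, maximum, delta):
        n = int((maximum - minimum) / delta) + 1   # the count LabVIEW would derive
        for i in range(n):
            yield i, minimum + i * delta           # (iteration count, iteration value)

    for i, x in stepped_loop(0.0, 1.0, 0.25):
        print(i, x)   # 0 0.0 / 1 0.25 / ... / 4 1.0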

 

Another useful feature would be allowing me to wire a queue reference to the loop count terminal of the for loop and having it automatically pop each value from the queue and feed it to me through the iteration terminal. It would do this until there are no values left in the queue or until the code stops the loop. One could write an algorithm that pushes new points into the queue from within the loop or push the current value back onto the queue for later processing.
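A sketch of the queue-driven variant (Python's deque standing in for a LabVIEW queue refnum):

    from collections import deque

    q = deque([1.0, 2.5, 4.0])      # seed the queue with initial points
    while q:                        # runs until no values are left
        x = q.popleft()             # value arrives through the iteration terminal
        print(x)
        if x < 2.0:
            q.append(x * 2)         # the algorithm may push new points from inside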

 

I'm sure there are other useful iteration strategies that are fairly common; please share them with the community.

LabVIEW is awesome. But sometimes things don't go quite right, and when something isn't quite right somewhere (maybe a bad bit of binary) it would be great to be able to recompile your code. Mass Compile is great for up-compiling from older major versions, but it ignores VIs that are already compiled in the current version. What I'd like to see is a "Force Recompile" flag in Mass Compile to ensure all identified VIs are most definitely recompiled; this is especially useful when you call Mass Compile from the shortcut menu in the Project view.

 

Yes, it could take some time, but less time than scripting it and you can use the time to make yourself a nice cup of hot British tea.

 

[Image: forcerecompile.png]

Hi.

 

I simply propose adding a Warning case to the automatically generated Error/No Error case structure:

 

[Image: Warning.png]

 

The inclusion of the Warning case could be optional, and maybe set up by right-clicking the case structure?
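For reference, the routing such a case would perform, per the LabVIEW error-cluster convention (status TRUE means error; status FALSE with a non-zero code means warning), shown here as a Python sketch:

    def route(status, code, source):
        """Pick the case the structure would execute."""
        if status:
            return "Error"
        if code != 0:
            return "Warning"          # the proposed third case
        return "No Error"

    route(False, 3, "DAQmx Read")     # -> 'Warning'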

 

Cheers,

Steen

TensorFlow is an open source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄

 

(Disclaimer: I know very little in that particular field of research)

 

[Image: NativeLoopWait.png]
If unwired it would default to 0, and at least let thread switching occur. You could maybe right-click the loop and remove the wait node to make it run as fast as possible (like it is now), but it should be there by default... Maybe also right-click it to choose between "Wait (ms)" and "Wait Until Next ms Multiple"?
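(A textual analogue of the proposed default, in Python: even a zero wait yields the CPU so other threads can run.)

    import time

    running = True

    def do_work():
        global running
        running = False       # stand-in loop body: stop after one pass

    while running:
        do_work()
        time.sleep(0)         # a 0 ms wait still yields the thread to the scheduler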
  
(PS: I know that you can do this using timed loops, but I'd like to see it in all of them - I've seen too many "programmers" complain that their CPU is at 100% because their loops are going nuts).

I was recently trying to develop a function to navigate through a deeply nested directory structure and came across system path-length limitations, which could potentially be addressed by a "change directory" function.

 

I realize I could use System Exec with cmd /c cd <path>, but I found this extremely slow.
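For comparison, the workflow a native change-directory primitive would enable (a Python analogue; whether relative paths actually evade the OS path-length limit is platform-dependent, and the paths here are made up):

    import os

    os.chdir(r"C:\very\deep\nested\tree")    # hop partway down the tree once
    with open(os.path.join("subdir", "leaf.txt")) as f:   # short relative paths from here
        data = f.read()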

As much as I love Quick Drop, I don't love this:

 

[Image: quick_drop_delay.png]

 

This delay occurs because we have to load all the palette information in the background to be able to drop objects by name.  You can choose Tools > Options > Controls/Functions Palettes > Load palettes during launch to front-load this delay to LabVIEW launch, but (1) many of you don't like the longer launch time any better, and (2) many users still don't know about this setting.

 

I'm sure there's a way to solve this problem and make Quick Drop instantly usable, but I don't know enough about the underlying palette code to get it done.  Solving this problem, by the way, could have positive side effects on LabVIEW launch time, palette search performance, and maybe some other areas as well.

Idea:

Double-clicking a wire while holding <Ctrl> or <Alt> would invoke the wire-cleanup tool for that particular wire, which is much faster than going through the context menu.

[Image: cleanup]

Hello LabVIEW Experts.

 

I am using LV 2017 (I'm not sure how this works in recent versions of LabVIEW; sorry if it's already fixed there).

I started this thread to share a recent learning/bug.

 

It all started like this.

 

One of my engineers was re-architecting an existing LabVIEW application with 40+ states/cases defined, two of which were identical, and he was not able to figure out the duplicate.

We spent some time resolving it by sorting the items and finding the duplicate.

 

Later I worked on the same issue, to figure out why LabVIEW can't highlight the problem or restrict adding duplicates.

 

LabVIEW does restrict the user from adding identical items when using Edit Items in the Enum Properties dialog.

 

[Image: PalanivelThiruvenkadam_0-1664173360124.png]

But it allows identical items to be added when the user adds them via the shortcut menu (Add Item).

 

During this experiment I also found that LabVIEW doesn't allow the user to create a type definition (Make Type Def) if the items are identical.

The above behavior is correct, but there should be a warning explaining why Make Type Def is unavailable.

 

I have a few points on this:

 

1. There should be an error/warning highlight to help the user debug where the identical cases are located (helpful when many states are mapped).

2. The restriction on adding identical items should apply at all levels (all possible ways of adding items).

3. There should be a warning explaining why the user cannot Make Type Def when there are identical items.
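For illustration, the duplicate check the IDE could run on every add path (a minimal Python sketch):

    def find_duplicates(items):
        seen, dups = set(), set()
        for name in items:
            (dups if name in seen else seen).add(name)
        return sorted(dups)

    find_duplicates(["Init", "Read", "Init", "Stop"])   # -> ['Init']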

It would be nice if the LabVIEW primitives for TCP allowed for the creation of a socket without actually connecting it to an endpoint.  My thoughts are that there would be two new commands added to the TCP palette:

 

TCP Create Connection (Advanced)

TCP Open Connection (Advanced)

 

TCP Create Connection (Advanced) would create the socket but not perform the connect() method on that socket.  I would imagine that it would otherwise look and feel quite like the current TCP Open Connection, except that the user would need to manually run TCP Open Connection (Advanced) afterwards to perform the connect() method on that socket.

 

I have a need to access the raw socket object before it is connected, using the TCP Get Raw Net Object.vi in the vi.lib\Utility\tcp.llb library.  This VI works to get the handle that can be used by winsock and other Windows DLLs; however, it only allows setting properties for a socket that is already connected.

 

Specifically I have a need to enable Windows FastPath for TCP to optimize the rate at which I can stream data between two applications.  Per the specification, this option must be enabled BEFORE the socket is connected.  Right now I can set this for the listener on the server side (TCP Create Listener creates a socket but does not connect), but I cannot set it for the TCP connection on the client side as the first method it calls is TCP Open Connection which returns a socket that is already connected.
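In BSD-socket terms the request is simply to split socket creation from connect(). A Python sketch (the address is made up, and TCP_NODELAY stands in for FastPath, which on Windows is actually enabled through WSAIoctl rather than setsockopt):

    import socket

    # "TCP Create Connection (Advanced)": the socket exists but is not connected
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # The window this idea asks for: set options that must precede connect()
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # "TCP Open Connection (Advanced)": now perform the connect()
    s.connect(("192.168.1.10", 6341))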

Hello everybody,

 

(As suggested, I will separate my idea Expand the functionality of Event structures into four separate ideas to allow giving kudos separately.)

 

Make it possible to dynamically register references like TCP/IP, VISA, queues, etc. for Event structures. Possible events are "new bytes at TCP/IP", "new bytes at serial port", or "connection closed". In the case of queues it could be the same function as "Dequeue Element" or "Queue reference closed". There are certainly a lot more similar references which could be registered for catching events; up to now we have to poll them regularly or use other event functions (as in the case of VISA -> new bytes at serial port). The benefits: no more regular polling for changes, better integration of several functions (like using queues to communicate with GUIs instead of dynamic user events), and in my opinion just better code.
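In text-language terms this is the poll-to-event pattern; a Python sketch using the selectors module (endpoint made up):

    import selectors, socket

    conn = socket.create_connection(("192.168.1.10", 6341))
    conn.setblocking(False)

    sel = selectors.DefaultSelector()
    sel.register(conn, selectors.EVENT_READ)   # like registering a "new bytes" event

    while True:
        for key, _ in sel.select():            # blocks until an event fires; no polling
            data = key.fileobj.recv(4096)
            if not data:                       # the "connection closed" event
                sel.unregister(key.fileobj)
                raise SystemExit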

 

Regards

Marc

The Timing palette is looking bad with all these gaps.  A simple fix would be to fill the holes with useful functions. I'm proposing three and attaching two from my re-use code. (I may re-create the third later.)

[Image: timing2.PNG]

 

Time to XL.vi (attached), and its inverse, XL to Time.vi

12:00:00.0 AM Jan 0, 1900 is a pretty common epoch (base date) for external programs, converting from the LabVIEW epoch shows up several times a year on the forums, and Time to Excel has a few solved threads under its belt.  Moreover, for analysis against external data from other environments you are often using Access, Excel, Lotus... all of which share the same epoch (and leap-year bug) in their date/time formats.  These VIs have been pretty useful to me, although the names may change to avoid (tm) infringements.
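For reference, the arithmetic involved (a minimal Python sketch; it assumes both sides interpret the timestamp in the same timezone): the LabVIEW epoch 1904-01-01 falls on Excel serial day 1462, counting Excel's phantom 1900-02-29.

    SECONDS_PER_DAY = 86400.0
    LV_EPOCH_AS_EXCEL = 1462.0    # Excel day number of 1904-01-01

    def time_to_xl(lv_seconds):
        """LabVIEW timestamp (s since 1904) -> Excel serial date."""
        return lv_seconds / SECONDS_PER_DAY + LV_EPOCH_AS_EXCEL

    def xl_to_time(excel_serial):
        return (excel_serial - LV_EPOCH_AS_EXCEL) * SECONDS_PER_DAY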

 

Time to Time of Day.vi (attached) has also been in my arsenal; it proves valuable and comes up on a few threads per year on the forum.

 

The gaps in the palette make these a perfect fit.

[Image: timing.PNG]


(Similar title here.  Different concept.)

 

Many OO languages provide an Interface construct that allows objects in unrelated class hierarchies to expose similar functionality that can be called via common methods.  For example, a Car class and a Race class might both implement a Start method.  Unless their common ancestor has a Start method (which may not be practical) there's no way for me to take advantage of dynamic dispatching.  Interfaces allow an object to temporarily disguise itself as a different object so I can use a single vi on multiple objects.

 

Using the Car/Race example, I would like to be able to create an IStartable interface construct that defines a Start method.  Car and Race, in implementing IStartable, create their own Start methods that will override IStartable.Start.  I could then transform the Car and Race objects into an IStartable type and wire that into IStartable.Start.  IStartable.Start would then automatically transform each object back into its original type and call its member Start vi.
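For readers coming from text languages, this is the familiar interface pattern (a Python sketch using the names from the example above):

    from abc import ABC, abstractmethod

    class IStartable(ABC):            # the interface construct
        @abstractmethod
        def start(self): ...

    class Car(IStartable):            # unrelated hierarchies...
        def start(self): print("engine on")

    class Race(IStartable):           # ...each supply their own Start
        def start(self): print("green flag")

    for obj in (Car(), Race()):       # one routine serves both types;
        obj.start()                   # dispatch lands on the member Start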

 

A couple points:

  • Classes need to be able to implement multiple interfaces.
  • There needs to be a way for class designers to indicate which interfaces a class implements.  Setting that up in the class properties dialog box seems like the best solution.
  • There also needs to be a way for class designers to designate which member VIs to use when overriding an Interface method.  I can't think of a good way to do this in the VI itself, so I'd also manage these settings in the class properties dialog.
  • The illustration uses a Type Cast prim even though the actual type is not being changed.  It would need a new prim.
  • I've toyed with the idea of hiding interfaces on the block diagram by having an 'Invoke Interface Method' node that would automatically do the transformation and call back into the class's overriding method, but I think that restricts developers too much.

What bugs me about the VI Profiling Tool is that it is not intuitive.  The information it provides is really useful; however, it's so hidden and difficult to interpret that few people actually know where it is, let alone use it.  Let's say you are simply acquiring data using DAQmx and writing that data to a file, as below:

 

[Image: Block Diagram of Inefficient DAQmx and File I/O]

 

You might want to find out how to make this more efficient, but the only way you know of is to insert Tick Count VIs and wrap your wires through sequence structures.  It's annoying, and there are other ideas from JackDunaway (here) and JohnMc19 (here) which aim to simplify the use of those VIs.
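(For reference, the manual pattern being replaced is essentially this, minus the sequence-structure wiring; a Python analogue with a made-up section under test:)

    import time

    def write_measurement_file(data):    # hypothetical section under test
        time.sleep(0.01)

    t0 = time.perf_counter()             # "Tick Count" before
    write_measurement_file([1, 2, 3])
    dt_ms = (time.perf_counter() - t0) * 1e3   # "Tick Count" after, then subtract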

 

But why re-code your application when the VI Profiler can do it for you?  In addition, the VI Profiler has more timing information (longest, shortest, average, total, etc.) as well as the number of runs and memory-allocation data.

 

Good news: VI Profiler makes getting the data easy. 

Bad news: VI Profiler makes using the data difficult.

 

Why Using the Profiler is Difficult

 

First, you need to know it exists among a number of bland and condensed gray menus (Tools>>Profile>>Performance and Memory).

 

Next, you have to coordinate starting the profiler with starting your VI (start the Profiler, then start your VI, then stop your VI, then stop the Profiler).

 

Finally, you have to dig through a TON of VIs to find the ones that are relevant (I assume this is because, for polymorphic VIs, all of the instances are loaded into memory, even the dozens that aren't currently being used.)

 

When you find the VI you wish to examine, it will look something like this:

 

[Image: VI Profiler Currently]

 

Have fun sorting through that!  When I finally find a VI that's hogging memory or speed, I'd expect to click on it to navigate to that VI.  NOPE!  All the VI Profiler does is make the line bold.  Not particularly easy to use...

 

I can't say if it's possible to get rid of VIs that aren't being used, or to make the menu option more visible to the user, but I do have an idea or two for how to make this information easier to understand in LabVIEW.

 

So here's what I suggest:

Adding a couple of checkboxes to the top of the VI Profiler would tie what it shows to your LabVIEW VI.  Notice the extra checkboxes at the top of this image.

 

[Image: VI Profiler With Color Shading]

 

One checkbox lets you color your LabVIEW code according to the column you wish to highlight.  The other checkbox inserts a text comment containing the highlighted data straight into your LabVIEW code (right next to the subVIs):

 

[Image: Shading SubVIs According to Relative Execution Speeds]

From the above picture, you can clearly see the Write to Spreadsheet File VI is the slowest to execute, followed by the Start DAQmx Task VI and then the Stop DAQmx Task VI.  So if a developer wanted to make his loop run faster (and therefore increase the rate at which data is read from PC RAM), he would know the redder VIs are the ones to focus on first.

 

Also, if a user wants to highlight the memory usage, he could select a memory column from the VI Profiler.

 

[Image: Highlight Average Memory Usage Per VI]

 

Then the LabVIEW block diagram would look like this:

 

[Image: Block Diagram Shaded in Blue for Average Memory Usage]

In this case, if a developer wants to find out how he can optimize his code for memory usage, he knows where to start.

 

Side-note: I think selecting multiple colors at a time (one for each column of data you wish to highlight) would be cool, but that would start to get messy on the block diagram.

 

Other data, like the number of runs, could highlight which sections of code are running more often than others.

 

If we integrate the VI profiler more effectively into LabVIEW, there are a lot of benefits:

 

1. Re-coding to find timing specs won't be necessary for subVIs.

2. Monitoring memory allocations becomes much easier (some users don't know it's possible with LabVIEW).

3. When there's a problem, it's easier to understand which subVIs are slowing down code or hogging memory.

4. Developers can further code development WHILE being wary of inefficiencies.

5. A more integrated development environment "feel" for new customers and experienced G-coders.

 

Please let me know what you think!

Hi,

 

I suggest an option added to the Open VI Reference primitive to open that VI reference without any refees (referenced VIs). I suggest option bit 10, i.e. option 0x200:
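(A sketch of how the bits would combine on the options input, using the existing 0x20 option mentioned below; the 0x200 value is the one proposed here:)

    NO_DIALOGS = 0x20    # existing: silence the "loading"/"searching" dialogs
    NO_REFEES  = 0x200   # proposed: don't load referenced VIs at all

    options = NO_DIALOGS | NO_REFEES   # 0x220, wired to Open VI Reference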

 

[Image: OpenWithNoRefees.png]

 

The demand for this arises when you want to access methods and properties of VIs that may be broken, while on the other hand you have no need to run those VIs - for instance in one of my current tools (BatchEditor), where I'm tasked with investigating hundreds of VIs simultaneously, some of which could be broken or missing dependencies. Other situations would be tools for traversing project trees, for instance. Opening a large number of healthy VI references takes a while, and when something is broken in a VI, opening even a single reference can take 30 seconds to minutes.

 

Currently you can suppress the "loading" and "searching" dialogs by setting option 0x20 on the Open VI Reference primitive, but that only loads the refnum silently, as far as that will get you. Opening the refnum takes the same amount of time as if you could see those dialogs, and you are rewarded with an explorer window if a dependency search fails. I just want a way to open a VI refnum without even starting to look for dependencies, thus very quickly and guaranteed silently.

 

The relevant people will know that this request isn't that hard to implement, as it is kind of already possible using some ninja tricks. I'd like such an option to be public.

 

Cheers,

Steen

Currently with LabVIEW scripting you can very easily create an event structure and add new event cases to it.  Unfortunately, for some reason, you cannot programmatically configure an event case after it's created.  I think there should be some way, via property nodes, invoke nodes, etc., to accomplish this.

 

[Image: idea.png]

 

PS: I would love to be wrong about this.  If you know how to do it, feel free to argue it in the comments and we'll get this closed as completed 😄

The current implementation of flattening to and unflattening from XML is quite noisy and includes information unnecessary for users. Rewriting it, or adding Pretty Print functionality to LabVIEW, would greatly simplify loading settings, exchanging data with other languages, dynamic configurations, visualization of complex systems, network communication, etc.
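As a stopgap illustration of the missing Pretty Print step (a Python sketch; the snippet only mimics the shape of LabVIEW's flattened XML, not its exact schema):

    import xml.dom.minidom

    flat = "<Cluster><Name>cfg</Name><NumElts>1</NumElts></Cluster>"
    print(xml.dom.minidom.parseString(flat).toprettyxml(indent="  "))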

 

[Image: PrimaryKey_0-1573820513072.png]

For a community solution and examples of these features please go to -> https://forums.ni.com/t5/LabVIEW-APIs-Discussions/Tree-Map/td-p/3972244

 

This could also include Pretty JSON, since XML and JSON are interchangeable -> http://www.utilities-online.info/xmltojson/#.Xc6XjVdKiUk

 

Additionally, the XML parsing should be implemented without requiring Windows .NET platform components, so it can be done on a real-time system; the current XML parsing functions cannot be called on RT.

 

Under Web Services there is a conversion palette.  One of the functions there converts LV image data to a PNG stream.  This is super useful when sending and receiving large amounts of PNG data over something like WebSockets.  I've used this in places where a web page controls a VI, and the image of some front panel controls is sent to the web page.

 

However, only recently did I realize this function turns the image into a PNG stream that is completely uncompressed.  This idea is to expose the Compression input of the VI.  Here is just one example where I get the image of a control and then represent it as a PNG.  With no compression this file is over 64 KB; with just the minimum compression that drops to 3 KB, and down to 2 KB at the highest compression.

 

[Image: Example_VI.png]
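For scale, the same trade-off expressed with Pillow, whose PNG writer exposes zlib's 0-9 compression level the way this idea asks the VI to (file names are hypothetical; the sizes are the ones measured above):

    from PIL import Image

    img = Image.open("panel_control.png")
    img.save("raw.png", compress_level=0)    # no compression: the >64 KB case
    img.save("min.png", compress_level=1)    # minimum compression: ~3 KB
    img.save("max.png", compress_level=9)    # maximum compression: ~2 KB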