LabVIEW Idea Exchange

As the title says: users should be able to select several controls or indicators and convert them to a cluster via a "Combine into a cluster" entry in the right-click menu. This would make life much easier for anyone who wants to create a sub-VI and have a cluster instead of the existing separate controls/indicators.

A typical question during development: "How quickly does my code execute? What runs faster... Code A or Code B?" So, if you're like me, you throw in a quick sequence that looks like this:

 

TimingDuringDevelopment.png

 

AHHH! What a mess! It's so hard to fit it in, with FP real estate so packed these days!

 

We need this:

ProposedTimingDuringDevelopment.png

Just like my other idea, and for simplicity's sake, NI, I would be PERFECTLY happy even if you had to set up the probes in edit mode and were not able to "probe" while running.

 

 As a bonus, this idea may be extrapolated into n timing probes, where you can find delta t between any two of the probes.
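For comparison, in a text language this ad-hoc two-probe timing pattern is only a few lines; here is a minimal Python sketch of the same idea (the function name and workload are illustrative only):

```python
import time

def slow_sum(n):
    """Stand-in for the code under test: sum the first n integers in a loop."""
    total = 0
    for i in range(n):
        total += i
    return total

# The "two timing probes" pattern: capture a timestamp before and after.
start = time.perf_counter()
result = slow_sum(100_000)
elapsed = time.perf_counter() - start
print(f"slow_sum took {elapsed * 1000:.3f} ms, result = {result}")
```

The proposed timing probes would give LabVIEW users this same convenience without consuming block-diagram real estate.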

If you write a multi-purpose sub-VI, it is sometimes desirable to know at run-time whether an input of the sub-VI is connected to some source inside the calling VI or not. E.g. if you have two inputs and you want to perform a certain action which depends on which input is connected, you have to write a polymorphic VI. That means writing at least 3 VIs: one VI for each input plus the polymorphic VI. With a new control property or method which tells you whether the input got its data from some source, you could do this with just one VI. There are of course other scenarios where this new feature could be useful.
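In text languages the equivalent problem is often solved with a sentinel default argument instead of separate polymorphic instances; a hedged Python analogy of the single-VI behaviour being proposed (names are made up for illustration):

```python
_UNWIRED = object()  # sentinel meaning "this input was not connected"

def process(input_a=_UNWIRED, input_b=_UNWIRED):
    """One function handles both cases by checking which input was supplied."""
    if input_a is not _UNWIRED:
        return f"handled A: {input_a}"
    if input_b is not _UNWIRED:
        return f"handled B: {input_b}"
    raise ValueError("no input connected")

print(process(input_a=1))   # caller wired only the first input
print(process(input_b=2))   # caller wired only the second input
```

A "is this input wired?" property on LabVIEW controls would play the same role as the sentinel check here.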

 

Regards,

Marc

The current behavior of LabVIEW is as such:

 

UserEventsWithClusters.png

 

The pick list of event data strips the cluster, only allowing access to the elements inside the cluster. In order to manipulate the native datatype, you must reconstitute the cluster inside the event structure, which is a performance hit for large data structures firing at high event rates. Not to mention a messy FP.

 

I suggest that User Events preserve the clustered event datatype, allowing the programmer access to the native datatype on the consumer end. The option to select only the element "Clustered Typedef.Array of I32s" of course still remains, but is not forced upon you. (Note that the below block diagram is NOT a direct LabVIEW screenshot... brought to you by trickery and a paint program. I clustered the cluster on the event datatype [not shown] so that the event structure could eat the outer layer.)

 

NewUserEventsWithClusters.png

 

My motivation with this suggestion is directly linked with my other post on the Big Typedef Issue.

Currently, you can place a probe on a wire while developing, which is an indicator of the data on a wire. I want the ability to CONTROL the data on the wire, with a data forcing mechanism.

 

The implementation would be very simple... right click on a wire, and in the context menu the option "Force" would be right under "Probe." It would pop up a window of the forcing control, and while the VI is running and forcing is set to "Enable", the programmer can control the values that pass on the wire. If the force window were set to "Disable", the data in the wire would be completely controlled by the VI's logic.

 

DataForcing.png

 

I think the implementation by NI could be trivially simple. If you only allow a forcing control to be added during edit mode (not while the VI is running), the force could be added as an inline VI (as denoted by the green rectangle on the wire). The code inside the inline VI would be as follows, and the front panel would be "Data Force (1)" as shown above.

 

ForcingImplementation.png

 

Of course, if you could add a force to a wire during runtime like probes, props NI. But I would be PERFECTLY happy if you could only add these force controls in edit mode prior to running.

 

One level further (and this would be AMAZING, NI, AMAZING): enable and disable forcing for individual elements of a cluster, letting the other elements be controlled by the VI logic. I made the example above because it would be very natural to force ONLY Sensor1 and Sensor2, letting the output run its course from your forced input.
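The proposed force behaves like a pass-through that can be switched between "plain wire" and "override". A rough Python sketch of that semantics (the class and variable names are invented for illustration):

```python
class Force:
    """A pass-through that can optionally override the value on a 'wire'."""
    def __init__(self):
        self.enabled = False
        self.forced_value = None

    def __call__(self, value):
        # Disabled: behave as a plain wire. Enabled: override the VI's data.
        return self.forced_value if self.enabled else value

sensor1 = Force()
print(sensor1(3.7))          # disabled: the wire's own data passes through

sensor1.enabled = True
sensor1.forced_value = 99.0
print(sensor1(3.7))          # enabled: the forced value wins
```

This matches the description above: with the force window set to "Disable", the data is completely controlled by the VI's logic; set to "Enable", the programmer controls what travels down the wire.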

When working with Arrays via VI Server (similar to a previous idea) it's challenging to get at a single element of the array.

 

My workaround involves setting only one array cell visible and then changing the index to move to different elements....

 

Why can't we read / write the current active index of the array via Property node?

 

Shane.

When developing a utility to traverse any control using VI Server and save its contents to a file (similar to the OpenG utility using variants), it is quite challenging to find out the size of the array's data.

 

There are various workarounds, but all of these are relatively tedious and over-complicated.

 

Why don't we have an "array data size" read-only value on the property node of an array?

 

This would make things MUCH easier.

 

Shane.

Message Edited by Intaris on 06-12-2009 12:33 PM

This is something that would be a great addition for those of us working on different platforms (i.e. Windows and RT).

 

When a VI is compiled for a particular platform, add a separate area to the actual VI file to keep the platform-dependent compiled code. This way a VI that has been compiled for use on Windows and for RT would have two sections, each keeping a separate copy of the machine code for its OS.

 

Where this issue becomes a problem is when you have common code called from a project that has both Windows and RT components. This creates an endless recompile cycle as code is built.

 

As an example:

 

Code is compiled for Windows... Windows machine code is generated and saved in the VI.

Code is opened under RT... changes are pending because the compiled code does not match the platform.

Recompile the VI for RT... RT machine code is generated and saved in the VI.

 

When the code is reopened under Windows, it requires another recompile due to the existing RT machine code.

 

Obviously any changes to the FP or BD would cause all of the machine code areas to require a recompile. 

 

This would add more size to each individual VI, however disk space is relatively cheap with respect to LabVIEW development systems.

Here is an idea that I have wanted to make happen for a while: the Asynchronous Call By Reference. Basically, it is just a Call By Reference, but split in half so that the call is not synchronous (blocking), and return data can be obtained from multiple locations. Plus, there should be ways to check the status of the asynchronous call and kill it. I've even implemented this, to some degree, here http://forums.openg.org/index.php?showtopic=88
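For readers more at home in text languages, this split call/collect pattern maps closely onto futures; a minimal Python sketch using only the standard library (the worker function is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def worker(x):
    """Stand-in for the VI being called by reference."""
    return x * x

executor = ThreadPoolExecutor(max_workers=2)

# The "call" half: start the call without blocking the caller.
future = executor.submit(worker, 7)

# ...the caller is free to do other work here...

# Status check, and the "collect" half, possibly from another location.
print(future.done())      # may be True or False at this moment
print(future.result())    # blocks only until the result is ready
executor.shutdown()
```

`future.done()` plays the role of the proposed status check, and `executor.shutdown()` (or a cancel call) the role of killing the asynchronous call.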
 
 
Message Edited by Jim Kring on 06-10-2009 08:14 AM

If you want to fork a VI (start a parallel running VI), you have to load the VI reference, use the Set Control Value method to initialize the controls of the VI, and then start it with the Run VI method without waiting for it to finish. Quite complicated, if you ask me.

 

It would be cool if there were a Fork VI method, which you call inside the VI you want to fork. The method copies the VI to a new place in memory, including its current running state. The new copy just carries on as it would without the fork, while the calling VI gets back the values the outputs have at the moment the fork method executes. Of course this only works with reentrant VIs.
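A rough text-language analogy of the proposed fork: snapshot the current state and let a copy run on in a parallel thread, while the caller keeps its own values as of the fork point (all names here are illustrative):

```python
import copy
import threading

def continue_work(state):
    """The forked copy carries on from the captured state."""
    state["progress"] += 1
    # ... long-running continuation would go here ...

def fork(state):
    """Snapshot the running state and let a copy run on in parallel."""
    snapshot = copy.deepcopy(state)  # the copy 'at this point in memory'
    t = threading.Thread(target=continue_work, args=(snapshot,))
    t.start()
    return state, t                  # caller keeps its values as of the fork

state = {"progress": 1}
original, worker = fork(state)
worker.join()
print(original["progress"])  # caller's own state is untouched by the fork
```

The deep copy is what the reentrancy requirement buys in LabVIEW: each copy needs its own independent data space.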

 

I can think of a LOT of applications which can use this!

 

Regards,

Marc

I read about this idea on LAVA and in fact had it myself a few years earlier: parallel For loops. Of course this does not work with every VI (only reentrant VIs) and shift registers would have no function, but it would be cool to start parallel processes dynamically.
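As a point of reference, text languages express a parallel For loop with independent iterations (no shift registers, so no cross-iteration state) like this; a minimal Python sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    """The loop body: each iteration is independent of all others."""
    return i * i

# Iterations run in parallel; results still come back in loop order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(body, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same constraint applies in both worlds: the body must not depend on values from previous iterations, which is exactly why shift registers "have no function" in a parallel loop.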

 

Regards,

Marc

I'm pretty sure I just deleted my original post with an edit:

 

I would like some help when wiring up a Connector Pane.

 

On large monitors, and with less than 20/20 vision, it's easy to hit the wrong terminal.

 

A visual feedback as to which terminal is currently active would be of help.

 

As would showing a larger version while wiring, allowing the user to change the active terminal with the arrow keys...

 

Shane.

Message Edited by Intaris on 06-08-2009 06:54 AM

I would like to make a small wording change to another suggestion found HERE.

 

I would like to be able to declare LVOOP classes as ABSTRACT.

 

One example is a spectrometer class I have developed which provides much of the needed functionality but which is not designed to actually DO anything (Get Name, Set Name, Get Calibration Coefficients, Set Calibration Coefficients and so on). At the moment I can instantiate an object of this class as with any "valid" class, which then just returns an error at run-time because the functionality is not complete.

 

By preventing users from (either willfully or accidentally) dropping what is essentially an abstract class onto the BD of a program we could prevent some awkward bugs.

 

Shane.

Something like "pure virtual functions" in C++; I think it makes code clearer.
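For comparison, this is exactly what abstract base classes give you in text languages; a short Python sketch of the spectrometer example (class and method names are invented for illustration):

```python
from abc import ABC, abstractmethod

class Spectrometer(ABC):
    """Abstract base: shared behaviour, but cannot be instantiated."""
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return self.name

    @abstractmethod
    def acquire(self):
        """Each concrete spectrometer must implement acquisition."""

class DummySpectrometer(Spectrometer):
    def acquire(self):
        return [0.0, 0.0]

try:
    Spectrometer("broken")   # refused at instantiation time, not at run-time
except TypeError as e:
    print("blocked:", e)

print(DummySpectrometer("ok").acquire())  # concrete subclass works fine
```

The key point matches the idea: the mistake is caught when the "object is dropped" (instantiated), not later as a run-time error.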
Message Edited by Support on 06-09-2009 08:32 AM

This idea will probably have a narrow audience... those of us who use the "zip" functions in LabVIEW. There is currently an unzip function that takes a zip file on disk, then writes the unzipped files back to disk. To manipulate zipped files, you must then access the disk and load into memory. In other words, 3 disk operations... read zip, write file, read file.

 

There needs to be a function that unzips the files into memory, with the output of this function as an array of flattened strings, byte arrays, or data pointers.
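For reference, this is how an in-memory unzip looks in a text language; a minimal Python sketch that never touches the disk (the file names and contents are illustrative):

```python
import io
import zipfile

# Build a zip entirely in memory just for the demo.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("b.txt", "world")

# Unzip straight to memory: one read, no temporary files written to disk.
with zipfile.ZipFile(buf) as zf:
    contents = {name: zf.read(name) for name in zf.namelist()}

print(contents)  # {'a.txt': b'hello', 'b.txt': b'world'}
```

The proposed LabVIEW function would return the equivalent of `contents`: an array of names paired with flattened strings or byte arrays.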

 

 

UnzipToMem.png


Message Edited by Support on 06-09-2009 08:35 AM

I currently have a 3D array that I need to remap using some strange-looking discrete calibration data via linear interpolation (there is no good model function, not even polynomial!).

 

(1) This requires deeply stacked FOR loops (image top).

 

(2) In many cases it would simplify code if we could use a single FOR loop, but specify to loop over all scalar elements in all dimensions (SUGGESTION: right-click the tunnel, select "index over all scalar elements" -> different tunnel look). This would cause the autoindexing output tunnels of scalars to produce a multidimensional array of the same size as the autoindexing input. In all respects it should act the same as if we had stacked as many FOR loops as needed to strip the input down to a scalar array element (center image). The index terminal [i] would output an array of all current indices (3 elements in this case). I am not sure what to do about N; if it accepted an array, we would need to make sure the size (= # of dimensions) is defined elsewhere if there are no autoindexing inputs, for example.

 

Case (2) could be useful in many situations, especially for more complicated code than shown here. It would work for all situations where only the innermost loop contains real code.

 

(3) In this particular case, the "interpolate array" function could of course be made polymorphic to accept multidimensional arrays and produce outputs of the same size, as shown at the bottom of the image. Case (3) is a more general suggestion: many existing functions could be made "more polymorphic" to do the same operation on each element of an array, as shown here. Since all operations are independent, it could even be automatically parallelized internally to take advantage of multiple CPU cores. Good candidates are functions that have scalar inputs and output a single scalar. If I have time, I will go over the palettes and identify more candidates. It might even turn into a separate idea here in this forum. 😉
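Suggestion (2), a single loop over all scalar elements whose index terminal yields the full tuple of current indices, can be sketched in Python for a small 3-D case (the shape and data are illustrative):

```python
from itertools import product

# A small 3-D "array" (nested lists) standing in for the calibration data.
shape = (2, 3, 4)
data = [[[x + y + z for z in range(4)] for y in range(3)] for x in range(2)]

# One flat loop over every scalar element, instead of three stacked loops.
result = [[[0] * 4 for _ in range(3)] for _ in range(2)]
for i, j, k in product(*(range(n) for n in shape)):
    # (i, j, k) plays the role of the proposed [i] terminal: all current indices.
    result[i][j][k] = data[i][j][k] * 2   # stand-in for the per-element operation

print(result[1][2][3])  # 2 * (1 + 2 + 3) = 12
```

Because every element is processed independently, this is also exactly the shape of loop that can be parallelized automatically, which is the point of suggestion (3).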

 

 

Message Edited by altenbach on 06-05-2009 05:25 PM

Anyone who refactors code from another programmer has probably received some code that has too many sequence structures. Often I convert these to a simple state machine. With scripting, a sequence could easily be converted to a state-machine loop. A default enum typedef could be created to run the states in sequence order: 1 -> 2 -> 3 -> 4 -> 5 -> ... last frame in the structure. All sequence locals could be replaced with either a local or a shift register (that is passed through all other cases where it is not used).

 

Why would I want this?

 

Sequences are overused, and it is well documented that a state-machine architecture can provide many benefits over the static and simple sequence. Many programmers are unaware of these benefits, don't understand dataflow, and so use sequences to force execution order. Sometimes sequences are convenient early in prototyping, only for the known shortfalls of this paradigm to surface later. When that happens, don't worry: just right-click on the sequence structure and click "Convert to state machine". This could make refactoring code easier, at least in my mind (but it is late on Friday and I am ready for the weekend, which could cloud my thought process).
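The target of such a conversion, a sequence rewritten as an enum-driven state machine with a shift-register-like value carried between states, can be sketched in Python (the state names and work done are illustrative):

```python
from enum import Enum, auto

class State(Enum):
    """The default enum typedef: one state per original sequence frame."""
    INIT = auto()
    ACQUIRE = auto()
    PROCESS = auto()
    DONE = auto()

state = State.INIT
data = None   # plays the role of a shift register between states
log = []

# The loop replaces the sequence: order is still explicit, but now editable.
while state is not State.DONE:
    if state is State.INIT:
        log.append("init")
        state = State.ACQUIRE
    elif state is State.ACQUIRE:
        data = [1, 2, 3]          # frame 2: acquire some data
        state = State.PROCESS
    elif state is State.PROCESS:
        data = sum(data)          # frame 3: process it
        state = State.DONE

print(log, data)
```

Unlike a sequence, states can later be reordered, repeated, or made conditional just by changing which state each branch transitions to.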

 

 

One ugly pattern I often have to code is:

 

Example_VI_BD-1.png

 

A better way would be to be able to wire a wire to every function.

This wire would carry data, but it would not pass it into the middle function. It would be a dummy input-output pair, used only to determine which function executes first. There should be an indicator showing when this dummy input-output is activated; it could be activated by right-clicking the icon.

Example_VI_BD.png

This wire should also be able to be a dummy wire carrying no data at all, so you can implement this:

Example_VI_BD.png

The images may not be so nice, but that's the idea.

 

 

I like using Linux whenever I can, particularly when running large software like LabVIEW, since it tends to crawl on my XP systems. I was happy to realize that LabVIEW works on Linux, but soon after I was disappointed by the lack of usefulness of it when interfacing with hardware. I need to use the RealTime module to interface with my RealTime Compact-RIO. I also need Linux support for the FPGA module, as I need to program the FPGA attached to my cRIO. I'm sure I am not the only person who would like the ability to do this.

 

Without support for any of the hardware or LabVIEW modules I need, the Linux version of LabVIEW is entirely useless to me, and XP as an OS simply cannot perform up to par for me.

For distribution, only package the necessary libraries in installer packages built with the project. A lightweight UI, server, or client does not need a full 70 MB+ installer that bloats out to a few hundred MBs once installed! A colleague has remarked that the total size of our LabVIEW application plus RTE EXCEEDS the entire size of the XPe image running on the embedded computer! This becomes an issue when distributing software upgrades to places in the world without high-speed internet connectivity.