LabVIEW Idea Exchange


We need a 64-bit version of the Internet Toolkit, which we plan to use in our new product. Are there any plans to make one? It seems that it should not be that difficult to recompile the current 32-bit version for 64-bit.

There is a button at the bottom right-hand side of the 'Build status' dialog labeled 'Cancel'.

 

Clicking it does absolutely nothing other than grey out the Cancel button.

 

I would like to suggest a working cancel function.

 

http://forums.ni.com/t5/LabVIEW/Cancel-part-way-through-building-installer-Does-nothing-LabVIEW/m-p/1495770

 

Forum post from 3 years ago...

http://forums.ni.com/t5/Real-Time-Measurement-and/Canceling-a-build/m-p/700231

 

 

Scripting: the "Move" method - please add an output for "Moved Object Reference"

 

(like the invoke node "Create from Data Type" has an output "New Object Reference")

 

SR1.png

 

 

SR2.png

With "Sort 1D array", we can get ascending ordered elements, as below.sort 1D array.PNG

 But in practice,  the chances of using elements with ascending order and descending order are almost the same. Thus it will be helpful to add a node returns the descending ordered elements, or users are able to select between two options within one node. Although we can reverse the array after "ascending sorting" to achieve a descending order, it costs extra CPU and memory.
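
For comparison, most text-based languages expose the sort direction as an option rather than requiring a second pass. A minimal Python sketch of today's workaround versus the proposed behavior (illustrative only; the LabVIEW node would of course be graphical):

import random

data = [random.random() for _ in range(10)]

# Today's workaround: ascending sort followed by a reverse pass over the array.
descending_workaround = list(reversed(sorted(data)))

# The proposed behavior: one call with a direction option, no extra pass or copy.
descending_direct = sorted(data, reverse=True)

assert descending_workaround == descending_direct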

Anyone who refactors code from another programmer has probably received some code that has too many sequence structures. Often I convert these to a simple state machine. With scripting, a sequence could easily be converted to a state-machine loop: a default enum typedef could be created to run the states in order, 1 -> 2 -> 3 -> 4 -> 5 -> ... -> last frame in the structure. All sequence locals could be replaced with either a local variable or a shift register (which simply passes through all the cases where it is not used).

 

Why would I want this?

 

Sequences are overused, and it is well documented that a state machine architecture can provide many benefits over the static and simple sequence. Many programmers are unaware of these benefits and don't understand dataflow, so they use sequences to force execution order. Sometimes sequences are convenient early in prototyping, only for the known shortfalls of this programming paradigm to surface later. When that happens, don't worry: just right-click on the sequence structure and click 'Convert to State Machine'. This could make refactoring code easier, at least in my mind (but it is late on Friday and I am ready for the weekend, which could cloud my thought process).
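
Since LabVIEW code is graphical, here is a rough text-language analogy (Python, with hypothetical frame names) of what the generated state machine would amount to: one enum case per sequence frame, with a shift-register-style variable replacing a sequence local:

from enum import Enum, auto

# Hypothetical states standing in for the frames of a converted sequence structure.
class State(Enum):
    FRAME_1 = auto()
    FRAME_2 = auto()
    FRAME_3 = auto()
    DONE = auto()

state = State.FRAME_1
shared = 0  # plays the role of a shift register replacing a sequence local

while state is not State.DONE:
    if state is State.FRAME_1:
        shared = 10          # work from frame 1
        state = State.FRAME_2
    elif state is State.FRAME_2:
        shared += 5          # work from frame 2, using the "sequence local"
        state = State.FRAME_3
    elif state is State.FRAME_3:
        print(shared)        # work from frame 3
        state = State.DONE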

 

 

 

The data type of the "Scale Range" property node of a slide should match the data type of the slide itself.

 

 

SR2.png

Many languages have that, why not LabVIEW?

 

See here for implementation details.

 

This would help me out loads in solving Project Euler problems faster. At the moment LabVIEW is ranked 53rd; it would be nice if we had tools that help us improve that.

Spurred by Darren's latest nugget, I think it would be an excellent addition to this feature to be able to retain wire values for all subVIs of a particular VI as well as the VI itself. Several times, I have found myself having to run a VI a couple of times to get to the point where I can satisfactorily examine the data flow. There is a JKI RCF plugin by Vishal posted here which implemented this, but native functionality would be much preferred. 🙂

 

I'm not sure how best it could be implemented in the UI so as not to disturb those who don't want this, and I can foresee a hairy situation arising if a particular subVI is called from a different hierarchy later. Ideally, the subVI would retain values for the last execution in the retained hierarchy, but obviously that's incorrect in the grand scheme of things. I'd love to hear other ideas on how to handle that scenario.

A multicolumn listbox is often decorated by a sub-VI via a control reference.

 

A common example would be coloring the background of alternate rows or columns.

 

In order to color all columns or rows, a sub-VI must retrieve the ItemNames array via the reference to determine the number of iterations.

 

Add a property node for the MCL that returns the size of the ItemNames array directly, so the ItemNames array itself does not need to be retrieved.

 

 

 

MCL - ItemNames Array Sizes.PNG

How about allowing the creation of XControl classes?

 

Families of XControl classes would allow the Publish / Subscribe paradigm I'm struggling with as discussed here: http://forums.ni.com/t5/LabVIEW/XControl-publish-subscribe/m-p/1568958

 

Being derived from a common ancestor client, XControls could be managed in a collection (array), and the publisher could call a common or overridden class method for each XControl client in the list.
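
As a rough text-language analogy (Python, with hypothetical class names) of the Publish/Subscribe arrangement described above: clients derived from a common ancestor are held in one collection, and the publisher calls a common, overridable method on each:

# Hypothetical class names; a text-language analogy for the proposed XControl hierarchy.
class XControlClient:
    def notify(self, value):          # common method every client inherits
        pass

class GaugeClient(XControlClient):
    def notify(self, value):          # overridden in a derived "XControl class"
        print(f"gauge -> {value}")

class TableClient(XControlClient):
    def notify(self, value):
        print(f"table row -> {value}")

class Publisher:
    def __init__(self):
        self.clients: list[XControlClient] = []   # managed as one collection

    def publish(self, value):
        for client in self.clients:               # dynamic dispatch per client
            client.notify(value)

pub = Publisher()
pub.clients += [GaugeClient(), TableClient()]
pub.publish(42.0)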

 

XControl classes would also allow an elegant implementation of the Model View Controller paradigm used extensively by other object oriented languages.

 

You should also allow XControls to be members of an existing class. That way the XControl would gain access to the member variables of the class without resorting to cumbersome accessor VIs.

 

XControl subclasses of a class should also be allowed; then methods in the parent could be overridden in the XControl. Mediator objects could then call class methods without caring whether the instance is an XControl View, Model, Controller, or some other object.

 

Phill

LabVIEW saves configuration data, including Recent Files and Recent Projects, in a single LabVIEW.ini file, by default saved in National Instruments\LabVIEW <Version>. If more than one user logs onto (and uses) LabVIEW on the same machine, this "sharing" of the configuration, particularly the path names to files, will probably point to the "wrong" place, as files go by default to each user's <My Documents>\LabVIEW Data. Note that the NI function "Default Data Directory" will correctly point to the current user's <My Documents>\LabVIEW Data, but there is no guarantee that the (single) LabVIEW.ini file will be correct for all users.

 

A simple fix is to save LabVIEW.ini in the user's profile. I notice that there is a National Instruments folder in AppData\Local -- this is one place that could be used. Then if a second user logs on to the PC, he (or she) will have a unique set of saved files/folders in the configuration file, one that references files in the appropriate <My Documents> folder.
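
For illustration, a sketch of the proposed per-user lookup (Python; the folder layout and version string here are assumptions, not NI's actual convention):

import os

# A sketch of the proposed lookup: one LabVIEW.ini per Windows profile.
# %LOCALAPPDATA% resolves to C:\Users\<user>\AppData\Local for the logged-in user.
version = "LabVIEW 2011"   # hypothetical version string
per_user_ini = os.path.join(
    os.environ["LOCALAPPDATA"],        # unique per logged-in user
    "National Instruments", version, "LabVIEW.ini"
)
print(per_user_ini)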

If I have an existing cluster in my project and one day I decide to delete one of the elements of that cluster, LabVIEW tries to fix all the references, or blacks them out to help me find the errors. However, it seems that in some instances LabVIEW keeps track of a cluster's internal controls by index, and when you delete something, those indexes are now messed up.

 

For example, if you have a property node linked directly to an item within that cluster and you then delete another item in that cluster, the property node now points to something else. OOOPS!

 

Also, if you have an Event Structure case pointed at a control within this cluster and one of the other controls in the cluster gets deleted, this Event Structure case now points to the wrong thing!

 

LabVIEW is internally keeping track of the controls in a cluster by index. This works fine as long as you only add new items to the cluster, but if you delete things, LabVIEW does not handle it well.

If LabVIEW instead had a notion of the names of the items, it could probably recover well when an item is deleted from a cluster.
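
The failure mode is the classic difference between positional and keyed lookup; a minimal Python sketch (with made-up control names):

controls = ["Start", "Stop", "Rate"]    # index-based bookkeeping (what LabVIEW does today)
ref = 2                                 # a "reference" to Rate, stored as index 2

del controls[1]                         # the user deletes Stop from the cluster
# ref still says 2, which is now out of range -- or, shifted by one, the wrong control.

by_name = {"Start": 0.0, "Stop": 0.0, "Rate": 100.0}   # name-based bookkeeping (proposed)
del by_name["Stop"]
rate = by_name["Rate"]                  # still resolves correctly after the deletion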

By default, when a control/indicator is dropped on the FP/BD, the label is on top of the control and the developer needs to realign it for a better UI. There should be an option in the FP and the BD with which labels can be aligned left, right, or on top, based on selection.

 

See the attached image, which better describes the solution:

 

untitled.GIF

 

 

The same utility can also be used to save some space on the BD, as shown:

untitled.GIF

 

I often employ a design pattern of "for each item in a list", where the list is a typedef enumerated type. This is a great way of making a linear sequence where each item is tested one time (test sequences, list checking, ...). I used to make an array of the type and populate it with one of each item, but this is not very scalable, since when I change the typedef I have to redo the list. Instead, I do a little type manipulation to make sure that I iterate one time for each item in my enum, but this requires some support code and extra wiring. It would be nice if I could make a For Loop that takes the enumerated type and uses it as its iterator. See below:

 

 

 

 

For each enum loop.jpg

The new For Loop, which takes an enum, would greatly improve the code's readability.

I use this all the time and find it works great: update my enum and handle the new case, with no chance of missing a case (I use it in conjunction with case structures with no default case, so my code breaks and forces me to handle the new case).
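
As a text-language analogy (Python, with a made-up enum), the pattern amounts to iterating over every member of the type, with no silent default so that a new member fails loudly until it is handled:

from enum import Enum, auto

class TestStep(Enum):        # stands in for the typedef enum
    POWER_ON = auto()
    SELF_TEST = auto()
    MEASURE = auto()

for step in TestStep:        # the proposed loop: one iteration per member, no list to maintain
    if step is TestStep.POWER_ON:
        print("power on")
    elif step is TestStep.SELF_TEST:
        print("self test")
    elif step is TestStep.MEASURE:
        print("measure")
    else:
        # no silent default: adding a member to TestStep breaks here until it is handled
        raise NotImplementedError(f"unhandled step: {step}")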

 

I have attached the code if anyone is interested.

 

 

The Report Generation Toolkit provides functions to set various format aspects of Excel "areas" and Excel graphs, but doesn't provide the complementary "Get" functions to return the current values. For example, I wrote a LabVIEW function that will set the font color of a row of a worksheet to red (using Excel Set Cell Font), but if I want to find the row with the red font, there is no "Excel Get Cell Font" that can return the property to me. Of course, I could perhaps cobble something together using ActiveX calls, but these are poorly documented, and since NI is already doing the "heavy lifting" to provide the Set functions, it would seem relatively simple for them to also add the corresponding Get functions.
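
For what it's worth, the ActiveX route is workable today. A hedged Python sketch via pywin32 (Range.Font.Color is a documented Excel COM property; the file path and the red-row search are just examples):

# A sketch of the ActiveX workaround in Python via pywin32 (untested; Excel's
# Range.Font.Color is a documented COM property, but paths/names here are examples).
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(r"C:\data\report.xlsx")   # hypothetical path
ws = wb.Worksheets(1)

red = 255  # COM colors are packed integers (red + green*256 + blue*65536)
for row in range(1, ws.UsedRange.Rows.Count + 1):
    if ws.Cells(row, 1).Font.Color == red:          # the "Get" the toolkit lacks
        print(f"red font found in row {row}")

wb.Close(SaveChanges=False)
excel.Quit()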

 

Bob Schor  (using Excel as a "controller" for some experiments controlled by a LabVIEW Real-Time system)

Currently, we have to use Unbundle By Name on the cluster and wire an element to the case selector:

 

1.png

 

It would be great if we could just wire the cluster directly and have a right-click option at the case selector to select an element (one element only):

 

2.png

 

P.S. If this is a reasonable suggestion and gets enough kudos to get the R&D team's attention for a feasibility study, then we would also ask for support for more logical operators, plus multiple elements and/or a more statement-like node, i.e. (type == Array and # elements <= 2)!!!

Sometimes I miss "if statement" support in LabVIEW.
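
For reference, this is essentially what structural pattern matching gives text languages. A Python 3.10 sketch (with a made-up cluster type) covering both the single-element case and the compound condition from the P.S.:

from dataclasses import dataclass

@dataclass
class Settings:            # stands in for the cluster
    mode: str
    n_elements: int

s = Settings(mode="Array", n_elements=2)

# Text-language equivalent of wiring the cluster straight to the case selector:
match s:
    case Settings(mode="Array", n_elements=n) if n <= 2:   # the compound P.S. example
        print("small array")
    case Settings(mode="Array"):
        print("large array")
    case _:
        print("something else")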

 

 


I am extending an old idea, but the implementation is different from the OP's, so I made this a new idea:

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Decimation-feature-built-into-the-graph-indicators/idi-p/1109956

 

What I want is an XY graph with automatic disk buffering and on-screen decimation. Imagine this: I drop a super-duper large-data-set XY graph. Then I start sending data in chunks of XY pairs to the graph (updating the graph at 10 Hz while acquisition is running at 5000+ Hz). We are acquiring lots of high-rate data. The user wants to see XX seconds on screen: probably 60 seconds, 120 seconds, or maybe 10 or 30 minutes, whatever. That standard plot width in time is defined as a property of the plot. So now data flows in and is buffered to a temp TDMS file on disk, with only the last XX seconds of data showing on the graph. The user can specify a file location for the plot buffers in the plot properties (read-only at runtime).

 

We decimate the incoming data as follows (a sketch of the min/max bucketing appears after this list):

  • Calculate the maximum possible pixel width of the graph for the largest single attached monitor.
  • Divide the standard display width in time by the max pixel width to calculate the decimation interval.
  • Buffer incoming data in RAM and calculate the min and max values over the time interval that corresponds to one pixel width. Write both the full-rate data and the time-stamped min and max values at the decimation interval to the temp TDMS file.
  • Plot a vertical line filling from the min to the max value at each decimation interval.
  • Incoming data is always decimated at the standard rate, with both the decimated data and the full-rate data saved to file.
  • In most use, the user will only watch data streaming to the XY graph without interaction. In some cases, they may grab an X scroll bar and scroll back in time. In that case, the graph displays the previously decimated values so that disk reads and processing are minimized for the scroll-back request.
  • If the user pauses the graph update, they can zoom in on X. In that case, the graph would rapidly re-zoom on the decimated envelope of the data. In the background, the raw data is read from the TDMS file and re-decimated for the current graph X range and pixel width, and the now less-decimated data is enveloped on screen to replace the prior decimated envelope. The user can carry on zooming in this manner until there is at least one vertical line of pixels for every data point, at which point the user sees individual points rather than an envelope between the min and max values.
  • Temp TDMS files are cleared when the graph is closed.
  • The developer can opt to clear out the specified temp location on each launch, in case a file was left on disk due to a crash.

This arrangement would allow unlimited zooming and graphing of large datasets without writing excessive data to the UI indicator or trying to hold excessive data in RAM. It would allow a user to scroll back over days of data easily and safely. The user would also have access to the full-rate data should they need it.
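
The core of the scheme is the min/max bucketing in the third bullet above. A minimal Python/NumPy sketch (the rates and sizes are just the examples from the post):

import numpy as np

def minmax_decimate(t, y, interval):
    # One (min, max) pair per pixel-width time interval: the on-screen envelope.
    buckets = np.floor(t / interval).astype(int)
    out_t, out_min, out_max = [], [], []
    for b in np.unique(buckets):
        seg = y[buckets == b]
        out_t.append(b * interval)
        out_min.append(seg.min())
        out_max.append(seg.max())
    return np.array(out_t), np.array(out_min), np.array(out_max)

# 60 s of 5 kHz data displayed on a graph at most 1920 pixels wide:
t = np.arange(0, 60, 1 / 5000)
y = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
interval = 60 / 1920                              # decimation interval from the list above
px_t, px_min, px_max = minmax_decimate(t, y, interval)
print(px_t.size)                                  # ~1920 envelope points instead of 300,000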

 

If you write a multi-purpose subVI, it is sometimes desirable to know at run time whether an input of the subVI is connected to some source inside the calling VI. E.g., if you have two inputs and you want to perform an action that depends on which input is connected, you have to write a polymorphic VI, and therefore at least three VIs: one VI for each input, plus the polymorphic VI itself. With a new control property or method that tells you whether the input got its data from some source, you could do this with just one VI. There are of course other scenarios where this new feature could be useful.
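
The closest text-language analogy is a sentinel default that lets a function detect whether a caller actually passed each argument; a minimal Python sketch:

_UNWIRED = object()   # sentinel standing in for "this terminal was not wired"

def process(signal_a=_UNWIRED, signal_b=_UNWIRED):
    # The requested property, approximated: test whether each input received data.
    if signal_a is not _UNWIRED:
        print("acting on input A")
    if signal_b is not _UNWIRED:
        print("acting on input B")

process(signal_b=3.0)   # only B is "wired", so only the B branch runs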

 

Regards,

Marc

It would be desirable if the event structure supported more than one timeout event. Ideally we could define multiple, independent timeout events; it could provide a fixed number (more than one) if that eases the implementation. Currently, if you want multiple timeouts, you end up with some type of state machine inside the single timeout event to control them. Multiple timers would allow us to easily process different timeout events, without special timeout-management logic crammed into the current single timeout case. Currently we can register/deregister user events; let's extend this to timeouts as well.
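
For illustration, the workaround this idea would eliminate looks roughly like this in Python (hypothetical timer names and periods): several logical timeouts multiplexed through one wait by always sleeping until the nearest deadline:

import time

# Sketch of the workaround: multiplexing several logical timeouts through one
# wait by always sleeping until the nearest deadline, then rescheduling it.
timers = {"poll_hw": 0.10, "update_ui": 0.50, "log": 2.00}   # hypothetical periods (s)
deadlines = {name: time.monotonic() + p for name, p in timers.items()}

for _ in range(20):                      # bounded loop for the sketch
    name, when = min(deadlines.items(), key=lambda kv: kv[1])
    time.sleep(max(0.0, when - time.monotonic()))   # the single "timeout" wait
    print(f"{name} timeout fired")
    deadlines[name] = time.monotonic() + timers[name]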

We have quite a few LabVIEW users here, but not many of us have the Application Builder or the experience to use it. So I get many requests to build an executable and installer for others. Each time, I have to take their DAQmx tasks (in the form of an *.nce file from their machine), import it into my MAX, and then, when creating the installer, essentially re-create the same file. Could an option be added to the Hardware Configuration tab to allow you to select an existing NCE file instead of creating one?

 

Thanks,

-Brian