LabVIEW Idea Exchange

When a control's Caption.Text property is read and the caption has not been set in the control's Properties dialog, an error is thrown.

caption text code.PNG

error 1320.PNG

It would be nice for Caption.Text to default to an empty string so that no error is thrown when it is read on the block diagram.  This would make runtime editing of each control on the VI easier.

If I understand the semantics of trace memory allocations correctly, every "Memory Allocate" event for a given handle should be accompanied by a "Memory Free" event for the same handle sometime later, or there is a memory leak. Given that, the trace toolkit desperately needs a feature that highlights all "Memory Allocate" events for which no free event exists. Moreover, the event details should contain not only the VI that allocated the memory but also the concrete diagram element. Optimal would be switching to LabVIEW and highlighting the element in the diagram, just like the "LabVIEW Compare" tool does.

For now I'm helping myself by

  • exporting the traces to text,
  • deleting everything except the handle in "Details" using regular expressions in UltraEdit: ;^(?*^)^p
  • deleting all "Memory Resize" events for performance reasons: ^p^(?*^)Memory Resize^(?*^)^p
  • renaming the column "#" to "ID",
  • importing the text file into an Access database, and
  • executing an SQL statement that gives me the IDs of all open memory allocations:
SELECT jo.id
FROM (
  SELECT q1.id, c.Beginn 
  FROM queueTest2 AS q1 LEFT JOIN (
       SELECT q1.vi, q1.id AS Beginn, Min(q2.id) AS Ende
       FROM queueTest2 AS q1, queueTest2 AS q2
       WHERE q1.event='Memory Allocate' 
       And q2.event='Memory Free' 
       And q1.id<q2.id 
       And q1.Details=q2.Details
       GROUP BY q1.id, q1.vi) AS c 
  ON (q1.id=c.Beginn AND q1.Event='Memory Allocate'))  AS jo
WHERE isNull(jo.Beginn)

Please note that a bug in MS Access 2007 reformats this statement on saving in such a way that executing it gives a "join expression not supported" error. This can be fixed by putting the ON clause back into parentheses (don't press Save before executing).
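Until such a feature exists, the same matching can also be scripted directly on the exported text instead of going through Access. A minimal sketch, assuming a tab-separated export with columns named ID, Event, VI, and Details reduced to the bare handle (the real export format may differ):

```python
import csv

def find_leaks(path):
    """Return the IDs of "Memory Allocate" events that have no later
    "Memory Free" event for the same handle (the Details column)."""
    open_allocs = {}  # handle -> event ID of the unmatched allocation
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["Event"] == "Memory Allocate":
                open_allocs[row["Details"]] = row["ID"]
            elif row["Event"] == "Memory Free":
                # a free closes the most recent allocation of that handle
                open_allocs.pop(row["Details"], None)
    return list(open_allocs.values())
```

Any handle still in the dictionary at the end of the trace was allocated but never freed, which is the same set of open allocations the SQL statement computes.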

I suggest changing the VI execution priority scheme from the current normal/above normal/high etc. to a scheme similar to timed structures, with a priority number and CPU affinity:

 

VIProperties.png

 

Note that Enable automatic error handling and Auto handle menus at launch are off by default, as they should be. I'd also expect enabling Inline subVI into calling VIs to disable and gray out the rest of that form, since everything should be inherited from the caller VI anyway.

 

The reasons for my suggestion regarding VI priorities are multiple:

 

1) It seems background priority (lowest) has somewhat dubious effects (http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Remove-Background-priority-from-VI-Properties-Execution-since-it/idi-p/1637938).

2) Time critical priority is more than a priority, as it also doesn't yield the thread when it first gets hold of it. This detail can get lost when you're inexperienced in setting VI priorities, and I frequently see very unfortunate use of this setting (the most common being more VIs set to this than there are logical CPUs).

3) Subroutine priority is mostly, if not fully, replaced by the new code inlining functionality from LV 2010.

4) It is not recommended to mix timed structure priorities with VI priorities other than normal. I see this mixing frequently, making it quite hard to determine the true priority of the code.

5) It is often asked "at what priority does my while loop run?", "...my local?", "...my two parallel case structures?" etc. This would be simpler to understand if the entire VI had a priority that could be related absolutely to all other code using the same numbering system.

6) Timed structures today run above high priority but below time critical. So we have a set of named priorities with a numeric range in between. It would be much simpler if the numeric range covered all priorities, including the elusive stuff that runs above time critical (the NI Scan Engine, for instance).

7) It would be quite simple to answer at which priority 'TCP Read' runs, or the SVE, etc. There could even be a ceiling above which only NI-specific code could run (and maybe a floor as well).

8) Regarding the processor assignment part: today everything is multi-core, and we are in need of a simple way to set CPU affinity. This setting ought to exist globally for VIs.

 

If some code inside a VI needs to run at a different priority than the VI setting, you can still enclose just that code in a timed structure and set a localized priority. Priority inheritance to avoid priority inversion would of course still be in effect. It would just be much simpler to set, and to understand how chunks of code are prioritized relative to each other. These settings should be readable and writable through VI properties, by the way, just as the input nodes on a timed structure allow setting the priority at runtime.

 

A related thought, though not part of this idea, is that it might also be time to get rid of the current execution system division scheme. It's not always obvious how many threads are available for each execution system. On desktop there is by default (this can be changed) 1 thread per logical CPU for each execution system. This is different on LV Real-Time, where some, but not all, execution systems can have as many as 4 threads. Then there is 1 thread per LV instance for the user interface, and 1 thread per timed structure (up to around 120 I think; I don't recall where I stumbled upon this upper limit, or if it's something I just dreamt up). It would be simpler if we had a single (perhaps user-configurable) thread pool, with the simple priority numbering system I suggest above inside it, plus the ability to create a new thread programmatically (we can already do that by putting code inside a timed loop or sequence, so no change is needed there). Then there is the user interface thread. I'm not sure how to change that, if at all. It's actually a quite robust system, and we'd still be limited by stuff requiring the root loop. That's how life is, and it's not going to change.

 

A new structure, a special case of the "single-frame timed sequence", should probably be created for setting priority and CPU affinity: the Priority structure. It has nothing to do with timing anyway; the timed structures just happen to be the only current interfaces for setting priority and CPU affinity.

 

What do you think?

 

Cheers,

Steen

According to some engineers I have spoken to in the prototyping industry, Windows 10 IoT is gaining traction as a preferred embedded OS for developing ideas.  Windows 10 IoT, as I understand it, is delivered in 3 different versions: Core, Enterprise, and Enterprise Mobile.  Whereas Core is built around .NET and lacks some of the system capabilities of full Windows 10, the Enterprise version is built as a Win32 OS and is closest to Windows 10 as it stands.

 

If resources allow, it might be beneficial to see future support for at least this version of Windows 10 IoT, to meet the demands of a growing body of engineers who will want to integrate LabVIEW executables with their Windows 10 IoT powered devices.

I'd appreciate an event that fires whenever the top visible cell (row) changes. Several times I have needed to align the visible content of two objects (listboxes, trees, etc.). The Value Change or Mouse Scroll events can be used, but I've found no way to catch it when the user uses the scrollbar arrows or drags the scrollbar position indicator. The only solution I found is to keep them aligned by periodically reading the Top Row Visible property of one object and setting it on the other. That requires code in a Timeout event that is called very often (otherwise the lag is noticeable to the user), causing a performance drop.

The error ring is a handy tool I've just learned about.  It would be fantastic to be able to create an error ring for a project that can be type-defined and easily distributed with applications.  This way, all custom error codes for a project can be stored with the project files without having to worry about including a file from user.lib.  Other ring controls/constants/indicators can be type-defined; why can't the error ring?

I use Unflatten From String (UFS) a lot.  Most of the flattened data, however, comes from outside LabVIEW and as such does not use an I32 to encode the length of strings and arrays.  That leads to a lot of branching and bending of wires.

 

NewUnflatten.png

 

 

I would like to make the boolean input (Includes length?) polymorphic to accept numeric types that specify the number of elements in an array or the length of a string.  I was too lazy to draw it, but I would also like it to promote the type input to an array if the count is wired, so I only need a simple scalar constant to specify the type instead of an array constant (think Read from Binary File without the cluster-of-arrays business).  Allowing a cluster of N numerics to specify N-D arrays would be very welcome and would permit compile-time error checking.  1D arrays for dimension sizes could work as well, but the error checking would be runtime only.

 

I desperately want the boolean input to be overloaded since it is a straight shot from the output of the previous UFS.
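To illustrate the two cases, here is a hedged Python sketch: LabVIEW-style flattened data carries a big-endian I32 length prefix, while external data often has none, so the caller must supply the element count. The function name and layout are illustrative, not LabVIEW's:

```python
import struct

def unflatten_doubles(data, count=None):
    """Unflatten an array of float64 values from a byte string.

    With count=None, assume a LabVIEW-style big-endian I32 length
    prefix; otherwise read exactly `count` elements with no prefix,
    as data flattened outside LabVIEW often lacks one.
    Returns (values, remaining bytes) like UFS's "rest of string".
    """
    offset = 0
    if count is None:
        (count,) = struct.unpack_from(">i", data, 0)  # I32 prefix
        offset = 4
    values = struct.unpack_from(f">{count}d", data, offset)
    return list(values), data[offset + 8 * count:]
```

The proposed polymorphic count input would collapse these two branches into one node on the diagram.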

Hi,

 

The Event Structure is able to catch Windows shutting down via the "Application Instance Close?" event.

That event doesn't work on Linux. A workaround existed on Linux using CIN code, but since CINs were deprecated with LV 2011, there is no current workaround.

http://digital.ni.com/public.nsf/allkb/C2470DFFFC71D47F86256F70005891C6?OpenDocument

 

Only a single click to concatenate a 1D array out of a For Loop:

Concatenat For Loop output tunnel.png

 

Usually we have to build it like below:

 

But the proposed tunnel would be easier, faster, and more elegant.

 

Concatenat 1D Array via shift register + Build_Array in For Loop.png

Very often during debugging, you have probes placed all over the block diagram checking wire contents. It would be really convenient if we could place more free labels at run time, as this would allow a developer to add comments while debugging; otherwise you have to stop the VI and then try to remember all the comments (assuming you have not written them down).

 

Note: I do not know of any other language that allows you to do this, as you are essentially changing the source on the fly, so I don't think this is a trivial change! (But LabVIEW is better than other languages anyway.)

According to this document, only 14 ideas from the Idea Exchange were implemented in LabVIEW 2010. This is a fantastic start.

 

There are at least 100 more great ideas on the Idea Exchange that should be implemented in the next version of LabVIEW. Keep listening to the users. Keep improving LabVIEW in every way.

 


FINAL PIC.png

I come across many situations where the value that needs to be entered into a numeric control is not readily available but has to be computed (for example, I need to enter the value 679.5/23.2 into the numeric control). Then I need the help of a calculator to compute the result, and to copy and paste it back into the LabVIEW control.

 

So it would be good if basic math operations (+ - * / ^ %) were supported in numeric controls (and constants?), so that if I enter 679.5/23.2 or 2^23 or 172857+3675 into the numeric control, it evaluates the math expression at edit time and assigns the value to the control. This would take less than 5% of the time it currently takes in LabVIEW.

 

(One other option would be to change the code to accept two controls and use the division primitive. But this may not be feasible if the VI is built into an application, if the operation is a one-time calculation that need not be repeated, or if the math operation needed to calculate the input is different each time, etc.)
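As a sketch of what such edit-time evaluation could look like, here is a Python fragment that accepts only the basic operators listed above; '^' is interpreted as exponentiation, as in the examples, rather than bitwise XOR. This is purely illustrative, not a description of how LabVIEW would implement it:

```python
import ast
import operator

# Allowed binary operators; '^' (ast.BitXor) is remapped to power,
# matching the proposal's 2^23 example rather than Python semantics.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Mod: operator.mod, ast.Pow: operator.pow,
    ast.BitXor: operator.pow,
}

def eval_entry(text):
    """Evaluate a numeric-control entry like '679.5/23.2' or '2^23'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("only basic math operations are allowed")
    return walk(ast.parse(text, mode="eval"))
```

Restricting the walk to a whitelist of node types keeps the evaluator safe, unlike a raw eval of whatever the user typed.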

With text-based languages, the For Loop has a programmable starting index, stopping index, and step size.  With LabVIEW, the starting index is always zero and the step size is always 1; neither is changeable.  I would like to see LabVIEW have a real For Loop with three terminals inside the loop that can be set by the user: one terminal for the initial value (starting index), one for the final value (stopping index), and one for the step size.  This would be of great value to all LabVIEW programmers.  Of course the terminals can be much smaller than what is displayed in the picture.  One- or two-letter terminals, such as ST for start, SP for stop, and SZ for step size, would do fine (or N for initial value, F for final value, and S for step size).  The real For Loop should be capable of going in a negative direction, such as starting at 10, ending at -10, with a step size of -2.

 

 

21077i3760182794779C02
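For comparison, the requested loop is a one-liner in a text language, while today it has to be emulated by computing the iteration count for N and mapping the 0-based index inside the loop. A Python sketch of that mapping, with an inclusive stop as in the 10 to -10 example:

```python
def custom_range(start, stop, step):
    """Emulate the proposed For Loop: derive the iteration count from
    start/stop/step, then map the 0-based index i to the real value,
    exactly what one wires up manually in LabVIEW today."""
    if step == 0:
        raise ValueError("step size must be nonzero")
    n = (stop - start) // step + 1      # iteration count, stop inclusive
    return [start + i * step for i in range(max(n, 0))]
```

The two expressions inside the function are the ones a LabVIEW diagram needs today: one feeding the N terminal, one remapping the i terminal.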

A common thing for me is to have an event structure that handles a lot of events (including user events and events outside the current VI). I also have a Timeout case that handles a few things like updating a time indicator every 500 ms, etc. When troubleshooting some event interactions, it would be nice to be able to set a breakpoint on the event structure EXCEPT for those Timeout events.

 

No Timeout.png

Currently, when you use implicit property nodes and create a "VI Snippet", the image automatically switches to explicit property nodes. It would be good if the tool worked this way:

 

snipp A.png

 

snipp B.png

I would find it helpful if it were possible to define enumerated values (or some sort of pre-defined selection list) for conditional disable symbols.  The current implementation appears to be based on strings and it is easy to make a typo in either the case statement or in the conditional disable dialog box that causes unexpected execution.

Consider expanding programmatic modification of the Build process.  There is already an idea here to allow the Pre-Build Action to set the Version Specification before the build (it currently happens during the build, after the pre-change Version Specification has been cached and used).  However, it would be very useful to be able to set other specifications in the Build Spec from a (true) Pre-Build Action as well.

  • Target Filename (low priority)
  • Destination Directory (I like to put Builds in Public Documents, but the Public Document folder can "move", though LabVIEW's Get System Directory can find it, so I could "pre-build" a path specific to the given PC)
  • Build Specification Description (allows the user to include version-specific or Build-Time text)
  • Version Information (in addition to Version Number, before being used, the other Fields might be useful).

I've noticed references on the Web to automated builds, and know there are Build VIs that will perform a build.  I also know there is a set of "barely-documented" APIs that some have used for this purpose, but other than Set Version Specification (which doesn't work as I expected in a Pre-Build Action), I don't know of others.  I don't especially want to poke around inside the project's XML file -- can we consider adding some other "pre-build" Set functions?  Could some of these build properties be set with a Property Node, for example?  Maybe this functionality is already there and I've not found it ...
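For anyone willing to go the XML route despite its drawbacks, the edit can be done outside LabVIEW, before the build is launched, rather than in a Pre-Build Action VI (which runs too late). A hedged sketch; the property name Bld_buildSpecDescription is an assumption based on inspecting a project file and may vary between LabVIEW versions:

```python
import xml.etree.ElementTree as ET

def set_build_property(lvproj_path, prop_name, value):
    """Rewrite the text of every <Property Name="..."> element with the
    given name in a .lvproj file. Run this before invoking the build so
    the new value is what gets cached by the Build process."""
    tree = ET.parse(lvproj_path)
    for prop in tree.iter("Property"):
        if prop.get("Name") == prop_name:
            prop.text = value
    tree.write(lvproj_path, encoding="utf-8", xml_declaration=True)
```

Keep the project closed in LabVIEW while rewriting the file, or the IDE's in-memory copy will overwrite the change on save.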

 

Bob Schor

Some companies provide the best drivers ever. Some don't.

 

So when using the Call Library Function Node to call external libraries, a call to a badly written DLL shouldn't harm LabVIEW, in my humble opinion. However, as described in the KB, badly written libraries can crash LabVIEW. I think this is not necessary.

 

Would it be possible to have calls to a DLL run in a sandbox, such that when the sandbox crashes, LabVIEW remains alive?

 

 

Frequently seen message:

LabVIEW crash upon dll error
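The sandbox idea can be prototyped outside LabVIEW by making each risky call in a child process, so a crash kills only the child. A POSIX-only Python sketch of the concept; the actual DLL call would happen inside func (e.g. via ctypes), and this is not something the Call Library Function Node offers today:

```python
import os
import pickle

def sandboxed_call(func, args=()):
    """Run func(*args) in a forked child process (POSIX only).  If the
    call crashes, only the child dies; the parent raises a normal error
    instead of crashing itself, which is what the sandbox idea asks of
    the Call Library Function Node."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child: make the risky call
        os.close(r)
        with os.fdopen(w, "wb") as out:
            pickle.dump(func(*args), out)
        os._exit(0)
    os.close(w)                           # parent: collect the result
    with os.fdopen(r, "rb") as inp:
        data = inp.read()
    _, status = os.waitpid(pid, 0)
    if status != 0 or not data:           # child crashed before replying
        raise RuntimeError("library call crashed in the sandbox")
    return pickle.loads(data)
```

The trade-off is overhead per call and the need to serialize arguments and results, which is presumably why DLL calls run in-process today.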

Diagram cleanup has become my friend lately.  It doesn't do a great job, but it is adequate to make the code readable, and moving each terminal by hand to remove as many kinks as possible just gets too time-consuming.

 

However, I do a diagram cleanup, look at the results, and then go around option-clicking to swap terminals on symmetric inputs, and then clean up again.  The diagram cleanup code should consider this option and optimize the terminal inputs.

 

By symmetric terminals I mean the whole host of primitives where the input order does not matter: Add, Multiply, Max/Min, Equal?, Not Equal?.  This would remove a whole bunch of wire crossings that are very annoying and make code hard to read.  Even Bundle by Name would be amenable to this cleanup optimization.
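The optimization itself is cheap to state: for each two-input commutative node, count the wire crossings for both input orders and keep the better one. A toy sketch, modelling each wire as a (source y, destination y) pair; this is hypothetical, not LabVIEW's actual cleanup code:

```python
from itertools import combinations

def crossings(wires):
    """Count pairs of wires that cross; each wire is (src_y, dst_y).
    Two wires cross when their endpoints are ordered oppositely."""
    return sum(
        (a_src - b_src) * (a_dst - b_dst) < 0
        for (a_src, a_dst), (b_src, b_dst) in combinations(wires, 2)
    )

def best_order(fixed_wires, node_wires):
    """For one commutative node, try both input orders and return the
    pair of wires giving strictly fewer crossings, else the original."""
    w1, w2 = node_wires
    swapped = [(w1[0], w2[1]), (w2[0], w1[1])]  # exchange destinations
    original = list(node_wires)
    if crossings(fixed_wires + swapped) < crossings(fixed_wires + original):
        return swapped
    return original
```

Running this per node after layout is linear in the number of commutative nodes, so it would add little to cleanup time.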

 

If you really want a challenge, this could also be applied to the Compound Arithmetic node, though the "negate this input" setting would have to move as the input is reassigned.

 

(If this has been implemented post LV 12, "never mind" as Emily Litella used to say.)

The "Ignore All" feature implemented for loading VIs should be extended to mass compiling as well. When mass compiling a larger code base, it's a pain to ignore the items individually.