LabVIEW Idea Exchange


Hi.

 

I simply propose adding a Warning case to the automatically generated Error/No Error case structure:

 

Warning.png

 

The inclusion of the Warning case could be optional, and maybe set up by right-clicking the case structure?
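
For illustration, here is a minimal Python sketch (LabVIEW itself is graphical, so this is only an analogue) of the three-way branch such a case structure would cover, assuming the usual error-cluster convention that an error is status=TRUE, a warning is status=FALSE with a non-zero code, and "no error" is status=FALSE with code 0; the function name is made up:

    def dispatch_case(status, code):
        """Return which case of the structure would run for a given error cluster."""
        if status:
            return "Error"
        if code != 0:
            return "Warning"      # the proposed extra case
        return "No Error"

    print(dispatch_case(False, 52103))   # -> "Warning"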

 

Cheers,

Steen

Format Into Text is very useful but can become hard to edit when it has a lot of inputs. I propose that, instead of one huge format string, the programmer be allowed to put the required format next to the corresponding input. Also, the user should be allowed to enter constant strings, e.g. \n, \t, or "Comment", and have the corresponding input field automatically grayed out.
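
As a rough text-language illustration of the maintenance problem (Python, not LabVIEW; the names are made up), compare one monolithic format string with formats paired to each input, which is the spirit of this idea:

    name, index, value = "ch0", 7, 3.14159

    # One monolithic format string: editing means re-counting which
    # specifier belongs to which input.
    line = "Name: %s\tIndex: %05d\tValue: %.3f\n" % (name, index, value)

    # Per-input formats: each format sits next to the value it applies to,
    # and literal pieces like "\t" stand alone (no input needed).
    parts = [("Name: %s", name), ("\t", None), ("Index: %05d", index),
             ("\t", None), ("Value: %.3f", value), ("\n", None)]
    line2 = "".join(fmt % val if val is not None else fmt for fmt, val in parts)
    assert line == line2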

 

FORMAT INTO TEXT IDEA

It is fairly common to build applications in LabVIEW that are useful to run as a service (e.g. monitoring software); however, to do so today you need to "cheat" by using srvany and writing and running batch files to get it installed.

 

It would be great if we could build real services, and the installer would take care of the installation process. You could even have a nice template for services with an accompanying user interface client, with notification icon and everything.

 

LabVIEW everywhere...Not just on different targets, but in every part of the system. 🙂 

 

 

 

 

The BeagleBoard xM is a 32-bit ARM-based microcontroller board that is very popular. It would be great if we could program it in LabVIEW. This product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The BeagleBoard xM is $149 and is open hardware. The BeagleBoard xM uses an ARM Cortex-A8 running at 1,000 MHz, resulting in 2,000 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 33 times the processing power!

 

Wouldn’t it be great to program the BeagleBoard xM in LabVIEW?

This is an extension of Darin's excellent idea and touches on my earlier comment there.

 

I suggest that mixing booleans with numeric operations should be allowed, in which case the boolean would be "coerced" (for lack of a better word) to 0 or 1, where FALSE=0 and TRUE=1. This would dramatically simplify certain code fragments without loss of clarity. For example, to count the number of TRUEs in a boolean array, we could simply use "Add Array Elements".

 

(A possible extension would be to also allow mixing of error wires and numerics in the same way, in which case the error would coerce to 0,1 (0=No Error, 1=Error))

 

Here's how it could look (left). Equivalent legacy code is shown on the right. Counting the number of TRUEs in a boolean array is an often-needed operation, and this idea would eliminate two primitives. This is only a very small sampling of the possible applications.

 

 

Idea Summary: When a boolean (or possibly an error wire) is wired to a function that accepts numerics, it should be automatically coerced to 0 or 1 of the dominant datatype connected to the other inputs. If there is no other input, e.g. in the case of "Add Array Elements", it should be coerced to I32.
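
For comparison, this is exactly how the proposed coercion already works in a text language; in Python, booleans coerce to 0/1 in numeric context, so "Add Array Elements" on a boolean array is just a sum:

    bool_array = [True, False, True, True, False]
    num_true = sum(bool_array)   # TRUE -> 1, FALSE -> 0, no conversion primitive needed
    print(num_true)              # 3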

I16 is currently the only output representation of this function, and it is particularly inconvenient - I typically end up typecasting it 90% of the time. I propose that the output representation of this primitive be changeable via the right-click menu.

 

BoolOutputRepresentation.png

(This is different and less controversial than this related old idea.)

 

Arrays of timestamps only contain legal values and can even be sorted. We can use "search 1D array" to find a matching timestamp just fine.

 

As with DBLs, there might be a very close array value that is a few LSBs off, but well within the error of the experiment so it is sufficient to find the closest value. We can always test later if that value is close enough for what we need or if "not found" would be a better result.

 

If we have a sorted input array, the easiest solution is "Threshold 1D Array", giving us a fractional index of a linearly interpolated value. For some unknown reason, timestamps are not allowed for the inputs, limiting the usefulness of the tool. One possible workaround is to convert to DBLs, with the disadvantage that quite a few bits are dropped (timestamp: 128 bits, DBL: 64 bits).

 

Similarly, "Interpolate 1D array" also does not allow timestamp inputs. In both cases, there is an obvious and unique result that is IMHO in no way confusing or controversial.

 

 

 

IDEA Summary: "Threshold 1D Array" and "Interpolate 1D Array" should work with timestamps.
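
For illustration, a minimal Python sketch of the fractional-index behaviour described above, applied to timestamps represented here simply as seconds since an arbitrary epoch (the function name and clamping at the ends are illustrative, not NI's implementation):

    from bisect import bisect_left

    def fractional_index(sorted_ts, threshold):
        """Fractional index of `threshold` in `sorted_ts`, linearly interpolated."""
        i = bisect_left(sorted_ts, threshold)
        if i == 0:
            return 0.0
        if i == len(sorted_ts):
            return float(len(sorted_ts) - 1)
        lo, hi = sorted_ts[i - 1], sorted_ts[i]
        return (i - 1) + (threshold - lo) / (hi - lo)

    ts = [0.0, 10.0, 20.0, 40.0]        # sorted "timestamps"
    print(fractional_index(ts, 15.0))   # 1.5 (halfway between index 1 and 2)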

 

 

 

There is a construct I am quite fond of in pointer-friendly languages, using iterator math to implement circular buffers of arbitrary data types. They are a little bit slower to use than straight arrays, but they provide a nice syntax for fixed-size buffers and are helpful in cases where you will be prepending and appending elements.

 

I am pretty certain that queues are implemented as circular buffers under the hood, so much of the infrastructure is already in place; this is mostly adding a new API. Added bonus: the explicit circular buffer can be synchronous, unlike the queue, so for example you can put them in subroutine VIs.

 

It should be easy to convert 1D arrays to/from circular buffers. Array->CB is basically free, since the elements are already in order in memory. CB->Array requires two block copies (most of the time). This can be strategically managed, much like Reverse or Transpose operations.
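
For illustration, a minimal Python sketch (names and API made up) of the fixed-size circular buffer described here, with O(1) append/prepend via index math and the two-block-copy conversion back to a flat array:

    class CircularBuffer:
        def __init__(self, size):
            self.data = [None] * size
            self.start = 0          # index of the logical first element
            self.count = 0

        def append(self, x):        # add at the logical end, overwriting the oldest
            end = (self.start + self.count) % len(self.data)
            self.data[end] = x
            if self.count < len(self.data):
                self.count += 1
            else:
                self.start = (self.start + 1) % len(self.data)

        def prepend(self, x):       # add at the logical front
            self.start = (self.start - 1) % len(self.data)
            self.data[self.start] = x
            self.count = min(self.count + 1, len(self.data))

        def to_array(self):         # the two block copies mentioned above
            end = self.start + self.count
            if end <= len(self.data):
                return self.data[self.start:end]
            return self.data[self.start:] + self.data[:end % len(self.data)]

    cb = CircularBuffer(3)
    for x in (1, 2, 3, 4):
        cb.append(x)
    print(cb.to_array())   # [2, 3, 4] -- oldest element overwritten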

 

Use cases:

 

You can implement most of  the following two ideas naturally:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Looping-Input-Tunnels/idi-p/2020406

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/New-modes-on-auto-indexed-input-array-tunnels-in-loops/idi-p/2263706

 

Circular buffers would auto-index and cycle the elements and not participate in setting 'N'.

 

You can do 95+% of what I wanted to do with negative indexing:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Negative-Values-in-Index-Array-or-Array-Subset/idi-p/960863

 

A lot of the classic divide-and-conquer algorithms become tractable in LV. You can already use queues to implement your own stack and outperform native recursion. A CB implementation of the stack would be amenable to subroutine priority and give a nice performance kick. I have done it by hand for a few datatypes, and the beauty and simplicity of the recursive solution get buried in the implementation of the stack. A drop-in node or two would give you a cleaner look and high-octane performance.

 

Finally, perhaps the most practical reason yet:  simple XY Charts.

 

As for appearance I'd suggest a modified wire like the matrix data type.  Most if not all Array primitives should probably accept the CB.  A few new nodes are needed to get/set buffer size and number of elements and to do the conversions to/from 1D arrays. The control/indicator could have some superpowers:  set the first element, wraparound scrolling (the first element should be highlighted).

What bugs me about the VI Profiling Tool is that it is not intuitive. The information it provides is really useful; however, it's so hidden and difficult to interpret that few people actually know where to find it or how to use it. Let's say you are simply acquiring data using DAQmx and writing that data to a file, as below:

 

Block Diagram of Inefficient DAQmx and File I/O

 

You might want to find out how to make this more efficient, but the only way you know how is to insert Tick Count VIs and wrap your wires through sequence structures. It's annoying, and there are other ideas from JackDunaway (here) and JohnMc19 (here) which aim to simplify the use of those VIs.

 

But why re-code your application when the VI Profiler can do it for you? In addition, the VI Profiler has more timing information (longest, shortest, average, total, etc.) as well as the number of runs and memory allocation data.

 

Good news: VI Profiler makes getting the data easy. 

Bad news: VI Profiler makes using the data difficult.

 

Why Using the Profiler is Difficult

 

First, you need to know it exists among a number of bland and condensed gray menus (Tools>>Profile>>Performance and Memory).

 

Next, you have to coordinate starting the profiler with starting your VI (start the Profiler, then start your VI, then stop your VI, then stop the Profiler).

 

Finally, you have to dig through a TON of VIs to find the ones that are relevant (I assume this is because, for polymorphic VIs, all of the instances are loaded into memory, even the dozens that aren't currently being used).

 

When you find the VI you wish to examine, it will look something like this:

 

VI Profiler Currently

 

Have fun sorting through that!  When I finally find a VI that's hogging memory or speed, I'd expect to click on it to navigate to that VI.  NOPE!  All the VI Profiler does is make the line bold.  Not particularly easy to use...

 

I can't say if it's possible to get rid of VIs that aren't being used, or to make the menu option more visible to the user, but I do have an idea or two for how to make this information easier to understand in LabVIEW.

 

So here's what I suggest:

Adding a couple of check boxes to the top of the VI Profiler would let you view its data in relation to your LabVIEW VI. Notice the extra check boxes at the top of this image.

 

VI Profiler With Color Shading

 

One checkbox allows you to color the column you wish to highlight in your LabVIEW code. The other checkbox inserts a text comment containing the highlighted data straight into your LabVIEW code (right next to the subVIs):

 

Shading SubVIs According to Relative Execution Speeds

From the above picture, you can clearly see the Write to Spreadsheet File VI is the slowest to execute. Next in line are the Start DAQmx Task VI and then the Stop DAQmx Task VI. So if a developer wanted to find out how to make his loop run faster (and therefore increase the rate at which data is read from his PC's RAM), he would know that the VIs shaded more red are the ones he needs to focus on first.

 

Also, if a user wants to highlight the memory usage, he could select a memory column from the VI Profiler.

 

Highlight Average Memory Usage Per VI 

 

Then the LabVIEW block diagram would look like this:

 

Block Diagram Shaded in Blue for Average Memory Usage

In this case, if a developer wants to find out how he can optimize his code for memory usage, he knows where to start.

 

Side-note: I think selecting multiple colors at a time (one for each column of data you wish to highlight) would be cool, but that would start to get messy on the block diagram.

 

Other data, like the number of runs, could highlight which sections of code are running more often than others.

 

If we integrate the VI profiler more effectively into LabVIEW, there are a lot of benefits:

 

1. Re-coding to find timing specs won't be necessary for subVIs.

2. Monitoring memory allocations becomes much easier (some users don't know it's possible with LabVIEW).

3. When there's a problem, it's easier to understand which subVIs are slowing down code or hogging memory.

4. Developers can continue code development WHILE being wary of inefficiencies.

5. A more integrated development environment "feel" for new customers and experienced G-coders.

 

Please let me know what you think!

So when it comes to using a queue, there is a somewhat common design pattern used by NI examples: a producer/consumer loop, where the consumer uses a dequeue function with a timeout of -1. This means the function will wait forever until an event comes in. But a neat feature of this function is that it also returns when the queue reference becomes invalid, which can happen if the queue reference is closed or if the VI that created that reference stops running.

 

This idea is to have similar functionality when it comes to user events. I have a common design pattern with a publisher/subscriber design where a user event is created and multiple loops register for it. If for some reason the main VI stops, that reference becomes invalid, but my other asynchronous loops will continue running. In the past I've added a timeout case where I check whether the user event is still valid once every 5 seconds or so, and if it isn't, I go through my shutdown process.

 

What I am thinking is that there could be another event to register for, generated when the registered user event becomes invalid, so that polling for the validity of the user event isn't necessary.
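
As a rough Python analogue (not LabVIEW) of the pattern being asked for: the consumer below blocks until the producer shuts the channel down, instead of polling a "still valid?" flag every few seconds; a sentinel object stands in for the reference becoming invalid.

    import queue, threading

    SHUTDOWN = object()        # plays the role of "reference became invalid"
    q = queue.Queue()

    def consumer():
        while True:
            item = q.get()     # blocks forever, like a dequeue with timeout -1
            if item is SHUTDOWN:
                break          # shut down immediately, no 5-second polling loop
            print("handled", item)

    t = threading.Thread(target=consumer)
    t.start()
    q.put("event 1")
    q.put(SHUTDOWN)            # producer going away
    t.join()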

 

before:

before.png

after:

after.png

The Timing palette is looking bad with all these gaps. A simple fix would be to fill these holes with useful functions. I'm proposing 3 and attaching 2 from my re-use code. (I may re-create the third later.)

timing2.PNG

 

Time to XL.vi (Attached): and its inverse, XL to Time.vi

12:00:00.0 AM Jan 0, 1900 is a pretty common epoch (base date) for external programs, and converting from the LabVIEW epoch shows up several times a year on the forums; Time to XL has a few solved forum threads under its belt. Moreover, for analysis against external data from other environments you are often using Access, Excel, Lotus... all of which share the same epoch (and leap-year bug) in their date/time formats. These VIs have been pretty useful to me, although the names may change to avoid (tm) infringements.
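
For illustration, a minimal Python sketch of the conversion such VIs would perform (the exact behaviour of the attached VIs is not reproduced here), assuming the LabVIEW epoch of 1904-01-01 00:00 UTC and the standard 1462-day offset between that date and the Excel/Lotus 1900-system serial dates (the offset includes the fictitious 29 Feb 1900 from the leap-year bug mentioned above):

    SECONDS_PER_DAY = 86400.0
    LV_TO_EXCEL_DAYS = 1462.0     # days from the 1900 system's origin to 1904-01-01

    def labview_to_excel(lv_seconds):
        """LabVIEW timestamp (s since 1904-01-01 UTC) -> Excel serial date."""
        return lv_seconds / SECONDS_PER_DAY + LV_TO_EXCEL_DAYS

    def excel_to_labview(excel_serial):
        """Excel serial date -> LabVIEW timestamp in seconds."""
        return (excel_serial - LV_TO_EXCEL_DAYS) * SECONDS_PER_DAY

    print(labview_to_excel(0.0))  # 1462.0 == 1904-01-01 in the 1900 date system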

 

Time to Time of Day.vi (attached) has also been in my arsenal; it proves valuable and comes up in a few threads per year on the forum.

 

The gaps in the palette make it a perfect fit.

timing.PNG


Currently with LabVIEW Scripting you can very easily create an event structure and add new event cases to the event structure. Unfortunately, for some reason, you cannot programmatically configure the event case after it's created. I think there should be some way, via property nodes, invoke nodes, etc., to accomplish this.

 

idea.png

 

PS: I would love to be wrong about this. If you know how to do that, feel free to argue it in the comments and we'll get this closed as completed 😄

After fighting an increasingly slow IDE for a while, I found the reason editing my libraries and classes was slow: a lengthy and large mutation history. I'd like to see an easy option to clear that history so I'm not burdened with the edit-time slowness when I don't have to be:

DelMut.png

If I were smart with right-click-framework stuff I guess I could make it work myself, but for anyone so inclined:

DelMutCode.png

I love the idea of the new cluster constants. I have found, though, that if I need to edit these constants to change something, the control opens back to full size. This really defeats the purpose of these space-saving features. We need to be able to double-click these constants and have them open in a new window so that my block diagram size does not change.

 

20861i667C4C24B992C741

 

If my diagram gets this big every time I need to change a true or false state, what good is the nice new icon or constant?

 

Just a thought.

Good programming practice says to handle your errors. If you want to clear a specific error, you have to do some variation on the following code:

 

ClearSingleError.png

 

Since this can be pretty cumbersome in your code, I would love to have a Clear Specific Error VI/function that will do exactly this. It could also be polymorphic and accept an array of multiple errors. To make it even better, you could probably merge it with the existing Clear Errors VI and set it up so that if the error input is empty, it clears all errors.
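
For illustration, a minimal Python sketch of the proposed behaviour (the real thing would be a LabVIEW VI operating on the error cluster; the function name and tuple representation are made up): the error passes through unchanged unless its code matches one of the codes to clear, and an empty code list clears any error, like the existing Clear Errors VI.

    def clear_specific_error(error, codes_to_clear=()):
        """error is modelled as (status, code, source)."""
        status, code, source = error
        if status and (not codes_to_clear or code in codes_to_clear):
            return (False, 0, "")
        return error

    print(clear_specific_error((True, 7, "Open File"), codes_to_clear=(7, 43)))
    # -> (False, 0, '') : error 7 is cleared; any other code passes through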

 

CaptainKirby made a nice community example that does all this, but I'd like it to be on the palette for everyone to use...

https://forums.ni.com/t5/Example-Code/Clearing-a-Specific-Error-From-The-Error-Cluster/ta-p/3517242

 

Trim Whitespace currently accepts strings (scalars).

 

I propose that Trim Whitespace be made polymorphic so that it also handles an array of strings (basically just wrap the scalar implementation in a For Loop).
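
As a trivial sketch of the proposed array instance (Python analogue, name made up), it really is just the scalar operation applied element by element:

    def trim_whitespace_array(strings):
        return [s.strip() for s in strings]

    print(trim_whitespace_array(["  abc ", "\tdef\n"]))   # ['abc', 'def']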

This topic keeps coming up randomly.  A LabVIEW class keeps a mutation history so that it can load older versions of the class.  But how often does this actually need to be done?  I have never needed it.  Many others I have talked with have never needed it.  But it often causes problems as projects and histories get large.  For reference: Slow Editor Performance with Large LabVIEW Projects Containing Many Classes.  The history is also just added bloat to your files (lvclass and any built files that use the lvclass) for something most of us do not need and sometimes causes build errors.

 

My proposal is to add an option to the class properties to keep the mutation history.  It can be enabled by default to keep the current behavior.  But allowing me to uncheck this option and then never have to worry about clearing the history again would be well worth it.

What happens when you want to change a Border Node on an IPE? It is a manual process which I propose can be automated as shown in the two examples I have provided...

 

InPlaceReplace.jpg


 

 

 

 

For modern application development we need better methods to detect whether our application is being called to open a file.

Currently the only help NI gives is based on command-line parameters, and we need to jump through hoops to get it to react to opening a file when the program is already running.

 

This is a major showstopper in creating professional applications.

 

LabVIEW 8.2 had a hidden event for this, which unfortunately doesn't work in later versions.

 

 

Searching an array for a certain element can be expensive for large arrays. The speed could be dramatically increased if we can assume that the input array is sorted. The speed difference can be several orders of magnitude.

 

There is an old example that discusses this in more detail. I also wrote a similar tool long ago when I needed to recursively score positions for the tic-tac-toe solver, using a retrograde analysis similar to what's used to generate endgame tablebases for chess. (It literally took hours with the plain old "search array"!).

 

(A similar tool is the "search ordered table.vi", which only works for DBL and returns a fractional index.)

 

Suggestion:

 

"Search array" should have a boolean input (default=FALSE) where we can tell the algorithm that the array is sorted.

 

(The output would be exactly as before, with -1 indicating "not found".) Setting this input to TRUE would switch to a binary search where, in the worst case, only log2(N) instead of N comparisons need to be made (e.g. 20 instead of 1,048,576, for a roughly 50,000x speedup!).
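
For illustration, a minimal Python sketch of the proposed sorted=TRUE path: a binary search that keeps the existing output convention (index of the element, or -1 if not found); with duplicates it may not return the first match, so treat it as a sketch rather than a drop-in replacement.

    def search_sorted_array(arr, element):
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2       # at most ~log2(N) iterations
            if arr[mid] == element:
                return mid
            if arr[mid] < element:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                      # same "not found" value as today

    print(search_sorted_array([1, 3, 5, 7, 9], 7))   # 3
    print(search_sorted_array([1, 3, 5, 7, 9], 4))   # -1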

 

It could look as follows.

 

Of course it should continue to accept all possible datatypes (including clusters) and not be limited to simple datatypes like the polymorphic example quoted above.
