LabVIEW Idea Exchange


I wish the Formula VIs supported conditional logic.

 

More broadly, make the Formula Node and the Formula Parse and Eval VIs have the same syntax and capability.

 

From the LabVIEW Help:

Differences between the Parser in the Mathematics VIs and the Formula Node
The parser in the Mathematics VIs supports all elements that Formula Nodes support with the following exceptions:

Variables—Only a, a0, ..., a9, ..., z, z0, ..., z9 are valid.
Logical, conditional, inequality, equality—?:, ||, &&, !=, ==, <, >, <=, and >= are not valid.
Functions—atan2, max, min, mod, pow, rem, and sizeOfDim are not valid. You can use these functions in a Formula Node or use their corresponding LabVIEW functions.
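To illustrate the gap: since the parser rejects ?: and the comparison operators, any conditional selection currently has to happen outside the formula string. Here is a minimal Python sketch of the usual workaround (eval_formula is a hypothetical stand-in for the Formula Parse and Eval VIs, not NI's API): compute the comparison in G, pass it in as a 0/1 variable, and select by arithmetic.

```python
def eval_formula(formula, variables):
    # Hypothetical stand-in for Formula Parse and Eval; assume it only
    # accepts arithmetic, as the parser described above does.
    return eval(formula, {"__builtins__": {}}, dict(variables))

x = 3.7
c = 1.0 if x > 0.5 else 0.0   # the comparison must be done outside the parser
result = eval_formula("c*a + (1 - c)*b", {"c": c, "a": 10.0, "b": -10.0})
print(result)   # 10.0, i.e. "a" was selected
```

With ?: supported, the whole expression could live in one formula string.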

 

This proposed new VI would expand upon the Clear Errors VI, and keep a history of the error codes that have been cleared. Ideally, it would have a History Length input (not shown) that, when not wired, would default to 1024 errors.

 

ClearErrors+History.png

 

 

I often want to pull elements out of an array based on their content. In a simple example, I'd like an array B that contains all the elements of A that are greater than 0.5. A few previous posts have suggested a conditional append function on the output of a loop. Here's another possibility: change Index Array so that one can wire an array of Booleans to the Index input. The output would be just those elements of the input array for which the Boolean array element is true.

 

 

Another way to handle this would be a primitive that returns an integer array containing the indices of the true elements in a Boolean array. Then allow arrays of indices to be wired to the Index input of Index Array.

 

By the way, these are not really original ideas. They come from MATLAB: specifically, the A(A>0.5) syntax and find(A>0.5).
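For readers unfamiliar with the MATLAB idioms, here is the equivalent in NumPy, as a sketch of the semantics being requested (not LabVIEW code):

```python
import numpy as np

A = np.array([0.2, 0.9, 0.4, 0.7, 0.1])
mask = A > 0.5              # the Boolean array proposed for the Index input
B = A[mask]                 # elements of A where the mask is true: [0.9, 0.7]
idx = np.nonzero(mask)[0]   # the find(A>0.5) variant: indices [1, 3]
```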

Hi,

 

My application uses a series of files to configure itself, and I need to search arrays to find elements that are similar to a given reference.

 

My solution is to use a For Loop with the Match Pattern VI and some logic to do the operation.

 

I believe that "Search 1D Array" would be faster than this implementation if it had the option to use wildcards ("*" and "?") in the "element" input.

 

Another option would be to include an "Exact match" flag: TRUE by default to behave as it does today, or FALSE to stop on the first array element that contains "element" somewhere within it.

 

For example, if element = "ode" and the array element = "model", it should count as a match when Exact match is set to FALSE.
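A Python sketch of the requested behavior (search_1d_array is a hypothetical name; LabVIEW's Search 1D Array returns -1 when the element is not found, which the sketch mimics):

```python
from fnmatch import fnmatchcase

def search_1d_array(array, element, exact_match=True):
    # Hypothetical wildcard-aware Search 1D Array.
    for i, item in enumerate(array):
        if exact_match:
            if item == element:                       # today's behavior
                return i
        elif fnmatchcase(item, "*" + element + "*"):  # "element" occurs anywhere
            return i
    return -1   # LabVIEW convention for "not found"

print(search_1d_array(["cart", "model", "node"], "ode", exact_match=False))  # 1
```

Passing element itself as the match pattern would likewise cover explicit "*" and "?" wildcards.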

 

Cheers.

When replacing a normal Add (of a Timestamp and a value) with Compound Arithmetic, the Timestamp input gets broken. This should not be the case.

 

Inconsistency is found in the supported image types of many IMAQ VIs:

 

For instance, IMAQ ROIProfile, IMAQ LineProfile, and IMAQ LinearAverages support U8, I16, and SGL images but do not support U16 images.

 

But a profile along a line drawn in a U16 image should make as much sense as one in a U8 image, shouldn't it? So why is it not supported?

 

In addition, I would expect it to be easy to add an additional polymorphic U16 version.

 

These are just a few of several examples of similar inconsistencies in IMAQ VIs.

 

 


There have been several ideas on the forums pertaining to making LabVIEW smarter about how it handles arrays, but none of them (that I've been able to find) quite gets to the root cause: LabVIEW doesn't directly support a 'sorted' status for arrays.

 

sorted-arrays.png

 

 

LabVIEW should add a 'sorted' field to the array data. When the array primitives detect, either at run time or compile time, that an array is or could be sorted, they generate optimized code. For the most part, primitives will function the same but just be more efficient (e.g., Array Max & Min becomes a constant-time operation).

 

The special cases for this feature are the array building and modifying primitives, because I believe they will need special behavior in some cases to make sorted arrays useful.

 

Build Array: when all input arrays are sorted, the output array will be sorted. This seemed strange to me at first, but the more I thought about it, the better it seemed as the default behavior.

 

Insert Into Array: when the input array is sorted and the index is unwired, the output will be sorted. This will be true whether the insertion is one element or an array of (possibly unsorted) elements.

 

I would assume you could turn any of these behaviors off by right-clicking the node and configuring it. I'm sure the LabVIEW team can figure out mutation issues as necessary.
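To make the benefit concrete, here is a Python sketch of the kinds of algorithm swaps a 'sorted' flag would allow (bisect and heapq stand in for what optimized primitives could do internally):

```python
import bisect
import heapq

data = [1, 3, 4, 7, 9, 12]                     # known sorted

minimum, maximum = data[0], data[-1]           # Array Max & Min in O(1)
pos = bisect.bisect_left(data, 7)              # Search 1D Array in O(log n): 3

bisect.insort(data, 5)                         # Insert Into Array, index unwired:
                                               # the element lands at its sorted spot

merged = list(heapq.merge([1, 4, 9], [2, 8]))  # Build Array of sorted inputs:
                                               # a merge keeps the output sorted
```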

It's a simple idea, really: recompile the existing code so that the SPT runs on Linux and Mac, not just Windows.

It would be nice to get 64-bit references. The old 32-bit references are built from two main parts: a descriptor represented in 12 bits and an address represented in 20 bits. If an application requires more than the available 1,048,576 references, LabVIEW generates an error because the reference space is full. The workaround is to close references found in memory. If an application requires serialization, this rule must be kept in mind: if it serializes references driven by input data, the input data must be limited, which means that really big data sets can't be handled even though the machine has enough memory for them. Unfortunately, this aspect of LabVIEW has not changed in the last 30 years, although 64-bit processor architectures are used worldwide and 64-bit LabVIEW plays a bigger role every year.

I think the main reason is backward compatibility. But would it be possible to provide 64-bit references that the developer can opt into, where the descriptor field stays unchanged but an additional 32 bits become available for addressing? For example, a new option in the LabVIEW settings ("use 64-bit references"), or a (polymorphic) selection on all native components that can create a reference. If the feature is enabled by an environment setting, the compiler uses it generically; if it is enabled per component, the references should be checked for conflicts. In the end, the architect/developer/application would no longer be limited inside LabVIEW, because the available 4,503,599,627,370,496 addresses would be more than enough for a long time to come.
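The arithmetic behind the numbers in this post, as a Python sketch (the 12/20-bit split and the packing order are taken from the description above, not from an NI specification):

```python
DESC_BITS = 12        # descriptor width, per the post
ADDR_BITS_32 = 20     # address width in today's 32-bit references
ADDR_BITS_64 = 52     # address width with 32 extra bits

print(2 ** ADDR_BITS_32)   # 1048576 references available today
print(2 ** ADDR_BITS_64)   # 4503599627370496 with 64-bit references

def pack_ref(descriptor, address, addr_bits):
    # Hypothetical packing: descriptor in the high bits, address in the low bits.
    assert descriptor < 2 ** DESC_BITS and address < 2 ** addr_bits
    return (descriptor << addr_bits) | address
```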

Idea: The In Range and Coerce Include upper limit option should be selected by default.

 

1 (annotated).png

 

Maybe it's just me, but when using the In Range and Coerce node I virtually always need to have both the Include lower limit and Include upper limit options selected. In approximately ten years of using the node, I think I have used a different configuration fewer than five times. It has entered my muscle memory that the first thing I do after dropping the In Range and Coerce node is to right-click it and select Include upper limit.


In my experience this point of view is supported by anecdotal evidence. For example, I recently saw a large codebase that was rightfully using lots of In Range and Coerce instances. All of the nodes had been left in their default configuration (Include lower limit selected, Include upper limit unselected). However, after inspecting the code carefully, I came to the conclusion that the intention was for all of the nodes to perform an inclusive comparison on both sides. This was confirmed by a conversation with the original author, who had simply assumed the node performs an inclusive comparison on both ends and did not know about the right-click options!

Problem:

In many cases, after wiring through a loop, we want a shift register to store a variable between iterations. But when changing a loop tunnel to a shift register, the left-side tunnel must be connected manually; if it is not, another tunnel is created on the loop.

 

Solution:

If the left and right tunnels of a loop carry the same variable, LabVIEW should automatically replace the tunnels with shift registers.

 

shift registers.JPG

 

 

It would be good if searching an array could also give the number of times the search element occurs in the array.

Count of elements.jpg

 

 

 

 

Most of us have implemented this logic with a For Loop that increments a count on each match, as in the sketch below.
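In Python terms, the proposed output boils down to this (count_in_array is an illustrative name):

```python
def count_in_array(array, element):
    count = 0
    for item in array:
        if item == element:
            count += 1          # increment on each match
    return count

print(count_in_array([2, 5, 2, 9, 2], 2))   # 3
```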

The Formula Node (FN) is indispensable when implementing numerical algorithms in LV. There is just no way to get them right with wires and built-in functions.

The problem is that it is limited to Double Precision only.

 

Since all functions available in the FN are implemented as primitives supporting extended precision, it is not as if NI doesn't have the needed algorithms at hand.

Hence, please upgrade the FN to allow use of extended precision numbers (internally but also as inputs and outputs).

 

Note: I know about Darin.K's wonderful tool to transform text-based formulas into VIs, but sometimes there are just too many inputs/outputs for this approach to be a viable route, not to mention that there is then no way to decipher (and modify) the resulting code...

I use fixed-point values quite a bit, and I do find myself splitting and joining them quite often when I have to roll my own low-level, optimized operations.

 

The fixed point type is (generally) treated as an arithmetic type (e.g. floating point) rather than a logical type (e.g. integers). The (default) configuration should maintain this behavior.

 

split.png

What I would have found most useful is having Split cut the value in half and return the two properly configured fixed point values. Join would take two adjacent fixed point types and glue them together into one value. This definition would actually make Join equivalent to adding the two values.

 

Split could take an optional split location that dictates the binary point at which the values are split apart. I suggest defining the value as the location of the LSB (least significant bit) of the high part. In the example, the value would be 0 to get the equivalent behavior. This terminal would require an (immediately computable) constant wired to it, since the fixed-point output types can't be computed until this value is known.
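A Python sketch of the proposed semantics, using exact fractions to mimic fixed-point values (function names are illustrative; split_location is the LSB position of the high part, as suggested above):

```python
from fractions import Fraction

def fxp_split(value, split_location=0):
    # Keep bits with weight >= 2**split_location in 'high'; the rest is 'low'.
    lsb_weight = Fraction(2) ** split_location
    high = (value // lsb_weight) * lsb_weight
    return high, value - high

def fxp_join(high, low):
    return high + low   # the proposed Join is literally addition

v = Fraction(45, 8)             # 5.625, binary 101.101
high, low = fxp_split(v, 0)     # high = 5 (101.000), low = 0.625 (000.101)
assert fxp_join(high, low) == v
```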

 

 

At present, the VI Analyzer allows you to check a single VI or a complete project. However, within my project files I often have clutter from imported libraries, such as test functions, which are not required for the application I am building but will be analyzed if I select the project, making it difficult to tell what is actually important in the results.

 

What I would like is an option to do a "top down" evaluation: select your top-level VI and run the analyzer on that VI and all the VIs below it in its tree. This would allow me, in large projects, to analyze and fix only those VIs that are relevant to my current application, rather than wading through potentially hundreds of useless results to find the useful ones.

Currently, when replacing an element/subarray using Replace Array Subset, you are only able to replace a subarray of lower dimensionality in which one dimension is unity.

For example, given an array of size 100x3, it is only possible to replace one 1x3 row at a time.

It would be helpful to avoid using a loop and be able to replace a subarray of dimension Nx3, where 1 < N < 100.

 

I may still have to use a loop to completely fill an array, but the loop could potentially run far fewer iterations.

This comes up for me because I read the FIFO buffer from a device and generate a chunk of data at once.

I then need to insert this data into a preallocated array (avoiding Build Array or similar due to the time-critical nature of the VI).

 

If I have a 1E5x3 array that I want to place into a 1E8x3 array, why waste time with a loop that does each row at a time (remembering that the loop is still much faster than Build Array)?
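In NumPy terms, the requested operation is a single slice assignment (a scaled-down sketch; the shapes stand in for the 1E8x3 buffer and 1E5x3 chunk):

```python
import numpy as np

buffer = np.zeros((1000, 3))    # stands in for the preallocated 1E8x3 array
chunk = np.random.rand(100, 3)  # stands in for one 1E5x3 FIFO read

offset = 0
buffer[offset:offset + chunk.shape[0], :] = chunk   # whole Nx3 block, no row loop
offset += chunk.shape[0]                            # next chunk lands after this
```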

Hi,

 

Adding or allocating a secondary axis in an Excel chart/graph using the Report Generation Toolkit would be a good feature to have. We can use macros to do this today, but a dedicated VI in the Report Generation Toolkit would be great.
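For reference, the macro-level operation such a VI would wrap looks roughly like this (a Windows-only sketch via pywin32; the workbook path is hypothetical; in the Excel object model, xlSecondary is 2):

```python
import win32com.client   # pywin32; assumes Excel is installed (Windows only)

excel = win32com.client.Dispatch("Excel.Application")
workbook = excel.Workbooks.Open(r"C:\reports\report.xlsx")   # hypothetical path
chart = workbook.Sheets(1).ChartObjects(1).Chart

chart.SeriesCollection(2).AxisGroup = 2   # 2 == xlSecondary: move series 2
                                          # onto the secondary axis

workbook.Save()
excel.Quit()
```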

 

 

Thanks!

If you have messy "future" or "obsolete, but we are being risk-averse and keeping it around" code in the Disabled state of a Diagram Disable structure, the VI Analyzer reports test failures for such code, cluttering the results in my opinion. It would be great to have an option, perhaps in the "Select Tests" page of the setup wizard, to ignore any such code.

Idea: The Assert Structural Type Match node should be growable (able to expand the number of inputs downwards and/or upwards). This would be similar to how many well-loved nodes can be "grown" downwards or upwards, such as Build Array, Concatenate Strings, Index Array, Merge Errors, etc.

 

1 (edited).png

 

3 Growable Nodes (edited).png

 

The following screenshot shows a real-world VIM that I created where I would have benefited from this feature. I needed to ensure that three inputs were all of the same data type. This required using two Assert Structural Type Match nodes. It would be possible to use a single node with three inputs if this idea were implemented. This would result in fewer wires and fewer objects on the block diagram.

4.png

 

One of the most common operations performed on arrays is to determine whether an element is found inside the array or not.

 

There should be a function dedicated to this fundamental operation. My workaround is to use the "Search 1D Array" function followed by a "Greater Or Equal To 0?" function, as seen below. While it only takes a few seconds to drop the geqz function using Quick Drop, it's still slightly frustrating that this is necessary.
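In Python terms, the contrast looks like this (plain lists used for illustration):

```python
A = [3, 8, 1, 5]

# Today's workaround: Search 1D Array, then "Greater Or Equal To 0?"
index = A.index(5) if 5 in A else -1   # -1 mimics LabVIEW's "not found"
found = index >= 0

# What a dedicated "Element of Array?" function would express directly:
found = 5 in A   # True
```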

 

The set and map data types rightly have been given the "Element of Set?" and "Look In Map" functions. An equivalent should be provided for arrays.

Element of Array - edited.png

 

Thanks!