LabVIEW Idea Exchange


I'd like the Equal To Zero? and Not Equal To Zero? primitives to support the error cluster wire.  The node would compare the cluster's error code to zero and return a Boolean accordingly.
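
Just to pin down the semantics, here is a minimal sketch in Python, with a dictionary standing in for LabVIEW's error cluster (status, code, source); the names are mine, not part of the proposal:

    # Hypothetical stand-in for the LabVIEW error cluster (status, code, source).
    error_cluster = {"status": False, "code": 0, "source": ""}

    def equal_to_zero(error):
        # Proposed behavior: compare only the numeric error code to zero.
        return error["code"] == 0

    def not_equal_to_zero(error):
        return error["code"] != 0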

 

Error_Cluster_Zero.png

 

Thanks,

 

Steve K

For instance, I process an image and get overlay results (even without knowing exactly which overlay objects were drawn), and I want to copy this image, together with its overlay, into a bigger image at a specified offset...

IMAQ ImageToImage 2 does not seem to do the job...

Direct-to-PDF reporting is an extremely important feature to offer customers. You cannot rely on the customer having MS Word or the like installed.

There are a couple of LabVIEW PDF toolkits supplied by third-party developers. However, the problems with these include that they are (a) not updated or well supported, (b) buggy, (c) built on outdated dependencies such as .NET 2.0, and (d) restrictively licensed for deployment.

Good reporting tools are essential, and NI should develop and support a direct-to-PDF toolkit.

Curvature in NI Vision has a documented behavior that makes a lot of sense: "If the current point is too close to either end of the array to choose the additional points, the curvature is calculated as 0." (Vision Manual).

"Too close" refers, obviously, to roughly half the kernel size.

 

curvature.png

 

This makes no sense when I'm working on a contour that is "closed" (starting point = ending point), for example when I am trying to analyse a particle and its "turning points".

 

 

I'm losing one kernel width of data at exactly the starting/ending point, as marked in the picture, and in this synthetically generated and exaggerated case I'm losing the information about one whole edge!

To fix this, I either rotate the ROI or change the search direction, calculate the missing data, and replace the values in the curvature profile. (Or I calculate the curvature myself.)

This makes absolutely no sense.
Vision could easily recognize that starting point = ending point, or just let me set a Boolean if there is a reason not to make this automatic. (I can't think of one.)
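
A rough sketch of the wrap-around handling I have in mind, in Python with NumPy (the triangle-based curvature below is only a stand-in for whatever the Vision function actually computes): for a closed contour, the points "missing" near the ends can simply be taken from the other end of the array.

    import numpy as np

    def curvature_closed(points, kernel):
        # points: (N, 2) contour coordinates with points[0] == points[-1].
        # Pad with points wrapped from the other end so the first/last
        # half-kernel of curvature values is not forced to 0.
        half = kernel // 2
        closed = points[:-1]                  # drop the duplicated end point
        padded = np.concatenate([closed[-half:], closed, closed[:half]])
        curv = np.empty(len(closed))
        for i in range(len(closed)):
            v1 = padded[i + half] - padded[i]
            v2 = padded[i + 2 * half] - padded[i + half]
            # Normalized cross product as a simple curvature proxy.
            cross = v1[0] * v2[1] - v1[1] * v2[0]
            norm = np.linalg.norm(v1) * np.linalg.norm(v2)
            curv[i] = cross / norm if norm else 0.0
        return curv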

 

There are a plethora of timestamp formats used by various operating systems and applications. The IEEE 1588 precision time protocol alone lists several. Windows is different from the various flavors of Linux, and Excel is very different from about all of them. Then there are the details of daylight saving time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. It should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer); text-based timestamps are already handled fairly well. Inputs would be timestamp, input format type, output format type, and error. Outputs would be the timestamp in the resulting format, and error.
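
To illustrate the kind of numeric conversions such a primitive would wrap, here is a hedged Python sketch for two of the formats mentioned (Excel serial dates count days since 1899-12-30, Unix/Linux timestamps count seconds since 1970-01-01 UTC, and LabVIEW's own epoch is 1904-01-01 UTC; time zones and DST are deliberately ignored here):

    # Epoch offsets -- these constants are the whole trick.
    EXCEL_TO_UNIX_DAYS = 25569              # days from 1899-12-30 to 1970-01-01
    SECONDS_PER_DAY = 86400
    LABVIEW_TO_UNIX_SECONDS = 2082844800    # seconds from 1904-01-01 to 1970-01-01

    def excel_to_unix(excel_serial):
        """Excel serial date (double, in days) -> Unix timestamp (seconds)."""
        return (excel_serial - EXCEL_TO_UNIX_DAYS) * SECONDS_PER_DAY

    def unix_to_labview(unix_seconds):
        """Unix timestamp -> LabVIEW timestamp (seconds since 1904-01-01 UTC)."""
        return unix_seconds + LABVIEW_TO_UNIX_SECONDS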

Wouldn't it be nice to just check/uncheck existing sub-VIs of the parent class in the Inheritance dialog box, instead of manually adding them one by one? All of the overriding sub-VIs in child classes have to have the same names anyway.

Even with only a few VIs and a few child classes, the work saved would be huge.

 

PS: I did not find a similar idea here, so bear with me if it has already been put up. The AES guy at the UGM did not know of anything easier than copying and pasting at the level of Windows Explorer and later adding the files to the project.

Please add an output for the number of characters processed, to be used as a failure indicator. This function converts "1.1.1" into 1.1, and there is currently no way to tell whether anything was lost.
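
For comparison, a small Python sketch of a scan that also reports how many characters it consumed, so the caller can detect that "1.1.1" was only partially parsed (the function and regex are purely illustrative, not a description of the existing LabVIEW primitive):

    import re

    def scan_number(text):
        """Return (value, chars_consumed); chars_consumed < len(text) flags leftovers."""
        m = re.match(r"[+-]?\d+(\.\d+)?([eE][+-]?\d+)?", text)
        if m is None:
            return None, 0
        return float(m.group(0)), m.end()

    value, used = scan_number("1.1.1")
    # value == 1.1, used == 3, len("1.1.1") == 5 -> the trailing ".1" was never processed.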

When a breakpoint has been inserted in a block diagram and the VI stops and shows the block diagram, the breakpoint is frequently at the edge of the screen. The same thing happens when using Find/Replace: the object found sits at the edge of the screen instead of being centered.

It would be nice if the block diagram were centered on the object or breakpoint.

I guess it is possible this problem is unique to me, perhaps because of some property of the VI or some LabVIEW option; if so, please let me know how I can change it.

 

In my experience, it is common to open files written by other people and realize that constants are displayed in hex or binary format only after spending time debugging.

 

I think it would be nice to see the display format directly on the block diagram.

This could also apply to controls/indicators.

 

 dispFormatHint.png

It would be very useful if several QuickDrop plugins could share the same shortcut, with the plugin that runs depending on the object currently selected on the block diagram or the front panel.

 

For example:

- Imagine a "Ctrl+C" shortcut; this would be useful for lots of QuickDrop plugins that come to mind:

  • Copying the text of a Bundle By Name node to the clipboard.
  • Copying the text of an Unbundle By Name node to the clipboard.
  • Copying the selected case to the clipboard.
  • etc.

I am considering the use of Digital Image Correlation (DIC) in our rock properties testing laboratory. We use LabVIEW quite extensively in our lab. I found an earlier thread saying it was not currently available in LabVIEW, with a suggestion to bring it up on the NI Idea Exchange.

 

http://forums.ni.com/t5/Machine-Vision/digital-image-correlation-vi/m-p/2477642

 

However, it seems nothing was brought forward. Is this something that might get further attention, or have I missed some more recent developments? The links below have piqued my curiosity.

 

http://trilion.com/products-services/digital-image-correlation/

 

http://www.lavision.de/en/products/strainmaster-dic.php

 

There is no way to get to the GObject "Position" property in the properties dialog box. It's only available via property nodes, which is a pain if you just want to set the control position statically.

 

I run into this when I have multiple similar VIs that I insert into and remove from a subpanel. In order to set the controls to the same position across the VIs, I have to use a property node for a one-time operation.

 

I wish the Formula VIs supported conditional logic.

 

More broadly, make the Formula Node and the Formula Parse and Eval VIs have the same syntax and capability.

 

from LV help:

Differences between the Parser in the Mathematics VIs and the Formula Node
The parser in the Mathematics VIs supports all elements that Formula Nodes support with the following exceptions:

Variables—Only a, a0, ..., a9, ... z, z0, ..., z9, are valid.
Logical, conditional, inequality, equality—?:, ||, &&, !=, ==, <, >, <=, and >= are not valid.
Functions—atan2, max, min, mod, pow, rem, and sizeOfDim are not valid. You can use these functions in a Formula Node or use their corresponding LabVIEW functions.

 

Hello,

 

In many of my applications I have had to run dynamic, asynchronous VIs from a VI path.

 

When the VI you want to launch has problems (VI broken, missing resources, ...), you get the generic error 1003 ("The VI you want to launch may be broken ...?").

 

This error can be solved easily when you are using the LabVIEW IDE.

 

But sometimes this error can occur in an executable you have deployed to your end user's computer.

And then the error 1003 message ("The VI you try to call may be broken ...") gives too little information to analyse the problem.

This error occurs, for example, when the dynamic VI tries to access missing DLLs.

 

It would be nice if the error 1003 message could be filled out with an additional "error description" giving more information about the source of the error.

For example, an error list like the one in the LabVIEW IDE (the broken-arrow error list!).

 

Thanks a lot.

 

Manu.

It would be great to have the option to get decimated (and/or interpolated) data from Citadel, up to a specified maximum number of points over a time interval, with these conditions:

1)  Buffer only the decimated/interpolated data, instead of reading all the raw data and then decimating. The goal is to retrieve data from a large time period without filling up memory!!  (That's why this is best done down in the Citadel code, not up in LabVIEW.)

2)  Use the same kind of smart decimation that the LabVIEW graph uses (maybe borrow that code). For example, if you display 100,000 values on an XY graph and they are all equal to 10 except one single value of 100, the graph will show that spike no matter how small the graph is, i.e. no matter how much it has to decimate the data to fit it into relatively few pixels. It would be important to keep min/max and NaN (break) values.
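
A hedged sketch in plain Python, just to make the intent concrete: min/max decimation that preserves spikes and NaN breaks while returning at most two values per time bucket.

    import math

    def minmax_decimate(t, y, max_buckets):
        """Return (t_out, y_out) keeping the min and max of each time bucket,
        plus any NaN so plot breaks survive decimation."""
        n = len(y)
        step = max(1, math.ceil(n / max_buckets))
        t_out, y_out = [], []
        for start in range(0, n, step):
            chunk_t = t[start:start + step]
            chunk_y = y[start:start + step]
            if any(math.isnan(v) for v in chunk_y):
                # Keep one NaN so the graph still shows the break.
                t_out.append(chunk_t[0]); y_out.append(float("nan"))
                continue
            i_min = min(range(len(chunk_y)), key=chunk_y.__getitem__)
            i_max = max(range(len(chunk_y)), key=chunk_y.__getitem__)
            for i in sorted({i_min, i_max}):
                t_out.append(chunk_t[i]); y_out.append(chunk_y[i])
        return t_out, y_out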

 

The menu item Tools...Profile...Find Parallelizable Loops is a great tool to identify loops that can be parallelized, and it gives detailed advice and warnings in questionable cases.

 

There is, however, an important scenario that is ignored in the analysis:

 

The case is when the parallelizable loop is contained inside a larger loop that is already parallelized. As a general rule, it is typically most efficient to parallelize only the outermost loop. A parallel loop inside a parallel loop only creates more overhead and will not gain much if the outer loop already keeps all CPU cores busy. It is possible that the LabVIEW compiler sorts this out automatically, but I think the Find Parallelizable Loops tool should check whether an outer, already parallelized loop exists and, in that case, tone the recommendation down to a question mark instead of a check mark.
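
To make the reasoning concrete, here is a minimal Python sketch (the data and helper function are made up) of why parallelizing only the outermost loop is usually enough: once the outer map spreads the work across all CPU cores, a second, nested pool inside each worker would only add scheduling overhead.

    from concurrent.futures import ProcessPoolExecutor
    import os

    def process_row(row):
        # Inner loop: deliberately sequential. The outer pool already keeps
        # every core busy, so nesting another pool here would not help.
        return [x * x for x in row]

    if __name__ == "__main__":
        data = [list(range(10_000)) for _ in range(64)]
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            results = list(pool.map(process_row, data))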

 

Here is a typical analysis (yes, the subVI is very simple and inlined, so the outer loop parallelization is sound ;))

 

 

The description could read "This loop can be safely parallelized, but it is already contained inside a parallel loop and thus parallelization would not give any significant advantage" or similar.

 

Idea summary: The result window of "find parallelizable loops" should warn if a parallelizable loop is contained inside a parallel loop.

I suggest adding the following tools to the Number/String Conversion palette:

 

Number to Roman Numerals

Roman Numerals to Number

 

Here's how they could look on the block diagram. A simple draft of these functions can be found here.
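
For reference, a straightforward Python sketch of both conversions (the linked draft may well implement them differently):

    ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def number_to_roman(n):
        out = []
        for value, symbol in ROMAN:
            count, n = divmod(n, value)
            out.append(symbol * count)
        return "".join(out)

    def roman_to_number(s):
        values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
        total = 0
        for i, c in enumerate(s):
            v = values[c]
            # Subtractive notation: a smaller value before a larger one counts negative.
            total += -v if i + 1 < len(s) and values[s[i + 1]] > v else v
        return total

    assert number_to_roman(1994) == "MCMXCIV"
    assert roman_to_number("MCMXCIV") == 1994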

 

 

 

Idea Summary: Add Roman numeral conversion tools to LabVIEW.

Interpolate 2D Scattered uses triangulation, which is fairly CPU-intensive.

 

The most difficult part (and the part most of the CPU time is spent on) is defining the triangles. Once the triangles are specified, the interpolation itself is fairly quick (and could possibly even be done on an FPGA). This is the same idea as the scatteredInterpolant class in MATLAB (see: http://www.mathworks.com/help/matlab/math/interpolating-scattered-data.html#bsow6g6-1).

 

The Interpolate 2D Scattered function needs to be broken up into two pieces, just as Spline Interpolant and Spline Interpolate are.
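
For comparison, here is how the split looks in Python with SciPy (assuming NumPy and SciPy are available), where the triangulation, the expensive part, is built once and then reused; this is roughly the separation the idea asks for in LabVIEW:

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(0)
    pts = rng.random((1000, 2))                  # scattered (x, y) sample locations
    vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])

    tri = Delaunay(pts)                          # "Interpolant": expensive, done once
    interp = LinearNDInterpolator(tri, vals)     # "Interpolate": cheap, reusable

    zi = interp(np.array([[0.5, 0.5], [0.25, 0.75]]))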

 

 

 

 

It would be useful to have a configurable tool for generating swept values, mainly for RFSA/RFSG.

 

This is actually present in SignalExpress: you can insert and even nest the "Sweep" step there, but as far as I can see SignalExpress doesn't have support for RFSA/RFSG.

So the next thing that came to mind was to export the configured sweep from SignalExpress as LabVIEW code. Unfortunately, the resulting Express VI cannot be converted into a regular subVI.
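
Until then, generating the (possibly nested) sweep values is easy enough to script; a hedged Python sketch of the kind of configuration I mean (the parameter names and ranges are just examples):

    from itertools import product

    frequencies_hz = [1.0e9 + i * 10e6 for i in range(11)]   # 1.00 GHz to 1.10 GHz
    power_levels_dbm = [-30, -20, -10, 0]

    # Nested sweep: every power level at every frequency (outer loop = frequency).
    for freq, power in product(frequencies_hz, power_levels_dbm):
        pass  # configure RFSG/RFSA for (freq, power) here and take the measurement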

 

It would be very useful for debugging and prototyping to get this feature!

 

kind regards

 

Marco Brauner AES NIG Munich

After applying my own subjective intellisense (see also ;)), I noticed that "replace array subset" is almost invariably followed by a calculation of the "index past replacement". Most of the time this index is kept in a shift register for efficient in-place algorithm implementations (see example at the bottom of the picture copied from here).

 

I suggest new additional output terminals for "replace array subset". Each new output would be aligned with the corresponding index input and would output the "index past replacement" value. This would eliminate the need to calculate this commonly needed value externally and would also eliminate the need for "wire tunneling" as in the example at the bottom right. (Sure, we can wire around as in the top-right examples, but this is one of the cases where I always hide the wire to keep things aligned with the shift register.)

 

Of course, the idea needs to be extended for multidimensional arrays. I am sure it can all be made consistent and intuitive. There should be no performance impact, because the compiler can remove the extra code if the new output is not wired.

 

Several string functions have an "offset past ..." output (e.g. "Search and Replace String", "Match Pattern", etc.), and I suggest reusing the same glyph on the icon.

 

Here is how it could look (left) after implementing the idea; equivalent legacy code is shown on the right.
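
In text form the proposed output is simply index + length of the replacement subset; here is a hedged Python sketch of the shift-register pattern from the picture, with the new output computed inside a helper (the names are mine):

    def replace_array_subset(arr, index, subset):
        """Replace elements in place and also return the 'index past replacement'."""
        arr[index:index + len(subset)] = subset
        return arr, index + len(subset)

    # Typical shift-register pattern: append blocks into a preallocated array.
    out = [0] * 10
    idx = 0
    for block in ([1, 2, 3], [4, 5], [6, 7, 8, 9]):
        out, idx = replace_array_subset(out, idx, block)
    # out == [1, 2, 3, 4, 5, 6, 7, 8, 9, 0], idx == 9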

 

 

 

Idea Summary: "Replace array subset" should have new outputs, one for each index input, that provides the "index past replacement" position.