LabVIEW Idea Exchange


When you have a probe to see the data in an array, you see all elements in one row (in the Value column) and just one element in the Probe Display area. If you have a large array, it is difficult to get a good overview.

 

My suggestion is that in the part of the probe window called Probe Display, you should be able to drag the bottom-right corner and see several elements simultaneously.

 

Probe 4.png

 

An extra feature would be to also show the array size somewhere in the probe window.

The Report Generation Toolkit includes Excel Easy Table, which allows either text (2D string arrays) or numeric (2D DBL arrays) data to be written to Excel. The function is written as a polymorphic function to handle the two types of input. However, when processing numeric input, an inner function called "Append Numeric Table to Report (wrap)" converts the numeric data to a string array using the format string %.3f. This is, in fact, a control input of the function, but its caller does not wire the input, forcing the numeric data to be rounded to three digits after the decimal point.

 

I suggest that the default either be changed to %.15f (or something similar) to preserve the precision of the input data, or that the Format String be "brought out" to the user (though there are no free connector slots) to allow the user to control the precision.
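To illustrate the precision loss, here is a minimal Python sketch (Python standing in for the toolkit's internal formatting call; the value is arbitrary):

```python
value = 3.141592653589793

# The unwired %.3f default rounds everything past the third decimal place:
print("%.3f" % value)   # 3.142

# A wider default such as %.15f would preserve the full double precision:
print("%.15f" % value)  # 3.141592653589793
```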

An add-on to highlight execution that makes highlight execution follow the data flow into subVIs and continue highlighting inside the subVI.

It would be useful if there were functionality within the VI Analyzer, under the Style section, that checked for overlapping objects on the block diagram. This would help you check readability and catch some mistakes. For example, it is possible to copy and paste While Loops on top of each other. VI Analyzer should be able to tell you that there are two While Loops overlapping, to help with style and debugging.
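At its core the test is just a pairwise bounding-box intersection check; a hypothetical Python sketch of the logic (the real VI Analyzer test API and the way diagram bounds are queried are LabVIEW-specific, so everything here is illustrative):

```python
from typing import List, NamedTuple, Tuple

class Rect(NamedTuple):
    left: int
    top: int
    right: int
    bottom: int

def overlaps(a: Rect, b: Rect) -> bool:
    # Axis-aligned rectangles overlap unless one lies entirely
    # beside, above, or below the other.
    return not (a.right <= b.left or b.right <= a.left or
                a.bottom <= b.top or b.bottom <= a.top)

def find_overlaps(bounds: List[Rect]) -> List[Tuple[int, int]]:
    # Naive O(n^2) check over the bounds of all diagram structures.
    return [(i, j)
            for i in range(len(bounds))
            for j in range(i + 1, len(bounds))
            if overlaps(bounds[i], bounds[j])]

# Two While Loops pasted on top of each other would be flagged:
print(find_overlaps([Rect(0, 0, 100, 80), Rect(10, 10, 110, 90)]))  # [(0, 1)]
```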

Hey there. I'm working on a transparency VI which overlays 2 images (U32 RGB and U8 grayscale with user palette). After I found out about the Resample functionality, I thought the transparency issue, or at least parts of it, would be easy. But this was not the case. I'm missing the following functionality:

  • merging 2 images or ROIs with a transparency factor (see the sketch below)

  • extracting the color image from a grayscale single image with user palette (my solution attached)

  • multiplying a color image by a floating-point constant (my solution for the U32 RGB array attached; integer values are not suited for small numbers)

  • subtracting a color image FROM a constant without creating an image from the constant / image inversion (does not work with my U32 RGB images)

I've got my own solution, but I'm convinced it is slow.
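For the first bullet, the behavior I'm after is plain alpha blending. A minimal numpy sketch of those semantics (assuming both images are already the same size, which is what the Resample functionality would provide):

```python
import numpy as np

def blend(base_rgb: np.ndarray, overlay_rgb: np.ndarray, alpha: float) -> np.ndarray:
    """Merge two same-sized RGB images with a transparency factor.

    alpha = 0.0 shows only the base image, alpha = 1.0 only the overlay.
    """
    out = ((1.0 - alpha) * base_rgb.astype(np.float64)
           + alpha * overlay_rgb.astype(np.float64))
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: blend two random 64x64 RGB images at 50 % transparency
base = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
over = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
merged = blend(base, over, 0.5)
```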


I would like to encourage NI developers to produce an Advanced Signal Processing Toolkit (with the wavelets functionality, like the one for Windows) for Mac OS X. I have been using LabVIEW for some time now, but I really dislike having to change OS platforms just when I use wavelets. I am sure I am not alone here, as there are many using LabVIEW in the OS X environment.

I would love an update to the signal processing VIs contained in NI_MAPro.lvlib to support waveforms with a SGL Y-value representation. The library is locked, and most VIs call DLLs that cannot be modified anyway (by me, that is; I am not all that strong in traditional text-based languages). It would be nice to also support SGL waveforms within the .llb's contained in vi.lib/measure, although these are mostly unlocked and can be modified.

 

Working with a cRIO, the FPGA-to-host DMA channels encourage the use of the SGL data type, so I went with it and kept SGL throughout my application. For some functions I turn my SGL array into a waveform with SGL Y-value representation. I was disappointed to learn that most of the signal processing waveform tools contained in NI_MAPro.lvlib do not support SGL Y-values.

 

The predecessor application was done on the USB cDAQ line, where I was using DBL-representation Y-values. I want to re-use a lot of code and was hoping the waveform signal processing VIs would accept SGL Y-values. For now I am stuck converting my data type for the sole purpose of re-using code; at 50 kHz on 36 channels this can become a performance issue.
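To put a rough number on the cost, a back-of-the-envelope numpy sketch (the 50 kHz x 36 channel figures are from above; the one-second buffer is my assumption):

```python
import numpy as np

channels, rate = 36, 50_000            # figures from the application above
samples = channels * rate              # one second of data (assumed buffer)

sgl = np.zeros(samples, dtype=np.float32)  # LabVIEW SGL
dbl = sgl.astype(np.float64)               # forced conversion to DBL

print(sgl.nbytes)  # 7_200_000 bytes
print(dbl.nbytes)  # 14_400_000 bytes: the conversion doubles the memory
                   # traffic every second, solely to re-use DBL-only VIs
```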

I only mean that this should apply to the subVIs that come with LabVIEW. I was putting together a VI that is execution-time sensitive. I had a choice between IMAQ Histogram and IMAQ Histograph. I could get the result I needed from either one, but I was forced to try each, run it a few times, and clock each one. There are many such "which of these two similar options is fastest" choices we make for every program, and knowing the answer up front would be very helpful.
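What I do by hand each time amounts to a throwaway benchmark harness; a Python sketch of the same ritual (timeit standing in for Tick Count wiring in LabVIEW, with placeholder functions for the two candidates):

```python
import timeit

# Placeholders for the two similar options being compared
def option_a(data):
    return sorted(data)

def option_b(data):
    return sorted(data, reverse=True)

data = list(range(10_000, 0, -1))

for name, fn in [("option_a", option_a), ("option_b", option_b)]:
    # "run it a few times, and clock each one"
    t = timeit.timeit(lambda: fn(data), number=100)
    print(f"{name}: {t:.4f} s for 100 runs")
```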

As the title says.

 

Double-clicking a *.rsl file should open the VI Analyzer results window.

 

We have a CI server that runs VI Analyzer and posts the .rsl files as artifacts; downloading the .rsl file to my code base works great for finding and fixing errors. The only thing missing is the double click.

The current error case only allows two states when the error cluster is wired: "No Error" or "Error".

 

My suggestion is to allow any number of cases that depend on manually defined error codes (see attached picture). The error case must be enhanced so that error codes can be treated separately in individual cases.

 

Currently, to handle a specific error code, the code must first be read from the error cluster and then wired to a separate case structure. This step would no longer be necessary.
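In text form, the requested dispatch looks like this hypothetical Python sketch (the error codes are chosen for illustration; the tuple mirrors LabVIEW's status/code/source error cluster):

```python
def handle(error):
    """error mirrors LabVIEW's error cluster: (status, code, source)."""
    status, code, source = error
    if not status:
        return "No Error case"
    # Today the code must be unbundled and wired to a second case
    # structure; the idea is for the error case to select on it directly:
    match code:
        case 7:    # hypothetical: file not found
            return "retry with default path"
        case 8:    # hypothetical: file permission error
            return "abort and report"
        case _:    # the ordinary catch-all "Error" case
            return "propagate upstream"

print(handle((True, 7, "Open File")))  # retry with default path
```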

 

Optimized_Error_Case.png

The In Range and Coerce function is frequently used to determine whether a value is within the range defined by an upper limit and a lower limit.

 

But when it is out of range, you often also want to know whether the value is out of range too high or too low. It is easy enough to add a comparison function alongside and compare the original value to the upper limit, but that's another primitive and two more wire branches. Since comparison is one of the primary purposes of the In Range and Coerce function, why shouldn't it be built into it?

 

The use case that made me think of this, which has come up in my code a few times over the years, is any kind of thermostat-style control, particularly one with hysteresis built into it. If a temperature is within range (True), you would often do nothing. If it is lower than the lower limit, you'd want to turn on a heater. If it is higher than the upper limit, then you'd turn off the heater. (Or the opposite if you are dealing with a chiller.)

 

Request: Add an additional output to In Range and Coerce that tells whether the out-of-range condition is higher than the upper limit or lower than the lower limit.

 

(That does leave the question of what the value of this extra output should be when the input value is within range. Perhaps the output should not be a Boolean but a numeric value of 1, 0, and -1, or perhaps an enum.)
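A minimal Python sketch of the proposed behavior, using the -1/0/+1 numeric variant from the parenthetical above (the function name and tuple return are mine, not an actual LabVIEW API):

```python
def in_range_and_coerce(x, lower, upper):
    """Return (coerced value, in range?, side).

    side: -1 below the lower limit, 0 within range, +1 above the upper limit.
    """
    if x < lower:
        return lower, False, -1
    if x > upper:
        return upper, False, +1
    return x, True, 0

# The thermostat use case above: heat when too cold, stop when too hot
temperature = 17.5
_, _, side = in_range_and_coerce(temperature, 18.0, 22.0)
if side == -1:
    print("turn heater on")
elif side == +1:
    print("turn heater off")
```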

TensorFlow is an open source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄

 

(Disclaimer: I know very little in that particular field of research)

 

Many advanced functions in the optimization and fitting palettes allow the user to supply a "VI model" as a strictly typed VI reference. A great feature!

 

LabVIEW provides various templates containing the correct connector pattern. Here is a list of the templates I found:

 

in labview\vi.lib\gmath\NumericalOptimization:

  • ucno_objective function template.vit

  • cno_objective function template.vit

  • LM model function and gradient.vit

in labview\vi.lib\gmath:

  • Zero Finder f(x) 1D.vit

  • Zero Finder f(x) nD.vit

  • 1D Evolutionary PDE Func Template.vit

  • 2D Evolutionary PDE Func Template.vit

  • 2D Stationary PDE Func Template.vit

  • ODE rhs.vit

  • Global Optimization_Objective Function.vit

  • DAE Radau 5th Order Func Template.vit

  • function_and_derivative_template.vit

  • function_template.vit

As you can see, the naming is quite inconsistent:

  • all files are templates, as is immediately obvious from the file extension (*.vit)!
  • some contain the word "_template" (underscore/lowercase t)
  • some contain the word " template" (space/lowercase t)
  • some contain the word " Template" (space/uppercase T)
  • some don't contain the word template in any form (good! :D)

Since the extension fully defines them as templates, maybe the word "template" could be scrubbed from all the filenames, making things more uniform and consistent.

 

Idea Summary: remove the word "template" from all model template names that contain it.
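The rename itself would be mechanical; a hypothetical Python sketch of the cleanup (dry run only; the install path is an assumption, and the pattern covers the three variants listed above):

```python
import re
from pathlib import Path

# Matches " template", "_template", and " Template" before the extension
pattern = re.compile(r"[ _][Tt]emplate")

root = Path(r"C:\Program Files\National Instruments\LabVIEW\vi.lib\gmath")  # assumed
for vit in root.rglob("*.vit"):
    new_name = pattern.sub("", vit.stem) + vit.suffix
    if new_name != vit.name:
        # Dry run; use vit.rename(vit.with_name(new_name)) to apply
        print(f"{vit.name} -> {new_name}")
```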

I very much like the formula parse and evaluate VIs. For me, writing a formula is easier, and I make fewer mistakes writing formulas than wiring numeric nodes, especially when the formula is taken from literature.
Unfortunately, the parsed formula is much slower than using standard numeric nodes. Browsing through the formula nodes, I notice that the formulas are parsed down to the same standard numeric nodes (add, subtract, etc.). Still, the formula parsing method is much slower because of the many case structures that have to be executed before arriving at the level of the numeric building blocks.
I think, from where the formula parsing blocks are now, it would be feasible to have them generate VIs using only numeric nodes, so the formula parsing nodes would have the same performance as standard LabVIEW mathematics. The best solution would be to include this in the building/compiling of the code.
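The payoff of compiling once instead of re-interpreting the parse tree on every call is easy to demonstrate by analogy; a rough Python sketch (compile()/eval() standing in for the formula parsing VIs):

```python
import timeit

formula = "3*x**2 + 2*x + 1"

# "Interpreted" path: re-parse the formula string on every evaluation
parse_every_call = timeit.timeit(
    lambda: eval(formula, {"x": 1.5}), number=100_000)

# "Compiled" path: parse once up front, then evaluate the code object
code = compile(formula, "<formula>", "eval")
parse_once = timeit.timeit(
    lambda: eval(code, {"x": 1.5}), number=100_000)

print(parse_every_call, parse_once)  # the pre-compiled path is far faster
```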

 

Arjan

 

 

You should be able to specify a tolerance instead of just upper and lower limits. This declutters the application block diagram when you are checking whether a value is within certain limits.

 

Upper limit becomes x + tolerance and lower limit becomes x - tolerance when using the tolerance instance of the polymorphic VI.

 

For even higher-level functionality, specify the units of the tolerance: absolute, percent (1e-2), parts per million (1e-6), parts per billion (1e-9).
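In pseudocode, the tolerance instance would reduce to something like this Python sketch (the unit scale factors are those listed above; the names are mine):

```python
SCALE = {"absolute": 1.0, "percent": 1e-2, "ppm": 1e-6, "ppb": 1e-9}

def in_range_with_tolerance(value, nominal, tolerance, units="absolute"):
    """In Range check using nominal +/- tolerance instead of explicit limits."""
    delta = tolerance * SCALE[units]
    if units != "absolute":
        delta *= abs(nominal)      # relative tolerances scale with the nominal
    lower, upper = nominal - delta, nominal + delta
    return lower <= value <= upper

print(in_range_with_tolerance(5.04, 5.0, 1, units="percent"))  # True: 5.0 +/- 1 %
```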

The idea is to change the Equal? function so that it is configurable and can act as the one-input "Equal To 0?" function. Sometimes you need to evaluate the number of iterations of a While Loop (or something similar), and when you drop the standard Equal? function, some of the wires will not run in a straight line (either the one connected to the Index output or the one connected to the Loop Condition), and you need to move one of the terminals up or down.

You can see this in the attached picture.

Idea.PNG

Hello,

 

Currently the Percentile VI computes over an array containing NaN values as if the NaN values were numbers greater than +Inf. This is incorrect.

 

The Percentile VI should propagate NaN to the result if one of the elements of the array is NaN.

(Another solution is to ignore NaN values and return the value greater than p percent of the data values in the array, but that would create inconsistencies with all the other VIs, which propagate NaN.)
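numpy exposes both behaviors side by side, for comparison (np.percentile propagates NaN, as suggested above; np.nanpercentile ignores it, the alternative mentioned in the parenthesis):

```python
import numpy as np

data = np.array([1.0, 2.0, np.nan, 4.0])

print(np.percentile(data, 50))     # nan -> NaN propagates to the result
print(np.nanpercentile(data, 50))  # 2.0 -> NaN values are ignored instead
```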

 

Here is a screenshot of a bad response from this VI.

percentile-vi.PNG

I'm currently trying to simulate Figure 2 in the paper 'An Electronically Controllable Capacitance Multiplier with Temperature Compensation'. Any assistance would be much appreciated!

 

 

LabVIEW currently has the functions Format Date/Time String and Scan Value to convert between time stamps and strings. Unfortunately, these functions do not handle arrays of time stamps and strings. While this conversion can be handled using a For Loop, a dedicated function would be cleaner and probably faster. This functionality is currently available for other numerics and should be extended to time stamps.

I suggest modifying the Format Date/Time String function to handle arrays of time stamps, integers, and floating-point numbers as well as single values. An additional function, Date/Time String to Time Stamp, should be created to convert strings and arrays of strings to time stamps.
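For reference, the array-in/array-out behavior being requested, sketched in Python (datetime standing in for LabVIEW time stamps; the format string is an example):

```python
from datetime import datetime

stamps = [datetime(2015, 6, 1, 12, 0, 0), datetime(2015, 6, 1, 12, 0, 1)]
fmt = "%Y-%m-%d %H:%M:%S"   # analogous to a LabVIEW time format string

# Format Date/Time String, but accepting an array of time stamps
strings = [t.strftime(fmt) for t in stamps]

# Date/Time String to Time Stamp, accepting an array of strings
parsed = [datetime.strptime(s, fmt) for s in strings]

print(strings)  # ['2015-06-01 12:00:00', '2015-06-01 12:00:01']
```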

 

time conversion.jpg

The NI updater kindly informed me that LabVIEW 2014 SP1 was released (even though I uninstalled it shortly after I tried it last year), and out of curiosity, I took a look at the known issues list.

I learned a few interesting things I did not know about, and also that some problems had been reported as long ago as version 7.1.1. This type of stuff looks like bugs that will never be fixed.

For instance, CAR #48016 states that there is a type casting bug in the Formula Node. It was reported in version 8, and the suggested workaround is to use a MathScript Node instead of a Formula Node (where is the "Replace Formula Node with a MathScript Node" contextual menu item?).

Problem: the MathScript RT Module is required. Even in my Professional Development System, this is not included by default. Does this really count as a workaround?

I read that as: we don't have the resources to fix that bug, or we don't want to break code that relied on that bug.

In any case, this bug will most likely never be fixed.

The bottom line is, we can waste a lot of time as users rediscovering bugs that have been known for a while and will probably never be fixed. As a user, I would really appreciate a courteous warning from NI that there are known traps, with a complete description readily available in the help for the affected function.

 

My suggestion: add a list of known issues (with links to their descriptions) for all objects, properties, functions, VIs, etc., in the corresponding entry in the Help File.