LabVIEW Idea Exchange


I suggest adding the following tools to the Number/String Conversion palette:

 

Number to Roman Numerals

Roman Numerals to Number

 

Here's how they could look on the block diagram. A simple draft of these functions can be found here.
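
For reference, here is a minimal text-language sketch of what the two proposed functions would compute (standard subtractive notation, valid for 1-3999); the draft VIs linked above presumably do the equivalent on the block diagram, and Python is used here only for illustration.

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def number_to_roman(n):
    # Greedily emit the largest symbol that still fits.
    out = []
    for value, symbol in ROMAN:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

def roman_to_number(s):
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # Subtractive pairs (IV, IX, XL, ...) count negative.
        total += -v if v < values.get(nxt, 0) else v
    return total

assert number_to_roman(1987) == "MCMLXXXVII"
assert roman_to_number("MCMLXXXVII") == 1987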

 

 

 

Idea Summary: Add Roman numeral conversion tools to LabVIEW.

Interpolate 2D Scattered uses triangulation, which is fairly CPU intensive.

 

The most difficult part (and the part where most of the CPU time is spent) is defining the triangles. Once the triangles are specified, the interpolation itself is fairly quick (and could possibly be done on an FPGA). This is the same idea as the scatteredInterpolant class in MATLAB (see http://www.mathworks.com/help/matlab/math/interpolating-scattered-data.html#bsow6g6-1).

 

The Interpolate 2D Scattered function should be split into two pieces, just as Spline Interpolant and Spline Interpolate are.
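
For illustration only, here is a sketch of the proposed split using SciPy as a stand-in (not the LabVIEW API): the expensive Delaunay triangulation is built once, and the cheap barycentric evaluation is then reused for any number of query points without re-triangulating.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
points = rng.random((1000, 2))              # scattered (x, y) sample locations
values = np.sin(points[:, 0]) * points[:, 1]

tri = Delaunay(points)                      # slow part: define the triangles once

# Fast part: interpolation on the precomputed triangulation, repeatable
# for many query sets ("Scattered Interpolate" in the proposed split).
interp = LinearNDInterpolator(tri, values)
xi = rng.random((50, 2))
zi = interp(xi)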

 

 

 

 

It would be useful to have a configurable tool for generating swept values, mainly for RFSA/RFSG.

 

This functionality is actually present in SignalExpress: you can insert and even nest the "Sweep" step there, but as far as I can see, SignalExpress doesn't support RFSA/RFSG.

The next thing that came to mind was to export the configured sweep as LabVIEW code from SignalExpress. Unfortunately, the resulting Express VI cannot be converted into a regular subVI.
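
As a stopgap, a configurable (and nestable) sweep can of course be generated in ordinary code. The sketch below is only an illustration with made-up parameter values; the actual RFSA/RFSG configuration calls would go where the print statement is.

from itertools import product

def linear_sweep(start, stop, num_points):
    # Evenly spaced values from start to stop, inclusive.
    step = (stop - start) / (num_points - 1)
    return [start + i * step for i in range(num_points)]

frequencies = linear_sweep(1.0e9, 2.0e9, 11)     # Hz
power_levels = linear_sweep(-30.0, 0.0, 4)       # dBm

# Nested sweep: every power level at every frequency.
for freq, power in product(frequencies, power_levels):
    print(f"configure generator: {freq / 1e6:.1f} MHz at {power:.1f} dBm")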

 

This feature would be very useful for debugging and prototyping!

 

Kind regards,

 

Marco Brauner AES NIG Munich

After applying my own subjective intellisense (see also ;)), I noticed that "Replace Array Subset" is almost invariably followed by a calculation of the "index past replacement". Most of the time this index is kept in a shift register for efficient in-place algorithm implementations (see the example at the bottom of the picture, copied from here).

 

I suggest new output terminals for "Replace Array Subset". Each new output would be aligned with the corresponding index input and would carry the "index past replacement" value. This would eliminate the external calculation of this commonly needed value and would also eliminate the "wire tunneling" seen in the example at the bottom right. (Sure, we can wire around as in the top-right examples, but this is one of the cases where I always hide the wire to keep things aligned with the shift register.)

 

Of course, the idea needs to be extended for multidimensional arrays. I am sure it can all be made consistent and intuitive. There should be no performance impact, because the compiler can remove the extra code if the new output is not wired.

 

Several string functions have an "offset past ..." output (e.g. "Search and Replace String", "Match Pattern", etc.), and I suggest reusing the same glyph on the icon.

 

Here is how it could look (left) after implementing the idea; equivalent legacy code is shown on the right.
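
To make the intent concrete, here is a text-language sketch (illustrative names, not LabVIEW API) of a replace-subset that also returns the "index past replacement", used in the typical shift-register style accumulation pattern:

def replace_array_subset(array, index, new_elements):
    array[index:index + len(new_elements)] = new_elements
    return array, index + len(new_elements)    # second output: index past replacement

buffer = [0] * 10
write_index = 0                                # plays the role of the shift register
for chunk in ([1, 2, 3], [4, 5], [6, 7, 8]):
    buffer, write_index = replace_array_subset(buffer, write_index, chunk)

print(buffer)        # [1, 2, 3, 4, 5, 6, 7, 8, 0, 0]
print(write_index)   # 8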

 

 

 

Idea Summary: "Replace array subset" should have new outputs, one for each index input, that provides the "index past replacement" position.

 

 

 

 

This is an extension of Darin's excellent idea and touches on my earlier comment there.

 

I suggest allowing booleans to be mixed with numeric operations, in which case the boolean would be "coerced" (for lack of a better word) to 0 or 1, where FALSE = 0 and TRUE = 1. This would dramatically simplify certain code fragments without any loss of clarity. For example, to count the number of TRUEs in a boolean array, we could simply use "Add Array Elements".

 

(A possible extension would be to also allow mixing error wires and numerics in the same way, in which case the error would coerce to 0 or 1, where 0 = no error and 1 = error.)

 

Here's how it could look (left); equivalent legacy code is shown on the right. Counting the number of TRUEs in a boolean array is an often-needed function, and this idea would eliminate two primitives. This is only a very small sampling of the possible applications.
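
For comparison, NumPy already behaves the way this idea proposes: booleans coerce to 0/1 when used numerically, so counting TRUEs is a single reduction.

import numpy as np

flags = np.array([True, False, True, True, False])
print(int(np.sum(flags)))    # 3 -- "Add Array Elements" on a boolean array
print(flags * 10)            # [10  0 10 10  0] -- coerced to the numeric type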

 

 

Idea Summary: When a boolean (or possibly an error wire) is wired to a function that accepts numerics, it should be automatically coerced to 0 or 1 of the dominant datatype connected to the other inputs. If there is no other input, e.g. in the case of "Add Array Elements", it should be coerced to I32.

Why doesn't LabVIEW provide any simulation for real-time hardware? What I mean is: just as when we build hardware circuits around a microcontroller, we can simulate the values in an almost-real environment using simulation software such as Proteus. I came up with this when I tried generating PWM signals, because the digital I/O card I had ordered for LabVIEW took time to ship.

After looking at the problem encountered here, it turns out that LabVIEW makes some insane choices when mixing a waveform with simple datatypes. Some behavior is good and intuitive. For example, multiplying a waveform by a scalar applies the multiplication to the Y component only, because it would not make sense to also multiply, e.g., the t0 or dt values.

 

It is less clear what should happen when multiplying a waveform by an array. Intuitively, one would expect something similar to the above, where the Y component is multiplied by the array. Unfortunately, LabVIEW chooses something else: it creates a huge array of waveforms, one for each element in the array (as if a FOR loop were wrapped around it; see the image). If the waveform and the array both have thousands of elements, we can easily blow the lid off all available memory, as in the quoted case. Pop! 😄 But the code looks so innocent!

 

 

I suggest that operations mixing waveforms and simple datatypes (scalars, arrays) simply act on the Y component, as shown.
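
A toy sketch of the proposed semantics (a made-up waveform type, not the actual datatype): multiplication by a scalar or by a same-length array acts element-wise on Y only, t0 and dt are untouched, and no array of waveforms is created.

from dataclasses import dataclass
import numpy as np

@dataclass
class Waveform:
    y: np.ndarray
    t0: float = 0.0
    dt: float = 1.0

    def __mul__(self, other):
        # Scalar or same-length array: act element-wise on Y only.
        return Waveform(self.y * np.asarray(other), self.t0, self.dt)

wf = Waveform(y=np.arange(5, dtype=float), dt=1e-3)
window = np.array([0.0, 0.5, 1.0, 0.5, 0.0])

print((wf * 2.0).y)      # scalar case, as LabVIEW already does: [0. 2. 4. 6. 8.]
print((wf * window).y)   # proposed array case: [0. 0.5 2. 1.5 0.], still one waveform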

 

(I am not sure how much existing code this would break, but it might actually fix some existing code! :D)

I am probably the only one using extended-precision numbers, considering the feedback on these requests:

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Allow-graphs-to-display-extended-type-values-i-e-create-true-EXT/idi-p/2239078

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Allow-All-LabVIEW-Supported-Number-Types-in-Formula-Node/idi-p/2502198

 

but so be it.

Another area where LabVIEW ignores extended precision is wire values in debug mode. To illustrate what I am talking about, consider this snapshot of a debugging session:

 

[Image: ScreenHunter_001.jpg — snapshot of the debugging session]

 

The result of my modified Bessel calculation (which reminds me that I haven't yet suggested implementing special-function calculations in extended mode...) is a perfectly valid extended-precision number, such as 5.03E+418, but LabVIEW doesn't recognise this as a valid value and displays "Inf" instead (which would be the correct reaction if the wire could only display double-precision floating-point values).

This wire is connected to the Logarithm primitive, which happens to be polymorphic and hence accepts the extended type. The result is the correct logarithm of 5.03E+418, i.e. 964.15.
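
The numerical point is easy to reproduce outside LabVIEW; the sketch below assumes a platform where NumPy's longdouble maps to 80-bit x87 extended precision (maximum around 1.19E+4932, versus about 1.80E+308 for a double).

import numpy as np

print(float("5.03e418"))          # inf -- a double cannot represent it
x = np.longdouble("5.03e418")     # extended precision holds it just fine
print(x)                          # 5.03e+418
print(np.log(x))                  # a finite value near 964, not log(Inf)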

On the face of it, though, it appears that the output of my VI is +Inf and that LabVIEW went wahoo and estimated an arbitrary value for log(Inf)...

My code actually stores such values in shift registers, so when I debug problems with the code I have three or four wires carrying an "Inf" value, which is not exactly helpful when I am trying to understand the cause of an overflow problem.

 

Suggestion: display Extended Precision wire values correctly in debug mode.

 

Add "Calculate Statistics" Enum to "Write to Spreadsheet file"

 

<By row, by column, by table, by both row and column...>

 

The statistics are nice to know... and can certainly be calculated from the 1D or 2D data input to "Write to Spreadsheet File.vi".

 

We just need an enum to add the evaluation of the data points.
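
As a sketch of what the enum would select (illustrative only, not the VI's API), the four cases map naturally onto an axis argument: by row, by column, whole table, or both.

import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

by_row    = data.mean(axis=1), data.std(axis=1)   # one value per row
by_column = data.mean(axis=0), data.std(axis=0)   # one value per column
by_table  = data.mean(),       data.std()         # single values for the whole table

# "Write to Spreadsheet File" would then append the selected statistics
# below (or beside) the data block it just wrote.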

At the moment there are two wait functions in LabVIEW that I know of:

 

-wait (ms)

-wait until next ms multiple

 

I propose a third option, "Wait Until ms Timer Value", which waits until the system timer reaches the specified value.

 

What does this gain us? Suppose we want a loop to execute, on average, every n milliseconds. We would use the existing Wait Until Next ms Multiple in the loop. But what if we want n to be non-integer? It may not make sense to pass a fractional number to a wait function that doesn't offer that resolution, yet it is a reasonable wish to have a loop execute on average every n milliseconds for non-integer n. How can we achieve this? Add n to a running count on each iteration, wait the whole (integer) part of the accumulated value, and subtract that amount from the count. The result is a loop that sometimes takes a little under and sometimes a little over the specified number of milliseconds due to rounding, but averages out to the requested non-integer period. The problem is the required wait function: Wait (ms) will not do it, because it doesn't account for the time the code in the loop takes to execute. Wait Until Next ms Multiple won't do it either, because it is no good when the wait varies. What we need is to wait until a fixed timer count.
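
A sketch of that accumulator scheme, written around the proposed wait (emulated here with an absolute deadline and time.monotonic; the 2.5 ms period and the do_work placeholder are just examples):

import time

def fractional_period_loop(n_ms, iterations):
    next_deadline_ms = time.monotonic() * 1000.0
    for _ in range(iterations):
        do_work()                          # the loop body
        next_deadline_ms += n_ms           # accumulate the (possibly fractional) period
        # "Wait until ms timer value": sleep until the absolute target,
        # regardless of how long do_work() took.
        remaining = next_deadline_ms / 1000.0 - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)

def do_work():
    pass

fractional_period_loop(2.5, 100)           # averages 2.5 ms per iteration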

 

Hence the request.

Dear LabVIEW fans,

 

Motivation:

I'm a physics student who uses LabVIEW for measurement and also for data evaluation. I've been a fan since version 6i (around 2005).

My typical experimental set-up looks like this: a lot of different wires running to every corner of the lab, left to collect gigabytes of measurement data overnight. Sometimes I do physics simulations in LabVIEW, too. So I really depend on gigaflops.

 

I know there is already an idea for adding CUDA support. But not all of us have an NVIDIA GPU. Typically, at least in our lab, we have Intel i5 CPUs, and some machines have a minimalist AMD graphics card (others just have integrated graphics).

 

So, as I was interested in getting more flops, I wrote an OpenCL DLL wrapper, and (doing a naive Mandelbrot-set calculation for testing) I saw a 10x speed-up on the CPU and a 100x speed-up on the gamer GPU of my home PC (compared to a simple, multi-threaded LabVIEW implementation using parallel FOR loops). Now I'm using this for my projects.

 

What's my idea:

-Give an option for those who don't have a CUDA-capable device and/or want their app to run on any class of computing device.

-It has to be really easy to use (I have been struggling with C++ syntax and the Khronos OpenCL specification for almost two years in my free time to get my DLL working...).

-It has to be easy to debug (for example, it has to give human-readable, meaningful error messages instead of crashing LabVIEW or causing a BSOD).

 

Implemented so far, by me, for testing the idea (a rough sketch of the equivalent call sequence in PyOpenCL is shown after this list):

 

-Get information about the DLL (e.g. "compiled with AMD's APP SDK on 7 August 2013, 64-bit", or similar).

 

-Initialize OpenCL:

1. Select the preferred OpenCL platform and device (Fall back to any platform & CL_DEVICE_TYPE_ALL if not found)

2. Get all properties of the device (CLGetDeviceInfo)

3. Create a context & a command queue,

4. Compile and build OpenCL kernel source code

5. Give all details back to the user as a string (even if everything succeeded...)

 

-Read and write memory buffers (like GPU memory)

Currently only blocking read and blocking write are implemented; I had some bugs with non-blocking calls.

(again, report details to the user as a string)

 

-Execute a kernel on the selected arrays of data

(again, report details to the user as a string)

 

-Close OpenCL:

Release everything, free up memory, etc. (again, report details to the user as a string).
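
For anyone curious, here is a rough PyOpenCL equivalent of the same call sequence (the kernel and array sizes are only illustrative, not what my DLL ships with): pick a device, build the kernel, move buffers, execute, and read back.

import numpy as np
import pyopencl as cl

src = """
__kernel void scale(__global const float *in, __global float *out, const float k) {
    int i = get_global_id(0);
    out[i] = k * in[i];
}
"""

# 1-3. Select a platform/device, create a context and a command queue.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# 4. Compile and build the OpenCL kernel source code.
prg = cl.Program(ctx, src).build()

# Blocking write: copy host data into a device buffer.
host_in = np.arange(16, dtype=np.float32)
mf = cl.mem_flags
d_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_in)
d_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_in.nbytes)

# Execute the kernel on the selected array of data.
prg.scale(queue, host_in.shape, None, d_in, d_out, np.float32(2.0))

# Blocking read: copy the result back to the host.
host_out = np.empty_like(host_in)
cl.enqueue_copy(queue, host_out, d_out)
print(host_out)   # [0. 2. 4. ...]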

 

Approximate results for your motivation (Mandelbrot-set testing, single precision only so far):

10 gflops on a core2duo (my office PC)

16  gflops on a 6-core AMD x6 1055T

typ. 50 gflops on an Intel i5

180 gflops on a Nvidia GTS450 graphics card

 

70 gflops on EVGA SR-2 with 2 pieces of Xeon L5638 (that's 24 cores)

520 gflops on Tesla C2050

 

(The numbers above are my results; the manufacturers' spec sheets may claim far more theoretical flops. When selecting your device, take memory bandwidth into account, as well as the kind of parallelism in your code. Some devices dislike conditional branches, and the Mandelbrot-set test has conditional branches.)

 

Sorry for my bad English, I'm Hungarian.

I'm planning to give my code away, but I still have to clean it up and remove the non-English comments...

Hello,

 

The current functionality doesn't allow asynchronously calling a method that has any dynamic dispatch inputs. This forces you to create a statically dispatched wrapper around the dynamic method, which can then be called.

 

This is a source of frustration for me because it forces you to write code that is less readable, and there doesn't seem to be any reason for this limitation. Since you already need to have the class loaded in memory to provide it as an input to the asynchronously called VI, why not just allow dynamic dispatch there (the dynamic method is already in memory)?

 

How it is right now:

[Images: DynamicDispatchAsynchCall0.png, DynamicDispatchAsynchCall1.png — the current workaround]

 

Solution: Allow asynchronous calls on methods with dynamic dispatch inputs.

The Formula Node (FN) is indispensable when implementing numerical algorithms in LabVIEW. There is just no way to get them right with wires and built-in functions alone.

The problem is that it is limited to Double Precision only.

 

Since all functions available in the FN are implemented as primitives supporting extended precision, it is not as if NI doesn't have the needed algorithms at hand.

Hence, please upgrade the FN to allow the use of extended-precision numbers (internally, but also as inputs and outputs).

 

Note: I know about Darin K.'s wonderful tool to transform text-based formulas into VIs, but sometimes there are just too many inputs/outputs for this approach to be a viable route, not to mention that there is then no way to decipher (and modify) the resulting code...

It would be helpful if the IMAQ Particle Analysis VI took the following as inputs:

 

Max Number of Particles

Maximum Analysis Time

 

It could then use these parameters to decide when to stop processing the image, and report back via a boolean output or an enumeration that it did not complete the operation and why.

 

In an automated vision system used to count defects, it is possible that the sample under test has an enormous number of defects. In that case the user might want to call the sample grossly defective and not care whether the exact number of defects (particles) is reported. Likewise, if the automated system has a fixed time frame in which it needs to process a sample, this input would guard against that time frame being exceeded.
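
A sketch of the requested early-exit behavior (the particle detector below is a hypothetical stand-in, not the IMAQ API): stop as soon as either limit is hit and report why.

import time

def find_particles(image):
    # Hypothetical stand-in for the IMAQ detector: yields one particle at a
    # time so the caller can bail out early.
    for blob in image:
        yield blob

def bounded_particle_count(image, max_particles, max_analysis_time_s):
    start = time.monotonic()
    particles = []
    for particle in find_particles(image):
        particles.append(particle)
        if len(particles) >= max_particles:
            return particles, "incomplete: max particle count reached"
        if time.monotonic() - start > max_analysis_time_s:
            return particles, "incomplete: max analysis time exceeded"
    return particles, "complete"

print(bounded_particle_count(list(range(100)), max_particles=10, max_analysis_time_s=0.5))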

 

[Images: Context Help.PNG, Controls.png]

How about a plugin for VI analysis that checks whether our VIs follow the LabVIEW style rules?

Something like StyleCop or FxCop for C#.

 

 

The attached code calculates the square-root-free Cholesky factorization (LDL'). It is very useful for decomposing matrices and, in my specific case, for observability analysis of electrical distribution networks. I'm publishing it because LabVIEW offers other kinds of decompositions, such as the LU decomposition, which are very useful, but in my case the values delivered by LDL' are more accurate and easier to calculate.
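
For reference, the standard square-root-free Cholesky (LDL') recurrence looks like this in text form (a NumPy sketch without pivoting, assuming a symmetric matrix with nonzero pivots; the attached VI presumably computes the same factorization):

import numpy as np

def ldl_decomposition(A):
    # Returns unit lower-triangular L and diagonal d with A = L @ diag(d) @ L.T.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
    return L, d

# Quick check on a small symmetric matrix.
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])
L, d = ldl_decomposition(A)
assert np.allclose(L @ np.diag(d) @ L.T, A)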

 

Best regards

The Current Situation:

 

In a LabVIEW Real-Time program, control references cannot be used because the front panel is removed (even if debug mode is enabled). To write to a control inside a cluster, the only options are to use Unbundle/Bundle or variants. When a large, multi-level cluster has many controls and any of their values may need to change, it is preferable to be able to write programmatically to any control within the cluster. Using variants and drilling down into the cluster to access and change a value causes execution issues in a Real-Time program.

 

The Idea:

 

Add a control reference terminal to the Data Value Reference Read node on the In Place Element Structure. This is shown in the code below.

[Image: Control Reference terminal on InPlace Element Structure.PNG]

This would allow the value (and whatever other properties make sense to expose) to be modified programmatically inside the In Place Element Structure. It would also work in the Real-Time environment.

In line with the worldwide expansion of LabVIEW and its products, and with users everywhere, it is necessary to provide a dedicated space for presenting ideas in the other supported languages.

 

[Image: labview idea.png]

Timed structures require a priority for scheduling. The default priority level is 100 for each structure.

If that priority is not changed (one dedicated priority level per structure), it can lead to unexpected behavior such as hangs.

 

Therefore I propose a new test for the VI Analyzer that checks for unique (static) priority levels in timed structures. If priority levels are shared among several structures, the test shall fail.

The test shall also work across a whole project, so it shall not only check within single VIs.

 

Norbert

I use the following shortcuts in Quick Drop:

"+" - shortcut for "add"

"-" - shortcut for "subtract"

"*" - shortcut for "multiply"

"/" - shortcut for "divide"

 

And that got me thinking: what I would really like is to be able to type simple math equations and have Quick Drop generate them.

So if we typed "(*3+2)/(4+5.68)", Quick Drop would spit out the following:

[Image: simple math.png — the generated block diagram]
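
A rough sketch of the parsing step such a shortcut would need (Python's ast module stands in for whatever parser the Quick Drop plugin would actually use; the unwired multiply input implied by the leading "*" is represented by a placeholder name x):

import ast

OP_NAMES = {ast.Add: "Add", ast.Sub: "Subtract",
            ast.Mult: "Multiply", ast.Div: "Divide"}

def list_primitives(expr):
    # Walk the expression tree and list the primitives and constants to drop.
    for node in ast.walk(ast.parse(expr, mode="eval")):
        if isinstance(node, ast.BinOp):
            print(OP_NAMES[type(node.op)])
        elif isinstance(node, ast.Constant):
            print(f"Numeric constant: {node.value}")
        elif isinstance(node, ast.Name):
            print(f"Unwired input: {node.id}")

list_primitives("(x*3+2)/(4+5.68)")
# prints the Divide/Add/Multiply primitives plus the constants 3, 2, 4, 5.68
# and the unwired input x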

 

-Carl Wecker