LabVIEW Idea Exchange

The menu "tools...profile...find parallelizable loops" is a great tool to indentify loops that can be parallelized and it gives detailed advice and warnings in questionable cases.

 

There is, however, an important scenario that is ignored in the analysis:

 

The case is when the parallelizable loop is contained inside a larger loop that is already parallelized. As a general rule, it is typically most efficient to parallelize the outermost loop only: a parallel loop inside a parallel loop only creates more overhead and will not gain much if the outer loop already causes 100% use of all CPU cores. It is possible that the LabVIEW compiler sorts things out automatically, but I think the "find parallelizable loops" tool should consider whether an outer, already parallelized loop exists, and should tone down the recommendation to a question mark instead of a check mark in this case.
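To see why the recommendation should be toned down, here is a minimal sketch in Python (a stand-in for LabVIEW's loop parallelism, since G code cannot be shown as text): once the outer loop keeps every core busy, giving the inner loop its own workers would only oversubscribe them.

```python
from multiprocessing import Pool, cpu_count

def inner_work(j):
    # the "parallelizable" inner loop body, deliberately kept serial
    return sum(k * k for k in range(10_000))

def outer_iteration(i):
    # each outer iteration already owns one core; an inner worker pool
    # here would only add scheduling overhead on top of saturated cores
    return sum(inner_work(j) for j in range(100))

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:  # parallelize the outermost loop only
        print(sum(pool.map(outer_iteration, range(32))))
```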

 

Here is a typical analysis (yes, the subVI is very simple and inlined, so the outer loop parallelization is sound ;))

 

 

The description could read "This loop can be safely parallelized, but it is already contained inside a parallel loop and thus parallelization would not give any significant advantage" or similar.

 

Idea summary: The result window of "find parallelizable loops" should warn if a parallelizable loop is contained inside a parallel loop.

Following Robbob's advice from the end of this http://forums.ni.com/t5/LabVIEW/LV-2009-error-bar-plots-suggestions/m-p/987381/highlight/true#M441998 thread, I'd like to "bump" and extend X.'s idea posted there:

 

A true, built-in, native (not XControl) error bar option on a standard, fully-fledged X-Y graph, perhaps implemented simply as a different bundling of arrays to the graph indicator, e.g. bundle(Xvals, Xerrs, Yvals, Yerrs) or bundle(Xvals, SD(X), Yvals, SD(Y)). In the latter case, the depiction of each data "point" as a Gaussian intensity "blob" would be the answer to a years-old dream of mine...!
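As a rough illustration only (the idea is for this to be native to the X-Y graph, not a Python workaround), here is what the proposed bundle(Xvals, Xerrs, Yvals, Yerrs) rendering looks like in matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 10.0, 20)
y = np.sin(x)
xerr = np.full_like(x, 0.2)    # plays the role of the Xerrs array
yerr = 0.1 + 0.05 * np.abs(y)  # plays the role of the Yerrs array

plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt="o", capsize=3)
plt.show()
```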

 

Thanks for considering....

While struggling to replace the R&S FSW-K70 with a VST and RFmx for my customer, I found that 16APSK and 32APSK are not directly supported by RFmx Demod.  The customer is currently using the FSW-K70, and their current test scenarios require 16APSK and 32APSK modulation and demodulation.

 

I found a thread mentioning APSK support below.

 

https://forums.ni.com/t5/LabVIEW/APSK-Modulation-Demodulation/td-p/4232293

I just found out about LabVIEW NXG's C Node. I know my team would love to have it in LabVIEW to use for bit masking in a familiar syntax. The Expression and MathScript Nodes have issues with 64-bit values and give unexpected results. A quick test in the C Node does fix these issues.
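For context, this is the kind of 64-bit masking meant here, sketched in Python purely to show the expected semantics (the request is to write it in C syntax inside LabVIEW):

```python
# in a C Node this would read: result = (value & 0xFFFFFFFF00000000ULL) >> 32;
MASK = 0xFFFF_FFFF_0000_0000

def upper_word(value: int) -> int:
    # keep only the upper 32 bits of a 64-bit value
    return (value & MASK) >> 32

print(hex(upper_word(0x1234_5678_9ABC_DEF0)))  # 0x12345678
```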

The Excel-Specific, Excel General sub-palette of the Report Generation Toolkit contains a useful function, "Excel Get Last Row".  This allows a user to add new rows of data to an existing Worksheet by returning a cluster of MS Office Parameters that can be used with the other Excel Report functions (such as Excel Easy Table) to place the new (row-organized) data below existing rows, such as Column Headers, that might already be present in the Template.

 

 

I propose that NI add a similar "Excel Get Last Column" that does the same thing, but returns the last column on the WorkSheet.  This would be useful when entering data one column at a time, not uncommon when entering multiple channels (columns) of sampled data, where you want the new data to be just to the right of the existing (columnar) data.

 

Get Last Column.png

I could easily write such a function myself, but so could NI, and if NI did it, everyone who uses the Report Generation Toolkit would have access to such functionality.
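For reference, a minimal sketch of the logic such a function would wrap, shown with Python's openpyxl (the file name is hypothetical; the real deliverable would be a Report Generation Toolkit VI):

```python
from openpyxl import load_workbook

wb = load_workbook("data.xlsx")  # hypothetical existing workbook
ws = wb.active
last_row = ws.max_row        # what "Excel Get Last Row" reports today
last_col = ws.max_column     # what "Excel Get Last Column" would report
# a new channel (column) of data would start at column last_col + 1
print(last_row, last_col)
```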

 

Bob Schor

Languages like 'R', 'Python', and MatLab (yes I use the old name) have these.  They are useful.

 

One of the key ideas in LabVIEW is that the user needs minimal interventions to code a useful result.  As more information is encoded in a data type there is more opportunity to make "hands free" code that "just works".  I think these two data types can do that.

 

Data Frame:

  • Primary data type in R
  • It is array-like, but NOT an array
    • each column holds one variable; each row holds one measurement (observation) of every variable
    • It acts like a list of vectors, but the vectors have the same number of rows, and indexing allows a return of all, or a subset, from all columns
    • column types are heterogeneous - they can be different
    • column types can be set.  A column of 0 vs. 1 can be set as factors/binomial values or as continuous.

There are functions that data-analysis folks use every day that are informed by variable type, so a function operating on the inputs doesn't need the type specified, because it is interior to the table. This means you can say "plot(mydata)" and, if your data is set up well, the graph parameters are already specified and useful.
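For readers who don't use R, here is roughly the same idea sketched with Python's pandas (an analogy only, to show what a native data frame type buys you):

```python
import pandas as pd

df = pd.DataFrame({
    "temp_C": [21.4, 22.0, 21.7, 22.3],      # continuous variable
    "sensor": ["A", "B", "A", "B"],          # columns may differ in type
    "passed": pd.Categorical([1, 0, 1, 1]),  # 0/1 stored as a factor
})
print(df.dtypes)          # every column keeps its own declared type
ax = df.plot(y="temp_C")  # plotting can lean on the stored column types
```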

 


Data table:

This is from Hadley Wickham, a very famous person in 'R'.  He does great work, and his name has high brand value in data-analysis.

 

It:

  • Uses the "data.table" package
  • is able to be screaming-fast (think roller-coaster) especially when used with the "split-apply-combine" approach to data analysis, and SQL-like operations.
  • is built for handling huge data (100GB tables) quickly and efficiently.

In many applications, an operation that is otherwise not possible due to memory constraints, or not viable due to processor overhead, can execute adequately (aka wonderfully) by using this data type on the same hardware.
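A small pandas stand-in for the split-apply-combine style that data.table accelerates in R (illustration only):

```python
import pandas as pd

df = pd.DataFrame({
    "channel": ["ch0", "ch1", "ch0", "ch1"],
    "value":   [1.2, 3.4, 5.6, 7.8],
})
# split by channel, apply the mean, combine into one result table
summary = df.groupby("channel")["value"].mean()
print(summary)
```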

 


I would love an update to the signal processing VIs contained in NI_MAPro.lvlib to support waveforms with a SGL Y-value representation. The library is locked, and most VIs call DLLs that I am not able to modify anyway (I am not all that strong in traditional text-based languages). It would be nice to also support SGL waveforms within the .llb's contained in vi.lib/measure, although these are mostly unlocked and able to be modified.

 

Working with a cRIO, the FPGA-to-host DMA channels encourage the use of the SGL data type, so I went with it and kept SGL throughout my application. For some functions I turn my SGL array into a waveform with SGL Y-value representation. I was disappointed to learn that most of the signal processing waveform tools contained in NI_MAPro.lvlib do not support SGL Y-values.

 

The predecessor application was done on the USB cDAQ line, where I was using DBL-representation Y-values. I want to re-use a lot of code and was hoping the waveform signal processing VIs would accept SGL Y-values. For now I am stuck converting my data type for the sole purpose of re-using code; at 50 kHz on 36 channels this can become a performance issue.

The Report Generation Toolkit provides functions to set various format aspects of Excel "areas" and Excel Graphs, but doesn't provide the complementary "Get" functions to return the current values.  For example, I wrote a LabVIEW function that will set the Font Color of a row of a WorkSheet to Red (using Excel Set Cell Font), but if I want to find the row with the Red font, there is no "Excel Get Cell Font" that can return the property to me.  Of course, I could perhaps cobble something together using ActiveX calls, but these are poorly documented, and since NI is already doing the "heavy lifting" to provide the Set functions, it would seem "relatively simple" for them to also add the corresponding Get functions.

 

Bob Schor  (using Excel as a "controller" for some experiments controlled by a LabVIEW Real-Time system)

We use TDMS extensively for large analysis tasks (multi-GByte files), and find the current two levels very restrictive; at a minimum we would like to see one more level than Group and Channel. Without this, storing a series of spectra requires overloading the current levels, or the Channel concept gets bastardized by creating a new channel for each spectrum. Carsten Thomsen

Charts and graphs in LabVIEW have been driving me nuts for years. What I would like to see is a simple chart (maybe even an "Express Chart") that has a one-dimensional array input and a timestamp input. Values in the array would automatically be plotted on their own plots, and the timestamp would be put on the X-axis starting from the left. An option to just use the current time if no timestamp is wired would be welcome, and the amount of data displayed on the X-axis could be adjusted on the fly by setting the X-axis TIME/DIV.

I'd like to see native Ternary Logic support (aka three-valued or trivalent logic, or 3VL). True and False could still be represented by 1 and 0 respectively, and the third option could be represented by -1.

 

Not much else to say about this really... I think it's pretty self-explanatory.
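For concreteness, a minimal sketch of Kleene three-valued logic with the proposed encoding (Python used only to show the truth rules):

```python
# True = 1, False = 0, Unknown = -1
def t_and(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0                      # False dominates AND
    return -1 if (a == -1 or b == -1) else 1

def t_or(a: int, b: int) -> int:
    if a == 1 or b == 1:
        return 1                      # True dominates OR
    return -1 if (a == -1 or b == -1) else 0

def t_not(a: int) -> int:
    return -1 if a == -1 else 1 - a   # NOT Unknown is still Unknown

print(t_and(1, -1))  # -1: True AND Unknown is Unknown
print(t_or(1, -1))   #  1: True OR Unknown is True
```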

This idea will probably have a narrow audience: those of us who use the "zip" functions in LabVIEW. There is currently an unzip function that takes a zip file on disk, then writes the unzipped files back to disk. To manipulate zipped files, you must then access the disk and load them into memory. In other words, 3 disk operations: read zip, write file, read file.

 

There needs to be a function that unzips the files into memory, with the output of this function as an array of flattened strings, byte arrays, or data pointers.
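A sketch of the desired behavior, using Python's zipfile as an analogy (the actual idea is a native LabVIEW VI returning an array of byte arrays): one read of the archive, no temporary files.

```python
import zipfile

with zipfile.ZipFile("archive.zip") as zf:  # hypothetical archive path
    in_memory = {name: zf.read(name) for name in zf.namelist()}

# each entry is now a byte array, usable without touching the disk again
for name, data in in_memory.items():
    print(name, len(data))
```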

 

 

UnzipToMem.png



We have cloud computing, virtual machines, CPU virtualization, etc.; there are numerous ways of achieving parallel and distributed computing, available at different architectural levels. The inherent parallel nature of LabVIEW graphical programming means we can often achieve parallel computing without thinking.

 

But in cases where the programmer actually needs to make a decision, we now have the Loop Iteration Parallelism option. If an action is to be repeated multiple times, and the execution of each run takes longer than the overhead of communicating the input data, execution code, and/or output data across to multiple targets, parallelization can reduce the total execution time and/or reduce the load on each target. In some cases the execution time can justify parallelization even across slow communication channels.


What if we expanded the user-friendly loop iteration parallelization mechanism to also support remote processors?

 

  • On the targets we want to offer as execution hosts, we will need to install a host service. This service might give us the choice of offering all, or just a subset, of the available cores; perhaps even deciding this based on the current load on the target, or the time of day(!). The targets can be of different platforms as long as the code can be recompiled for them.

 

So how would this look to the programmer? Well, we simply extend the For Loop parallelization dialog to something like this:

 

For Loop Iteration Parallelism Across Targets.png

 

  • The loops should also allow this setup to be changed at run-time. You could have a general VI to define the default targets and establish a link to them, and each for loop could have input terminals to specify the parallelism options to be used at the time of execution.

  • Another fun consequence of this functionality would be that you could really distribute *any* part of your code across multiple targets simply by wrapping it in a 1-iteration-only loop.

 

With this functionality in place, getting 10 machines to work on a heavy problem instead of just one would really be as simple as drawing a For Loop... :D
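A local-only sketch of the shape of this, using Python's concurrent.futures (with the assumption that a real version would swap the local executor for one backed by the remote host services):

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_iteration(i: int) -> float:
    # stands in for one expensive loop iteration
    return sum(k ** 0.5 for k in range(1_000_000))

if __name__ == "__main__":
    # "parallelize this for loop across 10 workers" in one line; the
    # proposal is the same dialog, but pointing at remote targets
    with ProcessPoolExecutor(max_workers=10) as ex:
        results = list(ex.map(heavy_iteration, range(100)))
    print(len(results))
```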

LabVIEW should ship with a Set Operations palette item, including the common operations such as Union, Intersection, Complement, and Cartesian Product, as well as more advanced operations that I don't know about. These operations would act on 1D arrays of almost anything. Floating-point numbers and the like would need some kind of "error in value" input, defaulting to machine epsilon, for equality checking. Output would be a 1D array of the input datatype with the values, and perhaps an output for matched indices.

 

Yes, these are quite easy to make yourself (loop + search), but I think it would be beneficial for the community for NI to provide them. The VIs found at http://zone.ni.com/devzone/cda/epd/p/id/3929 are... interesting, to say the least. NI might be able to provide better performance, too.
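A sketch of the tolerance-based intersection primitive described above (the default tolerance here is an arbitrary placeholder, not necessarily machine epsilon):

```python
import numpy as np

def intersect_with_tolerance(a, b, tol=1e-9):
    a, b = np.asarray(a, float), np.asarray(b, float)
    # an element of a matches if it lies within tol of any element of b
    mask = np.any(np.abs(a[:, None] - b[None, :]) <= tol, axis=1)
    return a[mask], np.nonzero(mask)[0]  # values plus matched indices

values, idx = intersect_with_tolerance([0.1, 0.2, 0.3], [0.3, 0.4])
print(values, idx)  # [0.3] [2]
```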

Currently, whenever you filter data, the filter is initialized with all zeroes.  If your data is fairly constant, there is a large section where the filtered value slowly ramps from zero to the steady-state value.  This is annoying when taking a quick look at the filtered data, because you have to rescale the graph to ignore the ramping portion.

 

My suggestion is to provide an option for either supplying a starting value for the filter or automatically using the first value of the data being filtered.  For fairly constant data, this would eliminate the large ramp from zero.  For varying data, it would start better as well.  Since the initial value is fairly arbitrary, this shouldn't violate any mathematical rules: it would be the equivalent of subtracting the first value from all the data, filtering, then adding the first value back to the filtered data.
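A sketch of that equivalence, with SciPy standing in for the LabVIEW filter VIs: subtract the first value, filter, add it back.

```python
import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(2, 0.05)                 # an example low-pass filter
x = 5.0 + 0.1 * np.random.randn(1000)  # fairly constant data near 5.0

ramped = lfilter(b, a, x)              # state starts at zero: long ramp
offset = x[0]
steady = lfilter(b, a, x - offset) + offset  # starts at the data level
```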

 

Bruce

In the real world, machine epsilon is a function of the binary representation of a floating point number.

 

The LabVIEW help describes it as:

 

 


Machine Epsilon

 

"Represents the round-off error for a floating-point number with a given precision. Use the machine epsilon constant to compare whether two floating-point numbers are equivalent."


 

 

From the term "given precision", we would assume that epsilon depends on the representation. In fact, we can right-click the machine epsilon constant and select between SGL, DBL, and EXT.

 

However, if we look at the actual value, we can see that machine epsilon has the identical decimal value for SGL, DBL, and EXT. No matter what representation we choose, we get the value for DBL.

 

This is not right!

 

Suggestion: the machine epsilon must depend on the representation. Since the exact representation of EXT depends on the architecture (64, 80, 96, 128 bits total), machine epsilon for EXT needs to adapt accordingly.

 

Here's one possible way to calculate machine epsilon explicitly. Note the discrepancy for SGL and EXT.
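Since the attached snippet isn't reproduced here, a Python sketch of the same explicit calculation (np.float32/np.float64/np.longdouble standing in for SGL/DBL/EXT; note that np.longdouble maps to the platform's extended type and is plain float64 on some platforms):

```python
import numpy as np

for dtype in (np.float32, np.float64, np.longdouble):
    eps = dtype(1)
    # halve eps until adding it to 1 no longer changes the value
    while dtype(1) + eps / dtype(2) != dtype(1):
        eps = eps / dtype(2)
    print(dtype.__name__, eps)

# the same values straight from the float metadata:
for dtype in (np.float32, np.float64, np.longdouble):
    print(np.finfo(dtype).eps)
```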

 

 

 

 

 

Hi

 

I think the title itself is self-explanatory. 🙂 🙂

 

How about modifying the thermometer indicator so that there are options in the indicator itself for setting the colour (which fills the thermometer) for a range of temperatures?

For example, if the range is from 0 to 100, then:

  • GREEN for 0 to 30
  • ORANGE for 31 to 60
  • RED for 61 to 100

 

(not trying to make a traffic light system......hahaha)

 

The colour and the range can be set by the user.

When you compare two VIs, one with a control and the other with the same control but with a change to the text justification (you can modify it by clicking on the dropdown menu of text settings, next to the pause button):

 

daguero_0-1680115378128.png

 

 

If you compare the VIs at this moment, it only shows this:

----------------------------

Difference Type: text justification

 

As we can see, there is missing information regarding other comparisons, like the label name, object type, and a detailed description, as in the following example:

 

----------------------------

Label Name: Label1

Object Type: String

Difference Type: moved

Detailed Description: Changed from "(168,132)" to "(12,71)"

 

daguero_1-1680115378130.png

 

 

 

Consider that this is a small scenario (only comparing two indicators) where the difference can be found easily, but if you try this with bigger VIs there will be a lot of missing information.

 

The fact that this information is not available is time-consuming; even though the differences can be walked through manually, the record of changes is compromised.

Many advanced functions in the optimization and fitting palettes allow the use of a "VI model", given as a strictly typed VI reference, to be defined by the user. A great feature!

 

LabVIEW provides various templates containing the correct connector pattern. Here is a list of the templates I found:

 

in labview\vi.lib\gmath\NumericalOptimization:

  • ucno_objective function template.vit

  • cno_objective function template.vit

  • LM model function and gradient.vit

in labview\vi.lib\gmath

  • Zero Finder f(x) 1D.vit

  • Zero Finder f(x) nD.vit

  • 1D Evolutionary PDE Func Template.vit

  • 2D Evolutionary PDE Func Template.vit

  • 2D Stationary PDE Func Template.vit

  • ODE rhs.vit

  • Global Optimization_Objective Function.vit

  • DAE Radau 5th Order Func Template.vit

  • function_and_derivative_template.vit

  • function_template.vit

As you can see, the naming is quite inconsistent:

  • all files are templates, as is immediately obvious from the file extension (*.vit)!
  • some contain the word "_template" (underscore/lowercase t)
  • some contain the word " template" (space/lowercase t)
  • some contain the word " Template" (space/uppercase T)
  • some don't contain the word template in any form (good! :D)

 Since the extension fully defines them as templates, maybe the word "template" could be scrubbed from all the filenames, making things more uniform and consistent.

 

Idea Summary: remove the word "template" from all model template names that contain it.

We really need the LabVIEW Statechart Module to run in LabVIEW 64-bit! Many of our projects are large vision projects requiring lots of processing and hence LabVIEW 64-bit.

 

Please make Statechart Module available in LabVIEW 64-bit!