LabVIEW Idea Exchange


If the Bookmark Manager is open and you change one bookmark, the Manager receives a "Bookmark Info Change" event and reloads the bookmarks from all
VIs in the project. The event that triggers the update is the
application event "Bookmark Info Change", which carries no
information about which VIs' bookmarks actually changed. If the event provided that information,
the Bookmark Manager could reload the bookmarks from only those particular VIs.
On larger projects the Bookmark Manager needs minutes (time and CPU
load) to update the whole list. The additional event data node could be an array of VI names or references.

 

This change would also fix the event-queue problem of the Bookmark Manager: if the Bookmark Manager is open and you change several bookmarks, the Manager reloads all VIs once per change, because every change generates a "Bookmark Info Change" event.
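A minimal sketch of the per-VI update this would enable, in Python (the handler itself would live in LabVIEW; scan_vi is a hypothetical stand-in for the actual per-VI bookmark query, not an existing API):

def on_bookmark_info_change(changed_vis, bookmark_cache, scan_vi):
    # proposed payload: an array of VI names/references identifying which VIs'
    # bookmarks changed, so only those cache entries are re-scanned
    for vi in changed_vis:
        bookmark_cache[vi] = scan_vi(vi)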

When a folder under source control is added to the VI Analyzer, the hidden subfolder used by the version control system is inserted too. We could get around this by allowing patterns for folders (or files) that should be ignored. Adding ".svn", for example, would keep Subversion's metadata files out of the analysis.
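A rough sketch of the matching logic, in Python (the IGNORE list and fnmatch-style patterns are illustrative assumptions, not an existing VI Analyzer API):

import fnmatch, os

IGNORE = [".svn", "*.bak"]  # example patterns; where they are stored is discussed below

def analyzable_files(root):
    # walk the tree, pruning ignored directories and skipping ignored files
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames
                       if not any(fnmatch.fnmatch(d, p) for p in IGNORE)]
        for name in filenames:
            if not any(fnmatch.fnmatch(name, p) for p in IGNORE):
                yield os.path.join(dirpath, name)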

I see three possibilities to save the patterns:

- for each top level folder

- in the configuration file

- globally (not preferred, since other users would not get the setting)

 

(There is a discussion about the VI Analyzer and SVN at http://forums.ni.com/ni/board/message?board.id=170&view=by_date_ascending&message.id=290839#M290839)

 

Greetings,

shb

 

It is a little-known feature that "Interpolate 1D Array" also accepts arrays of xy points. (Here's an example from this post.)

 

 

However, these days I mostly build xy graphs using complex data (RE=x, IM=y), and it would be useful if "Interpolate 1D Array" also accepted complex arrays the same way, in this case interpolating IM based on RE.

 

Here's similar code (fragment) using complex data, but now it's not possible to interpolate the xy data directly, because complex is currently not a valid input.

 

 

Suggestion:


Similar to xy graphs, Interpolate 1D Array should accept complex arrays and act the same as when an array of xy points is wired to it instead.
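For reference, a minimal sketch of the proposed semantics in Python/NumPy (the function name interp_complex is purely illustrative):

import numpy as np

def interp_complex(points, x_new):
    # points: complex array with RE = x, IM = y; interpolate IM as a function of RE
    return np.interp(x_new, points.real, points.imag)

pts = np.array([0 + 0j, 1 + 1j, 2 + 4j, 3 + 9j])
print(interp_complex(pts, 1.5))  # -> 2.5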

 

Currently, we have to use Unbundle By Name on the cluster and select an element for the case selector.

 

1.png

 

It would be great if we could just wire the cluster directly and have a right-click option at the case selector to select an element (one element only).

 

2.png

 

P.S. If this is a reasonable suggestion and gets enough kudos to get the R&D team's attention for a feasibility study, then we would also ask for support for more logical operators, plus multiple elements and/or multi-condition statements, e.g. (type == Array and # elements <= 2)!

Only sometimes do I miss "if statement" support in LabVIEW.

 

 


Currently, the TDMS File API does not offer a way to get the TDMS file size.

 

Our use case is that we'd like to limit the size of TDMS files and span the data across multiple individual files (I've posted a separate idea suggesting that as a native feature, too). To do this, we need to be able to monitor the TDMS file size, so that we can save/close the current file and then create the next file in the span for continued use (until we hit the size limit again).
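As an illustration of the logic we'd build on top of a file-size query, a Python sketch (MAX_BYTES and write_chunk are assumptions standing in for the real limit and the actual TDMS write call):

import os

MAX_BYTES = 512 * 1024 * 1024  # hypothetical 512 MB span limit

def write_with_span(base_path, chunks, write_chunk):
    index = 0
    path = f"{base_path}_{index:04d}.tdms"
    for chunk in chunks:
        # once the current file reaches the limit, close it and start the next span
        if os.path.exists(path) and os.path.getsize(path) >= MAX_BYTES:
            index += 1
            path = f"{base_path}_{index:04d}.tdms"
        write_chunk(path, chunk)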

 

 

Jim_Kring_0-1707938415587.png

 

It would be helpful if the LabVIEW Python Node natively supported Python dictionaries as LabVIEW maps. This would make it simpler to work with a frequently used Python structure. You can work around it, but you have to do extra data preparation/formatting in both LabVIEW and Python. It would be nice if the node handled the conversion to a list of tuples and the building of the LabVIEW map (or vice versa) for us.
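The workaround looks roughly like this on the Python side (a sketch; the sorted pair ordering is an assumption):

def dict_to_pairs(d):
    # flatten the dict into (key, value) tuples, which the Python Node can
    # return as an array of clusters for LabVIEW to build into a map
    return sorted(d.items())

def pairs_to_dict(pairs):
    # reverse direction: rebuild a dict from the array of clusters LabVIEW sends
    return dict(pairs)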

Dear all LabVIEW fans,

 

Motivation:

I'm a physics student who uses LabVIEW for measurement and also for evaluation of data. I've been a fan since version 6i (around 2005).

My typical experimental setup looks like this: a lot of different wires going to every corner of the lab, left to collect gigabytes of measurement data during the night. Sometimes I do physics simulations in LabVIEW, too. So I really depend on gigaflops.

 

I know that there is already an idea for adding CUDA support. But not all of us have an NVIDIA GPU. Typically, at least in our lab, we have Intel i5 CPUs, and some machines have a minimalist AMD graphics card (others just have integrated graphics).

 

So, as I was interested in getting more flops, I wrote an OpenCL DLL wrapper and (doing a naive Mandelbrot-set calculation for testing) saw a 10x speed-up on the CPU and a 100x speed-up on the gamer GPU of my home PC, compared to a simple, multi-threaded LabVIEW implementation using parallel For Loops. Now I'm using this for my projects.

 

What's my idea:

-Give an option to those who don't have a CUDA-capable device and/or want their app to run on any class of computing device.

-It has to be really easy to use (I have been struggling with C++ syntax and the Khronos OpenCL specification for almost 2 years in my free time to get my DLL working...).

-It has to be easy to debug (for example, it has to give human-readable, meaningful error messages instead of crashing LabVIEW or causing a BSOD).

 

Implemented so far, by me, for testing the idea:

 

-Get information on the DLL (e.g. "compiled with AMD's APP SDK on 7 August 2013, 64-bit", or the like)

 

-Initialize OpenCL:

1. Select the preferred OpenCL platform and device (falling back to any platform & CL_DEVICE_TYPE_ALL if not found)

2. Get all properties of the device (CLGetDeviceInfo)

3. Create a context & a command queue,

4. Compile and build OpenCL kernel source code

5. Report all details back to the user as a string (even if everything succeeded...)

 

-Read and write memory buffers (like GPU memory)

So far only blocking read and blocking write are implemented; I had some bugs with the non-blocking calls.

(again, report details to the user as a string)

 

-Execute a kernel on the selected arrays of data

(again, report details to the user as a string)

 

-Close OpenCL:

release everything, free up memory, etc. (again, report details to the user as a string)
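For readers who want to picture the sequence above in code: a minimal sketch using the pyopencl bindings (not my LabVIEW DLL wrapper; the kernel and variable names are illustrative only):

import numpy as np
import pyopencl as cl

SRC = """
__kernel void scale(__global float *buf, const float k) {
    int i = get_global_id(0);
    buf[i] = buf[i] * k;
}
"""

ctx = cl.create_some_context()               # step 1: pick a platform/device
print(ctx.devices[0].name)                   # step 2: query device properties
queue = cl.CommandQueue(ctx)                 # step 3: context + command queue
prog = cl.Program(ctx, SRC).build()          # step 4: compile the kernel source

data = np.arange(16, dtype=np.float32)
mf = cl.mem_flags
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=data)  # blocking write

prog.scale(queue, data.shape, None, buf, np.float32(2.0))  # execute the kernel
cl.enqueue_copy(queue, data, buf)            # blocking read back to the host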

 

Approximate results for your motivation (Mandelbrot set testing, single precision only so far):

10 gflops on a core2duo (my office PC)

16  gflops on a 6-core AMD x6 1055T

typ. 50 gflops on an Intel i5

180 gflops on a Nvidia GTS450 graphics card

 

70 gflops on an EVGA SR-2 with two Xeon L5638s (that's 24 cores)

520 gflops on Tesla C2050

 

(The numbers above are my results; the manufacturers' spec sheets may claim a lot more theoretical flops. When selecting your device, take memory bandwidth into account, as well as the kind of parallelism in your code. Some devices dislike conditional branches, and the Mandelbrot set test has conditional branches.)

 

Sorry for my bad English, I'm Hungarian.

I'm planning to give my code away, but I still have to clean it up and remove the non-English comments...

The current set of Bessel functions supplied in LabVIEW Core only allows real arguments and outputs. This limits the usefulness of LabVIEW in areas of science where complex Bessel functions are required for calculations (e.g. acoustic modeling). The MathScript RT Module has Bessel function calls that support complex arguments, so it's not as if the code doesn't exist. This is one area where LabVIEW is deficient compared to Mathematica and MATLAB, and it could easily be corrected without forcing the user to buy the MathScript RT Module.
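For comparison, the behavior being asked for, shown with SciPy (whose Bessel routines accept complex arguments):

from scipy.special import jv

z = 1.0 + 2.0j
print(jv(0, z))  # J0 evaluated at a complex argument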

When looking for unexpected behaviour in the time or memory usage of a project, the Profiler is useful and easier than the execution tracer, but it could be made much more useful by adding the ability to monitor for changes and analyse the issues.

 

Mads_0-1695798309247.png

 

Issue:
Currently, to detect how e.g. the memory use or run count of a VI changes over time, you have to take snapshots, save them, and then compare the values in e.g. Excel (which the saved traces do not directly fit into either...).

Proposed feature: Trends
It would be nice if you could just set the tool to automatically sample and log all/selected numbers regularly and then be able to view the trends. 

 

Proposed feature: Automated Analysis
Having trends will help in manually detecting issues, but the profiler could also have tools that help with this, e.g. highlighting which VIs show continuous growth in memory (a sketch of such a check follows below). This could then be expanded by being able to call the VI Analyzer on any given VI, preferably set up to identify possible reasons for a memory leak (unclosed references, continuous array building, etc.).
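A sketch of the kind of check meant here, in Python (the trends structure is assumed to come from the proposed trend logging):

def growing_vis(trends):
    # trends: {vi_name: [memory samples over time]}; flag VIs whose memory
    # never shrinks between samples and ends higher than it started
    return [vi for vi, ys in trends.items()
            if len(ys) >= 3 and all(b >= a for a, b in zip(ys, ys[1:]))
            and ys[-1] > ys[0]]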

Hello,

 

the current functionality doesn't allow you to asynchronously call a method that has any dynamic dispatch inputs. This forces you to create a statically dispatched wrapper around the dynamic method, which can then be called.

 

This is a source of frustration for me because it forces you to write code that is less readable, and there doesn't seem to be any reason for the limitation. Since you already need to have the class loaded in memory to provide it as an input to the asynchronously called VI, why not just allow dynamic dispatch there (the dynamic method is already in memory)?

 

How it is right now:

DynamicDispatchAsynchCall0.png

DynamicDispatchAsynchCall1.png

 

Solution: Allow making asynchronous calls on methods with dynamic dispatch inputs.

The Logical Shift, Rotate, Boolean Array To Number, and Number To Boolean Array primitives should work with 64-bit values. Currently they are restricted to 32-bit values.
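For reference, the 64-bit rotate behavior being requested, sketched in Python (where integers are unbounded, so the 64-bit masking is explicit):

MASK64 = (1 << 64) - 1

def rotate_left_u64(x, n):
    # rotate a 64-bit value left by n; bits shifted out on the left re-enter on the right
    n %= 64
    x &= MASK64
    return ((x << n) & MASK64) | (x >> (64 - n))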

The Random Number (0-1) function from the Numeric palette is widely used, but it lacks an important feature: a seed input to initialize the pseudorandom number generator.

A seed is present in TestStand's Random() function, for instance, and is described there as: "An optional number the function uses to determine where in the virtual sequence of random numbers the function obtains its random numbers. When you seed the Random function with the same value, subsequent calls to Random return the same sequence of numbers. If you pass a seed value of 0.0, the function generates a seed value based on the current time. If you do not pass a seed value, the function returns the next number in the current sequence of random numbers."
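The requested behavior, illustrated with Python's standard library:

import random

rng_a = random.Random(42)   # same seed ...
rng_b = random.Random(42)
assert [rng_a.random() for _ in range(3)] == [rng_b.random() for _ in range(3)]  # ... same sequence

rng_c = random.Random()     # no seed: seeded from OS entropy / current time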

Working in a GxP environment in the pharma industry, any initiative to make NI software products more compliant with the FDA guidelines would be most welcome. One such relatively simple measure would be to enable "sealing" of TDM files, such that any tampering with the data is either impossible or logged.
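One common way such sealing is implemented, sketched in Python (a keyed digest stored alongside the data; key management is out of scope here):

import hmac, hashlib

def seal_file(path, key):
    # compute a keyed digest over the file contents and record it with the data
    with open(path, "rb") as f:
        return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

def verify_seal(path, key, recorded_digest):
    # any modification of the file changes the digest, so tampering is detectable
    return hmac.compare_digest(seal_file(path, key), recorded_digest)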

 

I believe I passed this idea on to NI several years ago, and I apologize if it has already been implemented.

 

Yours

 

Sebbe

Cygnus Data, Göteborg, Sweden

The PID Autotune VI would be much more useful if it were refactored so it could run on RT targets. I would suggest replacing the embedded VI Server calls that open wizard panels with an MVC architecture, allowing a better separation between the core PID logic and the HMI components.

While the Image datatype is very useful for working with images, many functions are not available for it (e.g. Square Root, or the Wavelet Transforms). To use them, it is necessary to convert the Image to an array, thereby duplicating the memory required, and then convert back again. (IMAQ GetImagePixelPtr/IMAQ MemPeek also duplicates the data.) I would like to be able to directly access the Image data as a LabVIEW array. Perhaps the In Place Element structure could be used to achieve this, e.g.

ImageInPlace.png

 

One potential problem is the extra information (border pixels) that is part of an Image. For most use cases it would probably be OK to retain these (i.e. the array is larger than the image), though perhaps there could be an option for whether the border is mirrored, zeroed, etc. RGB images would provide an array of U32/U64, but even better might be a cluster of arrays, one per colour plane.
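To illustrate how border pixels could be hidden without copying, a NumPy sketch (the 8-bit, symmetric-border layout is an assumption about the underlying allocation):

import numpy as np

def interior_view(raw, height, width, border):
    # the full allocation carries `border` extra pixels on every side
    full = np.frombuffer(raw, dtype=np.uint8).reshape(height + 2 * border,
                                                      width + 2 * border)
    # slicing returns a view, not a copy, so no memory is duplicated
    return full[border:border + height, border:border + width]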

 

Bonus points for the ability to access an Array of Images as a 3D Array!

 

A graphical programming language deserves to have great graphical tools representing the design of big applications.

 

It is possible to use scripting to determine whether a method of a class has another class as an input or output terminal, and whether a class composes another class in its private data. The only thing I cannot do myself right now is show this information in the Class Hierarchy window. Representing the usage and composition relationships only requires parsing through the project once, and then again every time it changes.

I am currently using a script that parses through all project classes to work out which have what relationships, which are actors and which are messages, and I draw a PlantUML diagram from that. The intermediate step is to generate PlantUML text (sketched below), but this should be integrated into LV.
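The text-generation step is straightforward once the relationships are collected; a Python sketch (the pair-list inputs are assumptions about what the scripting pass produces):

def to_plantuml(inherits, composes):
    # inherits: (child, parent) pairs; composes: (owner, part) pairs
    lines = ["@startuml"]
    lines += [f"{parent} <|-- {child}" for child, parent in inherits]
    lines += [f"{owner} *-- {part}" for owner, part in composes]
    lines.append("@enduml")
    return "\n".join(lines)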


https://forums.ni.com/t5/LabVIEW-APIs-Discussions/Project-to-Plant-UML-Diagram-Script/td-p/3984499

5.png

 

6.png

 

Please improve the readability of object-oriented projects with additional tools for LabVIEW. This is especially critical for NXG, where currently the ability to understand an OO project is even more limited than in LV19.

I suggest adding the following tools to the Number/String Conversion palette:

 

Number to Roman Numerals

Roman Numerals to Number

 

Here's how they could look on the block diagram. A simple draft of these functions can be found here.
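For reference, the conversions themselves are small; a Python sketch of both directions:

VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
          (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
          (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    out = []
    for value, symbol in VALUES:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def from_roman(s):
    digits = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for a, b in zip(s, s[1:] + "I"):  # pad so the last symbol is always added
        total += digits[a] if digits[a] >= digits[b] else -digits[a]
    return total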

 

 

 

Idea Summary: Add Roman numeral conversion tools to LabVIEW

After looking at the problem encountered here, it turns out that LabVIEW seems to make some insane choices when mixing a waveform with simple datatypes. Some behavior is good and intuitive. For example multiplying a waveform with a scalar applies the multiplication to the Y component only, because it would not make sense to e.g. also multiply the t0 or dt values.

 

It is less clear what should happen if multiplying a waveform with an array. Intuitively, one would expect something similar to the above, where the Y component is multiplied with the array. Unfortunately, LabVIEW chooses something else: It creates a huge array of waveforms, one for each element in the array. (as if wrapping a FOR loop around it, see image). If the waveform and the array both have thousands of elements, we can easily blow the lid off all available memory as in the quoted case. Pop! 😄 But the code looks so innocent!

 

 

I suggest that operations mixing waveforms and simple datatypes (scalars, arrays) simply act on the Y component, as shown.
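A sketch of the suggested semantics, modeling a waveform in Python as (t0, dt, Y):

import numpy as np

def wf_multiply(wf, arr):
    # proposed behavior: the operation touches Y only, instead of producing
    # one waveform per array element (the implicit FOR-loop behavior above)
    t0, dt, y = wf
    return (t0, dt, y * np.asarray(arr))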

 

(not sure how much existing code this would break, but it actually might fix some existing code!!! :D)

For the most part, coercions can be broken down into two classes: lossy (e.g. I64 to I8) and safe (e.g. U8 to U16). Why is there only one color option for coercion dots? Could the VI Analyzer have separate settings for the maximum allowable safe and unsafe coercions?
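The distinction in a nutshell (NumPy used here to mimic fixed-width integer coercion):

import numpy as np

lossy = np.int64(300).astype(np.int8)    # lossy: high bits dropped -> 44
safe  = np.uint8(200).astype(np.uint16)  # safe: value preserved -> 200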

Support a project-based VI Analyzer configuration that contains target-specific information (important e.g. for correctly analyzing FPGA VIs) and can be executed with the VI Analyzer API, so that the analysis can be integrated into a CI process.