LabVIEW Idea Exchange


It's always a pleasure to uncover shared memory problems when working with base vi.lib methods that *should* work as preallocated clones but have uninitialized shift registers, and then to have to establish a chain of VI references just to call those methods safely in an application with distributed/parallel threads. It feels like extra work, but it does fix the problem. It's a major hidden gotcha unless you're already familiar with the VI you are calling.

 

My most recent hiccup: the NI PID library. I needed two closed-loop controllers in the Actor Framework and was getting very strange timing, setpoint, process variable, and control output crossovers until I realized the stock VIs had shared memory in the form of uninitialized shift registers (despite being called in the context of a pre-allocated actor). Creating references to the VIs I needed and storing them in the actor at launch fixed the problem, but at that point the effort of writing my own PID VI starts to look like a favorable time tradeoff. At least I am able to peek under the hood of the PID library; other parts of vi.lib... not so much.
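For anyone who prefers text to block diagrams, here is a minimal Rust sketch (my own names and gains, not the NI PID library) of the failure mode: one shared integrator is the analogue of an uninitialized shift register in a non-reentrant VI, while per-instance state is the analogue of a preallocated clone or a dedicated VI reference.

```rust
use std::sync::Mutex;

// The "non-reentrant VI" analogue: one integrator shared by every caller,
// like an uninitialized shift register in a VI that is not a preallocated clone.
static SHARED_INTEGRAL: Mutex<f64> = Mutex::new(0.0);

fn shared_pi_step(ki: f64, error: f64, dt: f64) -> f64 {
    let mut integral = SHARED_INTEGRAL.lock().unwrap();
    *integral += error * dt; // every "controller" accumulates into the same state
    ki * *integral
}

// The "preallocated clone / VI reference" analogue: each controller owns its state.
struct Pi {
    ki: f64,
    integral: f64,
}

impl Pi {
    fn step(&mut self, error: f64, dt: f64) -> f64 {
        self.integral += error * dt;
        self.ki * self.integral
    }
}

fn main() {
    // Shared state: the second controller sees the first one's integral.
    let a = shared_pi_step(0.5, 1.0, 0.1);
    let b = shared_pi_step(0.2, -1.0, 0.1); // corrupted by the previous call
    println!("shared: {a} {b}");

    // Per-instance state: independent, like two preallocated clones.
    let mut heater = Pi { ki: 0.5, integral: 0.0 };
    let mut chiller = Pi { ki: 0.2, integral: 0.0 };
    println!("cloned: {} {}", heater.step(1.0, 0.1), chiller.step(-1.0, 0.1));
}
```

The first pair of calls cross-talks exactly the way my two actor-hosted controllers did; the second pair cannot.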

 

Or maybe this is a teaching problem. I haven't come across ways of navigating this issue in official NI documentation; in fact, the way I learned I needed to call the PID VI by reference was from the forum, and rather matter-of-factly at that. There are a couple of great blogs that cover this issue in detail, so I don't feel alone in my ignorance: Maintaining State Information in LabVIEW Applications, Part 5 - LabVIEW Field Journal Archives

I like compile-time type safety checks. I dislike using variants. Occasionally, and increasingly often, I find myself going to great lengths to provide compile-time type safety. At some point, the type check gets lost in the inheritance hierarchy and I am back to depending on runtime checks for errors. It's not uncommon for me to have a class method that needs to "just work" across the bulk of the base types, but it sure is a pain to make wrapper classes, static inlined methods, and a nasty polymorphic VI to mimic this behavior. Perhaps I am ignorant of some features of LV (do malleable VIs fit in here somewhere?), but multiple dispatch/function overloading sure seems like the silver bullet for this issue, without messy inheritance trees.
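To make the wish concrete, here is a hedged Rust sketch of what multiple dispatch/overloading buys: one entry point that "just works" across types, with the type check done at compile time rather than through a variant at run time. The Scale trait and its methods are illustrative inventions, not anything from LabVIEW or vi.lib.

```rust
use std::fmt::Display;

// Hypothetical trait standing in for "the operation my class method performs".
trait Scale {
    fn scale(self, factor: f64) -> Self;
}

impl Scale for f64 {
    fn scale(self, factor: f64) -> Self { self * factor }
}

impl Scale for i32 {
    fn scale(self, factor: f64) -> Self { (self as f64 * factor) as i32 }
}

// One entry point that "just works" for every type implementing Scale:
// no variant, no runtime type check, no wrapper class per base type.
fn process<T: Scale + Display>(value: T) {
    println!("{}", value.scale(2.0));
}

fn main() {
    process(1.5_f64);
    process(21_i32);
    // process("oops"); // rejected at compile time, not discovered at run time
}
```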

 

I'm open to discussion on alternatives. This "problem" has come up in a couple of recent projects of mine, and I always feel dirty using a variant or making a static API for a class that ought to be extensible.

Using property nodes, it's possible to retrieve a list of subVI references within a VI's block diagram. However, there's currently no way to link each reference to its specific subVI instance.

For example, if you place three instances of subVI "A" in a diagram, you can obtain three references to these subVIs, but there is no way to determine which reference corresponds to which instance.

The proposed idea is to introduce a "This SubVI" reference - a property or node that would return the reference of the current subVI instance (the one hosting the object or node). This enhancement would significantly improve scripting capabilities, support tool development, and aid debugging.

Many new Windows PCs run on ARM processors (such as Snapdragon) rather than the x86 architecture. LabVIEW does not support ARM.

 

A few years back, my LabVIEW software would work on any Windows PC. That is no longer the case. That is a huge and increasing limitation.

Problem: Many native VIs use the Non-reentrant execution reentrancy setting.

 

Solution: The vast majority of native VIs should use the Preallocated clone reentrancy setting.

  • The native VIs that need to use Non-reentrant or Shared clone are few and far between - they should be identified on a case-by-case basis. Their Context Help and/or Detailed Help should explain why they need to be set to Non-reentrant or Shared clone.

The following is a selection of vi.lib VIs that should use Preallocated clone. This selection is meant to serve as a starting point and is not comprehensive.

 

1.png

2.png

3.png

 

Notes:

  • This idea is related to: The reentrancy of new VIs should be "Preallocated clone". Both argue in favour of using the Preallocated clone setting more often.
  • A significant number of native VIs are already configured to use Preallocated clone, which is great.
  • There are curious cases where closely related VIs are set to different reentrancy settings. For example, Color to RGB.vi rightly uses Preallocated clone, while RGB to Color.vi is Non-reentrant. Similarly, Trim Whitespace.vi is rightly Preallocated clone, while Normalize End Of Line.vi - which lives next to it on the String palette - is Non-reentrant.
    • This suggests that the reentrancy setting of some native VIs was chosen haphazardly. This needs to be rectified.
  • The fact that so many native VIs are non-reentrant partly defeats LabVIEW's remarkable ability to create parallel code easily. Loops that are supposed to be parallel and independent are in fact dependent on each other when they use instances of these non-reentrant native VIs. It is as if "hidden semaphores" were added between the various locations that call these native VIs, serialising them (see the sketch below). This leads to less performant applications: more CPU cycles, longer execution times, and larger compiled code in built EXEs.
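Here is a rough Rust model of that "hidden semaphore" effect (the lock and timings are illustrative, not LabVIEW internals): four parallel loops calling a lock-guarded function take turns, while the stateless version runs truly in parallel.

```rust
use std::sync::Mutex;
use std::thread;
use std::time::{Duration, Instant};

// One global lock stands in for a non-reentrant VI: only one caller at a time.
static NON_REENTRANT: Mutex<()> = Mutex::new(());

fn non_reentrant_work() {
    let _guard = NON_REENTRANT.lock().unwrap(); // the "hidden semaphore"
    thread::sleep(Duration::from_millis(10));   // stand-in for the VI's work
}

fn reentrant_work() {
    thread::sleep(Duration::from_millis(10)); // no shared state, no lock
}

fn time_parallel(f: fn()) -> Duration {
    let start = Instant::now();
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(move || for _ in 0..10 { f() }))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    // Four "parallel loops", each calling the function 10 times.
    println!("non-reentrant: {:?}", time_parallel(non_reentrant_work)); // ~400 ms, serialized
    println!("preallocated:  {:?}", time_parallel(reentrant_work));     // ~100 ms, parallel
}
```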

This topic keeps coming up randomly.  A LabVIEW class keeps a mutation history so that it can load older versions of the class.  But how often does this actually need to be done?  I have never needed it.  Many others I have talked with have never needed it.  But it often causes problems as projects and histories get large.  For reference: Slow Editor Performance with Large LabVIEW Projects Containing Many Classes.  The history is also just added bloat to your files (the lvclass file and any built files that use the lvclass) for something most of us do not need, and it sometimes causes build errors.

 

My proposal is to add an option to the class properties to keep the mutation history.  It can be enabled by default to keep the current behavior.  But allowing me to uncheck this option and then never have to worry about clearing the history again would be well worth it.

This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

I realize that the Database Connectivity Toolkit (DBCT) is nearing its 20th birthday and probably is not a priority for a fix, but this one is so simple, and it has tripped up LabVIEW programmers doing database applications for many years.

 

The VI Rec Fetch Next Recordset (R).vi has a flaw: it steps past the last recordset in a multi-recordset return and then leaks a reference, which causes mayhem in downstream code.  A simple test of the recordset reference would make this VI properly useful.
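In pseudo-code terms (sketched here in Rust, with hypothetical names; the real VI wraps an ADO recordset), the missing guard is simply: after asking for the next recordset, test whether you actually got one before using it, and stop cleanly instead of leaking the stale reference.

```rust
// Stand-in for an ADO recordset reference; the real thing lives in the toolkit.
struct Recordset {}

impl Recordset {
    // Returns the next recordset of a multi-recordset result, or None when
    // the result is exhausted - the case the stock VI steps past.
    fn next_recordset(self) -> Option<Recordset> {
        // ... call into the database layer here ...
        None
    }
}

fn process_all(mut rs: Recordset) {
    loop {
        // ... read rows from rs here ...
        match rs.next_recordset() {
            Some(next) => rs = next, // another recordset: keep going
            None => break,           // exhausted: stop cleanly, nothing leaked
        }
    }
}

fn main() {
    process_all(Recordset {});
}
```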

 

Among my other posts in reply to forum users about databases, the issue is captured succinctly here.

 

I long ago made the mods to the toolkit VI (actually, two), and reapply them with each new LabVIEW version as needed.  If anyone at NI/Emerson wants to look into this, I'd be happy to share the particulars.

 

Dave

The recently introduced Raspberry Pi is a 32-bit ARM-based single-board computer that is very popular. It would be great if we could programme it in LabVIEW. This product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The Raspberry Pi is a $35 (with Ethernet) credit-card-sized computer that is open hardware. The SoC is a Broadcom chip with an ARM11 core running at 700 MHz, delivering about 875 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 15 times the processing power!

 

Wouldn’t it be great to programme the Raspberry Pi in LabVIEW?


Some languages like Rust and Zig have a feature called Tagged Enums (or Sum Types) that allow you to create a data type that can be one of a few different types where there is a name associated with each type. In LabVIEW, however, Enums are limited to consecutive numeric integer values -- there's no way to associate a type with each named value.

 

The power of combining an Enum with a data type for each value is that we could potentially use a Case Structure as a switch statement with type assertion and data conversion built in! This would allow us to create robust, type-safe code that is easier to maintain and understand.
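For reference, here is what that looks like in Rust, using an equipment example similar to the image below (the variants and fields are my own illustration): each named value carries its own data, and the match, the Case Structure analogue, gives compile-time-checked access to it.

```rust
// Each named value ("tag") carries its own data type.
enum Equipment {
    Pump { flow_lpm: f64 },
    Valve { open: bool },
    Sensor { channel: u8, scale: f64 },
}

// The match is the Case Structure analogue: each case sees only the data
// belonging to its tag, with the type assertion enforced by the compiler.
fn describe(eq: &Equipment) -> String {
    match eq {
        Equipment::Pump { flow_lpm } => format!("pump at {flow_lpm} L/min"),
        Equipment::Valve { open } => {
            format!("valve {}", if *open { "open" } else { "closed" })
        }
        Equipment::Sensor { channel, scale } => {
            format!("sensor on channel {channel}, scale x{scale}")
        }
    }
}

fn main() {
    let rig = [
        Equipment::Pump { flow_lpm: 4.2 },
        Equipment::Valve { open: true },
        Equipment::Sensor { channel: 3, scale: 0.5 },
    ];
    for eq in &rig {
        println!("{}", describe(eq));
    }
}
```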

 

example_equipment_variant.png

See this github repository for a more complete proposal and an example implementation that gets us closer to achieving this in LabVIEW.

A typical question during development: "How quickly does my code execute? What runs faster... Code A or Code B?" So, if you're like me, you throw in a quick sequence that looks like this:

 

TimingDuringDevelopment.png

 

AHHH! What a mess! It's so hard to fit it in, with FP real estate so packed these days!
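For readers outside LabVIEW: the flat sequence in the screenshot is the graphical version of this familiar stopwatch idiom, sketched here in Rust. It works, but it buries the code under test in timing scaffolding, which is exactly what dedicated timing probes would avoid.

```rust
use std::time::Instant;

fn code_under_test() {
    // Stand-in for "Code A" / "Code B".
    let _v: Vec<u64> = (0..1_000_000).collect();
}

fn main() {
    let t0 = Instant::now(); // first frame of the sequence: get tick count
    code_under_test();       // the code being measured
    let dt = t0.elapsed();   // last frame: get tick count, subtract
    println!("elapsed: {dt:?}");
}
```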

 

We need this:

ProposedTimingDuringDevelopment.png

Just like my other idea, and for simplicity's sake, NI, I would be PERFECTLY happy even if you had to set up the probes during edit mode and were not able to "probe" while running.

 

 As a bonus, this idea may be extrapolated into n timing probes, where you can find delta t between any two of the probes.

The LabVIEW compiler currently appears to use only one core of a multi-core processor.  It would be nice if it fully utilized multiple cores to speed up building large projects and recompiling VIs when editing/opening source code.

There are times in the development environment when I leave a VI with modal window properties open and then run the main application, which also calls this VI. The modal VI then locks all running windows. I propose a button in the taskbar that aborts all running VIs, OR perhaps a right-click list of all running VIs 🙂

 

abort_all_running_vi.png

 

 

Please implicitly derive the array index when indexing/replacing elements in the In Place Element Structure if I am starting from index 0.

 

Present method:
image.png

 

Expected method:
image (1).png

 

[admin edit 2021-02-24]: placed images in-line with text and removed them as attachments

Currently, the TDMS File api does not offer a way to get the TDMS file size.

 

Our use case is that we'd like to limit the size of our TDMS files and span them across multiple individual files (I've posted an idea suggestion for adding that as a native feature, too).  To do this, we need to be able to monitor the TDMS file size, so that we can save/close the current file and then create the next file in the span for continued use (until we hit the size limit again).
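As a sketch of the rollover logic we would build on top of a size query (plain files and Rust here, since the TDMS size query is the thing that's missing; the 100 MB limit and file naming are illustrative assumptions):

```rust
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

const MAX_FILE_SIZE: u64 = 100 * 1024 * 1024; // illustrative 100 MB per file

struct SpanWriter {
    dir: PathBuf,
    index: u32,
    file: File,
}

impl SpanWriter {
    fn new(dir: PathBuf) -> std::io::Result<Self> {
        let file = File::create(dir.join("log_000.bin"))?;
        Ok(SpanWriter { dir, index: 0, file })
    }

    fn write_block(&mut self, block: &[u8]) -> std::io::Result<()> {
        // The step the TDMS api can't do today: ask how big the file is now.
        if self.file.metadata()?.len() >= MAX_FILE_SIZE {
            self.index += 1;
            let next = self.dir.join(format!("log_{:03}.bin", self.index));
            self.file = File::create(next)?; // old file closes when dropped
        }
        self.file.write_all(block)
    }
}

fn main() -> std::io::Result<()> {
    let mut writer = SpanWriter::new(std::env::temp_dir())?;
    writer.write_block(&[0u8; 1024])?; // rolls over automatically at the limit
    Ok(())
}
```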

 

 

Jim_Kring_0-1707938415587.png

 

Background

 

The DAQmx apis allow streaming measurements to TDMS. This supports spanning multiple files (by setting the max file size for individual files in the span set), which is very useful.

We need a similar feature for files we're writing to directly (i.e. not using DAQmx) with the TDMS File functions in LabVIEW.

Jim_Kring_0-1707938647270.png

 

Note: The main reason we want this feature right now is that our files are growing quite large, and when we run the TDMS Defragment function we get out-of-memory errors in RT on cRIO (e.g. if we have 512 MB of RAM on our cRIO and the TDMS file is around the same size).

 


Alternatives

We're thinking of doing this ourselves; however, the TDMS file api does not support checking the file size from a TDMS file reference (and that would be a great addition, too).

I'm not totally sure how this would be added to the TDMS file api -- maybe there could be a "TDMS Advanced Spanning" palette with options for configuring and interacting with TDMS spanning.

As soon as we have more complicated data structures (e.g. clusters of arrays), a large portion of the FP real estate is wasted on borders, frames, trims, etc.

 

We need a palette full of "Amish" 😉 controls, indicators, and containers that eliminate all that extra baggage. We have a few controls already in the classic palette, but this needs to be expanded to include all types of controls, including graphs, containers, etc.

 

A flat control consists of a plain square and some text (numerical value, string, ring, boolean text, etc). A flat container is a simple borderless container.  A flat graph is a simple line drawing that would look great on a b&w printer. A flat picture ring looks like the image alone.

 

They have a single area color and a single-pixel outline; if both have the same color, the outline does not show. They can also be made transparent, of course. If we look at them in the control editor, there are only very few parts.

 

Now, why would that be useful?

 

Let's have a look at the data structure in the image. There is way too much fluff, distracting from the actual data. If we had flat objects, the same data could look like the "table" below. Note that this is now the actual array of clusters, no formatting involved! It is fully operational; e.g. I can pick another enum value, uncheck the boolean, or enter data as in the cluster above.

 

Many years ago in LabVIEW 4, I actually made a borderless cluster container in the control editor and it looked fine, but it was difficult to use because it was nearly impossible to grab the right thing with the mouse at edit time.

 

The main problem, of course, is that the object edges completely overlap, making targeted selection with the mouse impossible. (For example, the upper-right corner pixel is the corner of an array, a cluster, another array, and an element at the same time.)

 

So what we need is a layer selection tool that allows us to pick what we want (similar to tools in graphics editing software). It could look similar to the context help shown in the picture, with selection boxes for each line. Picking an object would show the relevant handles so we can interact with the desired object. Another possibility would be to hover over the corner and hit a certain key to rotate through all nearby elements until the right element is selected, showing its resize handles. I am sure there are other solutions.

 

As a welcome side effect, redrawing such a FP is relatively cheap.

 

Message Edited by altenbach on 06-03-2009 09:20 AM

When looking for unexpected behaviour in the time or memory usage of a project, the Profiler is useful and easier than the execution tracer, but it could be made much more useful by adding the ability to monitor for changes and analyse the issues.

 

Mads_0-1695798309247.png

 

Issue:
Currently, to detect how e.g. the memory usage or run count of a VI changes over time, you have to take snapshots, save them, and then compare the values in e.g. Excel (which the saved traces do not directly fit into either...).

Proposed feature: Trends
It would be nice if you could just set the tool to automatically sample and log all/selected numbers regularly and then be able to view the trends. 

 

Proposed feature: Automated Analysis
Having trends will help in manually detecting issues, but the profiler could also have tools that help with this, e.g. highlighting which VIs show continuous growth in memory (a sketch of such a check follows below). This could then be expanded by being able to run the VI Analyzer on any given VI, preferably set up to identify possible reasons for a memory leak (unclosed references, continuous array building, etc.).
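As a sketch of what the automated analysis could do with trend data (Rust, with the growth thresholds as illustrative assumptions; the profiler would supply the real samples): flag a VI whose sampled memory usage never decreases and grows meaningfully over the window.

```rust
// Flag a VI whose sampled memory usage never decreases and has grown by
// more than 10% across the window; both thresholds are illustrative.
fn shows_continuous_growth(samples: &[u64]) -> bool {
    samples.len() >= 4
        && samples.windows(2).all(|w| w[1] >= w[0])
        && *samples.last().unwrap() > samples[0] + samples[0] / 10
}

fn main() {
    let leaky_vi = [1_000_u64, 1_200, 1_450, 1_700, 2_000];
    let steady_vi = [1_000_u64, 1_050, 1_000, 1_020, 1_010];
    println!("leaky flagged:  {}", shows_continuous_growth(&leaky_vi));  // true
    println!("steady flagged: {}", shows_continuous_growth(&steady_vi)); // false
}
```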

I don't know how many times I've added a case structure post-programming, but I do know that there isn't an easy way to make a tunnel the case selector.  Usually I delete the tunnel, drag the case selector down, and then rewire; there should be an easier way.  For Loops and While Loops have an easy way to index/unindex a tunnel or replace it with a shift register, so why can't a case structure be the same?

Case2.PNG 

Case3.PNG 

The Swap Values node only supports a Boolean input on the '?' terminal.

swap-values-error-cluster.png

It should also support an error cluster input, in the same way as the Select node (pictured) and many other nodes with Boolean inputs.