LabVIEW Idea Exchange


Whenever we create an executable file, for example abcd.exe from abcd.vi, that can be run on a computer with the run-time engine of the corresponding LabVIEW version, its controls have whatever default values were set at the time the .exe was built. I propose an option to change those default values after the executable file has been created.

I propose a property node that tells whether a control was wired when This VI was called as a subVI.

 

Wired.png

 

Sometimes it is practical to be able to distinguish whether the value in a control was wired to it (the data arrived via dataflow), or whether the control was unwired and the value is therefore the control's default value. Even comparing against the known default value of the control isn't enough, as the program logic may depend on the source of the value.

 

Consider an FG that stores two data elements:

 

FG.png

 

You'll have to have two Set functions for this, one for Data 1 ("Set data 1") and one for Data 2 ("Set data 2"). If you have only one Set function, you add the constraint that both data sets have to be present (wired) every time you set either of them, or else the unwired set will get overwritten in the FG with the default value of the unwired control. If the FG could decide programmatically whether the value was wired to the control, or whether it was merely the default value of an unwired control, it could decide whether or not to overwrite its stored copy. The data values could be the same in the two cases, but the actions very different. This is just a simple example, but I have many much more complex cases that would get much simpler if I could tell where a control value came from. In the above example you could even dispense with the Function input altogether, if I could tell whether any data inputs were wired.

 

A polymorphic VI will remedy some of these issues, but not all, and in other cases a polymorphic VI is a drastic solution to a simple problem. It's also not an issue in a top-level VI, as such VIs run only once (only subVIs get called again and again while depending on prior runs).
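For readers coming from text languages, here is a rough analogy (a Python sketch of my own, not anything LabVIEW offers today) of what the proposed "was this input wired?" check would enable: a sentinel default distinguishes "the caller supplied a value" from "the input was left unwired", even when the supplied value happens to equal the default.

_UNWIRED = object()          # stands in for "this input terminal was not wired"
_store = {"data 1": None, "data 2": None}   # plays the role of the FG's shift register

def set_data(data1=_UNWIRED, data2=_UNWIRED):
    if data1 is not _UNWIRED:        # overwrite only if the caller actually "wired" it
        _store["data 1"] = data1
    if data2 is not _UNWIRED:
        _store["data 2"] = data2
    return dict(_store)

set_data(data1=42)   # data 2 stays untouched instead of being reset to a default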

LabVIEW should be able to update properties like background color, and so on, for a whole table at once instead of changing the active cell one by one.

As it is, this is difficult to code and very slow.

Maybe there is a very good technical reason why this idea is not a good one.

 

Problem: If I want to load a VI dynamically into my application, the subVIs of the dynamically loaded VI cannot be found.

 

My Plug-In Application (EXE) is Not Calling My Plug-Ins Correctly

 

Solution: Add a Boolean input (or an options flag) to Open VI Reference that loads the VI and modifies the search path so the subVIs can be found.

 

I know there are several workarounds but I would like this to be easier.

This post is intended to present a solution to a problem I was struggling with when using LV2 global variables in my applications.

I don't claim that this is a unique or elegant solution or even the first of its kind, but I'd be curious to hear comments and suggestions about it.

Since it might be a long post, I will split it into 3 parts.

Part 1 (below) will discuss the intended functionalities.

Part 2 will present my current implementation.

Finally, I will try to summarize all this into Part 3, opening the discussion.

 

So you may want to wait until that part is published before posting your comments (if any).

 

Part 1: What do I mean by Generalized Functional Global (or GFG)?

 

The LV2 global variable (or functional global, FG in short) is a well known structure (see for instance the beginning of this thread). It is a subVI which can store data locally in between calls. Usually it comes with a "Write" (or "Set") action and a "Read" (or "Get") action and a few inputs (and the same number of outputs). The inputs are the variables you want to update and the outputs, their corresponding values.

The advantage of this design over the standard LV global is that there is only one copy of your variables in memory. If you are only using scalars or strings, etc., using LV globals might not be a big problem, but if you start storing large arrays or data structures, it can result in a significant performance penalty.

Note that there are other ways to pass data from VI to VI (queues and notifiers). Here, I am concerned with long-term storage available from many parts of an architecture (e.g. subVIs or dynamically launched VIs).

To begin with, there are two major limitations with any design based on FGs (at least two that I am not happy with):

 

1) First, as emphasized above, due to the limited connectivity of a VI's connector pane, an FG cannot store a large number of variables. You can use clusters, but that is not always very practical.

 

2) Second, if you try to cram as many variables as possible into a single FG, you will probably run into the issue that you don't necessarily want to reset all variables at once, but maybe just one or two. That calls for a more sophisticated set of actions (Write Variable 1, Write Variable 2, etc., and possibly Write All, Clear All, etc.).

In practice, if you use a lot of these FGs, you will encounter a third problem:

 

3) In which FG did I store this @$%&! variable?

 

Some of my applications contain many of these FGs, and I realized this had become impractical to handle when I started duplicating them, having forgotten that I was already handling variable "X" in an existing FG.

 

The obvious solution (apart from NOT using FGs) is to have a single FG designed to handle an unlimited number of variables of ANY type that can be modified at ANY TIME from ANYWHERE.
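As a minimal sketch of that behaviour (Python, my illustration only, not the implementation presented in Part 2): a single store keyed by name, holding values of any type, writable and readable at any time from anywhere.

_gfg = {}

def gfg_write(name, value):
    _gfg[name] = value            # any type, added or updated on the fly

def gfg_read(name, default=None):
    return _gfg.get(name, default)

def gfg_names():
    return sorted(_gfg)           # answers "in which FG did I store this variable?"

gfg_write("# Components", 12)
gfg_write("Calibration", {"gain": 1.02, "offset": -0.4})
print(gfg_read("# Components"), gfg_names())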

 

I first looked at the WORM (Write Once Read Many) design by tbob (you've got to go to the end of the thread to download the final version of the WORM VI). It is a very clever and compact solution for storing different types of variables within a single variant.

Two of the limitations I saw in this design are that you need to know:

 

1) the NAME of the variable you want to access.

 

2) the TYPE of the variable that you are WRITING or READING.

 

Let me clarify those two points.

 

It seems obvious that you HAVE TO know the name of the variable you are interested in. Well, that’s maybe true when you are designing your application. But after a month or more doing other things, it is not obvious anymore whether you’ve stored the number of components as “# Components” or “Component #” in that other part of the program that…where did I save it, BTW? You get my point…

 

The second point is apparently taken care of by tbob’s solution of outputting the variable (as a variant) as well as its type. The problem is that this “type” provides a very limited description. For instance, what do you do with a “cluster” type?

 

Finally, since I want to be able to modify any variable at any time, the "Write Once" feature doesn't cut it for me. This could easily be modified, but considering the previous comments, I decided to change the architecture some more.

 

I will describe my solution in the next part.

 

Oftentimes I have a set of code in a For Loop that takes a while to complete, so I put a progress bar on the UI. The problem is that the progress bar defaults to a 0-100 range, and my For Loop normally does not have 100 iterations.

So... I can either rescale the progress bar to the number of elements, or I can use "N" and "i" to calculate the percent complete and use that.

 

Since I like to keep my progress bars as 0-100 (which works nicely to also show a percent complete), and % complete is useful elsewhere, I generally end up calculating the % complete of my For Loops.

 

Well, why can't NI calculate this for me automatically? Something like expanding the "i" iteration counter in the loop to also have a "%" box right next to it.
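The calculation itself is trivial, which is the point; a rough sketch (Python, my illustration), where update_progress_bar stands in for whatever updates the UI:

N = 37                                # number of iterations, rarely 100
for i in range(N):
    percent = 100.0 * (i + 1) / N     # what the proposed "%" box would output
    # update_progress_bar(percent)    # hypothetical UI call, for illustration only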

 

The example shown below should probably wire as a double instead of an int, but you get the idea!

 

ForLoopProgress.jpg

It's good to have a tool like Quick Drop, but it would be great if we could also see our own VIs, custom controls, custom indicators, and other objects on our PC in the list.

 

This could be achieved by remembering them when they are created or used for the first time, so that I don't have to go to Select a VI..., search the path, and call the VI every time.

The only way to do this right now is by placing the VIs in the user.lib or instr.lib directory of LabVIEW.

But that is not what we always do, right? We manage the folders and VIs in different locations.

 

It should work just like any good desktop search.

I would suggest that NI make the datatypes of the "Quotient and Remainder" VI consistent.

 

If you wire a double to the VI, the outputs are of datatype "double"; however, in my opinion this is not consistent with the mathematical definition of the quotient, floor(x/y), and the remainder, x-y*floor(x/y). The output of the VI is always an integer and should therefore also be of an integer datatype, thus one of the following (I64, I32, I16, I8).
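As a worked example of the cited definitions (a Python sketch, for illustration only):

import math

x, y = 7.0, 2.0
quotient = math.floor(x / y)     # floor(x/y) = 3, a whole number even for double inputs
remainder = x - y * quotient     # x - y*floor(x/y) = 1.0
print(quotient, remainder)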

I had a customer call in who wanted to be able to increase the limit on how much data Excel allows us to import. He found a workaround in which he creates a Visual Basic macro in Excel; he uses ActiveX in LabVIEW to open Excel, which puts the macro into use and overrides the limitation. He was hoping that LabVIEW could do this for him and others so he would not have to write separate code in Visual Basic.

VI Analyzer complains if you use the Build Array function or Concatenate Strings function in a loop, because these functions negatively affect memory and processor usage in a loop. Using both of these functions in loops brings certain advantages to code. Primarily, you can dynamically build strings or arrays, which is helpful because sometimes it is hard (if not impossible) to know how big a string or array is going to be before a loop is run.

 

The first image below shows a loop with both of these functions that will cause a memory leak as the loop is run.

 Build Array and Con String.PNG

Build Array and Concatenate String in Loop - Will Cause Memory Leak

 

I got to wondering... Is there a way to dynamically build arrays and strings in a loop that will not cause a memory leak? In my limited testing, it looks like the code in the second image performs the same function as the Build Array and Concatenate Strings functions, but without the poor memory performance. I can run the code below for long periods of time and not have a memory leak. The instant I switch over to the case that uses Build Array and Concatenate Strings, the code begins to chew up memory. My idea is this: replace the internals of the Build Array and Concatenate Strings functions with the code boxed in red in the image below. It is preferable that the code below be contained in its own function, because it clutters up any diagram with the extra nodes.

 

If putting this code into the Build Array and Concatenate Strings functions is technically possible, it will allow people to dynamically build arrays and strings in loops without the costly memory performance. One note: the code that builds an array in the second image is scalable, but the code that builds a string is not. If this is implemented, the code that builds a string should also be made scalable, while still being based on the non-leaking approach shown below.

Build Array and Con String - No Memory Leak.PNG

Replace Build Array and Concatenate Strings with the Code Boxed in Red
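For readers outside LabVIEW, a rough text-language analogy of the difference (a NumPy sketch of my own, not the poster's diagram): growing an array every iteration reallocates and copies, while pre-allocating once and replacing elements in place does not. The usual LabVIEW fix is to initialize the array up front and use Replace Array Subset inside the loop, which is presumably what the red-boxed code does.

import numpy as np

n = 10_000

# "Build Array in a loop" style: a new, larger array is allocated on every iteration.
grown = np.empty(0)
for i in range(n):
    grown = np.append(grown, i)      # O(n) copy each time -> O(n^2) overall

# Pre-allocate once, then replace in place (analogous to Initialize Array + Replace Array Subset).
fixed = np.empty(n)
for i in range(n):
    fixed[i] = i                     # no reallocation inside the loop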

 

In many of my programs, I frequently have to issue a command, then wait for a period of time before reading a response. Usually, if there is a timing delay in code, you end up with something like this:

 

timing2.png

 

To do this more neatly, I created a timing subVI with errors wired in and out, and a number wired to it to instruct the program to stay in that subVI for that period of time. It takes up much less space.

 

timing3.png

 

By doing this, I can easily control timing and execution. However, it'd be nice to rework the 'milliseconds to wait' function so we could wire errors through it, as this would make block diagrams so much neater...
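A rough text-language analogy of that timing subVI (a Python sketch of my own, purely illustrative): a delay call that accepts and returns an error value, so it can sit in a chain of calls and enforce ordering.

import time

def wait_ms(milliseconds, error=None):
    # error in, error out, with the delay in between -- like the poster's timing subVI
    if error is None:                    # skip the wait if an upstream error arrived
        time.sleep(milliseconds / 1000.0)
    return error                         # thread the error through to the next call

err = None                               # "no error" coming in
err = wait_ms(500, err)                  # stay here for 500 ms, then read the response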

 

I am finding LabVIEW execution speed to be extremely slow. I seem to recall documentation indicating that LabVIEW passes all data/variables by value by default. I think that if LabVIEW defaulted to passing data/variables by reference instead of by value, execution speeds would greatly improve. Is there an option that would allow one to change this?

I've recently run into an issue when using external code (ActiveX to be exact) where I needed to explicitly call garbage collection in a similar fashion to how it's done in C# (C Sharp).

The "Request Deallocation" function isn't true garbage collection.

 

A C# example would be:

someComObject = null;   // drop the managed reference to the COM object
GC.Collect();           // ask the CLR to run garbage collection now

 

 

There are situations where it is necessary to explicitly call garbage collection.

Multi-Threaded Interrupt Management Capabilities in LabVIEW

 

Background

 

Event interrupt management works well in LabVIEW if one's programme is small and developed within a single overall Event Structure encapsulated within a While Loop.

 

The problem comes when the software architecture demands a separation between the user-interface management (interrupt) functions and the sequential and looped programme structure (executable) components of the code, for modular programming. In this configuration, waiting for events to occur before code execution, pausing code execution (once started), stopping code execution, or even aborting code execution (with the ability to close the application down cleanly prior to exit) become challenging operations to implement within LabVIEW.

 

This software architecture design challenge is exacerbated if branching within the looped programme structure (the executable code segment) is also a requirement.

 

Two Key Requirements

 

Ideally, the following two capabilities, if implemented within LabVIEW, would solve all of the aforementioned problems:

 

  1. To be able to interrupt a While Loop, pause it, and if appropriate reset the [i] loop index back to zero. If controlled via the Event Structure, this would make Event Structures and While Loops compatible with each other, which is not the case today.
  2. To be able to interrupt a Flat Sequence or Stacked Sequence structure and break out at any individual "frame" in that structure, based upon a specific event interrupt or even an abort, which when controlling hardware can be a critical request requiring immediate action! Today it is not possible to break out of a Flat or Stacked Sequence structure; one has to wait until the complete sequence has executed.

 

Current Tools

 

Current LabVIEW, if used with care, can overcome the present While Loop and Flat Sequence architecture limitations to some extent, but the workarounds are very cumbersome to use and require an "eagle eye" and good software tracking and debugging skills!

 

While Loop with Pause and Stop buttons

 

Set Occurrence

 

Wait on Notification (Front Panel activity)

 

A not-yet-complete explanation of the key multi-threaded programming issues is attached. If it requires updating to make it complete, do let me know.
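For comparison, here is a rough sketch in a text language (Python threading, my own illustration, not an existing LabVIEW capability) of the pause/abort control over a running loop that the two requirements above describe:

import threading, time

pause = threading.Event(); pause.set()     # set = running, cleared = paused
abort = threading.Event()

def worker():
    for i in range(1000):                  # the looped "executable" code
        pause.wait()                       # blocks here while paused
        if abort.is_set():                 # break out at any frame on abort
            break
        time.sleep(0.01)                   # one frame of work

t = threading.Thread(target=worker); t.start()
pause.clear()           # pause from the UI side
pause.set()             # resume
abort.set(); t.join()   # abort and shut down cleanly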

It would be good if, when running an application in debug mode, other LabVIEW projects remained enabled and editable, so I could continue working on another project while waiting for the debugged application to stop at a breakpoint.

I don't know if this idea belongs in the "LabVIEW" Ideas Exchange necessarily, but this idea is something that would really make a big difference to my LabVIEW development, so I offer it anyway: "Parallelise the FPGA Compiler to take advantage of modern multi-core computing power".

 

The FPGA compiler takes approximately four hours to compile my large FPGA VIs, which makes for long and tiresome debugging processes. It's clear that the compiler uses only one core of my CPU when compiling. If the compiler could be written to take advantage of the many cores of today's multi-core computers, it could potentially reduce my compilation times to an eighth! (Where I work we have an eight-core number crunching server ideal for just this task, and I'm sure we'll get even greater core counts in the near future - thinking GPU here).

 

I know the compiler is probably the intellectual property and responsibility of Xilinx Corp, and not National Instruments, but I expect NI can give them a big push if we all asked nicely for it!

 

Thoric

It would be better to disable the functionality of the function keys while code is running.

 

One incident I came across:

I assigned the ESC key to a control, and while running the code I accidentally pressed F1 (which is not assigned to any control); suddenly the help file started opening and my whole code hung. I don't think anyone needs to open the help file while code is running, so it would be better to disable the default functionality of the function keys while the code is running.

 


After deciding to post an idea for a "parallel" structure, a search revealed that the idea for a Parallel Execution Structure has already been proposed by gvholland here.

I gave my kudos to that idea because I believe it would be very useful. In order to make a parallel structure even more useful, I propose adding some features that would make it more convenient for those of us who might use it in code that must execute in parallel for performance and functional reasons. It has been commented in the other thread that parallel code should be placed in subVIs, and I concur with this view. However, there are instances where this is either inconvenient or impracticable. Consider the following example:

 

An application needing to perform simultaneous PID control on 32 channels must execute in parallel (only 8 channels shown for clarity):

 

23998iF3A3D6145B22A221

 

Now quadruple the number of channels in this scheme, and you can have a pretty big diagram with lots of wires. Also consider the routine task of initializing that "clustosaurus" or "classosaurus", as in this example:

 

24000iDFC8181255144D80

 

We've all probably tried the scheme wherein we put a case structure inside a FOR loop and wired the iteration terminal to the case selector, as in these examples:

 

24002i0E33EE6CA3D6316A

 

24004iB398DF500188AD49

 

That's clean and easy, and allows the user to create instances of the reentrant VI by duplicating cases. But that architecture forces the VIs or code to execute sequentially. The new parallel FOR loop can boost performance of these techniques and create parallelism. But I would like a basic parallel structure that cleanly handles some routine tasks by adding some useful I/O nodes, a la the In Place Element Structure.
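As a rough text-language analogy (a Python sketch of my own, not the proposed structure itself): the FOR-loop-plus-case-structure scheme dispatches each channel's code by index but runs it one iteration at a time, whereas mapping the same per-channel code over a pool runs the channels concurrently, which is roughly what the proposed structure would do natively.

from concurrent.futures import ThreadPoolExecutor

def pid_channel(i, setpoint):
    # stand-in for one frame / one reentrant subVI instance
    return f"channel {i}: controlling to {setpoint}"

setpoints = [20.0 + i for i in range(8)]

# Sequential: one channel at a time, like the FOR-loop-with-case-structure scheme.
sequential = [pid_channel(i, sp) for i, sp in enumerate(setpoints)]

# Parallel: all channels at once, like the frames of the proposed structure.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(pid_channel, range(8), setpoints))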

 

I propose the following structure, or something similar:

 

24006iEFA2DE50585DBE3A

 

This structure is drawn here with some proposed I/O nodes and tunnels.  This is by no means the complete set of I/O that might exist, but rather a starting point.

 

Cluster unbundle/bundle node:

 

 24008iB135E86434F20934

 

This node accepts only “brown” clusters, or clusters of Booleans.  The elements are passed to each frame in corresponding index order, element 0 to frame 0, and so on.  Once added to the structure, a single unbundle/bundle terminal pair appears in each frame. Much like a bundle function that has its center terminal wired, the bundle terminals may be left unwired.  The values of unwired elements remain unchanged.  Any cluster wired to this node must have the same number of elements as the parallel structure has frames.  If not, the wire is broken.

 

Array index/replace node:

 

24010i570ABF83C7044CDA

 

This node auto-indexes an incoming array and provides a replace-array-element node on the right. Note there is no index value I/O as with the IPE, since the parallel structure auto-indexes the array and distributes/replaces the elements across the frames. If an array has fewer elements than the number of frames at run time, the node returns default data for the undefined elements, exactly as an Index Array function does, but the structure returns a warning or error (I can't decide which). The output array would always have the same number of elements as the structure has frames, or the same number as the input (can't decide which). The replace-element node on the right must be wired in every frame, just as a replace array element structure must have all of its exposed elements wired.

 

Cluster unbundle/bundle by name node:

 

24012i2C2666661000FEB0

 

24014i1D92D4684B152FC9

 

This node is tricky, but I decided to take a stab at it anyway.  The node is created and visible on both sides of the structure.  However, unlike the IPE, the unbundle/bundle terminals on either side can be of different sizes and element selections, and can optionally be unused on either side, or both sides, within the individual frames.  Unused terminals appear with the same symbol as the center terminal of a bundle function, as shown in the proposal drawing.  If an element is selected for bundling within a frame, then it is unavailable for bundling in all other frames.

 

Indexing and non-indexing tunnels:

 

24016i58EE71B2884A8F01

 

Non-indexing tunnels function somewhat like they do on a sequence structure. Input tunnels provide data to all frames; non-indexing output tunnels may only be wired in one frame. Unlike sequences, however, the data arriving at output tunnels would be free to flow out of the structure immediately, which will seem weird and violates the "whole structure must complete" convention. But remember, this is a parallel structure. Like sparks shooting off the bolts in the monster's neck while it's alive, it's gonna be weird by default.

 

Indexing tunnels are different. Like the auto-indexing node, auto-indexing input tunnels distribute the array elements across the frames. If the array size is smaller than the number of frames, either the frames execute with default data or the undefined frames don't execute, and the structure returns an error or warning (help me define this). Auto-indexing output tunnels behave like output tunnels from case structures: either all frames must be wired, or the tunnel must be configured to use default data if unwired. Unlike the non-indexing output tunnel, data from this tunnel is not available until all frames have completed execution.

 

Error I/O Nodes:

 

24018iB1B7660563C17E73

 

There are error inputs/outputs for the structure as a whole, and for each individual frame.  The structure error IO is situated in the lower left and right corners, naturally.  The frame input and output terminals can both be optionally hidden or exposed in each frame, and also slide independently of each other up and down the left and right sides of each frame in which they are exposed.  The structure distributes the incoming error among the exposed frame error input terminals, and merges the frame output error values to the structure output terminal, along with any messages generated by the structure itself.

 

So what do you do with this “Frankenstructure”?  Well, here are a couple of the aforementioned examples rewired using this hypothetical beast:

 

24020iFA3810D047850DB3    24024i4CB073DB117956CF

 

 

Of course there could be other cool things, like a CPU core selector for the frames, etc.  Just let your imagination, (or nightmare, depending on how you see it) run wild!

Currently I'm busy with GOOP and I came across the following problem. 

 

I have a validator class. The purpose of this class is to validate data. The validator class has a number of children...

 

Validate IP address

Validate string length

Validate inRange number

Validate Alpha

 

The main validator class has a function called "valid?". This function has 3 inputs and 3 outputs:

 

Inputs:

- Object

- The data that must be validated

- Error

 

Output:

- Object

- Valid?

- Error

 

The child classes must inherit this function and override it. The problem now is that each of the above validators has a different datatype that must be validated...

 

Validate IP address has a string as input

Validate string length has a string as input

Validate inRange number has a number as input

Validate Alpha has a string as input

 

Now you might see the problem. For the children to inherit the function from the main validator class, the connector pane must be the same, including the datatypes... This means that I have to choose in my main function whether to use a string or a number as the input... This is something that I don't want... I want to be able to select a datatype called "yet unknown datatype" in the "valid?" function of the main validator class, so that I can use any datatype input in my children that is suitable for that implementation.

 

 

My idea is thus to create a new kind of datatype which represents "any kind of datatype known to LabVIEW", and which can be used in functions of a main class that are inherited by its children, each of which uses a different input datatype.
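For comparison, text-based OOP languages usually cover this with generics; here is a minimal sketch (Python, my illustration only) of how a "yet unknown datatype" parameter on the parent is pinned down by each child:

from typing import Generic, TypeVar

T = TypeVar("T")                      # the "any kind of datatype" placeholder

class Validator(Generic[T]):
    def valid(self, data: T) -> bool:             # same "connector pane" for all children
        raise NotImplementedError

class ValidateIPAddress(Validator[str]):
    def valid(self, data: str) -> bool:
        return data.count(".") == 3               # toy check, illustration only

class ValidateInRange(Validator[float]):
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
    def valid(self, data: float) -> bool:
        return self.lo <= data <= self.hi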

 

ps. Now you could maybe suggest: why not use a variant datatype? Yes, this is possible, but the problems are:

- I would have to cast the data back

- It isn't very neat programming; the variant solution is, in my opinion, more of a hack to make the code work.

 

pss. Yes but if you would do this... then...

- Yes, there are probably a few more workarounds thinkable, such as creating two "Valid" VIs, one inherited (Valid?) and one unique to the child (_Valid?), but these are in my opinion still workarounds and do not really provide the functionality that is needed, which is pretty common in OOP languages.

 

psss. If anyone knows a better title for this idea, let me know!

Enable a Sub VI to launch as a daemon without having to open a reference to it using its path.  The VI Properties page would look like this:

 

21429iB830C4B88A795136

"Wait until done" would be checked by default, and "auto dispose reference" would be left false by default. So instead of parsing the path to the VI on disk and using a method to run it:

21433i0CFC0F873277247C

I just set the correct properties in the VI Properties page and drop the subVI on the block diagram of the calling VI.