LabVIEW Idea Exchange


I'm working on large AF projects, and I want to be able to pack my actors into PPLs (.lvlibp files), as there are lots of benefits to using PPLs. However, I'm a bit uneasy about distributing the PPLs with the executable, because it's possible to reuse the public methods in a PPL. (The software I write is mostly licensable, but it would still be possible for competitor companies to reuse the public APIs.)

 

What I would like is a way of securing/encrypting PPLs when distributing the executable.

The idea:

Make 'Get LabVIEW Class Parent And Member VI Information.vi' work on the type of the data on the wire, not only the type of the wire itself, so that it can be used together with 'Get LV Class Default Value.vi' to extract information on classes from source distributions and PPLs.

This would make such classes easily available for use in dynamic factory pattern designs.

 

The reason:

The current implementation of 'Get LabVIEW Class Parent And Member VI Information.vi' cannot give me any info I don't already have access to in development. The only classes it will work on are the ones already loaded/known, as it returns results based on the type of the wire instead of the type of the class data on the wire. This limitation (as I see it) greatly decreases the usefulness of the VI.

 

The same can be said for 'Get LabVIEW Class Information.vi', as can be seen here:

Get LabVIEW Class Information.png

 

As can be seen, the wire needs, at some point, to be present and wired into a 'To Variant' for the VIs to return correct results. This will sadly never be the case for any dynamically loaded class from a PPL or a source distribution!

 

I use classes widely for extending functionality, and I often want a dropdown, a list, or a tree of the classes in a specific hierarchy or interface, perhaps even filtered to the ones that implement a certain dynamic dispatch VI. As a workaround I have been using config files created from the development environment, but in 2020 I wanted live integration of new classes by scanning for them, loading them, and registering their data. (In practice, what I do today is copy a PPL containing classes into a common PPL folder and register the classes and their hierarchy/members in a config file. This makes the new classes available to the application without rebuilding it.)
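In textual-language terms, the registry I maintain from that config file is roughly the following (a C++ sketch for illustration only; all of the names are made up):

// Rough C++ analogue of the run-time class registry (all names are hypothetical).
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Actor { virtual ~Actor() = default; };   // stand-in for the common base class

using Factory = std::function<std::unique_ptr<Actor>()>;

class ClassRegistry {
public:
    // Called whenever a new plugin/PPL is discovered and its classes are registered.
    void registerClass(const std::string& name, Factory make) {
        factories_[name] = std::move(make);
    }
    // What populates the dropdown/list/tree: the names of every known class.
    std::vector<std::string> knownClasses() const {
        std::vector<std::string> names;
        for (const auto& entry : factories_) names.push_back(entry.first);
        return names;
    }
    // Dynamic factory: instantiate by name, as selected by the user at run time.
    std::unique_ptr<Actor> create(const std::string& name) const {
        auto it = factories_.find(name);
        return it != factories_.end() ? it->second() : nullptr;
    }
private:
    std::map<std::string, Factory> factories_;
};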

 

An example could be a UI element that allows the user to launch an actor from a ring control. So what types of actors are available? The application cannot know this. Any actor that comes from a PPL or a source distribution will be unknown, so the application cannot properly populate the ring control. It is possible to find all classes by looking at the file extension, but not which hierarchy they belong to. In 2020 this could have been done using the following code (if it worked as I suggest):

Acquire class details.png

Sadly, this will never return anything useful, as it can only look at the LabVIEW Object wire.

 

I guess all the other data type VIs work in the same way; however, that has never been an issue, as those data types could only be mixed once they were already variants, and there isn't (to my knowledge) a 'load data type by name/enum' VI.

 

In my dream scenario I wouldn't even have to load the class into memory to acquire the info. A path-based version of 'Get LabVIEW Class Parent And Member VI Information.vi' would be perfect for use with dynamic factory patterns. This is, however, a minor issue, as the data can be stored between executions instead, but one can dream, right? :)

 

Sorry that my ideas/requests always end up being walls of text...

The Path Type function returns 0 (Absolute Path) if the path is empty.

 

It should return 2 (<Not A Path>), or better yet a new 3 (<Empty Path>), when the path is empty.

 

Sure, I could check whether the control = not a path or = empty path, but it would be nice to have one function to check.

Both the diagram disable structure and the conditional disable structure are intended to allow easy enabling or disabling of blocks of code. These nodes ought to have no effect on a diagram except to remove or add sections of the diagram. But they currently have the side effect of formalizing the blocks of code they surround, as if they were a sequence structure. There are two cases where this is undesirable.

 

1. Without the disable structure, these two loops would run in parallel. 

 conddis_unwantedsync.png

2. This VI arguably ought to terminate: the wire dependency from the loop to the Stop primitive exists only because whatever is in the disabled frame needs the result of the loop; the code in the enabled frame does not. But because the disable structure acts as part of the code, this loop runs forever.

 conddis_shouldterminate.png

When working with Events, the inability to resize the window for choosing new events and editing existing events is really annoying.

 

Every single time I need to create a "Value Change" Event, I need to scroll down.  I invested in a 24" monitor so that I wouldn't need to scroll as much.

 

Please please let us resize this window......

 

Shane.

We have developed an external DLL that is called from LabVIEW using a Call Library Function Node (CLFN). I have noticed a weird behavior regarding DLL detaching (unloading).

Scenario 1
I have 2 VIs calling the DLL. I run them and then close all of them. In this case, the DLL is detached when the last VI is closed.

Scenario 2
However, if I create a project containing the same VIs as in scenario 1, then when I close all the VIs after running them, the DLL stays loaded until I close the project.
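(For anyone who wants to reproduce this: one simple way to see exactly when the DLL is loaded and unloaded is to log from the DLL entry point. A minimal, Windows-only sketch, purely for illustration:)

// Illustration only: log process attach/detach from the DLL entry point
// to see exactly when LabVIEW loads and unloads the library.
#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE instance, DWORD reason, LPVOID reserved) {
    if (reason == DLL_PROCESS_ATTACH)
        OutputDebugStringA("example.dll: DLL_PROCESS_ATTACH\n");   // DLL loaded
    else if (reason == DLL_PROCESS_DETACH)
        OutputDebugStringA("example.dll: DLL_PROCESS_DETACH\n");   // DLL unloaded
    return TRUE;
}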

This behaviour does not seem correct. I think LabVIEW should behave the same in both cases.

What do you think?

Sara

 

[admin edit: I changed the idea title per user request. The original title was "DLL shouldn't stay loaded when all the VIs calling it are closed".]

At present, case structures have right-click options to create multiple cases when an enum is attached to the case selector. If someone wants to reuse the same case structure with a new enum that has fewer items, one has to manually delete one case at a time. Having an option to select multiple cases to delete, or to automatically delete broken cases, would be desirable to speed things up.

As of now, LabVIEW MathScript can handle only 2 indices. For example, MathScript can execute the following:

for i = 1:10
    for j = 1:10
        B(i,j) = i+j
    end
end

but it can't do the following:

for i = 1:10
    for j = 1:10
        for k = 1:10
            B(i,j,k) = i+j+k
        end
    end
end

 

I propose that MathScript should handle more than 2 indices.

The output would then be an n-dimensional array, where n is the number of indices.

According to Wikipedia:

In some languages a class may be declared as uninheritable by adding certain class modifiers to the class declaration. Examples include the "final" keyword in Java or the "sealed" keyword in C#. Such modifiers are added to the class declaration before the "class" keyword and the class identifier declaration. Such sealed classes restrict reusability, particularly when developers only have access to precompiled binaries and not source code.

The sealed class has no subclasses, so it can be easily deduced at compile time that references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they don't exist) or instances of superclasses (upcasting a reference type would violate the type system). Because the exact type of the object being referenced is known before execution, early binding (or "static dispatch") can be used instead of late binding (also called "dynamic dispatch" or "dynamic binding"), which requires one or more virtual method table lookups depending on whether multiple inheritance or only single inheritance is supported in the programming language being used.

 


 

I have previously made appeals to allow certain classes to restrict the loading of new descendant classes at run time, thereby possibly allowing for inlining of dynamic dispatch VIs.  A similar request has already been made in order to facilitate inlining of DD VIs.  I believe that, ultimately, it is support for a "Final" or "Sealed" class that I am looking for.

 

Such a modification to a class would effectively prohibit instantiation of any new version via a factory method and would declare to the compiler that the code defined at compile time is the full extent of the code that will be available at run time, thus allowing the aforementioned optimisations to take place.
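As an analogy only (C++, where the keyword is also "final"), sealing a class is exactly what lets a compiler replace dynamic dispatch with early binding:

// C++ analogy, for illustration only: no subclasses of a "final" class can exist,
// so calls made through that type can be devirtualized (bound at compile time).
#include <iostream>

struct Actor {
    virtual void run() { std::cout << "base actor\n"; }
    virtual ~Actor() = default;
};

struct LoggerActor final : Actor {      // "final": no further subclasses allowed
    void run() override { std::cout << "logger actor\n"; }
};

void drive(LoggerActor& a) {
    // The static type is a final class, so a.run() needs no virtual table lookup
    // and is even a candidate for inlining.
    a.run();
}

int main() {
    LoggerActor a;
    drive(a);     // prints "logger actor"
}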

 

Questions would have to be answered regarding type propagation, and some loopholes may remain regarding dynamic loading of new classes (only dynamic dispatch methods which are visible as instances of the "final" class can actually safely be inlined, for example).  Especially for a specific RT application of mine, this would be a great benefit, as the natural progression of this idea is indeed the ability to write code such that dynamic dispatch VIs are actually inlinable.

It is often easiest to work with data in the form of an array.  It is also helpful in many situations to use arrays inside of clusters.  This is a problem when working with functions that require a data type to be a fixed size, since they don't accept that combination.  I'm asking for full support for fixed-size arrays in LabVIEW and LabVIEW RT.

 

See the example problems attached.  I'm also open to other ideas as long as they are similarly elegant and easy to use.  Also, please correct me if I haven't shown the simplest workarounds, etc.
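(For comparison only, and just as an analogy from a textual language: C++ expresses this combination directly, with a fixed-size array nested inside a struct so the whole aggregate has a size known at compile time.)

// C++ analogy, illustration only: a fixed-size array inside a struct,
// so the aggregate's size is fully known at compile time.
#include <array>
#include <cstdint>

struct Sample {                        // rough analogue of a LabVIEW cluster
    double timestamp;
    std::array<int16_t, 8> channels;   // fixed-size array: always exactly 8 elements
};

int main() {
    Sample s = {};        // zero-initialized; sizeof(Sample) is a compile-time constant
    s.channels[0] = 42;   // normal element access, but the size can never change
    return 0;
}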

 

Thanks

 

(It's also possible this is a bug, but it seems more like a missing feature.)

We are already using the shortcut Ctrl+i for opening the VI properties window.

 

But to open a VI's Icon Editor window, we do

  • Right Click on the Icon → select Edit Icon (or)
  • Double Click on the Icon

 

Think about the improvement in development speed if we had a shortcut, Ctrl+Shift+i, to open the Icon Editor window.

 

This will also come in handy if we are dealing with wider VI windows.

 

Current Environment takes ~2.5 seconds to open the Icon Editor

  • Move the cursor to the VI Icon (~1-2 seconds)
  • Two mouse clicks (double click) (~1 second)
  • (or)
  • Move the cursor to the VI Icon (~1-2 seconds)
  • Right-click on it (~1 second)
  • Select Edit Icon (~1 second)

Proposed Environment will take ~1 second to open the Icon Editor

  • Press Ctrl+Shift+i (~1 second)

The "Run Continously" tools is typically used as a debugging tool to quickly play with inputs for "pure" subVIs that don't have a toplevel while loop (see also my comments here). This sandbox mode has the unwelcome side effect that the computer uses 100% of at least a single CPU core while doing so, starving all other processes that might already run on the computer. It gets even worse if I use "run continuously" on several subVIs at the same time. (Yes, I am a multitasker! :D)

 

Another use of the continuous run button seems to come from a certain other class of users. In this case too, the high CPU use is a serious drawback.

 

All the use scenarios I can think of suffer from this. There is no real advantage to running this fast without taking a breather.

 

My suggestion (first mentioned elsewhere) is that there should be an implicit small wait (e.g. 10 ms) between each restart, so the continuously running VI behaves more politely and is less demanding of system resources.
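Conceptually, the change is nothing more than this (a sketch in C++ terms, not LabVIEW internals; runTopLevelVIOnce is a made-up placeholder for one execution of the VI):

#include <chrono>
#include <iostream>
#include <thread>

// Hypothetical stand-in for one execution of the VI being run continuously.
void runTopLevelVIOnce() { std::cout << "VI ran once\n"; }

int main() {
    for (int i = 0; i < 5; ++i) {   // "Run Continuously", bounded here for the demo
        runTopLevelVIOnce();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // the proposed implicit breather
    }
}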

 

 

I searched but didn't see this idea anywhere yet. There is a problem with ring and combo box typedef constants. Since the values are not actually part of the data type (as they are for an enum), when their typedef values are modified the changes are not propagated like you'd expect. Since I ran into this problem I've seen a lot of posts, but no real solution. Search and replace hundreds of constants... I suppose. Anyway, most assume this is a bug, but it's not. See the link below.

 

Item Lists of Combo Box/Ring Constants Do Not Update from Type Definitions

 

While not a bug, there is missing functionality here, as is evident from the number of posts, questions, and confusion around the topic.

 

The idea is simple: provide some way to force typedef constants for rings and combo boxes to update not just the data type but also the default values. Maybe a fourth typedef option that includes data values?

 

Ring values not updated.JPG

 

**Side note (and plug for a related idea): one motivating factor is the lack of sparse enums. If you have a large spread of name/value pairs, an enum could require hundreds of unused value "placeholders", which makes it too large and cumbersome to use. Either a ring with the appropriate values (that could update) or a sparse enum would probably help many people.**

 

LabVIEW idea exchange: Add sparse enums to LabVIEW

 

Thanks for reading.

I have the habit of pressing Ctrl+Shift+S (Save All), which is annoying when working with read-only VIs. We always get a pop-up enabling us to save the VI in a different place or with a different name, and as a result we get the browse window to save the VI with a different name or in a different path. This happens on clicking OK (which makes sense), but also on clicking the window close button or pressing the Esc key.

My suggestion is to continue the save operation only on clicking OK, and to abort the operation when the user clicks the window close button or presses the Esc key. This would be more meaningful and would make more sense for how the save operation is handled.

I am also fine with adding a Cancel option along with the OK button.

 

AbortSaveOperation-1.PNG

 

AbortSaveOperation-2.PNG

 


When creating a LabVIEW installer, in the Source File Settings dialog, I might change the attributes of several files in the same way.  For example, I might make many files read-only and/or hidden.  Currently, I have to click on each file and change its attributes.  I would like to be able to choose multiple files (either by highlight, control-click, etc.) and change their attributes at the same time to the same setting.

 

 

Pulido Technologies LLC

Discussed here: http://lavag.org/topic/9570-new-wire-type-the-null-wire/

 

 

 

Summary: a right-click menu option "Visible Items -> Sequence Container" on all VIs, primitives, etc. (almost anything on the diagram) to allow us to force flow control with wires, without having to wire into/out of the actual node.

Properties or parameters of methods are often documented (Context Help, Ctrl+H, with the mouse over the property/method). When a control is created to apply data to a property node or a method parameter, the available documentation of the property/method parameter should automatically be added to the control's documentation (description).

 

Example: the following picture shows a subVI of a LabVIEW driver based on an Ag3352x IVI-COM driver. Documentation for the amplitude property is available. When the amplitude control is created, the available documentation should be added automatically to the control's description...

 

 

create control example

Having read THIS thread I realised that nearly the only time I ever use a sequence structure is to time code.

 

Then I thought: why shouldn't the sequence structure have a (selectable) timing output node, so that we don't have to do this manually every time we want to test code speed?

 

Then I thought a bit further: why not have this for ALL structures? It's such a useful tool that it could be used for For Loops and While Loops too.
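For reference, this is the kind of measurement the output node would report automatically (a rough sketch in C++ terms, for illustration only):

// Illustration only: the manual pattern a timing output node would replace --
// read a clock before and after the block of code and report the difference.
#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::steady_clock::now();

    volatile double x = 0;
    for (int i = 0; i < 1000000; ++i) x += i * 0.5;   // the "structure" being timed

    auto elapsed = std::chrono::steady_clock::now() - start;
    std::cout << "elapsed: "
              << std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count()
              << " us\n";
}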

 

Timing for structures.PNG

Shane.

This is something that started as a way to get data back from Actors in non-actor code (for example, web services). I've never cared for the blocking nature of Reply Msgs and the only other built-in option for getting back data is to make everything an Actor, which is not always an option. Promises solve both of those issues.

 

The basic idea is an enforced single-writer, many-reader, cross-thread data type. In the current implementation, Promises are not much more than a locked-down, single-element queue, but you can actually do some pretty cool stuff with just that.

 

In the message sender, we create the promise and return it. The idea here is that the Actor owns the promise and will fulfill it. This gives us very low coupling for free.

Actor_Example_lvlib_Start_Long_Running_Process_Msg_lvclass_Send_Start_Long_Running_Processd.png
Wait on Promise will wait for the Promise to be fulfilled. It is a malleable VI so that a Default Value (for timeout) and Type can be wired. Using the timeout lets us do other things while waiting for data.

Actor_Example_lvlib_Launcherd.png
Inside the Actor, we can set the value with Fulfill Promise. Remember, once the Promise is set, that's it, no changing it again. In fact, Fulfill Promise will error out if called twice on the same promise.

Fulfill Promise.PNG
Something else really cool we can do is fulfill a promise with another promise. This may seem pedantic at first, but it can help keep your code cohesive by passing the responsibility of fulfilling a Promise to another process. For example, if you have a message broker that just forwards a message, you can have the message broker fulfill its promise to the caller with a promise from the callee.

Fulfill with Promise.PNG
We can also reject a promise with an error message that will be returned by any Wait on Promise that tries to read it. Rejecting a promise does fulfill the promise, so you still cannot set the value later and you cannot reject the promise again.

Reject Promise.PNG
Finally, we have Destroy Promise. It does exactly what it says on the tin.

 Destroy Promise.PNG
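For comparison, the behaviour is close to what std::promise/std::shared_future give you in C++; the sketch below is only an analogy, not the LabVIEW implementation:

// C++ analogy only: std::promise / std::shared_future give the same
// single-writer, many-reader behaviour.
#include <chrono>
#include <future>
#include <iostream>
#include <stdexcept>
#include <thread>

int main() {
    std::promise<int> p;                                 // "Create Promise"
    std::shared_future<int> f = p.get_future().share();  // any number of readers may hold this

    std::thread worker([&p] {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        p.set_value(42);   // "Fulfill Promise" -- once only; a second set_value would throw
        // p.set_exception(std::make_exception_ptr(std::runtime_error("rejected")));
        //                 // "Reject Promise": readers get the error instead of a value
    });

    // "Wait on Promise" with a timeout, so we can do other things while waiting.
    while (f.wait_for(std::chrono::milliseconds(10)) != std::future_status::ready) {
        std::cout << "doing other work...\n";
    }
    std::cout << "promise fulfilled with " << f.get() << "\n";

    worker.join();
}   // "Destroy Promise": the shared state goes away with the last handle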


I'd love to hit some more of the design points from https://promisesaplus.com/ (mainly adding Then callbacks), but I figure it's at a pretty good spot to share and get some feedback. I'd also like to try to figure out network communication at some point, but that's probably a ways away (without some help, anyway ;)).

 

I'm hosting the code at https://github.com/kgullion/LabVIEW-Promise if you are interested in checking it out or contributing!

Dear LabVIEW fans,

 

Motivation:

I'm a physics student who uses LabVIEW for measurement and also for evaluation of data. I've been a fan since version 6i (around 2005).

My typical experimental set-up looks like this: a lot of different wires going to every corner of the lab, left to collect gigabytes of measurement data overnight. Sometimes I do physics simulations in LabVIEW, too. So I really depend on gigaflops.

 

I know that there is already an idea for adding CUDA support. But not all of us have an NVIDIA GPU. Typically, at least in our lab, we have Intel i5 CPUs, and some machines have a minimalist AMD graphics card (others just have integrated graphics).

 

So, as I was interested in getting more flops, I wrote an OpenCL DLL wrapper, and (doing a naive Mandelbrot-set calculation for testing) I saw a 10x speed-up on the CPU and a 100x speed-up on the gamer GPU of my home PC (compared to a simple, multi-threaded LabVIEW implementation using parallel For Loops). Now I'm using this for my projects.

 

What's my idea:

- Give an option for those who don't have a CUDA-capable device, and/or who want their app to run on any class of computing device.

- It has to be really easy to use (I have been struggling with C++ syntax and the Khronos OpenCL specification for almost 2 years in my free time to get my DLL working...).

- It has to be easy to debug (for example, it has to give human-readable, meaningful error messages instead of crashing LabVIEW or causing a BSOD).

 

Implemented so far, by me, for testing the idea:

 

- Get information on the DLL (e.g. "compiled with AMD's APP SDK on 7 August 2013, 64-bit", or similar).

 

-Initialize OpenCL:

1. Select the preferred OpenCL platform and device (Fall back to any platform & CL_DEVICE_TYPE_ALL if not found)

2. Get all the properties of the device (clGetDeviceInfo)

3. Create a context & a command queue,

4. Compile and build OpenCL kernel source code

5. Give all the details back to the user as a string (even if everything is successful...)

 

-Read and write memory buffers (like GPU memory)

For now, only blocking read and blocking write are implemented; I had some bugs with non-blocking calls.

(again, report details to the user as a string)

 

-Execute a kernel on the selected arrays of data

(again, report details to the user as a string)

 

- Close OpenCL:

Release everything, free up memory, etc. (again, report details to the user as a string).
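For anyone curious, the host-side call sequence behind these steps looks roughly like the sketch below (plain OpenCL C API, heavily simplified, error handling mostly omitted; the kernel is just a placeholder, not my Mandelbrot code):

// Simplified sketch of the host-side OpenCL sequence described above.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSource =
    "__kernel void scale(__global float* data, const float factor) {"
    "    size_t i = get_global_id(0);"
    "    data[i] = data[i] * factor;"
    "}";

int main() {
    cl_int err = CL_SUCCESS;

    // 1. Select a platform and device (CL_DEVICE_TYPE_ALL as the fallback).
    cl_platform_id platform;  clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id   device;    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, nullptr);

    // 2. Query device properties (clGetDeviceInfo).
    char name[256] = {0};
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("Using device: %s\n", name);

    // 3. Create a context and a command queue.
    cl_context       ctx   = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // 4. Compile and build the kernel source, then create the kernel object.
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "scale", &err);

    // Blocking write of the input data into a device buffer.
    std::vector<float> data(1024, 1.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, data.size() * sizeof(float), nullptr, &err);
    clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, data.size() * sizeof(float), data.data(),
                         0, nullptr, nullptr);

    // Execute the kernel on the selected array of data.
    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);
    size_t global = data.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);

    // Blocking read of the results back to host memory.
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, data.size() * sizeof(float), data.data(),
                        0, nullptr, nullptr);
    std::printf("data[0] = %f\n", data[0]);   // expect 2.0

    // Close OpenCL: release everything.
    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}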

 

Approximate results for your motivation (Mandelbrot set testing, single precision only so far):

10 gflops on a core2duo (my office PC)

16  gflops on a 6-core AMD x6 1055T

typ. 50 gflops on an Intel i5

180 gflops on a Nvidia GTS450 graphics card

 

70 gflops on EVGA SR-2 with 2 pieces of Xeon L5638 (that's 24 cores)

520 gflops on Tesla C2050

 

(The numbers above are my results; the manufacturers' spec sheets may claim a lot more theoretical flops. When selecting your device, take memory bandwidth into account, as well as the kind of parallelism in your code. Some devices dislike conditional branches in the code, and the Mandelbrot set test has conditional branches.)

 

Sorry for my bad English, I'm Hungarian.

I'm planning to give my code away, but I still have to clean it up and remove the non-English comments...