In one of my recent projects I had to call a custom-made .NET assembly in LabVIEW which provides several events to register for. While .NET support is generally available (within known limitations), there is one little-known "feature" that caused big trouble for me and will cause trouble for anybody who wants to do something similar:
(Please note that whenever I talk about anything in .NET below, I describe it from the C# point of view.)
Let me give an example code in C# to demonstrate the implementation for use in LabVIEW:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Text;

namespace SUPERSPECIALPROJECT
{
    // delegates
    public delegate void myDelegate( object sender, myEventArgs e );

    // the event args (plain example)
    public class myEventArgs : EventArgs
    {
        // some member variable
        private string m_strMessage;

        // the constructor
        public myEventArgs() {}

        // a public property
        public string Message
        {
            get { return m_strMessage; }
            set { m_strMessage = value; }
        }
    }

    // the public object I want to use in LabVIEW
    public class myLVObject
    {
        // create the event for registration in LabVIEW
        public event myDelegate onMessageChanged;

        /*
         * I don't want to write any more here, but basically the event above is
         * fired whenever my asynchronous process feels like it.
         */
    }
}
Okay, so now I...
- Build the .NET assembly
- Create an instance of myLVObject in LabVIEW
- Wire the class wire to the Register Event Callback node and select the 'onMessageChanged' event.
- Create the callback VI and access the data of 'myEventArgs' via the 'e' parameter, no problems.
So far so good.
However, even though my .NET object runs fine in C# (yes, I tested it in C# first), LabVIEW never runs the callback VI. The reasons were hard to find, and I actually had to call NI directly about this. We did some tests and I finally got confirmation of my assumptions:
1) The asynchronous process correctly fires the event, but for some reason the developer does not pass any 'sender' (the object is just NULL). -> LabVIEW will not execute the callback VI
2) The asynchronous process correctly fires the event, but a value in my 'eventArgs' implementation is NULL (say, the parameter 'm_strMessage' is NULL). -> LabVIEW will not execute the callback VI
As I have no access to the asynchronous process and no chance that it will ever be updated for me, I wrote a wrapper that provides my own set of events and replaces all NULL values with empty values (e.g. m_strMessage becomes "" instead of NULL). -> Finally LabVIEW will accept the love of my events
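For illustration, here is a minimal sketch of that wrapper idea, reusing the names from the example code above (the real third-party object is of course different):

// wraps the original object and re-raises its event with all NULL values sanitized
public class myLVObjectWrapper
{
    // LabVIEW registers its callback VI on this event instead of the original one
    public event myDelegate onMessageChanged;

    private readonly myLVObject m_inner;

    public myLVObjectWrapper()
    {
        m_inner = new myLVObject();
        m_inner.onMessageChanged += OnInnerMessageChanged;
    }

    private void OnInnerMessageChanged( object sender, myEventArgs e )
    {
        // replace NULL payloads with empty values so LabVIEW accepts the event
        myEventArgs safeArgs = new myEventArgs();
        safeArgs.Message = ( e == null || e.Message == null ) ? "" : e.Message;

        myDelegate handler = onMessageChanged;
        if( handler != null )
            handler( sender ?? this, safeArgs );   // never pass a NULL sender
    }
}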
This behavior is confirmed by NI. The actual event is somehow compared to the callback VI, which will only be executed if the types match. NULL variables are not recognized, so to LabVIEW they look like a completely different data type. This will not be changed in future versions unless I get it through the Idea Exchange.
The most annoying thing is that if there is any NULL variable anywhere within the scope of either 'sender' or 'e', the event will fail to execute in LabVIEW, and LabVIEW only. I know NULL is not very good practice, but there is no way to replace NULL if you don't have access to the sources (as in my case). The current solution using a wrapper does work, but it takes a lot of time to implement and it has to be maintained.
Finally for anybody who kept reading so far:
I don't see any reason why this behavior could not be changed (LabVIEW currently just does nothing), and I therefore suggest two enhancements:
1) Allow NULL variables (auto-replace them with empty variables) <-- might be a hard challenge
2) Warn me whenever a callback VI does not match the type of the connected event, as it is currently almost impossible to track properly.
The latter might be the easiest to implement and would already partially cover the issues from 1). Anyways, we definitely need a better way to catch such issues in the future.
Presently LabVIEW and its toolkits ship with a number of examples. At the same time there is a massive amount of information and example content on the web, covering both the shipping examples and much more. When I am searching for an example I often can't remember whether I saw it on the web or in the shipping examples. So first I open the Example Finder, a tool that I really like, and search for things there. Then, if I don't find what I am looking for, I open up a web browser and search the NI Communities, then the NI forums, and then other forums (like LAVAG.org).

It would be nice if, instead of having to use two programs, the Example Finder could give a list of possibly related items on the NI forums. I think this could be managed using the existing tagging feature of the forums, perhaps with an #example_code tag. A crawler program could then parse through the tags and grab the links to pages of possible matches, populating a database. This database could then be accessed by the Example Finder for related info.
Another option would be for me to be able to find a link I like and attach it to my example finder or perhaps to my NI.com community account. That would be useful.
Actually, if there were an API for the Example Finder, that would work too.
We are trying to work with an OPC server with some array tags defined. We can access the array tags perfectly.
If you request a single value from that OPC server (through an OPC client like Wonderware, Ifix, Control Maestro (Wizcon), WinCC, etc.), the OPC server sends a timestamp indicating when that value was acquired. I have confirmed that there's no way to request single values from an array tag (neither with DSC nor with DataSocket).
I don't know how hard this feature would be to develop, but I think it would be great.
The Duplicate Frame method of the DisableStructure class in scripting currently returns error 1072 ("This property or method is not yet implemented"):
If we could programmatically duplicate frames in Diagram Disable Structures or Conditional Disable Structures, Quick Drop and other G-based editor features could benefit.
I would like to see an option to drop While Loops in the code with different colors. It currently takes a couple of steps to change the color, but this could easily be done in an automated way.

It would make describing the code and adding comments easier and faster, especially for smaller designs.
I was thinking that defining packed project libraries as the source of the VIs used by the LabVIEW environment would speed up everything, e.g. loading LabVIEW, loading palettes, linking dependencies, building executables, etc., and would also solve linking dependency conflicts.

What we could do is add a packed project library of the standard LabVIEW functions as the base of the palettes, but since we want users to be able to see what's inside, we would still leave the VIs in the same folders. This way the user could choose whether to use the packed versions or the unpacked versions from disk.

Only the VIs with a specific unpacked instance would then have to be loaded as single VIs, and we would have everything nice and clean.
When Creating a LabVIEW Development Tool and Integrating into the LabVIEW Menus, placing items in the LabVIEW\wizard folder causes them to appear in the File>> menu of LabVIEW. However, there's a known issue that custom menu items in the wizard directory will be displayed in the "File" menu for VIs, but not for projects or in the Getting Started window. IMO, this is a bug. However, I'm guessing that to get this fixed, we should call it an "idea" that needs implementing 🙂 Either way, it would improve the ability of add-on developers to extend LabVIEW.
My "idea":
Items placed in the LabVIEW\wizard folder should show in the File>> menu of the LabVIEW Project Explorer and Getting Started windows (just as they do for VIs).
This suggestion is a bit technical and is posted here at the suggestion of the National Instruments support center.

When building a shared library from a VI, the run-time engine makes use of a configuration file with the following name: <name of the executable program which links in the DLL>.ini.

The fact that the name of this file is not configurable by the user is EXTREMELY annoying in my application (the LabVIEW DLL is a plug-in for third-party software) because it causes corruption of the calling program's configuration file.

It seems to me that this problem could easily be solved by making the name of the ini file used by a LabVIEW-built DLL user-selectable. It would also improve the interoperability of LabVIEW with third-party software.

I really hope to see this improvement as soon as the next release of the software.
Setting the option "Separate compiled code from source file" makes the .vi file NOT contain any executable code.

The forum filter accepts .vi files attached to posts. But if you attach VIs saved with this option set, the post is declined with the remark that the file type is invalid.
I recommend updating the attachment filter to accept VI files with compiled code removed.
The output from Flatten To XML is not as elegant as it could be. A big advantage of XML is that it is both human- and machine-readable, but Flatten To XML seems to really neglect the human aspect, and the result can be a bit awkward to process.
Here is the output from a simple structure using .net serialization:
It's still not as elegant as the .NET result, but I think it looks better than the original. This may seem like a trivial issue with the small XML files I presented, but when you are storing much larger data structures in XML it starts to become a big mess.
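For anyone who wants to reproduce the comparison, here is a minimal sketch of the .NET side; the structure is made up purely for illustration (my screenshots use a different one):

using System;
using System.IO;
using System.Xml.Serialization;

public class Measurement
{
    // public fields are picked up by XmlSerializer automatically
    public string Name;
    public double Value;
    public int Count;
}

class XmlDemo
{
    static void Main()
    {
        Measurement m = new Measurement { Name = "Voltage", Value = 1.23, Count = 5 };

        XmlSerializer serializer = new XmlSerializer( typeof(Measurement) );
        using( StringWriter writer = new StringWriter() )
        {
            serializer.Serialize( writer, m );
            // element names come straight from the field names, roughly:
            // <Measurement><Name>Voltage</Name><Value>1.23</Value><Count>5</Count></Measurement>
            Console.WriteLine( writer.ToString() );
        }
    }
}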
As part of a review of a shipping product, I want to make sure that all of our LabVIEW-built executables are using the same version of LabVIEW so that we also only have to ship one version of the LabVIEW Run-Time Engine. Everyone I've talked to has given me advice on things to change at export time, build time, or even run-time to know the version of LabVIEW it was built in (e.g. the App.Version property), but what if I want to know the version of an EXE I have already built?
Here are the methods I've tried so far:

1. I created a set of VIs that do it crudely by reading the EXE file in as if it were a text file, finding the mention of lvrt.dll, and then scanning back a few hundred bytes to try to find a version X.Y token in there (of course this could be done in any language, not just LabVIEW; see the sketch after this list). This solution can be automated for my product release's review process, but it would be prone to failure if the EXE format changes or my assumptions aren't correct. I wrapped the solution in one that searches our installation directories for all EXEs and returns all LabVIEW EXEs with versions (as they are detected here) and all non-LabVIEW EXEs, so I can verify that my tool is not giving false negatives on LabVIEW-based executables.
(See attached .zip file for my LV implementation)
2. Run the EXE on a machine with no LabVIEW RTE and read the version from the error popup you get when it launches. Obviously this requires user interaction, and requires launching the executable.

3. Install all the LabVIEW RTE versions you think the EXE most likely requires. If it launches without incident and stays in memory, you can use Process Explorer to figure out which libraries it has loaded and see which version of the LabVIEW RTE was called (since lvrt.dll will be under "<NISHARREDDIR>\LabVIEW Run-Time\<Version>"). Obviously this isn't easily automated either, and requires launching the executable.

4. I tried using Dependency Walker to see what lvrt.dll dependency was found, but it didn't show me anything useful. I'm guessing it's dynamically loaded somehow, and DepWalker can't tell me?
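Here is a rough C# sketch of method 1. It relies on the same fragile assumptions as my LabVIEW implementation (a version-looking token stored shortly before the "lvrt.dll" string), so treat it as a starting point rather than a reliable tool:

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

class LvExeVersionScanner
{
    // returns a version-looking token such as "8.6", or null if none was found
    static string GetLabViewVersion( string exePath )
    {
        // read the whole EXE and treat it as (lossy) ASCII text
        string raw = Encoding.ASCII.GetString( File.ReadAllBytes( exePath ) );

        int pos = raw.IndexOf( "lvrt.dll", StringComparison.OrdinalIgnoreCase );
        if( pos < 0 )
            return null;    // probably not a LabVIEW-built EXE

        // scan a few hundred bytes before the lvrt.dll mention for an X.Y token
        int start = Math.Max( 0, pos - 400 );
        string window = raw.Substring( start, pos - start );
        Match m = Regex.Match( window, @"\d+\.\d+(\.\d+)*", RegexOptions.RightToLeft );

        return m.Success ? m.Value : null;
    }

    static void Main( string[] args )
    {
        Console.WriteLine( GetLabViewVersion( args[0] ) ?? "no LabVIEW version token found" );
    }
}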
What I'd really like are two things:

1. An LV Invoke Method like "Get VI Version" and "Get VI Editor Version" that works on built applications -- a "Get App Version" with the same kinds of inputs and outputs (file path input, string version output, and U32 version output). An added output to determine platform and/or bitness would be nice, too.
2. An LV RTE property that tells me the Run-Time version in the file's properties. I could potentially automate checking this via .NET calls or something like that.
I would like to propose including the possible errors of each function in its help.

If you are new to LabVIEW or have to work with something you don't know, it would be nice to see all the possible errors so that you can handle them appropriately. It is also easy to forget an unusual error, or one that is so common that you hardly notice it anymore (e.g. timeouts, reading non-existent array elements, or division by 0).

This information is included in the API documentation of other programming languages, e.g. Java, and saves a lot of time searching and testing for an exotic error.
With the increasing size of the LabVIEW ecosystem, there is a growing number of third party tools written in LabVIEW that are versioned independently from LabVIEW's version number. For example, I could create an API that has versions 1.0, 2.0, and 3.0, and all three versions could be compatible with LabVIEW 2009 or later. Tools like VI Package Manager make it easy for content creators to publish multiple versions of an API, and for users to upgrade and downgrade between those versions. However, this ease of use disappears if significant changes have been made to the VIs in an API, such as:
Changing VI connector panes
Renaming or moving VIs on disk
Adding VIs to a library
If any of the above changes are made to VIs in an API between versions, it can become impossible to migrate code between the two versions without a lot of manual searching, replacing, and relinking.
LabVIEW should provide a mechanism to define mappings between old and new versions of third party toolkit VIs. Consider the case where I make the following changes to a VI from my toolkit:
                   Version 1.0                        Version 2.0
VI Path            <userlib>\mytoolkit\CompRes.vi     <vilib>\mytoolkit\Compute Result.vi
Owning Library     none                               Mytoolkit.lvlib
Connector Pane     (shown in the original images)     (shown in the original images)
I should be able to create a mapping file included with version 2.0 of the toolkit that describes the changes made between versions 1.0 and 2.0 of the VI. This way someone could write an application that calls version 1.0 of the VI, then upgrade their toolkit to version 2.0, and the application source code would be able to find, load, and relink version 2.0 of the VI without any hassle.
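Purely as an illustration (this format is invented for this post; nothing like it exists in LabVIEW today), such a mapping file for the table above might look like:

; mytoolkit_1.0_to_2.0.map  -- hypothetical mapping shipped with version 2.0
[<userlib>\mytoolkit\CompRes.vi]
NewPath          = <vilib>\mytoolkit\Compute Result.vi
NewOwningLibrary = Mytoolkit.lvlib
; connector pane changes would need a terminal-by-terminal mapping as well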
In some cases we may have functions called from DLLs which cannot be terminated in a "normal" way for various reasons.

When the source code for the DLL is available this is not a problem, but for third-party DLLs it can be terrible.

All we need is a timeout for the external code call.
Let me explain. For example, we have code like this:
When such external code is called from a VI, that VI cannot be stopped.
Suggestion:
add a Timeout option to the following dialog:

When enabled, the following input will be shown:

And the timeout can be programmatically defined on the block diagram:

By default it would be -1, of course.
I fully understand this is a dangerous thing, but in some cases we need it. What I'm suggesting is something like the TerminateThread function.
Back to the code above: when that code has to be terminated, we need a wrapper something like this:
Then this function can be called in separate thread:
And now we can terminate execution with TerminateThread function:
Then in LabVIEW it looks something like this:
Now we can "Abort" our infinite DLL call without any dialogs like "Resetting VI", etc.
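The same wrapper idea, sketched in C# rather than C ("thirdparty.dll" and InfiniteCall are made-up names, and TerminateThread is exactly as unsafe here as it is natively):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class ExternalCallWithTimeout
{
    // the hypothetical third-party export that may never return
    [DllImport( "thirdparty.dll" )]
    static extern void InfiniteCall();

    [DllImport( "kernel32.dll" )]
    static extern uint GetCurrentThreadId();

    [DllImport( "kernel32.dll", SetLastError = true )]
    static extern IntPtr OpenThread( uint desiredAccess, bool inheritHandle, uint threadId );

    [DllImport( "kernel32.dll", SetLastError = true )]
    static extern bool TerminateThread( IntPtr hThread, uint exitCode );

    [DllImport( "kernel32.dll" )]
    static extern bool CloseHandle( IntPtr handle );

    const uint THREAD_TERMINATE = 0x0001;

    // returns true if the call finished on its own, false if it had to be killed
    static bool CallWithTimeout( int timeoutMs )
    {
        uint workerId = 0;
        ManualResetEvent started = new ManualResetEvent( false );

        Thread worker = new Thread( () =>
        {
            workerId = GetCurrentThreadId();
            started.Set();
            InfiniteCall();                      // the call that may hang forever
        } );
        worker.IsBackground = true;
        worker.Start();
        started.WaitOne();

        if( worker.Join( timeoutMs ) )
            return true;                         // finished normally

        // hard-kill the stuck call -- dangerous, leaks whatever the DLL was holding
        IntPtr hThread = OpenThread( THREAD_TERMINATE, false, workerId );
        if( hThread != IntPtr.Zero )
        {
            TerminateThread( hThread, 1 );
            CloseHandle( hThread );
        }
        return false;
    }
}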
Again, this is a dangerous feature (but no more dangerous than TerminateThread itself), and maybe it is only necessary in the most extreme cases; otherwise the only way to stop the application is taskkill.
I very much like being able to export a LabVIEW VI to a .dll or a .NET interop assembly (.dll). Our higher-level automation is in C#, and having access to LabVIEW VIs via a .NET .dll is great. However, the current build capability only allows a single prototype/method to be created. Consequently, I have to provide every input and output when invoking this single method. This can work, but it's inconvenient and not as obvious as being able to create multiple smaller methods, e.g.:
write setup values
write DAQ sample rates
start storing samples
stop storing samples
stop DAQs
write summary info
When I create a C# class, I can create any number of methods or properties to manipulate the class data members. Exported LabVIEW VIs should enable this too. An NI AE said I could create parent VIs (setup.vi, write_DAQ.vi, etc.) that each include the actual VI of interest, and these various VIs could be added to the exported VI list. This would indeed be a workaround, but it is a hack, and the AE agreed. Instead, I would like to be able to define multiple methods/prototypes on a single exported VI.
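To make it concrete, this is the kind of interface I would like the interop build to generate for a single exported VI (the class and method names here are invented to match the list above):

// the desired shape of the generated interop class (sketch only)
public class DaqSession
{
    public void WriteSetupValues( string configPath )      { /* runs the setup step    */ }
    public void WriteDaqSampleRates( double[] ratesHz )    { /* runs the rate step     */ }
    public void StartStoringSamples()                      { /* runs the start step    */ }
    public void StopStoringSamples()                       { /* runs the stop step     */ }
    public void StopDaqs()                                 { /* runs the DAQ stop step */ }
    public void WriteSummaryInfo( string summaryPath )     { /* runs the summary step  */ }
}

The calling C# code would then read like ordinary object-oriented code instead of one giant call with every input and output wired at once.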
What does the community think about configurable automatic type conversion between .NET and LabVIEW? We need to deal with a .NET API for a sensor which handles system time as nanoseconds since the 16th century. I don't know why they do it that way.
To set the appropriate time you have to subtract two .NET DateTimes. Everybody knows what this will look like in LabVIEW.

You have to use a method of the object to do the subtraction, so now you're using DateTime.Subtract. In LabVIEW this means:

Here it would be nice to have a checkable item in the right-click (run-time) menu of the invoke/property node to select or deselect automatic type conversion, so that the Subtract() method would accept a DateTime class and not a LabVIEW timestamp.
I hope the idea is clear enough. I've checked the idea forum for this entry but didn't find it. If it is already there, don't shoot me for the double post.
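For reference, the operation itself is trivial on the C# side (the values below are made up); the friction only appears when the DateTime has to cross the LabVIEW/.NET boundary:

using System;

class SubtractDemo
{
    static void Main()
    {
        // two .NET DateTimes, standing in for what the sensor API hands back
        DateTime reference = new DateTime( 2013, 1, 1, 0, 0, 0, DateTimeKind.Utc );
        DateTime now = DateTime.UtcNow;

        // DateTime.Subtract( DateTime ) returns a TimeSpan
        TimeSpan difference = now.Subtract( reference );
        Console.WriteLine( difference.TotalSeconds );
    }
}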
We were thrilled to see that NI developed an OPC UA API. We develop software for both VxWorks and Windows, so having OPC UA available on RT is great. But having to shell out for the entire DSC suite, run-time licenses and all, just to be able to use the same API on Windows is unreasonably costly and forces us to use a different API on Windows. If we could buy the API as an isolated component at a more reasonable price (and with easier licensing) we would jump for it immediately.
A generalized version of the idea:
The DSC can still function as a nice bundle, where the price for the bundle is lower than the total for the individual items, but when NI makes such packages, please make it possible to pick and choose among the components as well, so that when you actually need just one of them, you can get it at a price that is reasonable for that individual component.