LabVIEW Idea Exchange



LabVIEW AF is a fantastic tool for creating multi-threaded messaging applications the right way. It promotes best practices and helps write big yet scalable applications.

I feel, however, that there is not enough included in the box 🙂 When looking at many other industry implementations of the actor model, e.g. Akka (Scala), Erlang, Elixir, and microservices in many languages, you find that the frameworks usually include many more features. https://getakka.net/

 

I understand the decisions behind the design of AF, yet I would like to start a discussion about some of them.

I would like to see the following features included in AF:
1. Ability to get an actor's enqueuer by path through the hierarchy of actors, e.g. /root/pactor/chactor could be used to get the enqueuer of chactor (see the second sketch after this list). This would probably require actors to have unique paths pointing to them. Having a system manager keep all enqueuers on the local system is not a bad thing. It is a good thing.

2. Ability to seamlessly communicate over the network and create your own queue specifications. For example, actors should be able to talk between physical systems. Using the Network Actor is not a good solution, because it is simply too verbose and difficult to understand. The philosophy of Akka should be embraced here: "This effort has been undertaken to ensure that all functions are available equally when running within a single process or on a cluster of hundreds of machines. The key for enabling this is to go from remote to local by way of optimization instead of trying to go from local to remote by way of generalization. See this classic paper for a detailed discussion on why the second approach is bound to fail."

3. Improved debugging features, like the MGI Monitored Actor being built in. https://www.mooregoodideas.com/actor-framework/monitored-actor/monitored-actor-2-0/

4. An included publisher/subscriber communication scheme as a standard way of communicating outside the tree hierarchy. https://forums.ni.com/t5/Actor-Framework-Documents/Event-Source-Actor-Package/ta-p/3538542?profile.language=en&fbclid=IwAR3ajPR1lvFDyPFP_aRqFZzxR4FCQXh2nB2z0LYmPRQlnvXnsC_GQaWuZQk

5. Certain templates, standard actors, HALs, and MALs should be part of the framework, e.g. a TDMS Logger Actor and a DAQmx Actor. Right now the framework feels naked when you start with it, and substantial effort is required to prepare it for real application development. The fact that standard solutions would not be perfect for everyone should not prevent us from providing them, because 80% of programmers will still benefit.

6. Interface-based messaging. The need to create messages for all communication is a major drag on productivity and the speed of actor programming. It also decreases readability. It is better with the block diagram preview in the Choose Implementation dialog in LV19, but still 🙂

7. This is more of an object-orientation thing than an actor thing, but it would have huge implications for the Actor Core VI and the Receive Message VI. Please add pattern matching to OO LV. It could look like a case structure adapting to a class hierarchy on the case selector and doing the type cast to the specific class inside each case. You could have dynamic, class-based behavior without creating dynamic dispatch VIs, and you would still keep everything type safe. https://docs.scala-lang.org/tour/pattern-matching.html
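
To make item 7 concrete, here is what class-based pattern matching looks like in Scala, the language the linked tour describes. This is only an illustration of the proposed semantics, with made-up message classes: each case plays the role of one frame of the imagined case structure, selecting on the runtime class and binding an already-downcast value, and the compiler warns when a sealed hierarchy is not fully covered.

```scala
// Illustrative message hierarchy (invented for this sketch).
sealed trait Message
final case class StartAcq(rateHz: Double) extends Message
final case class StopAcq(flush: Boolean)  extends Message
final case class LogData(rows: Int)       extends Message

def handle(msg: Message): String = msg match {
  case StartAcq(rate) => s"starting acquisition at $rate Hz" // msg already typed as StartAcq
  case StopAcq(true)  => "stopping and flushing"
  case StopAcq(false) => "stopping"
  case LogData(rows)  => s"logging $rows rows"               // compiler warns if a case is missing
}
```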

 
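And here is a minimal sketch of item 1's path lookup as classic (untyped) Akka does it, with actor names taken from the /root/pactor/chactor example above; everything else is invented for illustration. Every actor gets a unique path in the supervision tree, and any component can resolve a reference (the analogue of an AF enqueuer) from the path alone.

```scala
import akka.actor.{Actor, ActorSystem, Props}

class ChActor extends Actor {
  def receive = { case msg => println(s"chactor got: $msg") }
}

class PActor extends Actor {
  // Child is registered at the unique path /user/pactor/chactor.
  context.actorOf(Props[ChActor](), name = "chactor")
  def receive = Actor.emptyBehavior
}

object PathLookupDemo extends App {
  val system = ActorSystem("root")
  system.actorOf(Props[PActor](), name = "pactor")
  // Resolve by path alone -- no need to plumb references down the tree.
  system.actorSelection("/user/pactor/chactor") ! "hello"
}
```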

The natural way for programming languages to evolve is to take ideas from the community and build them into the language. This needs to become the LabVIEW way too. I saw a demo of LV20 features, and I am really happy to see that many new features were inspired by the community. Let's take some of these actor ideas and add them in.

 

I wanted to share my view on the direction I think AF should take. I see in it the potential to become the standard way to build big applications, but for that we need to minimize the barrier to entry.

A graphical programming language deserves to have great graphical tools representing the design of big applications.

 

It is possible to use scripting to determine whether a method of a class has another class as an input or output terminal, and whether a class composes another class in its private data. The only thing I cannot do myself right now is show this information in the Class Hierarchy window. Representing the usage and composition relationships only requires parsing through the project once, and then again every time it changes.

I am currently using a script to parse all project classes to determine which have what relationships, which are actors, and which are messages, and I draw a PlantUML diagram from that. The intermediate step is to generate PlantUML text (sketched below), but this should be integrated into LV.


https://forums.ni.com/t5/LabVIEW-APIs-Discussions/Project-to-Plant-UML-Diagram-Script/td-p/3984499
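
As a rough sketch of that intermediate step, this is the kind of PlantUML text such a script can emit. The relationship model here is invented for illustration; the real script gathers the relationships by walking the LabVIEW project with VI Scripting.

```scala
// Invented, simplified model of what the project scan produces.
sealed trait Relation
final case class Inherits(child: String, parent: String) extends Relation
final case class Composes(owner: String, part: String)   extends Relation // class in private data
final case class Uses(caller: String, callee: String)    extends Relation // class on a method terminal

def toPlantUml(relations: Seq[Relation]): String = {
  val lines = relations.map {
    case Inherits(c, p) => s"$p <|-- $c" // inheritance arrow
    case Composes(o, p) => s"$o *-- $p"  // composition arrow
    case Uses(c, d)     => s"$c ..> $d"  // usage/dependency arrow
  }
  ("@startuml" +: lines :+ "@enduml").mkString("\n")
}
```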

(Attachments: 5.png, 6.png)

 

Please improve the readability of object-oriented projects with additional tools for LabVIEW. This is especially critical for NXG, where currently the ability to understand an OO project is even more limited than in LV19.

Can we have an option of loading only user-specified rows from a delimited spreadsheet instead of all rows, perhaps allowing the user to specify which and how many rows to read? This is potentially useful when the user wants to read a select few or the latest few rows (typically the last few rows, due to write-append) without having to load an unnecessarily large amount of data into memory.

 

Implementation-wise, this could use optional row index and length input terminals, defaulting to index 0 and length -1 to yield all rows by default on the rows output terminal (note: terminal name changed). Index -1 would specify counting from the end of the file, with a non-zero length L giving how many rows from the end (-L for reversed order, last row on top). With index -1, the data read would replace elements within a fixed-size array of L elements until EOF returns true. For example, index -1 and length 5 would yield the last 5 rows of the file.
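
A minimal text-language sketch of the index = -1 case, assuming tab-delimited rows and a length greater than zero (the file path and helper name are hypothetical): the fixed-size buffer is overwritten cyclically until EOF, exactly as described above, so memory stays bounded at L rows.

```scala
import scala.io.Source

def readLastRows(path: String, length: Int): Vector[Array[String]] = {
  require(length > 0)
  val buf = new Array[Array[String]](length) // fixed-size buffer of L rows
  var n = 0L                                 // total rows seen
  val src = Source.fromFile(path)
  try {
    for (line <- src.getLines()) {
      buf((n % length).toInt) = line.split('\t') // replace the oldest element
      n += 1
    }
  } finally src.close()
  val count = math.min(n, length.toLong).toInt
  // Reassemble in file order, oldest retained row first.
  Vector.tabulate(count)(i => buf(((n - count + i) % length).toInt))
}
```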

 

(Attachment: read delimited spreadsheet.png)

Thanks for reading, and have a great day.

I'm working on large AF projects, and I want to be able to pack my actors into lvlibps, as there are lots of benefits to using PPLs. However, I'm a bit uneasy about distributing the PPLs with the executable, because it's possible to reuse the public methods in a PPL. (The software I write is mostly licensable, but it would still be possible for competitor companies to reuse the public APIs.)

 

What I would like is a way of securing/encrypting PPLs when distributing the executable.

In Java, any errors/exceptions that a function can throw (to its caller) are listed directly in its signature/declaration.  This would be nice functionality to have in LabVIEW.  Otherwise, the only way to determine which errors a VI can return is to either:

  • examine the code manually, recursing through all its subVIs or
  • brute-force exercise the VI until you have observed all errors that it might throw in your particular application. 

Neither of these options seems particularly robust or efficient.

 

It may even be possible for LabVIEW to determine the throwable errors automatically, although at least having the ability to declare them manually would be a good start. They could be registered through the development environment such that they are stored in the VI file itself. In this way, LabVIEW could summarize all the errors a given VI throws based on those declared within it and its subVIs. I acknowledge that this would likely require a more sophisticated error system than the current error codes (since these may not be known at compile time). But I would be surprised if it couldn't be achieved in a similar fashion to Java, where all errors are classes that inherit from a common "error" class.
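
For reference, this is roughly how the declaration looks on the JVM. The sketch below is Scala rather than Java, with an invented error hierarchy, but @throws compiles to the same throws clause that Java puts in a method's signature, which is what lets tools enumerate a routine's possible errors.

```scala
import java.io.IOException

// Invented error hierarchy with a common base class, as suggested above.
sealed class MeasurementError(msg: String) extends Exception(msg)
final class TimeoutError(msg: String)   extends MeasurementError(msg)
final class OverrangeError(msg: String) extends MeasurementError(msg)

// The declared errors live in the compiled method signature itself.
@throws[IOException]("if the instrument session is lost")
@throws[MeasurementError]("covers TimeoutError and OverrangeError")
def acquire(channel: String): Double =
  throw new TimeoutError(s"no reading on $channel") // hardware access elided
```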

 

If I'm missing some methodical, pragmatic approach for achieving the same result, please let me know.

I'm a huge fan of the Stall Data Flow malleable VI, except in the case where I have it wired on an error wire (my most common use case) and there's an error on the wire. I generally trust and expect that VIs (especially those on the palettes) will no-op and fail fast in the event of an error on the error-in terminal (with rare exceptions, like close-reference methods). I understand that this is a somewhat unique case, since the VI in question is malleable and doesn't have an actual error-in terminal, but my guess is that most users:

 

1.  Wire this specific VIM on error wires most often

2.  Likely don't want the VI to stall in the event of an error

3. Would prefer to see that error propagated as quickly as possible, like most other VIs do
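
In text form, the behavior items 1-3 describe would look something like the sketch below (names are hypothetical, not the actual VIM's implementation): with an error already on the wire, the stall is skipped entirely and the error propagates at once.

```scala
def stallDataFlow[A](value: A, errorIn: Option[Throwable], delayMs: Long): (A, Option[Throwable]) =
  errorIn match {
    case Some(_) =>
      (value, errorIn)        // fail fast: no delay, error passes straight through
    case None =>
      Thread.sleep(delayMs)   // normal stall
      (value, None)
  }
```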

 

Thoughts?

Hello forum,

 

Thanks, NI, for the new features in LabVIEW 2019; so far I am liking most of what you have done, except for one item: the context menu change.

 

Why change the context menu (on right-clicking wires)?

2010-2018, items 1-3: clean up wire, create wire branch, delete wire branch (items 4-6: create constant, create control, create indicator)

2019, items 1-3: create constant, create control, create indicator (items 4-6: clean up wire, create wire branch, delete wire branch)

 

Surely NI have reasons, but could a customizable context menu be implemented as a feature instead? I used to be able to write or edit my code with muscle memory, but now I spend more time undoing things... 😐

 

Maybe in the next patch? 😉

It would be nice if the LabVIEW primitives for TCP allowed for the creation of a socket without actually connecting it to an endpoint. My thought is that two new commands would be added to the TCP palette:

 

TCP Create Connection (Advanced)

TCP Open Connection (Advanced)

 

TCP Create Connection (Advanced) would create the socket but not perform the connect() method on that socket. I would imagine that it would otherwise look and feel much like the current TCP Open Connection, except that the user would need to manually run TCP Open Connection (Advanced) afterwards, which would perform the connect() method on that socket.

 

I have a need to access the raw socket object before it is connected, using TCP Get Raw Net Object.vi in the vi.lib\Utility\tcp.llb library. This VI works to get the handle that can be used by Winsock and other Windows DLLs; however, it only allows setting properties for a socket that is already connected.

 

Specifically, I need to enable Windows FastPath for TCP to optimize the rate at which I can stream data between two applications. Per the specification, this option must be enabled BEFORE the socket is connected. Right now I can set this for the listener on the server side (TCP Create Listener creates a socket but does not connect it), but I cannot set it for the TCP connection on the client side, because the first method it calls is TCP Open Connection, which returns a socket that is already connected.
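
For comparison, Java NIO (shown here from Scala) already exposes exactly this create/configure/connect split; the port and buffer size below are arbitrary. Windows' loopback FastPath (SIO_LOOPBACK_FAST_PATH) is not a standard socket option, so a real implementation would pass the raw handle to a native ioctl at the marked point; SO_RCVBUF merely stands in as an example of a pre-connect option.

```scala
import java.net.{InetSocketAddress, StandardSocketOptions}
import java.nio.channels.SocketChannel

@main def fastClient(): Unit = {
  // Analogue of the proposed "TCP Create Connection (Advanced)":
  // a real socket that is not yet connected.
  val ch = SocketChannel.open()

  // Options can be applied while the socket is still unconnected.
  // (A native SIO_LOOPBACK_FAST_PATH ioctl would go here on Windows.)
  ch.setOption(StandardSocketOptions.SO_RCVBUF, Integer.valueOf(1 << 20))

  // Analogue of the proposed "TCP Open Connection (Advanced)": connect last.
  ch.connect(new InetSocketAddress("localhost", 6341))
}
```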

Using the Application Builder is problematic if the destination folder is monitored by syncing tools such as Google Drive (Backup & Sync), OneDrive, etc.

 

During a build there are tons of file operations in rapid sequence, potentially crashing the sync tools or causing false file conflicts (recent example).

 

To avoid these issues, I wonder if the build steps could be forced to take place in a temporary folder, followed by a final clean move to the destination. My guess is this would make building more stable and compatible with external folder-syncing tools.
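
A minimal sketch of that flow, assuming the destination does not yet exist and that a directory move is a cheap rename on the same volume (the helper names are made up):

```scala
import java.nio.file.{Files, Path}

def buildViaTemp(destination: Path)(build: Path => Unit): Unit = {
  val staging = Files.createTempDirectory("lv-build-")
  build(staging) // all the rapid file churn happens here, invisible to sync tools
  // One final move is the only operation the sync tool ever observes.
  // (Atomic only when staging and destination share a volume.)
  Files.move(staging, destination)
}
```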

 

I love the new data type "Maps"; I just wish there was one more primitive in the Maps palette for retrieving elements using a regular-expression search.
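
Something like the sketch below, which uses a Scala Map with string keys as a stand-in for the LabVIEW Map type: the hypothetical primitive would return every key/value pair whose key matches the regular expression.

```scala
def lookupByRegex[V](m: Map[String, V], pattern: String): Map[String, V] = {
  val re = pattern.r
  m.filter { case (key, _) => re.matches(key) }
}

// e.g. lookupByRegex(channels, "^AI[0-9]+$") would return all analog-input entries
```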

 

That would rock!

A quick one-button solution to view pre-configured design-rule results per VI. Not quite an analyzer; one layer (VI) deep. The pull-down icon changes from a green check mark to the alert symbol suggested here if violations exist.

 

(Attachment: LV_LIVE_DRC.png)

There are already a couple of ideas about retaining wire values on a hierarchy (here and here). This is a request to disable (or toggle) the 'retain wire values' option on all VIs of a hierarchy, in the same vein as 'disable breakpoints on hierarchy'. This request was spurred by an investigation into VI performance when resizing a very large 2D array of strings: several memory-allocation and lookup strategies (arrays, maps, variants) had been exhausted, only to discover that a subVI (whose front panel and diagram had been closed) was set to retain wire values. Merely toggling the retain-wire-values setting off on this particular subVI improved performance by approximately one order of magnitude for this particular project.

 

Investigating and accurately measuring run-time LabVIEW performance can be challenging with so many debugging tools available, the impacts of which are not immediately obvious. For this reason, I would like to see a menu option that recurses through the VI hierarchy and toggles the 'retain wire values' option, in the same vein as the option that removes all breakpoints from a VI hierarchy. This option would be very helpful for run-time performance investigation and optimization.

Hello,

 

So, as it is, the VI Analyzer has some limitations:

 

1. You can't fully customize the toolkit, as some functionality is still not exposed as an API.

2. You can't build an executable that uses the VI Analyzer.

 

I was hoping that NI could expose more of the API to allow programmatic control of the error-highlighting feature of the VI Analyzer.

This would allow for the creation of a more standalone application, rather than one piggybacked on the LabVIEW environment.

 

Thanks, Sher

I get that the Application.ini file can contain configuration options which one might wish to change after compiling, and I get that the application shouldn't modify its own exe.

 

But sometimes the ini file is used for things which cannot simply be hard-coded (or at least, I've found no way). For example, I am using Unicode (it isn't my fault I have to resort to "experimental" modes to gain basic, fundamental functionality which should have been implemented as standard decades ago, but I'm not going to apologize for living in 2019), and for an exe to properly render Unicode strings in controls etc., it requires the UseUnicode=True flag in the ini. What this means is that, in order to function correctly at all, the exe is burdened with the constant baggage of an ini file which end users will never interact with, never understand, and frankly have a less-than-zero need to be exposed to in the first place.

 

I've wanted this functionality on other occasions, but I'd be happy if someone could explain why I'm wrong about the Unicode thing.

The "Convert Instance VI to Standard VI" right-click option on malleable VIs made me think if this technology could be used for debugging otherwise inlined VIs (and therefore also malleable VIs)?

 

(Attachment: VIM Convert.png)

 

VIs are obviously inlined without their block diagrams, so my suggestion would require two changes:

 

1) 'Allow debugging' must be settable in combination with inlining in VI Properties >> Execution.

2) LabVIEW IDE and RTE would have to call inlined VIs in different ways, depending on this setting. Suggestion follows:

 

Running in the IDE

The IDE would now create a temporary unique instance for each running inlined subVI that has debugging enabled. This would allow us to double-click on an "inlined" subVI while it's running, and have that temporary instance open up ready for probing. Settings like the 'SubVI Node Setup' dialog would now also be available for inlined VIs, if they have 'Allow debugging' on. Set 'Allow debugging' off and inlined VIs will behave as today (so no change for any existing code).

 

Running in the RTE

Executables are built with 'Enable debugging' either on or off (Advanced page of the Application Builder). I suggest that the Application Builder build in unique instances in place of debug-enabled inlined VIs when 'Enable debugging' is on, and bake in ordinary inlined code when it is off. This would allow us to debug into inlined (and thus also malleable) VIs inside executables with remote debugging. Loose VIs called with the RTE would have had their inlined callees compiled with or without debug capability, depending on their 'Allow debugging' setting, as discussed above under "Running in the IDE".

 

What do you think - would it be nice to be able to debug inlined VIs?

I'm writing a VI for a lab which writes to a database. The number of measurements will vary depending on which fixture is set up in the lab and whether the experimenter decides to add extra ones. This results in a varying-size array of doubles which gets written into a single database table with generic column names c1-c200. The column names are cross-referenced in another table keyed to the lab fixture.

The method I found here is to convert array -> cluster -> variant -> database insert. The problem with this is that a cluster's size must be known at compile time, making the varying array size more difficult to handle. It also limits the size of the array/database entry to 256 elements in the array-to-cluster conversion.

I know I could just populate and write a 200-element array into the database, but that increases database activity, is inelegant, and forces me to put zeros in the unused columns instead of leaving them null.

 

Attached is a VI which demonstrates what I would like, though it only works for doubles and is still limited to 256 elements. It could very easily be changed for integers or strings, but it would be best if it could take any type of array input, with a much larger size limit.
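
For illustration, here is the same idea in text form, sketched with JDBC from Scala; the table and column names follow the post (measurements, c1-cN), and everything else is made up. Naming only the columns actually used keeps the unused ones null and removes the fixed-size-cluster step entirely.

```scala
import java.sql.Connection

def insertRow(conn: Connection, values: Array[Double]): Unit = {
  val cols  = (1 to values.length).map("c" + _).mkString(", ") // c1, c2, ...
  val marks = Seq.fill(values.length)("?").mkString(", ")
  val stmt  = conn.prepareStatement(s"INSERT INTO measurements ($cols) VALUES ($marks)")
  try {
    values.zipWithIndex.foreach { case (v, i) => stmt.setDouble(i + 1, v) }
    stmt.executeUpdate() // columns beyond values.length stay NULL
  } finally stmt.close()
}
```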


When stepping into a VI, do not read controls before pausing, because latched values should not be reset prior to the pause, and because debugging capability would be enhanced by the ability to change control values at step-in (as the TestStand LabVIEW adapter allows).
and because debugging capability would be enhanced with ability to change control values at step in (like TS LV adapter)