LabVIEW Idea Exchange


I realize that the DBCT is nearing its 20th birthday and probably is not a priority for a fix, but the fix is so simple, and this issue has tripped up LabVIEW programmers doing database applications for many years.

 

The VI Rec Fetch Next Recordset (R).vi has a flaw: it steps past the last recordset in a multi-recordset return and then leaks a reference, which causes mayhem in downstream code.  A simple test of the recordset reference would make this VI properly useful.
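
For comparison, Python's DB-API exposes the same multi-recordset situation through cursor.nextset(), whose return value tells you when no further recordset exists, so a loop never steps past the end. A minimal sketch only (the DSN and stored procedure names are made up):

    import pyodbc  # assumption: any DB-API 2.0 driver behaves the same way here

    conn = pyodbc.connect("DSN=MyDatabase")          # hypothetical data source
    cur = conn.cursor()
    cur.execute("EXEC dbo.MultiResultProc")          # hypothetical multi-recordset procedure

    while True:
        print(cur.fetchall())                        # consume the current recordset
        if not cur.nextset():                        # nextset() is falsy when no recordset remains,
            break                                    # so we never step past the last one or leak a handle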

 

Among my other posts in reply to forum users about databases, the issue is captured succinctly here.

 

I long ago made the mods to the toolkit VI (actually, two), and reapply them with each new LabVIEW version as needed.  If anyone at NI/Emerson wants to look into this, I'd be happy to share the particulars.

 

Dave

Many new PCs running Windows run on ARM processors (like the Snapdragon), rather than x86 architecture. LabVIEW does not support ARM.

 

A few years back, my LabVIEW software would work on any Windows PC. That is no longer the case. That is a huge and increasing limitation.

Problem: Many native VIs use the Non-reentrant execution reentrancy setting.

 

Solution: The vast majority of native VIs should use the Preallocated clone reentrancy setting.

  • The native VIs that need to use Non-reentrant or Shared clone are few and far between - they should be identified on a case-by-case basis. Their Context Help and/or Detailed Help should explain why they need to be set to Non-reentrant or Shared clone.

The following is a selection of vi.lib VIs that should use Preallocated clone. This selection is meant to serve as a starting point and is not comprehensive.

 

1.png

2.png

3.png

 

Notes:

  • This idea is related to: The reentrancy of new VIs should be "Preallocated clone". They both argue in favour of using the Preallocated clone setting more.
  • A significant number of native VIs are already configured to use Preallocated clone, which is great.
  • There are curious cases where closely related VIs are set to different reentrancy settings. For example, Color to RGB.vi is rightly using Preallocated clone, while RGB to Color.vi is Non-reentrant. Similarly, Trim Whitespace.vi is rightly Preallocated clone, while Normalize End Of Line.vi - which lives next to it on the String palette - is Non-reentrant.
    • This suggests that the reentrancy setting of some native VIs was chosen haphazardly. This needs to be rectified.
  • The fact that so many native VIs are non-reentrant partly defeats LabVIEW's remarkable ability to create parallel code easily. Loops that are supposed to be parallel and independent are in fact dependent on each other when they use multiple instances of these non-reentrant native VIs. When an application uses multiple instances of these native VIs, it is as if "hidden semaphores" were added between the various locations that call them. This leads to less performant applications (more CPU cycles, longer execution time, larger compiled EXE code size). The sketch below illustrates the effect.
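
For readers who also use text languages, here is a rough Python analogy of that "hidden semaphore" effect (an illustration only, not how LabVIEW is implemented): a non-reentrant VI behaves like one function instance guarded by a single lock shared by every caller, while preallocated clones behave like independent instances.

    import threading, time
    from concurrent.futures import ThreadPoolExecutor

    _shared_lock = threading.Lock()

    def non_reentrant_work():
        # Analogy for a Non-reentrant VI: every call site shares one lock,
        # so "parallel" callers actually execute one at a time.
        with _shared_lock:
            time.sleep(0.1)        # stands in for the VI's actual work

    def preallocated_clone_work():
        # Analogy for a Preallocated clone VI: each call is independent,
        # so parallel callers never block each other.
        time.sleep(0.1)

    def run_parallel(fn, n=4):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=n) as pool:
            for _ in range(n):
                pool.submit(fn)
        return time.perf_counter() - start

    print("non-reentrant     :", run_parallel(non_reentrant_work))      # ~0.4 s: the calls serialise
    print("preallocated clone:", run_parallel(preallocated_clone_work)) # ~0.1 s: the calls overlap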

Problem: When creating a new VI using File >> New VI (Ctrl + N), its reentrancy setting is Non-reentrant execution.

 

Solution: The reentrancy setting of new VIs should be Preallocated clone reentrant execution.

 

Background:

In most applications the vast majority of VIs could and should be set to "Preallocated clone reentrant execution". In a nutshell, Preallocated clone ensures that each instance of a given VI is completely independent of all other instances. This is desirable in the vast majority of cases.

 

We should encourage best practices by changing the default reentrancy setting to the setting that is desirable in most cases, namely Preallocated clone.

 

The other two options - Non-reentrant execution and Shared clone reentrant execution - are the best choice in specialised cases only.

  • Non-reentrant is necessary when multiple instances of a VI must be guaranteed to block each other from executing at the same time, and/or when using uninitialised shift registers as a means of asynchronous data communication (e.g. an FGV or Action Engine). A text-language analogy of this case is sketched after this list.
  • Shared clone is best when aiming to reduce memory usage. Shared clone can help on memory-scarce targets such as cRIOs or sbRIOs, or when dealing with massive amounts of data (arrays of millions or billions of elements). Memory usage is not a concern for the vast majority of VIs, especially when working on modern desktop systems that have 8, 16, or more GB of RAM, and when dealing with reasonable amounts of data (arrays with up to a few million elements).
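
As a rough text-language analogy of the first case above (an FGV / Action Engine that relies on one deliberately shared, serialised instance), sketched in Python with made-up state and actions:

    import threading

    # Analogy for an FGV / Action Engine: one persistent piece of state that every
    # call site deliberately shares through the same non-reentrant entry point.
    _state = {"count": 0}
    _lock = threading.Lock()          # the non-reentrancy: callers take turns on purpose

    def action_engine(action, value=None):
        with _lock:
            if action == "increment":
                _state["count"] += 1
            elif action == "set":
                _state["count"] = value
            elif action == "get":
                return _state["count"]

    action_engine("increment")
    action_engine("increment")
    print(action_engine("get"))       # 2 - every call site sees the same shared state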

New VIs set to Preallocated clone would be in good company, as detailed below.

 

1.png

  • The vast majority of LabVIEW nodes rightly execute as if they were set to Preallocated clone. This enables us to use many instances of each node freely without worrying that it will create timing dependencies between vastly different caller VIs.
    • For example, one can use the Add node inside as many DQMH modules or Actor Framework actors as they wish, without creating any timing dependency between those callers, which is great.
    • As far as I am aware, the only nodes that execute as if they were set to Non-reentrant are those that require the UI thread. These nodes execute in a sequential, blocking manner.
  • Many of the VIs that ship with LabVIEW are rightly set to Preallocated clone.
    • I believe that most VIs that ship with LabVIEW and are set to something other than Preallocated clone should be set to Preallocated clone, but this should be addressed in a separate idea.
  • All inlined VIs (and therefore, all VIMs, which must be inlined) execute as if they were set to Preallocated clone.

Further notes:

  • If Non-reentrant was chosen as the default because it was judged to be the friendliest to new users (an argument that I believe does not outweigh the arguments in favour of Preallocated clone), then at least there should be a Tools >> Settings option to enable people to change the default reentrancy setting.
  • The fact that new VIs are by default Non-reentrant defeats the benefit that LabVIEW offers in terms of ease of creating parallel threads. Many of these threads will in fact not be truly parallel, because of the undesired one-instance-running-at-a-time blocking effect of Non-reentrant instances that execute in vastly different areas of an application.
  • I have never tested this, but EXEs are likely to be smaller (perhaps by a few KB, or even a few hundred KB) when the vast majority of VIs are set to Preallocated clone. When using multiple instances of Non-reentrant VIs, LabVIEW must add some kind of mutexes (mutually exclusive locks) to the compiled code. When VIs are set to Preallocated clone these mutexes disappear, leaving behind smaller, cleaner compiled code.
  • The disappearance of mutexes might help enable the LabVIEW compiler to perform optimisations that are not possible when non-reentrant boundaries are present.

Reentrancy resources:

To achieve better performance and efficiency, the “In Place Element Structure” should be used instead of the “Bundle by Name” method for “VIs for Data Member Access” (specifically for write VIs). The "Mark as Modifier" option should also be activated.
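
For readers more familiar with text languages, the difference is roughly analogous to the following Python sketch (the class and field names are made up; this is an analogy, not what the LabVIEW compiler generates):

    from dataclasses import dataclass, replace

    @dataclass
    class Measurement:          # hypothetical class; stands in for a class's private data cluster
        value: float = 0.0
        units: str = "V"

    m = Measurement()

    # Analogy for a Bundle by Name style write accessor: a modified copy of the
    # record is conceptually assembled and then takes the place of the original.
    m = replace(m, value=3.3)

    # Analogy for an In Place Element Structure style write accessor (with
    # "Mark as Modifier"): the existing record is updated where it sits.
    m.value = 3.3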

 

TapioS_4-1740049307134.png

 

TapioS_3-1740049278327.png

If the compiler optimization can do this automatically, I apologize for this post. 

The Database Toolkit is limited by the Database Variant to Data function: it can only cast to a LabVIEW datatype if you wire that datatype to the type input. This means that you have to know the datatype of any SQL query in advance (or convert to string). It would be very useful if the function also accepted a variant datatype. That way it would be possible to cast any complex type into a LabVIEW datatype without the need for a predefined cluster.

 

Image - casting the database input with the variant type input (circled) doesn't work

aartjan_0-1735459849988.png
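
For comparison, this is roughly the behaviour being requested, shown with Python's DB-API, where each value comes back already carrying its runtime type and no predefined cluster is needed (a sketch; the DSN and table name are made up):

    import pyodbc  # assumption: any DB-API 2.0 driver behaves the same way here

    conn = pyodbc.connect("DSN=MyDatabase")            # hypothetical data source name
    cur = conn.cursor()
    cur.execute("SELECT * FROM measurements")          # hypothetical query; column types unknown in advance

    columns = [col[0] for col in cur.description]      # column names discovered at run time
    for row in cur.fetchall():
        record = dict(zip(columns, row))               # each value already carries its own type
        print(record)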

 

This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

It would be useful if LabVIEW offered the option to use an Error Collector Node inside structures.

 

The Error Collector Node would collect or capture any/all unhandled errors that occurred inside the structure that the node is part of. If one or more unhandled errors occurred, the Error Collector Node would output the first error that occurred (in chronological order). If no errors occurred, the node would output "No Error".
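
For clarity, here is a rough Python sketch of the proposed semantics (an analogy only; the branch bodies are made up): each parallel branch records any unhandled error with a timestamp, and the collector outputs the chronologically first error, or "No Error" if none occurred.

    import threading, time

    errors = []                                    # (timestamp, exception) for every branch that failed
    errors_lock = threading.Lock()

    def branch(body):
        """Run one parallel branch; unhandled errors are collected instead of lost."""
        try:
            body()
        except Exception as exc:                   # the collector wants *any* unhandled error
            with errors_lock:
                errors.append((time.monotonic(), exc))

    threads = [threading.Thread(target=branch, args=(b,))
               for b in (lambda: None,             # VI A: completes without error
                         lambda: 1 / 0,            # VI B: raises an error
                         lambda: [][0])]           # VI C: raises an error
    for t in threads: t.start()
    for t in threads: t.join()

    # The collector outputs the first error that occurred chronologically, or "No Error".
    first_error = min(errors, default=None, key=lambda e: e[0])
    print(first_error[1] if first_error else "No Error")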

 

The following annotated screenshot shows how the Error Collector Node would help reduce the number of error wire segments and Merge Errors nodes.

Combined 1.png

The following screenshot shows a second example. Notice that using the Error Collector Node would result in a significant reduction of block diagram items: 10 fewer wire segments and 1 fewer Merge Error Node. The reduction in block diagram items could be even larger if we consider that the other cases of the case structure (potentially dozens of cases) also benefit from the same Error Collector Node being placed on the border of the case structure.

Combined 2.png

The two screenshots above are examples created specifically for the purpose of posting this idea. The following screenshot is a real-world example (taken from production code) of a VI that could benefit from the Error Collector Node, which would remove the need for numerous error wire segments and Merge Error nodes.

1.png

Notes

  • The Error Collector Node was prototyped as a hexagonal output tunnel in the screenshots above. I would, of course, be happy if a different glyph or shape is chosen to represent this new type of tunnel.
  • Error Collector Node would essentially behave similarly to a localised Automatic Error Handling functionality. It would collect or capture any unhandled errors inside its area of responsibility (the structure), and would convert those errors back into "manually-handled" errors - i.e. errors that are passed downstream via an error wire.
  • The Error Collector Node would be useful especially in situations where many code branches execute in parallel as it eliminates the need for lots of Merge Error nodes.
  • The first two screenshots mention the words "approximately equivalent to". "Approximately" because, if multiple errors occur inside the structure, the Error Collector Node does not guarantee which of those errors is output, the way the Merge Errors node does. For example, in the first screenshot, if all three VIs (VI A, VI B, and VI C) experience an error, there is no guarantee as to which of those errors the Error Collector Node would output. The node would output the first error that occurred (in chronological order), so it would depend on which VI finished execution first. This could change from execution to execution, and from machine to machine. Whereas the right-hand-side version, which uses Merge Errors, would always output the error generated by VI A.
    • This would usually not be an issue in practice. If more determinism is needed, the programmer could, of course, fall back on manually wiring the errors to define an exact error behaviour.
  • It should be possible to add a maximum of one Error Collector Node to each structure: Flat Sequence Structures, Case Structures, For Loops, While Loops, etc.
  • It would be useful if the Error Collector Node could be used outside of structures (directly on the outermost region of a block diagram). Again, enforcing a maximum of one Error Collector Node per outer block diagram would make sense. The Error Collector Node would execute after all other block diagram nodes and structures (would be the last thing to execute). The output of the Error Collector node could then be fed directly to an "Error Out" block diagram terminal. This would remove the need to wire most error wires inside the VI, while ensuring that no error goes uncaptured.
  • If the Error Collector Node exists only as a structure output tunnel, and not as a stand-alone node outside of structures, then a better name for it might be the Error Collector Output Tunnel.
  • The behaviour of the Error Collector Node would be unaffected by Automatic Error Handling being enabled or disabled in that VI.
  • Using Error Collector Nodes would benefit programmers in the following ways:
    • Would reduce the amount of "click-work" that programmers currently need to do (the number of wire segments and Merge Error nodes that need to be created), while ensuring that all unhandled errors are captured.
    • Would reduce the amount of block diagram "clutter". This "clutter" is apparent in the third screenshot, which shows many criss-crossing error and DAQmx wires.
    • Would decrease the size on disk of VIs thanks to fewer block diagram items needing to be represented in the VI file. This would help towards making git repositories a little bit smaller, and loading VIs into memory a little bit quicker.
  • Informally, using Error Collector Nodes would sit in-between the strictness of manually wiring all error outputs, and the looseness of relying solely on Automatic Error Handling. The error handling gold-standard would remain manually wiring all error outputs, but using Error Collector Nodes might be "good enough" in many situations, if used judiciously.

Problem: Currently, wiring an error wire to a structure input tunnel, without continuing the wire inside the structure, clears the error that exists on the wire.

 

Happy case: When running the VI shown below, Automatic Error Handling correctly detects that the error out terminal of Error Cluster From Error Code.vi is unwired, and handles the error (displays the error as a dialogue window).

2 (edited).png

Unhappy case: Wiring the error wire from the error out terminal of Error Cluster From Error Code.vi to a structure input tunnel clears the error. Automatic Error Handling does not detect or handle the error.

3 (edited).png

In my opinion this was simply an unfortunate design decision (can happen to all of us) back when it was made, decades ago. IMO there is no logical argument to support this behaviour. The fact that the error wire is wired to an input tunnel does not mean that the error was handled. At best, when a programmer intentionally used this technique, it represents a non-self-documenting coding practice (why not use the self-documenting Clear Errors.vi?). At worst, it means clearing errors simply because the programmer forgot to wire the wire through the structure. It means clearing errors when the programmer did not explicitly ask for this. It means "sweeping errors under the carpet", and can result in overly "optimistic" applications (apps that seemingly execute without error when in fact unhandled errors are being generated).
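
The distinction being drawn here has a close analogue in text languages. A small Python sketch of the difference between silently swallowing an error and explicitly, visibly suppressing it (an analogy, not LabVIEW behaviour):

    import contextlib

    def risky_operation():
        raise FileNotFoundError("stand-in for a VI call that returns an error")

    # Analogy for "clear error by wiring it to an input tunnel": the error silently
    # disappears and nothing in the code says the suppression was intentional.
    try:
        risky_operation()
    except Exception:
        pass

    # Analogy for Clear Errors.vi: the suppression is explicit, named and scoped,
    # so a reader can see exactly which errors are being discarded on purpose.
    with contextlib.suppress(FileNotFoundError):
        risky_operation()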

 

Please note that even though the screenshot above shows a Flat Sequence Structure input tunnel, the behaviour applies to every structure (case structure, for loop, while loop, etc).

 

To summarise, the problem is that the screenshot above is functionally equivalent to explicitly using the Clear Errors.vi, as seen below.

4 (edited).png

Clear Errors.vi is of course the self-documenting, recommended method of clearing errors. It should also be the only method of clearing errors.

 

Ironically, Clear Errors.vi itself uses the "clear error by wiring it to input tunnel" technique inside its "0" case, as seen below.

5 (edited).png

To its credit, Clear Errors.vi uses a correct technique for clearing errors inside its other, default case.

6 (edited).png

Another example found "in the wild" of a VI using the "clear error by wiring it to input tunnel" technique. This VI ships with LabVIEW and is found at <LabVIEW installation folder>\vi.lib\Utility\EditLVProj\Identify VIs Among Project Items.vi.

1 (edited).png

Solution: Disable the "clear error by wiring it to input tunnel" behaviour. This would fix what IMO is an incorrect design decision. Unfortunately, fixing this decision now would cause VIs that use the "clear error by wiring it to input tunnel" technique to start throwing unhandled errors if AEH is enabled. This is not ideal, but it might be worth accepting this short-term drawback for long-term gain.

 

Moreover, it may be useful to introduce a Clear Errors node (primitive function). The Clear Errors.vi could then make use of the Clear Errors node inside both of its cases. Alternatively, the Clear Errors node could simply replace and supersede the Clear Errors.vi.

Automatic Error Handling (AEH) is a useful feature. It captures errors that were otherwise left unwired by the programmer (intentionally or accidentally). It represents a "safety net" that can make the programmer aware of errors that they may otherwise remain unaware of.

 

Problem: Currently AEH functionality is only available in the development environment. It is not available in built executables. Even when all VIs in a project have AEH enabled, once built into an EXE, all VIs behave as if AEH was disabled.

 

Solution: It should be possible to honour each VI's AEH setting in built EXEs too, not just in DevEnv. The EXE build specification could contain a setting named "Honour each VI's Automatic Error Handling setting in EXE". When ticked (enabled), any VIs for which AEH was enabled in the development environment will continue to benefit from AEH behaviour in the EXE. Any VIs for which AEH was disabled will continue to have it disabled in the EXE. This means that, from an error handling/error manipulation point of view, the application would behave identically when being run as an EXE as when being run in Development Environment. This is more consistent, and can be helpful.

 

The current behaviour (forcibly removing AEH in EXE) means that EXEs are prone to having errors that were not discovered during DevEnv testing being "swept under the carpet". In other words, currently EXEs are overly "optimistic" - they can make the programmer believe that everything is ok when in fact one or multiple unhandled errors are occurring, errors that would have been visible in DevEnv. This is particularly relevant to apps that run for long periods of time (e.g. life cycle testers) that may encounter errors that were simply unforeseen or untested in DevEnv (e.g. error after one month of continuous running due to running out of disk space when saving measurements log file to disk).

 

The screenshot below shows what the new setting could look like in the Advanced page of the EXE build spec.

3 Screenshot 1 (edited).png

Notes

  • To be absolutely clear: I am not asking for AEH to be enabled by default in all EXEs. I am also not asking for AEH to become enabled in all VIs when the new build spec setting is ticked. That would override the AEH setting of all VIs, which I am not asking for.
  • The default value of the new setting should be False (unticked). When False, the built EXE would behave exactly as it does now - AEH would be disabled in all VIs. This would maintain the current behaviour as default.
  • The new setting would give the programmer more control - it would allow the programmer to decide whether they want AEH or not in their EXE. Currently AEH is taken away in EXE, even when we (professional LabVIEW programmers) might want it enabled.
  • I would be happy if the new setting was available only for desktop, non-real-time applications, and not on Real-Time targets.

TL;DR: the idea is: please make building in LabVIEW versions after 2023 Q3 as fast as in older versions when the compiled object cache has been cleared and/or your file timestamps have changed.

 

LabVIEW 2023 Q3 includes a major change to the app builder: 

 

LabVIEW 2023 Q3 has improved cache behavior for packed project libraries and applications.

 

The first build will populate the cache, and then the subsequent builds will be much faster.

Great! It is pretty easy to confirm this and it works! Unless...

 

1. File timestamps change - such as when you clean a directory and clone project files in your CI job

  • Git does not make file timestamps match the commit date - when you clone fresh, all the timestamps are "now o'clock"; when you change branches, all the files that change are reset to now. (A possible mtime-restoring workaround is sketched after this list.)

2. The cache is cleared prior to the build - such as when you want a repeatable build unaffected by prior state so you set your CI jobs to clear it before each build

3. You are doing a "first build" on a new project - such as when you use a container or VM disk image to reset the entire environment to a clean state prior to your CI job
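
One possible workaround for the timestamp case (my assumption, not something the release notes describe): restore each tracked file's mtime to its last commit time after cloning, so the compiled object cache sees unchanged timestamps. A rough Python sketch, to be tested before relying on it in CI:

    # Restore each tracked file's mtime to its last commit time after a fresh clone.
    # Note: one "git log" call per file, so this can be slow on very large repos.
    import subprocess, os

    files = subprocess.run(["git", "ls-files"], capture_output=True, text=True,
                           check=True).stdout.splitlines()

    for path in files:
        # %ct = committer date of the last commit touching this file, as a unix timestamp
        out = subprocess.run(["git", "log", "-1", "--format=%ct", "--", path],
                             capture_output=True, text=True, check=True).stdout.strip()
        if out:
            ts = int(out)
            os.utime(path, (ts, ts))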

 

In all my testing, any of these conditions pretty reliably causes builds to be slower in LabVIEW 2023 Q3 than in earlier versions: with a cleared cache, typically by ~60%; with new timestamps and a persisted cache, the slowdown depends on how much of the code has new timestamps.

 

In other words, I expect most everyone using CI will find building slower in 2023 Q3+ than in earlier versions. So, the idea is: whatever optimizations were done, include an option to revert to the old behavior - build spec, config ini, editor option, whatever.

I'm proposing a new certification, one certification to rule them all, the Certified LabVIEW Gangster Group Developer.

 

Or simply a fully functioning CLA that requires 3 people to take the test.

Architect

Developer

Embedded Developer


It should function using an FPGA, or a simulated interface.

 

 

I was recently trying to develop a function to navigate through a deeply nested directory structure and came across system path length limitations, which could potentially be addressed by a "change directory" function.

 

I realize I could use System Exec with cmd /c cd <path>, but found this extremely slow.
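
To illustrate what a native "change directory" function would buy (a Python sketch of the general idea, with a made-up folder tree): once the working directory has been moved deep into the tree, the remaining paths can stay short and relative instead of one long absolute path.

    import os, tempfile

    # Build a deeply nested folder purely for illustration.
    deep_root = os.path.join(tempfile.mkdtemp(), *["level%02d" % i for i in range(20)])
    os.makedirs(deep_root)
    with open(os.path.join(deep_root, "results.csv"), "w") as f:
        f.write("1,2,3\n")

    # Without a working-directory change, every access needs the full absolute path,
    # which is what eventually runs into the system path-length limit.
    print(len(os.path.join(deep_root, "results.csv")), "characters in the absolute path")

    # With a change-directory function, the remaining path stays short and relative.
    os.chdir(deep_root)
    with open("results.csv") as f:
        print(f.readline())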

The "Startup/Always Included" array carries these VIs into pre/post build action VIs.

The order of the VIs in this array depends on the order in which they were added to/created in the project, not the order in which they appear in the project window itself.

Here's the project window:

GICSAGM_0-1718176051948.png

Here's the order within the pre-build VI:

GICSAGM_1-1718176157393.png

 

The order is NOT the same. "startup" is first, but if you delete it from the project and then re-add it, it becomes the last.

 

I suggest that the order in the pre/post build VI should be:

first - startup.vi

following: the always-included VIs in the same order they appear in the project window.

 

Also, the always-included list should have a mechanism to change the order of the VIs in the list - be it up/down arrow buttons on the side that would move the selected VI, or a similar command in a right-click drop-down menu.

 

In this way, any pre/post build action that involves any of these VIs can be clearly defined and remain stable during the lifetime of the project, without the risk of operating on the wrong VI or having to edit the pre/post build VI to update the VI names every time the startup/always-included list changes.

 

One of the things LabVIEW is best at is its innate parallelism. Parallelizing for loops with a right-click is something other languages wish they had. Having all code on the block diagram (or equivalent) innately run in parallel is something few other languages have even tried, much less succeeded at.

 

Crunching large datasets can be sped up through use of GPUs, and over the past decade or more, Nvidia has kept their promise of having a common interface to their hardware through CUDA. 

 

Having a LabVIEW add-on that is well maintained, and that can give GPU parallelism in the same ways we've seen LabVIEW deliver CPU parallelism would be a game changer for many labs and manufacturing environments. It could also help LabVIEW be a leader in the AI space.

 

I suggest that you create this add-on package using CUDA for the underlying GPU calls, in order to keep the code easy to manage, while also providing the package for a large number of supported GPUs.

Hi all,

 

After spending a week getting an embedded HTTP server working in LV for a single application, I have some remarks that might highlight the need for a more flexible, simple, and open HTTP configuration. In my opinion, the current implementation of an HTTP server is quite limited and outdated.

First, the NI Web Server. This is a nice feature, and NI recommends using it rather than the outdated Application Web Server, but the problem is that it is a single server running on a single port (shared by every application executable). That is good enough for one web server on a host accessed from a browser, but what about implementing a LV HTTP server for each application (e.g. an RPC server)? To my knowledge, every other programming language (e.g. Python, C++, ...) has a core implementation for this.

I have spent a lot of time looking for the best way to implement an HTTP server belonging to a single application executable in LV. Such an executable is typically an application GUI or a backend service in our projects, and we have a lot of them. Every application needs its own RPC server (running on a different port) and hence its own RPC methods. I ended up implementing a Web Service using a LV Application Web Server; I can't see another way at the moment using core LV functionality without installing additional packages.

I also miss being able to enable and disable the HTTP server at runtime. In our project applications we also have other transport layers for implementing RPC, such as zeroMQ (thanks to Martijn Jasperse's library on VIPM) and TCP (native to LV). I would like to run only one of these transport services, selected by configuration, but here is a second problem: once the application is running, the HTTP Web Service automatically registers and there is no controlled way to disable it at runtime. This gives me headaches, since I have to change the port number because another transport layer cannot use the same port as the HTTP server. One might suggest building another (actor-based) EXE and implementing the Web Service there in a different actor, but having two EXEs for each application is a pain in the ***. Why can't the HTTP Web Service be switched OFF and ON again, both in development and at runtime? I found a property node to disable the server, but it apparently doesn't work (it seems related to the native panel web server).
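
For comparison, this is roughly what the core libraries of other languages offer: an HTTP server that each application instantiates on its own port and can start and stop at run time. A minimal Python standard-library sketch (the port and handler are made up):

    import threading
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class RpcHandler(BaseHTTPRequestHandler):
        def do_GET(self):                       # stands in for an application RPC method
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"pong")

    # Each application picks its own port and owns its own server instance.
    server = ThreadingHTTPServer(("127.0.0.1", 8123), RpcHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()                              # HTTP transport enabled at run time

    # ...later, if the configuration selects another transport (zeroMQ, raw TCP, ...):
    server.shutdown()                           # HTTP transport disabled at run time
    server.server_close()                       # port released for other use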

Another major disadvantage I encountered is that each HTTP method is programmed in a single VI and there is no way to pass data to these method VIs (for example via the Actor Framework, or even classes in general). It seems we have to use FGVs (Functional Global Variables) to share data between my main application actor and the HTTP method VIs themselves. Even then, the HTTP Service Request refnum is only valid inside the HTTP method VI; once the VI finishes executing, the refnum is flushed and no longer valid, so there is no way to pass this refnum to my application actors via Actor Framework messages. That is quite frustrating, since as a plan B I have to use notifiers within the HTTP method VIs, so that the method VI can proceed and finish its execution once its Wait on Notifier function completes (I want to send the answer from my application actors, not from the HTTP method itself)!

Another issue I observed is that I can't "Start" the HTTP Web Service from the right-click menu in the project explorer; it simply crashes with the dubious error that the 'system is currently in an invalid state for the current message'. What does this mean? The NI help docs give no clue.

 

Arrowin_0-1715670098941.png

 

I can only right-click and select "Start (Debug Server)" to make it work (but on the debug port, 8001 by default). All other options just fail. The same goes for "Publish": it simply doesn't work in my LV2020 SP1 (32-bit) version and I have no clue why, as there is not a single error message at all!

Also, why must we use MS Silverlight to control application web servers from LV? Silverlight is deprecated, and I ended up using MS Edge in "Internet Explorer" mode to get the config page working (after spending another two hours to find that out). Even then, some config panes just show up with error dialogs, and there is no way to see the active services registered by the application HTTP web server. In the end I just used TCPView to see which services were running. It is always frustrating to need third-party apps to do simple things.

 

As you might notice from this message, I spent many days figuring out how to implement HTTP in a simple, decent way using LV's core HTTP functionality. I wonder if this will be better in LV 2024 Q1?
If anyone has ideas on how to properly configure a separate HTTP server for each application on its own port, under the application's own control, please share them with me. I am open to any ideas and wonder if there are other solutions for an HTTP implementation (not using 3rd-party packages). In my opinion, HTTP should be easy and open to configure properly in LV, without the current non-working Web Server issues.
Please note that I tried reinstalling the NI Web Server and other web-service-related components using NI Package Manager, but to no avail.

 

Best Regards,

Davy Anthonissen

It would be useful to have a node that could be fed any wire and return the size (in bytes) of the data on that wire. In other words, the node would return the number of bytes that the wire occupies in memory.

 

1 (edited).png

For example, the node would return a value of:

  • 1 byte when fed a U8 wire
  • 2 bytes when fed a U16
  • 4 bytes when fed an I32
  • 8 bytes when fed a DBL
  • 800 bytes when fed a 1D array that contains 100 DBL elements
  • 9 bytes when fed a cluster that contains a DBL and a U8
  • 9 bytes when fed an object that contains a DBL and a U8
  • 18 bytes when fed an object that contains two other objects that occupy 9 bytes each
  • and so on

Notes

  • The node would enhance LabVIEW programmers' ability to monitor and audit memory usage.
  • The node may serve as an additional tool to detect memory leaks (by repeatedly calling the VI on the same wire and checking whether the size is going up).
  • The node would simply be interesting to programmers interested in performance and would enable programmers to learn more about LabVIEW internals.
  • The node would be useful especially to query the size of complex data structures, such as objects that contain other objects that themselves contain objects, or clusters that contain arrays, or arrays that contain clusters, or objects that contain DVRs.
  • I would be happy if the node had a second input named "Mode" (or similar). This input may be a typedef enum with items named "Shallow Measurement" and "Deep Measurement" (or similar). This input could be required, recommended, or optional.
    • When "Shallow Measurement" mode is selected, the node would return the size of all the by-value data fields in the main input wire, but would not add up the size of data referenced by DVRs or other references. For example, a wire that contains a cluster that contains a DBL, a U8, and a DVR would return perhaps 13 bytes (8 + 1 + presumably 4 bytes for the DVR reference itself). It would not add to the result the size of the data referenced by the DVR.
    • When "Deep Measurement" is selected, the node would recursively scan all data structures, including DVRs and other references. (A rough text-language analogy of the two modes is sketched after this list.)
  • In both "Shallow Measurement" and "Deep Measurement" modes there should be no limit to the scanning depth. In other words, if a cluster contains a cluster that contains a cluster and so on, they should all be measured regardless of the nesting depth. Similarly, if "Deep Measurement" is selected, and a DVR contains a DVR that contains a DVR and so on, the data behind all these DVRs should be added to the total.
  • When in "Deep Measurement" mode and fed a Queue reference wire, the node could perhaps return the size of all the data in the queue. In other words, the size of all the elements present in the queue.
  • Perhaps the LabVIEW compiler already uses a "Get Data Size" function internally? If such a function already exists, perhaps it would be relatively straightforward to expose it as a node in the palettes?
  • Perhaps the best location for this node would be in the Programming >> Application Control >> Memory Control palette.
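
For illustration, here is a rough Python analogy of the Shallow/Deep distinction described in the notes above (the byte counts follow Python's own object layout, not LabVIEW's, so the numbers differ from the examples earlier in this idea):

    import sys

    def shallow_size(obj):
        """Analogy for "Shallow Measurement": the object's own storage only."""
        return sys.getsizeof(obj)

    def deep_size(obj, _seen=None):
        """Analogy for "Deep Measurement": recursively follow contained references."""
        _seen = set() if _seen is None else _seen
        if id(obj) in _seen:                       # guard against reference cycles
            return 0
        _seen.add(id(obj))
        size = sys.getsizeof(obj)
        if isinstance(obj, dict):
            size += sum(deep_size(k, _seen) + deep_size(v, _seen) for k, v in obj.items())
        elif isinstance(obj, (list, tuple, set, frozenset)):
            size += sum(deep_size(item, _seen) for item in obj)
        return size

    data = {"samples": list(range(1000)), "label": "run 42"}
    print(shallow_size(data))   # the dict's own bookkeeping only
    print(deep_size(data))      # dict + list + every element + the string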

2.png

Thanks!

The LabVIEW compiler currently appears to use one core of a multi-core processor. It would be nice if it fully utilized multiple cores to speed up building of large projects and recompilation of VIs when editing/opening source code.


Some languages like Rust and Zig have a feature called Tagged Enums (or Sum Types) that allows you to create a data type that can be one of a few different types, where there is a name associated with each type. In LabVIEW, however, Enums are limited to consecutive numeric integer values -- there's no way to associate a type with each named value.

 

The power of combining an Enum with a data type for each value is that we could potentially use a Case Structure as a switch statement with type assertion and data conversion built in! This would allow us to create robust, type-safe code that is easier to maintain and understand.

 

example_equipment_variant.png

See this github repository for a more complete proposal and an example implementation that gets us closer to achieving this in LabVIEW.
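
For readers who have not met tagged enums in a text language, here is a rough Python 3.10+ sketch of the concept this idea asks for (the equipment-themed names are made up; the github proposal mentioned above is the authoritative description):

    from dataclasses import dataclass
    from typing import Union

    # Each named variant carries its own payload type.
    @dataclass
    class Dmm:
        resolution_digits: int

    @dataclass
    class Scope:
        bandwidth_hz: float

    @dataclass
    class Simulated:
        pass

    Equipment = Union[Dmm, Scope, Simulated]   # the "tagged enum"

    def describe(eq: Equipment) -> str:
        # Analogy for a Case Structure with type assertion and data conversion
        # built in: each case sees its variant's data, already at the right type.
        match eq:
            case Dmm(resolution_digits=d):
                return f"DMM with {d} digits of resolution"
            case Scope(bandwidth_hz=bw):
                return f"Scope with {bw / 1e6:.0f} MHz bandwidth"
            case Simulated():
                return "Simulated instrument"

    print(describe(Scope(bandwidth_hz=200e6)))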

When a subarray is created, the data is copied from the original array.

 

The LabVIEW compiler does not recognise when the subarray data is subsequently only read.

It would be a major improvement for memory and even CPU usage if the compiler recognised that a part of the array is only read and passed only a reference to that part of the array in memory.
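
For comparison, this is the copy-versus-view distinction in NumPy, which is essentially what this idea asks the LabVIEW compiler to do automatically for read-only subarrays (a sketch; the array sizes are arbitrary):

    import numpy as np

    big = np.arange(10_000_000, dtype=np.float64)   # ~80 MB source array

    copied = big[:5_000_000].copy()   # explicit copy: doubles the memory for that region
    view = big[:5_000_000]            # basic slice: a view, no element data is copied

    print(view.base is big)           # True - the "subarray" is just a reference into the original
    print(copied.base is big)         # False - this one owns its own buffer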