LabVIEW Idea Exchange


We have cloud computing, virtual machines, CPU virtualization and so on - there are numerous ways of achieving parallel and distributed computing, available at different architectural levels. The inherent parallelism of LabVIEW's graphical programming means we can often achieve parallel computing without even thinking about it.

 

But in cases where the programmer actually needs to make a decision, we now have the Loop Iteration Parallelism option.
If an action is to be repeated multiple times, and the execution of each run takes longer than the overhead of communicating the input data, execution code and/or output data across to multiple targets, parallelization can reduce the total execution time and/or reduce the load on each target. In some cases the execution time can justify parallelization even across slow communication channels.
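To make the trade-off concrete, here is a rough back-of-the-envelope sketch (in Python rather than G, with purely illustrative numbers) of when farming iterations out to remote targets pays off:

```python
# Rough break-even estimate for distributing loop iterations across targets.
# All names and numbers are illustrative assumptions, not measurements.

def worthwhile(n_iterations, t_iteration, n_targets, t_overhead_per_target):
    """Return True if farming the loop out is expected to be faster than running it serially."""
    serial_time = n_iterations * t_iteration
    # Each target gets an even share of iterations plus its communication overhead.
    parallel_time = (n_iterations / n_targets) * t_iteration + t_overhead_per_target
    return parallel_time < serial_time

# Example: 1000 iterations of 50 ms each, 10 targets, 2 s of transfer overhead per target.
print(worthwhile(1000, 0.05, 10, 2.0))   # True: about 7 s versus 50 s serial
```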


What if we expanded the user-friendly loop iteration parallelization mechanism to also support remote processors?

 

  • On the targets we want to offer as execution hosts, we would need to install a host service. This service might give us the choice of offering all, or just a subset, of the available cores - perhaps even deciding this based on the current load on the target, or the time of day(!). The targets can be of different platforms, as long as the code can be recompiled for them.

 

So how would this look to the programmer? Well, we simply extend the for loop parallelization dialog to something like this:

 

For Loop Iteration Parallelism Across Targets.png

 

  • The loops should also allow this setup to be changed at run-time. You could have a general VI to define the default targets and establish a link to them, and each for loop could have input terminals to specify the parallelism options to be used at the time of execution.

  • Another fun consequence of this functionality would be that you could distribute *any* part of your code across multiple targets simply by wrapping it in a single-iteration for loop.

 

With this functionality in place, getting 10 machines to work on a heavy problem instead of just one would really be as simple as drawing a for loop...

The current Bluetooth VIs (as of LabVIEW 2014) don't support communication with the new Bluetooth 4.0 protocol, referred to as Bluetooth Low Energy (or Bluetooth Smart).

 

New VIs dedicated to BLE, or added support in the current VIs, is needed by all developers working with this new Bluetooth stack.

 

When deploying a CAN database, an alias needs to be created to access the database. The write-up is deeply buried on page 4-538 of the (1200-page) manual.

 

An example with a simple switch between the development environment and the built executable should be added to help save time and trouble, or simply add the functionality to an existing example.

 

Thank you!

 

 

Logical shift in LabVIEW discards the shifted-out bit. Logical shift instructions in assembly language shift that bit into an overflow/carry register, allowing the bit in question to be tested and program flow to be altered.

 

This kind of function is useful when dealing with low level protocols at the bit level, or dealing with digital devices that have a parallel interface.

 

I suggest the logical shift function have an additional output that contains the boolean value of the bit just shifted out of the number.

 

Logical shift should be able to output a single Boolean or an array of Booleans. For the single case, the value would be the last bit shifted out, with the other bits being lost. For the array output, all bits would be captured, with the first bit shifted out placed at index 0 of the array.
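To illustrate the behaviour I'm proposing, here is a rough text-language sketch (Python, with an assumed 8-bit width and invented names) of a logical shift that also returns the shifted-out bits:

```python
# Sketch of a logical left shift that also captures each shifted-out bit.
# The 8-bit width and function name are illustrative assumptions.

def logical_shift_with_carry(value, shift, width=8):
    """Shift 'value' left by 'shift' bits and capture each bit shifted out.

    Returns (shifted_value, carry_bits), where carry_bits[0] is the first
    bit shifted out, matching the index-0 convention proposed above.
    """
    mask = (1 << width) - 1
    carry_bits = []
    for _ in range(shift):
        carry_bits.append(bool(value & (1 << (width - 1))))  # MSB about to fall off
        value = (value << 1) & mask
    return value, carry_bits

print(logical_shift_with_carry(0b1010_0001, 3))  # (0b0000_1000, [True, False, True])
```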

If you've ever tried to use the LV Web Service > Session VIs for sessions, authentication, user management, etc., then you'll quickly notice that they're lacking an important feature: the ability to detect when a user session times out (expires).

 

I can think of two important use cases for why you'd care:

(1) The user/client is viewing sensitive information (served to him by a LV web service) and then decides to walk away from his computer, as we all do sometimes.  Any Joe walking by might then get a glimpse of something he shouldn't.

(2) For logging purposes.  It might just be nice to know when the user logged in and when the user logged out for your IT records.

 

With a securely built web service, you could detect the user isn't there anymore, and direct the web client (Chrome, FF, IE, etc.) to redirect the page and destroy the session.

 

 

There's a sister idea that goes along with this, but I don't think I'll post it in a separate thread unless needed.  So, another way to detect session timeout events would be to get the session ID cookie (from the Create Session VI), store each cookie in memory somewhere, and essentially poll them in a background loop for session information (i.e., does the session still exist?).
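As a rough sketch of that polling approach (Python here, with an assumed 15-minute timeout and invented names - not the actual Web Service VIs):

```python
# Minimal sketch of the "poll sessions in a background loop" approach described
# above; the session store and timeout value are assumptions for illustration.

import time

SESSION_TIMEOUT_S = 15 * 60            # assumed idle timeout
sessions = {}                          # session ID cookie -> last-activity time

def touch(session_id):
    """Call whenever a request arrives for this session."""
    sessions[session_id] = time.time()

def expire_stale_sessions():
    """Background-loop body: find sessions that have timed out."""
    now = time.time()
    for session_id, last_seen in list(sessions.items()):
        if now - last_seen > SESSION_TIMEOUT_S:
            del sessions[session_id]
            print(f"session {session_id} timed out")   # log it, destroy the session, etc.

touch("abc123")
expire_stale_sessions()                # nothing expires yet; poll this every few seconds
```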

 

Ho hum, maybe I'm the only one building web applications of this type, but it sure would be a helpful feature in my opinion.

I'm communicating with an instrument over Ethernet, and the associated LabVIEW driver has a flaw. The driver builds its messages based on the RS232 protocol, and with VISA you can easily choose an RS232 or TCP/IP port. But the driver doesn't work over Ethernet, because a really nice piece of RS232 functionality is not implemented in LabVIEW for Ethernet: the end-of-message character, such as \r or \n.

There is "Message Based Settings:Send End Enable" VISA Ethernet property but it doesn't fit well.

It would be nice and powerful to have the equivalents of the RS232 properties "Serial Setting: ASRL End Out" and "Serial Setting:TermChar", wouldn't it?
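For comparison, here is roughly what the "read until termination character" behaviour looks like when done by hand over a plain TCP socket (a Python sketch; the host, port and terminator are assumptions):

```python
# Sketch of the "read until termination character" behaviour requested above,
# implemented manually over a plain TCP socket. Host, port and '\n' are assumptions.

import socket

def read_until_termchar(sock, termchar=b"\n", chunk=1):
    """Accumulate bytes until the termination character arrives."""
    buf = bytearray()
    while True:
        data = sock.recv(chunk)
        if not data:
            break                      # connection closed before the termchar
        buf += data
        if buf.endswith(termchar):
            break
    return bytes(buf)

with socket.create_connection(("192.168.1.50", 5025), timeout=2.0) as s:
    s.sendall(b"*IDN?\n")
    print(read_until_termchar(s))
```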

 

Extend the concept of the termchar to include a multi-character termination string for VISA reads.

 

Ideally, the termination string could be defined as a regex.

 

When I establish a connection with a Linux-based Device Under Test using a terminal server or TCP socket, the device-ready prompt is the typical username@hostname:#

 

I currently read the VISA session in a tight loop a byte at a time and buffer the characters to compare to a regex of \n%s@.+?:[~|(/(\w)+)]+?# 

 

The time required depends on the length of the response from the device under test, so I have to keep track of the total time myself; I can't use the VISA timeout.

 

If NI-VISA supported a regex based multicharacter termination string, I could set my VISA session to look and wait for the prompt.
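A rough text-language sketch of the "read until a prompt regex matches, with my own overall timeout" logic I'm doing by hand today (Python; the prompt pattern is simplified and the socket handling is an assumption):

```python
# Sketch of a regex-terminated read: buffer incoming bytes and stop when the
# shell prompt pattern matches, or give up after an overall deadline.
# The socket details and simplified prompt pattern are assumptions.

import re
import socket
import time

PROMPT = re.compile(rb"\n\w+@.+?:.*?#\s*$")   # simplified version of the prompt regex

def read_until_prompt(sock, timeout_s=5.0):
    """Read until the prompt regex matches or the overall timeout expires."""
    deadline = time.monotonic() + timeout_s
    buf = bytearray()
    sock.settimeout(0.1)                       # short per-read timeout, polled in a loop
    while time.monotonic() < deadline:
        try:
            buf += sock.recv(256)
        except socket.timeout:
            pass                               # nothing arrived this tick; keep waiting
        if PROMPT.search(buf):
            return bytes(buf)
    raise TimeoutError("prompt not seen before timeout")
```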

 

NI-VISA TermString.png

In the old days when VISA was first designed, I'm guessing that this sort of functionality would have been taxing on memory and CPU. With today's 64-bit GHz multicore processors, abundant RAM and common regex libraries, I don't think this would affect timing.

 

 

There are a plethora of timestamp formats used by various operating systems and applications. The 1588 internet time protocol itself lists several. Windows is different from various flavors of Linux, and Excel is very different from about all of them. Then there are the details of daylight savings time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. This should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer). Text-based timestamps are already handled fairly well. Inputs would be timestamp, input format type, output format type, and error. Outputs would be resultant format and error.
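To show the kind of bookkeeping such a primitive would hide, here is a minimal sketch of two of the conversions (Python; the epoch offsets are the standard ones, and daylight-saving and leap-second handling are deliberately ignored):

```python
# Sketch of conversions the proposed primitive would perform, between Unix time
# (seconds since 1970-01-01), Excel serial days (since 1899-12-30) and the
# LabVIEW epoch (seconds since 1904-01-01). DST and leap seconds are ignored here.

EXCEL_EPOCH_UNIX = -2209161600      # 1899-12-30 00:00 UTC expressed as Unix seconds
LABVIEW_EPOCH_UNIX = -2082844800    # 1904-01-01 00:00 UTC expressed as Unix seconds
SECONDS_PER_DAY = 86400

def excel_to_unix(excel_days):
    return excel_days * SECONDS_PER_DAY + EXCEL_EPOCH_UNIX

def unix_to_labview(unix_seconds):
    return unix_seconds - LABVIEW_EPOCH_UNIX

# Excel serial 45000.5 is 2023-03-15 12:00 UTC; this prints the Unix and
# LabVIEW numbers for that same instant.
print(excel_to_unix(45000.5))
print(unix_to_labview(excel_to_unix(45000.5)))
```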

In the Data Dashboard I continuously call a web service (poll web service) whose return values can be linked to indicators on the dashboard. If the selected web service also supports parameters, it would be nice to link these to controls on the dashboard.

Creating and modifying dashboards on the tablet/mobile device is quite painful.

As you can share them via mail or the cloud, wouldn't it be nice to create and edit them on a PC and then share/deploy them to the tablet/mobile device?

 

Within LabVIEW or with a separate standalone application...

LabVIEW has supported SSL secure communications for the web server for some time now.

But there is no support for using SSL to a data socket.

 

In these times, with the NSA and hackers snooping everywhere, it seems like a necessity.

 

There is a third party library "SSL Library" on the NI site.  It has no documentation.  It uses Microsoft .NET calls to do SSL, but it is not clear how to make it work.

 

We have been using a proxy written in Perl to get SSL to a LabVIEW data socket.  It is a kludge solution.
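For comparison, this is the kind of native TLS-wrapped connection we would like to see, sketched with a text language's standard library (Python; the host and port are placeholders, and this is not how data sockets work today):

```python
# Sketch of a TLS-wrapped client connection using the standard ssl module.
# The host, port and request are placeholder assumptions.

import socket
import ssl

context = ssl.create_default_context()            # verifies the server certificate
with socket.create_connection(("example.com", 443), timeout=5.0) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(1024))
```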

 

Why can't LabVIEW handle this?

 

Mike

 

Connecting to remote panels using LabVIEW is difficult if private networks, local private and external public IPs (under NAT), firewalls, etc. are involved. It requires significant knowledge, as well as external networking configuration (port forwarding, etc.) and possibly admin privileges to modify it.

 

There are plenty of companies that have found a way around all this. The prime example is Chrome Remote Desktop, which works seamlessly even if the target computers are in hidden locations on private networks, as long as each machine can access the internet with an outgoing UDP connection. The way I understand it, each computer registers with the Google server, which in turn patches the two outgoing connections together in a way that lets both communicate directly afterwards. All traffic tunnels inside the plain Google chat protocol (UDP-based). Similar mechanisms have been developed for security systems (example) and many more.

 

Since the bulk of the traffic is directly between the endpoints, the traffic load on the external connection management server is very minimal. It simply keeps an updated list of active nodes and handles the patching if requested.

 

I envision a very similar mechanism where LabVIEW users can associate all their applications and distributed computers with a given user ID (e.g. an NI profile) and, at all times, be able to get a list of all currently running remote systems published under that user ID. If we want to connect to one of them, the connection server would patch things together without the need for any networking configuration. Optionally, users could publish any given panel under a public key that could be distributed to allow public connections by any other LabVIEW user.
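As a very rough sketch of the registration half of such a service (Python; the server URL and field names are invented purely for illustration):

```python
# Hypothetical sketch: each running system registers its public endpoint under
# a user ID, and a peer can look systems up before the server patches the two
# together. The URL and JSON fields below are invented, not a real NI service.

import json
import urllib.request

RENDEZVOUS_URL = "https://example.com/labview-rendezvous"   # hypothetical service

def register(user_id, system_name, public_endpoint):
    """Report this system as online under the given user ID."""
    payload = json.dumps({"user": user_id,
                          "system": system_name,
                          "endpoint": public_endpoint}).encode()
    req = urllib.request.Request(RENDEZVOUS_URL + "/register", data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req).status

def list_systems(user_id):
    """Fetch the list of currently running systems published under a user ID."""
    with urllib.request.urlopen(f"{RENDEZVOUS_URL}/systems?user={user_id}") as resp:
        return json.load(resp)
```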

 

This is a very general idea. Details of the best implementation would need to be worked out. Thanks for voting!

 

Hi everybody!

 

We are using LabVIEW web services very extensively and are pretty happy with this new functionality, although recently we had a client who requested access to some services via HTTP in a Mac-based environment. I was surprised when I found out that OS X is not supported for web services (the same goes for Linux). We had to hack around and found some solutions which were acceptable but not really elegant. So what I think is that it would be great to have web services available beyond Windows, at least on Linux and OS X. This feature will be used a lot in the future, and it would be a pity if you only have half of the LabVIEW capabilities on some systems.

 

Thanks a lot Andy

 
Currently, for an array variable, changing an element requires the developer to read the array, update the element and write the array back out. While this is fine for simple cases without sophisticated data flows, it becomes very cumbersome when parallel processing is introduced - for instance, multiple loops updating an array of status flags.
 
 
At present, if two loops (which may be in different VIs) both execute a read-update-write operation on a global shared variable, data can be lost if the operations happen at similar times. Consider the following example:
 
1.png
 
 
 
Looking at the image above, let's imagine "array1" consists of two elements with values F and T respectively. Then the operation would happen as follows:
 

Step   Action taken                        Array values
1      Read "array1"                       F and T
2      Read "array1"                       F and T
3      Update "array1" element 0 to T      T and T
4      Update "array1" element 1 to F      F and F
5      Write to "array1"                   T and T
6      Write to "array1"                   F and F

 
 
Hence all the data written by “Loop A” is lost. Putting user-defined locking (using shared variables) around the operations does not seem to work, presumably due to the update rate of the locking variables.
 
What would be helpful to overcome these race conditions is to replicate the functionality present in other languages to do operations like a[3]=4 or printf(“%lf”,a[4]). In these cases, an atomic operation is performed to get or set the value in the defined memory location. The addition of this functionality to shared variables would be extremely powerful.
 
An example of such functionality could look something like this:
array.png
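To spell out the race and the fix in a text language, here is a minimal sketch (Python; the lock-based element write is an assumption about how such a primitive could behave, not how shared variables work today):

```python
# Sketch of the race in the table above, and of an element-level write that
# would avoid it. The lock-based fix is an illustrative assumption.

import threading

status_flags = [False, True]          # the "array1" of the example: F and T
flags_lock = threading.Lock()

def unsafe_update(index, value):
    # Read-update-write: two loops doing this concurrently can lose each
    # other's writes, exactly as in steps 1-6 above.
    snapshot = list(status_flags)
    snapshot[index] = value
    status_flags[:] = snapshot

def atomic_update(index, value):
    # Element-level write guarded by a lock: the other element is never touched.
    with flags_lock:
        status_flags[index] = value

t1 = threading.Thread(target=atomic_update, args=(0, True))
t2 = threading.Thread(target=atomic_update, args=(1, False))
t1.start(); t2.start(); t1.join(); t2.join()
print(status_flags)                   # [True, False]: both element writes survive
```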

 

Hello,

 

the current functionality doesn't allow you to asynchronously call a method that has any dynamically dispatched inputs. This forces you to create a statically dispatched wrapper around the dynamic method, which can then be called.

 

This is a source of frustration for me because it forces you to write code that is less readable, and there doesn't seem to be any reason for this limitation. Since you already need to have the class loaded in memory to provide it as an input to the asynchronously called VI, why not just allow dynamic dispatch there (the dynamic method is already in memory)?

 

How it is right now:

DynamicDispatchAsynchCall0.png

DynamicDispatchAsynchCall1.png

 

Solution: allow asynchronous calls on methods with dynamic dispatch inputs.
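For what it's worth, in a text language the requested pattern is trivially expressible - launch an overridden (dynamic dispatch) method asynchronously and let the object's runtime class pick the implementation. A minimal sketch (Python, with invented class names):

```python
# Launch an overridden method asynchronously; the runtime class of the object
# decides which implementation runs, with no static wrapper needed.
# Class and function names here are illustrative assumptions.

import threading

class Instrument:
    def measure(self):
        return "base measurement"

class Scope(Instrument):
    def measure(self):
        return "scope measurement"

def call_async(obj):
    """Start the (possibly overridden) measure method on its own thread."""
    t = threading.Thread(target=lambda: print(obj.measure()))
    t.start()
    return t

call_async(Scope()).join()    # prints "scope measurement"
```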

Please make the LabVIEW Web Server more efficient! I love the idea of the LabVIEW Web Server as it allows me to create powerful, dynamic web pages for my clients. However, I recently published my VI using the "Embedded" option of the LabVIEW Web Server and found that this uses 3 Mbps per instance. An NI support tech was kind enough to point me to the following article: http://www.ni.com/white-paper/3277/en. This article confirmed my fears: the LabVIEW Web Server is transmitting all of the data for each object (even the invisible ones) at the 10Hz update rate of my application.

 

For this reason, I've decided to use Windows RemoteApps (using the Windows Remote Desktop engine) to publish my VI. In doing so, my network bandwidth is reduced from 3 Mbps to 10 Kbps with zero loss of functionality. However, RemoteApps are a pain to set up and aren't nearly as nice to the end user as a web published LabVIEW front panel. I would like to suggest that you all look at the algorithm behind Windows Remote Desktop and use something similar for the LabVIEW Web Server. In my understanding, Remote Desktop simply sends CHANGED pixels from server to client, and sends mouse and key clicks from client to server. Why would you need anything else?
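As a crude sketch of the "send only what changed" idea (Python; the frame format and row-level granularity are assumptions for illustration):

```python
# Sketch of a diff-based update: compare the new front-panel image to the last
# one and transmit only the differing rows. The nested-list frame format and
# row-level granularity are illustrative assumptions.

def changed_rows(previous, current):
    """Return (row_index, row_data) pairs for rows that differ between frames."""
    return [(i, row) for i, (old, row) in enumerate(zip(previous, current)) if old != row]

frame_a = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
frame_b = [[0, 0, 0], [9, 9, 9], [2, 2, 2]]
print(changed_rows(frame_a, frame_b))   # [(1, [9, 9, 9])]: only one row would be sent
```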

Hello all, 

 

I have a suggestion about the Automotive Diagnostic Command Set. It is currently designed for CAN-based diagnostics only, and diagnostics on serial lines (K-line and L-line) are not in its scope.

 

I would like the Automotive Diagnostic Command Set to support diagnostics on serial lines (K-line and L-line) too. 


Automotive Diagnostic Command Set User Manual
http://www.ni.com/pdf/manuals/372139a.pdf

 

Best regards

Rutsu Kenmoku 

I know this is another Raspberry Pi idea, but hopefully the answer is simpler than some of the past requests (maybe as simple as "no"). I am wondering if it would be possible to run a simple LabVIEW executable on a Raspberry Pi with the sole purpose of viewing network-published shared variables. This could provide a low-cost UI terminal for distributed hardware. My hope is that the required drivers are minimal, in that only a network connection is required and no hardware drivers for NI products would be needed. Basically, it would be similar to the Data Dashboard app, but would allow much more customization by the developer for software-based analysis and display.

Hello,

 

I'd like to humbly and respectfully suggest that "Internecine Avoider.vi" be rewritten or at the very least, refactored extensively.  (again)

 

This VI is found in "TCP Listen.vi", which is on the TCP palette.  It maintains a registry of existing listener connections and attempts to reuse them.

TCP.png

 

What's the big deal, you ask?  Well, when I'm having problems with listeners and need to figure out what's going on, sometimes I need to look into this VI.  Like a lot of NI code that I generally trust, I would ordinarily skip over this and disregard it as a possible source of problems.  The trouble is, every time I look at it I can't easily decipher the nuances of what it does, given its messiness.  Thus, though it may be perfectly functional, I don't trust it.

 

I realize the code could be a whole lot worse.  I also realize that someone has been in there since LabVIEW 2011 was released and has made some improvements.  Kudos to that individual for all the new free-label comments.

 

Nonetheless, here are some factors that obfuscate this VI:

 

  1. There is no documentation under "VI Properties".  What does it do, at a glance?  (I know the answer because I've stared at it over a few versions of LabVIEW)
  2. "TCP Listen Internal List.vi" also has no documentation under "VI Properties"
  3. Without nitpicking, there are some sloppy practices employed in here that don't make a lot of functional sense.
  4. Deprecated "Retrieve Element" case in the internal list VI.  "Item Requested" is no longer a requested item, but the result of a search on elements.
  5. "net address" is confusing and ambiguous  Couldn't this be named, "Address of Listening NIC?" (Yes, I know it's documented under LabVIEW help for the TCP listener VI)
  6. "Conflict" is ambiguous.  This should be renamed to something more intuitive like, "Conflict: Multiple NICs Using Port" or something similar.
  7. Un-typedef'ed clusters in the internal list VI, with deceptive ordering of elements, combined with unnamed unbundler functions.

... etc.

 

I got to thinking... I know it works, or at least I think it does, but couldn't this be done more simply and elegantly?

 

I know, I know, "If it ain't broke, don't fix it."  ... but I think it could still use some work to make it more intelligible.

 

Respectfully,

 

Mr. Jim

 

InternecineAvoider.png

We were thrilled to see that NI developed an OPC UA API. We develop software for both VxWorks and Windows, so having OPC UA available on RT is great. But having to shell out for the entire DSC suite, run-time licenses and all, just to be able to use the same API on Windows is unreasonably costly and forces us to use a different API on Windows. If we could buy the API as an isolated component at a more reasonable price (and with easier licensing) we would jump for it immediately.

 

A generalized version of the idea:

The DSC can still function as a nice bundle, where the price for the bundle is lower than the total for each individual item, but when NI makes such packages, please make it possible to pick and choose among those components as well, so that in cases where you actually need just one of them, you can get it at a price that is reasonable for that individual component.