LabVIEW Idea Exchange


Connecting to remote panels using LabVIEW is difficult when private networks, NAT (local private vs. external public IP addresses), firewalls, etc. are involved. It requires significant networking knowledge as well as external configuration (port forwarding, etc.), and possibly admin privileges to make those changes.

 

There are plenty of companies that have found a way around all this. The prime example is Chrome Remote Desktop, which works seamlessly even if the target computers sit in hidden locations on private networks, as long as each machine can make an outgoing UDP connection to the internet. The way I understand it, each computer registers with the Google server, which in turn patches the two outgoing connections together so that both communicate directly afterwards. All traffic is tunneled inside the plain Google chat protocol (UDP-based). Similar mechanisms have been developed for security systems (example) and many more.

 

Since the bulk of the traffic flows directly between the endpoints, the load on the external connection-management server is minimal. It simply keeps an updated list of active nodes and handles the patching when requested.
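To make the rendezvous step concrete, here is a rough sketch (in Python, purely for illustration; the server address, registration message, and reply format are all made up): each endpoint sends an outgoing UDP packet to the public server, the server tells each one the other's public address, and from then on the two talk directly.

```python
# Rough sketch of UDP "hole punching" via a public rendezvous server.
# The server address, registration message, and reply format are invented
# for illustration only.
import socket

RENDEZVOUS = ('rendezvous.example.com', 5000)    # hypothetical public server

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b'register my-user-id', RENDEZVOUS)  # outgoing packet opens the NAT

# The server replies with the other peer's public IP and port once both
# endpoints have registered.
reply, _ = sock.recvfrom(1024)
peer_ip, peer_port = reply.decode().split(':')
peer = (peer_ip, int(peer_port))

# From here on, traffic flows directly between the two endpoints; the NAT
# mappings created by the outgoing packets let the inbound packets through.
sock.sendto(b'hello directly from peer A', peer)
data, addr = sock.recvfrom(1024)
```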

 

I envision a very similar mechanism where LabVIEW users can associate all their applications and distributed computers with a given user ID (e.g. an NI profile) and, at any time, retrieve a list of all currently running remote systems published under that user ID. If we want to connect to one of them, the connection server would patch things together without the need for any network configuration. Optionally, users could publish any given panel under a public key that can be distributed to allow connections by any other LabVIEW user.

 

This is a very general idea. Details of the best implementation would need to be worked out. Thanks for voting!

 

Hi!

 

We are trying to work with an OPC server with some array tags defined. We can access the array tags perfectly.

 

If you request a single value from that OPC server (through an OPC client such as Wonderware, iFIX, Control Maestro (Wizcon), WinCC, etc.), the OPC server sends a timestamp telling you when that value was acquired. I have confirmed that there is no way to request single values from an array tag (neither with DSC nor with DataSocket).
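For now, the only option seems to be reading the whole array and indexing it on the client side, which means every element shares the array's single timestamp. A rough sketch of that workaround (in Python with the OpenOPC library, used here only for illustration; the server ProgID and tag name are made up):

```python
# Workaround sketch: read the whole array tag through a generic OPC DA client
# and index the element locally. Server ProgID and tag name are hypothetical.
import OpenOPC

opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation.1')

value, quality, timestamp = opc.read('Plant.Line1.Temperatures')  # array tag
third_element = value[2]  # indexed locally; the timestamp belongs to the
                          # whole array, not to this individual element
print(third_element, quality, timestamp)

opc.close()
```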

 

I don't know how hard this feature would be to develop, but I think it would be great.

 

Regards!!

It'd be great if NI Spy had an "always on top" selection under the View menu.

 

thanks! 🙂

I would like DSC to have the ability to use array variables from a PLC as shared variable arrays. This is currently not supported, and adding individual variables just to assemble an array from them is cumbersome. Methods other than shared variables are not as portable, and are equally cumbersome.

Between program executions, PXI cards retain their previous settings unless they are reset. For example, if I set ao0 to 5 V, stop my program, and restart it (without a reset), the card still generates 5 V. With digital cards, and with one particular series of cards (the 6255 is an example), the AO channels can be configured as internal channels and then measured with respect to ground. Surely it should be possible to interrogate any analog PXI card for its current output value (min, max, etc. are already available).
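For the cards that do expose it, the readback looks something like this; a minimal sketch in Python with the nidaqmx package, assuming an M-series style device that publishes its AO readback as an internal channel (the device name is a placeholder):

```python
# Read back the voltage ao0 is currently generating via an internal channel.
# "Dev1" and the exact internal-channel name depend on the hardware.
import nidaqmx

with nidaqmx.Task() as task:
    # Internal channels are addressed like normal AI channels but start with
    # an underscore and are usually hidden in MAX.
    task.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
    current_output = task.read()
    print(f"ao0 is currently generating {current_output:.3f} V")
```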

 

 

Most of the time, I put remote devices on our internal network. If I set up the device so the IP address is dynamically assigned, the IP often changes. But the project needs me to point to a device via a specific IP address. If the IP address changes, I constantly have to update the remote device's properties in my project, often not realizing this is even an issue until a deployment fails because the device can't be found.

 

I'm proposing an idea to link to a device's alias (or some unique ID other than the IP) and allow the project to automagically update the IP address.

 

 

 

Hello,

 

the current functionality doesn't allow you to asynchronously call a method that has any dynamically dispatched inputs. This forces you to create a statically dispatched wrapper around the dynamic method, which can then be called.

 

This is a source of frustration for me because it forces you to write code that is less readable, and there doesn't seem to be any reason for the limitation. Since you already need to have the class loaded in memory to provide it as an input for the asynchronously called VI, why not just allow dynamic dispatch there (the dynamic method is already in memory)?

 

How it is right now:

DynamicDispatchAsynchCall0.png

DynamicDispatchAsynchCall1.png

 

Solution: Allow asynchronous calls on methods with dynamic dispatch inputs.

When using the ECU M&C toolkit to handle XCP communication on an XNET interface port (using the xxx:yyy@nixnet syntax), I've found that Disconnecting and Closing the XCP connection will cause the whole XNET interface to stop, even if other XNET sessions are still using it.

 

This seems to be bad behavior for an XNET port.  Typically, when a session is closed, the interface will not be closed until every other session is done using it.  In this case, though, the interface gets closed without regard to other XNET sessions that may be running on the same port, and they consequently stop receiving or transmitting CAN frames, without any indication of why.

 

This is particularly problematic for me if I am using a Stream In session to log all messages on the interface.  If a self-contained XCP code module is run (Open, Connect, Read, Disconnect, Close), then the interface gets closed/stopped, and the Stream In session stops receiving frames.

 

I believe this issue is happening with the NI Automotive Diagnostic Command Set (ADCS) as well.  It also seems to close the interface when a Diagnostic session is closed (see here and here).

 

Files with the .txt extension are among the most widely used. But when you want to export data from a graph to a .txt file, you cannot do it directly. It would be useful to have this option.

 

 

EXPORT.png

I posted this in the discussion forums here, but after forming my thoughts, I realize that it is really a suggestion for product improvement, so I'm cross posting here.


In short, I would like the ADCS toolkit to use Frame In/Out Queue Sessions instead of Frame In/Out Stream Sessions when running on XNET hardware (or at least make that an option).  This would enable the separate use of a Frame In Stream Session to log all of the CAN traffic on a port.

 

This was recently made possible for the ECU M&C toolkit in version 2.3, and I think the same concept should be extended to the ADCS toolkit.

 

A cursory test of ADCS 1.1.1 suggests that this may already be an undocumented feature of the toolkit, but there are some subtle bugs that may need to be worked out for this to be fully supported.

I know this is another Raspberry Pi idea, but hopefully the answer is simpler than some of the past requests (maybe as simple as "no"). I am wondering if it would be possible to run a simple LabVIEW executable on a Raspberry Pi with the sole purpose of viewing network-published shared variables. This could provide a low-cost UI terminal for distributed hardware. My hope is that the required drivers are minimal, in that only a network connection is required and no hardware drivers for NI products would be needed. Basically, it would be similar to the Data Dashboard app but would allow much more customization by the developer for software-based analysis and display.

Some software updates (via NI Update Service) are quite large; it would be nice to have an option to shut down the system when the downloads finish.

 

NI update sofware.png

I have some VME hardware that uses A16 addressing only, so I can communicate with it using VXI, but it does not support VISA at all.  After some time spent conversing with NI support, it appears that VXI has been abandoned, and all low-level VME register access through the NI VME-MXI-2 controller card must now be done through VISA.  I have been able to add VXIin.vi and VXIout.vi (from the old VXI libraries in NI-VXI 3.6) to the latest version of NI-VXI in LabVIEW 32-bit to get the hardware working through VXI communication.  However, these VXI VIs were 32-bit only and never updated to 64-bit, so I am stuck running LabVIEW 32-bit on my 64-bit OS.  Updating these VIs to 64-bit would be greatly appreciated.

 

Thanks,

Rich

I think it is very difficult to make a UI that runs on Windows and interacts with targets. Here are two suggestions to improve this:

 

1. We currently can't use the \c\ path format style in a file path control on Windows. It would be nice to allow the user to specify which OS syntax to use instead of assuming it should always be the local syntax.

2. The icing on the cake would be to have the File Path Control support a property node for an IP Address so when the user clicks on the browse button, it automatically browses the target (this is already an idea mentioned in the link below) and uses the syntax of the target. This becomes especially useful as we start to have targets that may have an alternative way of browsing files besides FTP. It would be a pain to figure out which SW is installed on the target and use the correct method to enumerate files/folders.

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Path-control-of-VI-under-real-time-target-should-browse-target-s/idi-p/1776212

 

These two features could be implemented by giving the File Path Control an enum property node called Syntax, which includes options like: Local, various OSes (Windows, Linux, Mac, etc.), or IP Address. If IP Address is specified, another property node called IP Address would be used to determine which target's OS syntax to use (if it's not specified or is invalid, we default to Local).

Given the increasing number of questions about this communication protocol, it is time to rewrite the MODBUS library. I also suggest adding it to the NI device drivers installer.

 

This could be the place to list the expected modifications. Some comments and bugs are already listed on the page linked above.

Hi Guys!

My idea is quite simple. What I would like is for DAQmx Read and Write to accept a DVR as input so that I can read/write data directly to and from memory. It would also be really appreciated if this could be applied to TDMS read/write (I know there is a feature like this in TDMS now, but it is only applicable to external DVRs).
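As a rough analogy of what I mean (in Python rather than LabVIEW, just to illustrate the concept; the device name is a placeholder), the nidaqmx stream-reader API already fills a caller-owned, preallocated buffer in place, which is essentially what a DVR input on DAQmx Read would give us:

```python
# Analogy only: read directly into a preallocated buffer instead of letting
# the read call allocate and copy a new array each time.
import numpy as np
import nidaqmx
from nidaqmx.stream_readers import AnalogMultiChannelReader

samples = 1000
buffer = np.zeros((2, samples), dtype=np.float64)  # allocated once, reused

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
    task.timing.cfg_samp_clk_timing(10_000, samps_per_chan=samples)
    reader = AnalogMultiChannelReader(task.in_stream)
    reader.read_many_sample(buffer, number_of_samples_per_channel=samples)
    # "buffer" now holds the data; no extra copy was made by the read call
```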

 

 DVR.png

 

Pros: Everything

Cons: None

 

🙂


Sincerely,

 

Andreas

 

 

 

I am trying to do data acquisition with an Arduino Mega board, but I am not sure how to do it in LabVIEW, because I must not use the Arduino toolkit; instead I have to take the text data from the Arduino board, convert it to numbers, and then separate each of the signals, since I am reading 6 of the Arduino inputs. I would appreciate your comments and help, thanks.
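The parsing step being described could be sketched like this (shown in Python with pyserial purely for illustration; in LabVIEW the equivalent would be a VISA Read followed by string-to-number conversion). It assumes the Arduino prints one line per sample with the 6 values separated by commas, and the port name is a placeholder:

```python
# Read one line of comma-separated text from the Arduino and convert it to
# a list of 6 numbers. Port name, baud rate, and line format are assumptions.
import serial

with serial.Serial('COM3', 9600, timeout=1) as port:
    line = port.readline().decode('ascii', errors='ignore').strip()
    # e.g. "512,487,3,1023,0,256" -> [512.0, 487.0, 3.0, 1023.0, 0.0, 256.0]
    values = [float(field) for field in line.split(',') if field]
    print(values)
```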

For Ethernet and GPIB communication I have the choice between the LabVIEW "native" drivers (residing in the "Instrument I/O" and "Data Communication" function palettes) and VISA. LabVIEW 6.1 was the last version that also supported "native" drivers for the serial interface. In LabVIEW 7.0 and later you are forced to use VISA.

 

Besides all the advantages of VISA, its biggest disadvantage is the HUGE communication overhead it produces. If you use an interface sniffer like PortMon, you will see that VISA communicates heavily with the interface chip (requesting its status, etc.). So sending a simple "Hello" over RS-232 takes not just four actions (configure port, open port, send "Hello", close port), but ten or more.
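For comparison, the minimal sequence I mean can be sketched like this (in Python with pyserial, just to illustrate; the port name and baud rate are placeholders):

```python
# The bare minimum: configure and open the port, send "Hello", close the port.
# No repeated status polling of the interface chip in between.
import serial

port = serial.Serial('COM1', baudrate=9600, timeout=1)  # configure + open
port.write(b'Hello')                                    # send the data
port.close()                                            # close the port
```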

 

As a consequence, VISA often locks up if you have heavy traffic on your serial interface (e.g. if you have to send data every 250 ms over the interface), and if VISA locks up, you have a serious problem...

 

So PLEASE, give us the native driver for serial interfaces back!

Taking ideas like the one seen here, I got this idea:

 

 

When working with "Path Constant", which works with a path too long, we observe look like this:

path2editado.PNG

It would be much nicer if we could double-click the Path Constant and see something like this:

path3 editado.PNG

The idea is this.

path idea labview editado.PNG

THIS IS A REPOST FROM THE MAX IDEA EXCHANGE BECAUSE WE DON'T ACTUALLY HAVE A DRIVER IDEA EXCHANGE AND I DON'T KNOW IF THE MAX IDEA EXCHANGE IS CORRECT

 

Before I start, I want to make clear that I am fully aware that my suggestion is probably linked to some crazy amount of work....  That being out of the way:

 

I often have to switch between LV versions and have on more than one occasion run into the problem that different versions of LV work with MUTUALLY EXCLUSIVE sets of drivers.  This means that I cannot (for example) have LabVIEW 7.1 and 2011 on the same machine if I need to be coding GPIB functionality over VISA, because there is no single VISA version which supports both 7.1 and 2011 (image below).

 

 

Of course these days we just fire up a VM with the appropriate drivers, but for much hardware (like PCI, serial, or GPIB) this doesn't work out too well.

 

Why can't we have some version-selection ability for hardware drivers?  Why can't I have VISA 4.0 and 5.1.1 installed in parallel and then select which version to use in my project definition?  I know that these drivers probably share some files at the OS level, so it clearly won't work for existing driver packages, but for future development it would be utterly magnificent to be able to define which version of a hardware driver (or even an LV toolkit like Vision) should be used in a project.

 

Shane.