Data Acquisition Idea Exchange


Hey all,

 

Just passing along an idea for some new hardware and software for PXI and LabVIEW. I think a touch-panel PC monitor would be an ideal output device for a PXI system: you could plug it into the back and eliminate the need for a mouse and keyboard while using the PXI chassis. This leads to my second idea, a front panel customization toolkit. This toolkit would let users easily customize the front panels of their VIs, much in the way a function palette works, so you could quickly adapt your front panel to suit the end user's needs.

It matters less to me right now because I am about to change jobs, but instead of a frame grabber that connects to cameras, we could use frame grabbers that acquire both high-definition and standard-definition video. We've found some that usually work but get a bit flaky when we use two simultaneously. Having robust LabVIEW drivers from NI instead of writing our own would save a lot of time.

Support for .NET 4.5

We do atomic physics experiments with everything run off of hardware time. Low noise electronics are fairly crucial to getting things to work properly.

 

It would be fantastic to have some very low noise analog output modules with >16-bit resolution. We currently use the cRIO platform and the NI 9263 analog output modules; however, these have poor noise performance. The best module I have seen from NI is the PXI-6733, but it would be great to have something with an output voltage noise density on the order of ~5–10 nV/√Hz over roughly 100 Hz to a few MHz. The Analog Devices AD5791 20-bit DAC seems like a good candidate for this.
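To put that spec in perspective, here is a rough, illustrative noise-budget calculation (my own numbers, not an NI spec): integrating a flat 10 nV/√Hz density over a ~1 MHz bandwidth gives about 10 µV RMS, roughly half of one LSB of a 20-bit DAC on a ±10 V span.

```python
import math

# Illustrative noise-budget sketch (assumed numbers, not an NI spec):
# integrated RMS noise = density * sqrt(bandwidth) for a flat spectrum.
density_v_per_rt_hz = 10e-9          # 10 nV/sqrt(Hz), upper end of the requested range
bandwidth_hz = 1e6 - 100             # ~100 Hz to ~1 MHz
rms_noise_v = density_v_per_rt_hz * math.sqrt(bandwidth_hz)

# Compare against one LSB of a 20-bit DAC (e.g. AD5791) on a +/-10 V span.
lsb_v = 20.0 / 2**20

print(f"Integrated noise : {rms_noise_v * 1e6:.1f} uV RMS")   # ~10.0 uV
print(f"20-bit LSB       : {lsb_v * 1e6:.1f} uV")             # ~19.1 uV
```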

 

Any thoughts?

Why doesn't the new NI MAX sort global virtual channels by name? It used to be very handy to have that option; however, after 2014 this very useful function was removed without any reason that I can think of. I am currently using NI MAX 15.0 and I cannot find anything in the menu for sorting my global virtual channels by name. It would have been easy to add a menu option to sort channels by name (or by some other key) instead of removing such a useful function.
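As a stopgap, the global virtual channels can at least be listed in name order programmatically. A minimal sketch using the nidaqmx Python API, assuming the `System.local().global_channels` collection behaves as documented in the versions I have used:

```python
import nidaqmx.system

# Sketch: list the MAX global virtual channels sorted by name.
system = nidaqmx.system.System.local()
channel_names = sorted(ch.name for ch in system.global_channels)

for name in channel_names:
    print(name)
```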

I configure DAQmx channels, tasks, and scales, as well as CAN messages and channels, in MAX. It would be nice if I could change the ordering of these elements after creation. It would also be nice to have an option to remove all configured channels (and tasks and scales) as well as CAN messages and channels when I want to load the configuration of another project; right now I have to go to every section and delete the configuration by hand.
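Until MAX offers a "remove all" option, something like the following can clear the persisted DAQmx items programmatically. This is only a sketch with the nidaqmx Python API; I am assuming the persisted items expose a `delete()` method as in the versions I have used, and it is destructive, so try it on a test machine first. CAN (XNET) messages and channels are not covered here.

```python
import nidaqmx.system

# DESTRUCTIVE sketch: remove all persisted DAQmx tasks, global channels and
# custom scales from MAX. Assumes the persisted items' delete() method in the
# nidaqmx Python API; verify against your installed version before running.
system = nidaqmx.system.System.local()

for task in list(system.tasks):
    task.delete()
for chan in list(system.global_channels):
    chan.delete()
for scale in list(system.scales):
    scale.delete()
```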

 

It would also be great if I could configure DAQmx variables in MAX that I can then read and write in LabVIEW too. For example, I have a lot of tasks that all use the same acquisition rate. If I have to change the rate, I have to change every task by hand; if I could use a variable, I would just change the variable. This would save a lot of work with large DAQmx configurations.
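Until something like DAQmx variables exists in MAX, the closest workaround I know is to move the rate out of the persisted tasks and into code, so it is defined only once. A minimal sketch with the nidaqmx Python API; the task names and channel strings are placeholders:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# One shared "variable" for the acquisition rate, applied to several tasks.
SAMPLE_RATE_HZ = 10_000  # change it here and every task follows

# Placeholder channel lists; substitute your own physical channels.
task_channels = {
    "temperatures": "Dev1/ai0:3",
    "pressures":    "Dev2/ai0:7",
}

tasks = []
for name, channels in task_channels.items():
    task = nidaqmx.Task(name)
    task.ai_channels.add_ai_voltage_chan(channels)
    task.timing.cfg_samp_clk_timing(SAMPLE_RATE_HZ,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    tasks.append(task)
```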

Hello everybody,

 

I always use MAX to configure my DAQ cards so I don't have to write the same code every time just to create a DAQ task. MAX is part of my LabVIEW tool base, and every project where I use a DAQ card has a MAX config file.

 

Everything is fine until I have to add some third-party hardware! Then I either have to use the driver provided with the hardware or write my own driver. I cannot use MAX to configure it, so I have to write VIs for online configuration and further VIs to load and save the hardware's configuration. It would be much simpler if the supplier of the third-party hardware, or I, could write plug-ins for MAX (and DAQmx) to incorporate the hardware into LabVIEW (and of course other software that uses MAX). Then I could use the same API for nearly all of my hardware. One example from a competitor for this kind of hardware integration is Ipemotion from Ipetronik; there is even a plug-in for DAQmx in Ipemotion!
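To make the wish a bit more concrete, here is a purely hypothetical sketch (nothing like this exists in MAX or DAQmx today) of the kind of plug-in contract a third-party vendor could implement so a MAX-like tool can discover, configure, and persist their hardware:

```python
from abc import ABC, abstractmethod

class DaqPlugin(ABC):
    """Hypothetical plug-in contract a third-party vendor could implement
    so that a MAX-like configuration tool can handle their device."""

    @abstractmethod
    def discover(self) -> list[str]:
        """Return the resource names of attached devices."""

    @abstractmethod
    def get_config(self, resource: str) -> dict:
        """Return the current configuration as plain key/value data."""

    @abstractmethod
    def apply_config(self, resource: str, config: dict) -> None:
        """Push an edited configuration back to the hardware."""

    @abstractmethod
    def save(self, resource: str, path: str) -> None:
        """Persist the configuration so projects can reload it later."""
```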

 

Regards,

Marc

The NI-RIO driver package, for example, is 4 GB in its most recent version, which is comparable to the size of a common operating system. In my opinion that is too much if someone only needs a specific driver for a specific piece of NI hardware. I therefore suggest splitting the driver packages into smaller, more manageable pieces (for example, 200 MB max).

 

Like a couple of other LabVIEW users, I would like to be able to create virtual channels (or even global tasks) for an internal channel of a DAQ or SCXI device.

The implementation should also allow internal channels to be configured and used from the DAQ Assistant (though I personally prefer not to use the DAQ Assistant).
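For reference, DAQmx already exposes internal channels programmatically (they appear as physical channels whose names start with an underscore); the wish is for MAX and the DAQ Assistant to expose the same thing. A hedged sketch with the nidaqmx Python API, where the internal channel name is only an example and the available names vary by device family:

```python
import nidaqmx

# Sketch: read an internal channel directly in code. "_ao0_vs_aognd" is only
# an example name; check your device's documentation for its internal channels.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
    value = task.read()
    print(value)
```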

 

Check this post here; the feature request there is along the same lines.

Occasionally, I need to create global virtual channels that are used to acquire AC voltage signals. Currently, I just acquire the instantaneous values and take the RMS average in LabVIEW. However, this does not let you calibrate the global virtual channel in MAX (because the acquisition is the instantaneous DC voltage).

 

It would be nice if custom scales allowed user-customizable LabVIEW plug-ins, such as a point-by-point RMS average, so that I could calibrate an AC voltage channel in MAX.
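For context, this is the kind of computation such a plug-in would run. A minimal point-by-point RMS over a sliding window in plain Python (the window length is an arbitrary example):

```python
from collections import deque
import math

class PointByPointRMS:
    """Running RMS over the last `window` samples (example window length)."""

    def __init__(self, window: int = 500):
        self._buf = deque(maxlen=window)

    def update(self, sample: float) -> float:
        self._buf.append(sample)
        return math.sqrt(sum(x * x for x in self._buf) / len(self._buf))

# Usage: feed each instantaneous AC voltage reading to update() and use the
# returned RMS value wherever the scaled/calibrated channel value is needed.
rms = PointByPointRMS(window=500)
```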

I'm working with some B-class devices and can't work on the project at home because I don't have the hardware and can't simulate it. Could you add the NI USB-6008 and USB-6009 (the whole class would be nice) to the MAX 'Create Simulated Device' list?

 

thanks

 

frank9

 

I could program a whole panel to let the user modify the setup parameters for a DAQmx task, but decided that it's easier to simply stop the task, launch MAX with LaunchExecutableEx, and let the user play with the task settings there. Unfortunately there seems to be no way to tell MAX, e.g. through command-line parameters, to open and display a particular DAQmx task on startup. Might I suggest some facility for doing this, possibly through simple command-line parameters or even through an ActiveX utility?
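For what it's worth, launching MAX itself from an application is straightforward; the missing piece is exactly what this idea asks for, a documented argument to preselect a task. A small sketch, where the install path is the typical Windows default and may differ on your machine:

```python
import subprocess

# Typical default install path on 64-bit Windows; adjust if MAX lives elsewhere.
NIMAX_EXE = r"C:\Program Files (x86)\National Instruments\MAX\NIMax.exe"

# Launch MAX. As the idea points out, there is (to my knowledge) no documented
# command-line switch to open a specific DAQmx task, which is exactly the
# feature being requested here.
subprocess.Popen([NIMAX_EXE])
```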

Have MAX maintain a database of your transducers' calibration due dates that can be monitored in LabVIEW. I currently maintain a lot of transducers that are used throughout my programs and make them selectable through the custom scale input. Unfortunately I cannot do a quick check from the program to see whether a transducer is in or out of calibration. It would be nice to have that capability.
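Until MAX tracks this itself, one workaround is to keep the due dates in a small file next to the custom scales and check it at start-up. A minimal sketch in plain Python with an assumed CSV layout of `transducer,due_date`:

```python
import csv
from datetime import date

# Sketch: check transducer calibration due dates kept in a simple CSV file
# (assumed layout: transducer name, ISO due date), since MAX has no such database.
def overdue_transducers(path: str = "cal_due_dates.csv") -> list[str]:
    overdue = []
    with open(path, newline="") as f:
        for name, due in csv.reader(f):
            if date.fromisoformat(due) < date.today():
                overdue.append(name)
    return overdue

print(overdue_transducers())
```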

For each device, MAX uses a unique device number.

This is no problem with fixed measurement equipment.

With USB devices this may become a problem.

At a school, a student will work with different combinations of computer and device.

 

If the student wants to use his program with a different device, he will get an error.

This happens even if the device is of the same type but has a different serial number.

New Picture (1).png

To solve this, the student needs to open all DAQ routines and change the device number.

Or he needs to convert the DAQ Assistant routine into a VI and replace the constant device number with a routine such as the one shown in DAQmx device to use.vi.

 DAQmx device to use.png

This same problem occurs when using NI-IMAQdx devices.

 

Solution:

Make it possible to select a device by type instead of by device number.

New Picture.png
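A workaround that already works today is to resolve the device name at run time from the product type instead of hard-coding "Dev1". A sketch using the nidaqmx Python API; the product-type string is an example and the exact string reported for your device should be checked:

```python
import nidaqmx.system

def first_device_of_type(product_type: str = "USB-6008") -> str:
    """Return the MAX name (e.g. 'Dev3') of the first device of the given type."""
    for dev in nidaqmx.system.System.local().devices:
        if dev.product_type == product_type:
            return dev.name
    raise LookupError(f"No {product_type} found in MAX")

# Build the physical channel string from whatever name the device got on this PC.
physical_channel = f"{first_device_of_type()}/ai0"
```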


I recently had a customer create a global virtual channel in Measurement and Automation Explorer (MAX).  They then set the maximum and minimum values for the input range of their signal. 

 

GlobalVirtualChannel.jpg

 

 

 

 

minmax.jpg

 

My customer wanted to access the +2 and -2 values entered above and display them in LabVIEW.  However, the property nodes for global virtual channels only access the limits of the board.  For example, the customer's board may only be able to handle voltages between +/-10 volts; no matter which property node we chose, all that was returned was the +/-10 volt range.  Could we please give customers access to this information?
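For reference, this is roughly the query that was attempted, shown here with the nidaqmx Python API instead of LabVIEW property nodes. "MyGlobalChannel" is a placeholder, and the API details (add_global_channels, name indexing) are my best reading of the Python package and should be checked against your version. Per the behaviour described above, the properties come back with the device limits rather than the values entered in MAX, which is exactly what this idea asks to change:

```python
import nidaqmx
import nidaqmx.system

# "MyGlobalChannel" is a placeholder for a global virtual channel defined in MAX.
system = nidaqmx.system.System.local()

with nidaqmx.Task() as task:
    task.add_global_channels([system.global_channels["MyGlobalChannel"]])
    chan = task.ai_channels[0]
    # Per the post, these report the board's input limits (e.g. +/-10 V),
    # not the +2 / -2 values that were entered for the channel in MAX.
    print(chan.ai_max, chan.ai_min)
```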

Problem:

  1. Many applications need multiple DAQ chassis synced across 100s of meters. Ethernet is not used due to its indeterminism.
  2. While NI's TimeSync uses special hardware (IEEE 1588), it seems like NI could build a way to do time syncing into the Ethernet drivers without any additional hardware modules (cDAQ, cRIO, PXIe, etc.). The customized NI-Ethernet driver would do the master-slave timing for you; it would be built into the platform. The key may be to use the lower boundary of the histogram distribution of loop times: not the average loop time, but the bounded minimum as a special loop-time statistic. See the image at the bottom.
  3. Ethernet time to send a message and receive an answer is not deterministic. But, if all the Ethernet chassis are on a dedicated subnet with no other traffic, then there should be some deterministic MINIMUM time for one chassis to send a packet to another chassis.

Possible solution. Configuration: suppose you had five 8-slot Ethernet cDAQ chassis. Start by making a simple configuration work first, then extrapolate to more complicated network configurations; therefore put all the chassis on the same, possibly dedicated, subnet. Each cDAQ is hundreds of feet from the others. You want to sample data at 1000 samples per second on all chassis and either lock all the sample clocks, adjust the clocks on the fly, or know how much the sample clocks differ and adjust the timestamps after the data is transferred.

  1. LabVIEW tells each slave chassis that it is to be a slave to a particular master cDAQ chassis (gives the IP address and MAC address).
  2. LabVIEW tells one of the cDAQ chassis to be the master and it gives the IP address (and MAC address) of all the slaves to that master.
  3. The local Ethernet driver on the chassis then handles all further syncing without any more intervention from LabVIEW. Avoids Windows’ lack of determinism.
  4. The master chassis sends an Ethernet packet to each slave (one at a time, not broadcast). The slave's Ethernet driver stores the small packet (with a time stamp of when received) and immediately sends a response packet that includes an index to the packet received (and the timestamp when the slave received it). The master stores the response packet and immediately sends a response to the slave response. This last message back to the slave may not be necessary.
  5. The local Ethernet driver for each cDAQ has stored all 1000 loop times and their associated timestamps.
  6. Now each master-slave combination has a timestamp of the other's clock, offset by the Ethernet delay. But this Ethernet delay is not constant (it is indeterminate); if it were constant, syncing would be easy.
  7. However, one characteristic of the loop time should be deterministic (very repeatable): on a local subnet the minimum loop time should be very consistent. After these loop messages and timestamps are sent 1000 times, the minimum time should be very repeatable. Example: suppose we only want 10 µs timing (one tenth of a sample period). After sending 1000 time-stamped looped messages, we find that the minimum loop time falls between 875 µs and 885 µs, with 127 loop times falling into this minimum range (like the bottom "bucket" in a histogram plot). If we were to plot the time distribution, we would notice an obvious WALL at the minimum times; we would not have a Gaussian distribution. A second peak in the distribution at the minimum would be another good indication that this lower value is deterministic.
  8. Now the master and slave chassis communicate to make sure they have the same minimum loop times on the same message packet loops. The ones that agree are the ones used to determine the timestamp differences between the master and the slaves. The master then sends to each slave the offsets to use to get the clocks synchronized to one tenth of a sample time.

This continues in the background to detect clock drift. Obviously, after the data acquisition starts, network traffic will increase, and that will reduce the number of minimum packet loop times below 127 out of 1000. But we should be able to determine a minimum number of minimums that gives reliable results, especially since we know beforehand how much traffic we will be adding to the network once the acquisition starts. We may find that an Ethernet bus utilization below 25% gives reliable results: reliable because we consistently get loop times that are never less than some minimum value (within 10 µs).
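To make step 7 above concrete, here is a small sketch in plain Python (no NI APIs) of how the "wall" at the minimum loop time could be turned into a clock-offset estimate; the 10 µs tolerance is just the value used in the example above, and the half-RTT assumption is mine:

```python
import statistics

def estimate_offset(samples, bucket_us=10):
    """samples: list of (t_master_send, t_slave_rx, t_master_rx) in microseconds,
    each timestamp taken with the sender's/receiver's own local clock.
    Returns an offset estimate (slave clock minus master clock) built only from
    the exchanges whose round-trip time sits in the minimum 'wall' bucket."""
    rtts = [rx_m - tx_m for tx_m, _, rx_m in samples]
    wall = min(rtts)
    # Keep only exchanges within one bucket (e.g. 10 us) of the minimum RTT.
    offsets = [
        t_slave - (tx_m + rtt / 2.0)          # assume the one-way delay is rtt/2
        for (tx_m, t_slave, rx_m), rtt in zip(samples, rtts)
        if rtt - wall <= bucket_us
    ]
    return statistics.median(offsets), len(offsets)

# offset_us, n_used = estimate_offset(loop_samples)  # e.g. 127 of 1000 loops used
```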

 

Remember, a histogram of the loop times should show a second peak at this minimum loop time. This second peak is an indication of whether the idea will work. The "tightness" of this second peak indicates the timestamp accuracy that is possible; it should have a much smaller standard deviation (or even a WALL at the minimum - see image). Since this standard deviation is so much smaller than that of the overall loop-time average, it is a far better statistic to use to sync the cDAQ chassis. If it is "tight" enough, then maybe additional time syncing can be done (more accuracy, tied to sample clocks, triggers, etc.).

 

For example, now that the clocks are synced to within 10 µs, the software-initiated start trigger could be sent as a start time, not as a start trigger. In other words, the master cDAQ Ethernet driver would get a start trigger from LabVIEW and send all the slaves a packet telling them to start sampling at a specific time (computed as, say, 75 milliseconds from now).

 

Ethernet_Loop_Time_Distribution_Shows_Determinism_at_the-Wall.jpg

 

I mentioned parts of this idea to Chris when I called in today for support (1613269) on the cDAQ Ethernet and LabVIEW 2010 TimeSync. Chris indicated that this idea is not being done on any of your hardware platforms yet. By the way, Chris was very knowledgeable and I am impressed (as usual) with your level of support and talent of your team members.

The DAQ Assistant was presumably created to simplify data acquisition.  The idea seems to be to put all of the needed pieces in one place, so that the low-level 'traditional' DAQ VI functions are not needed.

 

Consider the following simple vi:

 

Demo VI

 

This could be as simple as one analog input channel.

 

The program will compile into an .exe and work just fine, as long as you don't use one of the features of the DAQ Assistant:  Custom scales.

 

Custom scales are not stored with the VI or project, but in a system file that does not automatically get included in an .exe build.  The .exe will work fine on the PC that built it, but not when it is loaded onto a different PC.

 

There is a method that allows the user to port the custom scales to another PC, but it is not automatic.

 

http://digital.ni.com/public.nsf/allkb/12288DEB3C6A185B862572A70043C353

 

 

The fundamental problem is that the DAQ Assistant is intended to make life simple and give you everything you need to make a program.  Custom scales are included in the DAQ Assistant so that the programmer does not need to manually create scaling in their VI.  But what good does that do if they are not included in the .exe build, and there is no obvious clue that this requires extra work or what that work is?

 

The .exe build process needs to be upgraded to automatically include custom scales, and possibly other MAX settings that are essential to the operation of a compiled program.  It does not matter whether the build process works out and includes only the specific scales or settings used by the particular program / VI, or whether it just takes all the settings.

 

These are critical pieces for making the final compiled program run on another machine.  The user should not have to somehow know that these pieces are separate but need to be included, and then take extra steps to go out and select them for the build.  That is totally counterintuitive to the simplicity intended by the DAQ Assistant.
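One way to sidestep the problem entirely today is to create the custom scale from code at application start-up, so the built .exe does not depend on the MAX system file of the build machine at all. A sketch with the nidaqmx Python API; the scale name, coefficients, and channel are placeholders, and I am assuming the Scale.create_lin_scale factory as found in recent nidaqmx versions:

```python
import nidaqmx
from nidaqmx.scale import Scale
from nidaqmx.constants import VoltageUnits

# Sketch: define the custom scale in code so a built application does not
# depend on scales stored in the MAX configuration of the build machine.
# Names and coefficients below are placeholders.
Scale.create_lin_scale("PressureScale", slope=250.0, y_intercept=0.0,
                       scaled_units="psi")

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan(
        "Dev1/ai0",
        units=VoltageUnits.FROM_CUSTOM_SCALE,
        custom_scale_name="PressureScale")
    print(task.read())
```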

Background question here:

http://forums.ni.com/t5/Multifunction-DAQ/Channel-Calibration-information/m-p/1295908

 

Channel calibration in MAX is too limited to be very useful. According to NI AE, from MAX you can only create a table for calibration, not a polynomial fit. I assume intermediate values are linearly interpolated, but that wasn't specified. Programmatic calibration to enable polynomial cal as described in the above listed topic makes traceability problematic. 

 

 

I propose that the channel calibration capabilities in MAX be amended to allow the user to select the calibration mapping: either TABLE or POLYNOMIAL FIT.

Of course, the user should be able to select the polynomial order, and the R-value should be clearly indicated.

 

All of these properties should be included in any report generated from MAX. 
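To illustrate what the polynomial option would buy over a table, here is what the fit itself looks like in plain numpy; the calibration points are made-up example values:

```python
import numpy as np

# Made-up calibration points: raw device readings vs. reference-standard values.
raw = np.array([0.02, 1.01, 2.03, 3.02, 4.05, 5.01])
ref = np.array([0.00, 1.00, 2.00, 3.00, 4.00, 5.00])

order = 2                                   # user-selectable polynomial order
coeffs = np.polyfit(raw, ref, order)        # forward mapping: raw -> calibrated
calibrated = np.polyval(coeffs, raw)

# R^2 of the fit -- the "R-value" that should be reported alongside the scale.
ss_res = np.sum((ref - calibrated) ** 2)
ss_tot = np.sum((ref - ref.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(coeffs, r_squared)
```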

 

 

 

 

There is a variety of connection possibilities (link below), but I really miss one for high channel counts, like a SubD25.

http://sine.ni.com/nips/cds/view/p/lang/de/nid/1721 

 

Right now lossy and lossless compression can be achieved as presented here: Data Compaction for High-Speed Streaming to Disk where AI.RawDataCompressionType and AI.LossyLSBRemoval.CompressedSampleSize are used (see figure below).

lossy and lossless compression

In this approach the raw data are stored and additional header info has to be added by hand. The idea is to implement and optimize this compression inside DAQmx (in the DAQmx Configure Logging VI). This would allow high-speed streaming while saving disk space for higher sampling rates or long-term measurements.
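For reference, TDMS logging is already configurable on the task today; the missing piece is having the compression applied inside that logging path. A sketch of the current starting point with the nidaqmx Python API (the file path is a placeholder; the compression attributes named above would be set on the AI channel, but I have left them as comments rather than guess at the exact Python property names):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(1_000_000,
                                    sample_mode=AcquisitionType.CONTINUOUS)

    # Stream straight to TDMS; today this logs uncompressed raw data.
    # The idea is for the AI.RawDataCompressionType / AI.LossyLSBRemoval
    # attributes (set on the channel) to feed into this logging path so the
    # TDMS file itself ends up compressed.
    task.in_stream.configure_logging(
        "c:/data/stream.tdms",                     # placeholder path
        logging_mode=LoggingMode.LOG,
        operation=LoggingOperation.CREATE_OR_REPLACE)

    task.start()
```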

 

Afterwards, implementing it in the TDMS API could help to read compressed raw data directly without additional operations in LabVIEW. This would make it possible to work on TDMS files in Excel or MATLAB using nilibdds.dll.

 

The issue is discussed a little here: Why the TDMS file is larger than it should be.

 

What do you think about it?

 

Lukasz Kocewiak