Data Acquisition Idea Exchange


Hi Idea Exchange team,

 

The VirtualBench app looks great since the last update. By the way, it would be great to add features that boxed scopes already have, such as:

  • Channel scale customization: currently we can only use the "10x" setting; it would be nice to also be able to apply a custom scaling (equation and unit).
  • Native phase-shift measurement: currently we can use cursors to retrieve this value, but it would be great to have it displayed directly.
  • Update mode: it would also be great to be able to switch between Strip, Scope, and Sweep modes, as in the LabVIEW waveform chart.

Any suggestions welcome!

 

Mathieu,

 

I've had this feature idea passed on to me. The person sees this popup every time they wake up their computer:

 

capi1.PNG

They were confused about how to prevent it from showing again; they did not know they had to scroll:

 

cap2.PNG

 

Maybe make this easier to discover? 

Hi, it would be useful if DAQmx had a property node to read the device pinout.
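
For context, a minimal sketch of what the Python nidaqmx API already exposes (the device name "Dev1" is just a placeholder): it can list a device's terminals and physical channels, but there is no property that maps them to connector pins, which is what this idea asks for.

    import nidaqmx.system

    # Placeholder device name; replace with the name MAX shows for your hardware.
    device = nidaqmx.system.Device("Dev1")
    print(device.product_type)

    # What is available today: terminal and physical-channel names, but no
    # terminal-to-pin mapping (the requested "pinout" property).
    for terminal in device.terminals:
        print(terminal)
    for chan in device.ai_physical_chans:
        print(chan.name)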

Idea:

We've come across a few use cases where it would be nice to pull samples from the DAQ buffer based on position in the buffer instead of sample number. This gets a little hard to describe, but an NI applications engineer referred to it as absolute addressing without regard to FIFO order.

 

In its simplest form we could use a read operation that just pulls from the beginning of the buffer as it is (probably?) in memory, maybe using a RelativeTo option of "Start of Buffer", with no offset.

 

The thought is that sometimes a properly set up buffer can contain exactly the data we need, so it'd be nice and clean to just get a snapshot of the buffer.

 

Use cases:

Our use cases involve continuously and cyclically sending AO and sampling AI in tasks that share a time base, ensuring that every n samples will be the beginning of a new cycle. A buffer of that same size n will therefore be circularly updated like a waveform chart in sweep mode.

 

In other words, the sample at the first position in the buffer will always be the beginning of the cycle, no matter how many times the sampling has updated the buffer.

 

If we could take a snapshot of the buffer at any moment, we'd have the latest readings made at every point in the cycle.

 

Alternatives:

The idea is that the buffer at all times has exactly the samples we need in the form we need them. What's lacking in existing functionality?

 

With RelativeTo First Sample, we don't know exactly what samples are in the buffer at any moment. We can query Total Samples and do the math to figure out what samples the buffer contained, but while we're doing that math sampling continues, so there's a chance our calculation will be stale by the time we finish and we'll get a read error.

 

RelativeTo Most Recent Sample can return an entire cycle's worth of samples, but they'll probably be out of phase: the sample beginning the cycle is likely to be somewhere in the middle of the FIFO order.

 

RelativeTo Read Position requires that we constantly read the buffer, which is a hassle if we only want to observe a cycle occasionally. It kind of means we'd be duplicating the buffer, too.

 

Best alternative:

In talking with engineers and on the forums, it sounds like the best option for us is to use RelativeTo First Sample and Total Samples to calculate the sample number at the beginning of a cycle, and then make the buffer many cycles long to all but guarantee that the sample will still be there by the time we request it. A sketch of this workaround is shown below.
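
For illustration, here is a rough sketch of that workaround using the Python nidaqmx API; it assumes the Read properties are exposed as shown (RelativeTo, Offset, Total Samples), and the device name, rate, and cycle length are placeholders.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, ReadRelativeTo

    CYCLE = 1000       # samples per cycle (n)
    N_CYCLES = 20      # keep the buffer many cycles long, per the advice above

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        task.timing.cfg_samp_clk_timing(
            rate=1000.0,
            sample_mode=AcquisitionType.CONTINUOUS,
            samps_per_chan=CYCLE * N_CYCLES,
        )
        # Read by absolute sample number rather than from the read position.
        task.in_stream.relative_to = ReadRelativeTo.FIRST_SAMPLE
        task.start()

        # ... later, snapshot the most recently *completed* cycle:
        total = task.in_stream.total_samp_per_chan_acquired
        start_of_cycle = max(total // CYCLE - 1, 0) * CYCLE
        task.in_stream.offset = start_of_cycle
        data = task.read(number_of_samples_per_channel=CYCLE)

As noted above, the oversized buffer only reduces (not eliminates) the chance that the requested samples have already been overwritten by the time the read executes.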

 

Forum post: http://forums.ni.com/t5/forums/v3_1/forumtopicpage/board-id/250/page/1/thread-id/91133

NI support Reference# 2407745

Since the drivers support the basic communication for the GPIB-USB-HS+ devices, the code should be ported to support the analyzer functionality as well. Create the driver API to support the analyzer and then make that available to LabVIEW. Create cross-platform LabVIEW routines for the analyzer functionality so it is available on all platforms.

Support for .NET 4.5

We do atomic physics experiments with everything run off of hardware time. Low noise electronics are fairly crucial to getting things to work properly.

 

It would be fantastic to have some very low noise analog output modules with >16-bit resolution. We currently use the cRIO platform and the NI 9263 analog output modules. However, these have poor noise performance. The best module I have seen from NI is the PXI-6733, but it would be great to have something with an output voltage noise density on the order of ~5-10 nV/√Hz in the range of ~100 Hz to a few MHz. The Analog Devices AD5791 20-bit DAC seems like a good candidate for this.

 

Any thoughts?

Why does the new NI MAX not sort global virtual channels by name? It used to be very handy to have that option; however, after 2014 this very useful function was removed without any apparent reason (at least I cannot think of one). I am currently using NI MAX 15.0 and I could not find anything in the menu for sorting my global virtual channels by name. A menu option to sort channels by name (or another key) could easily have been kept instead of removing that useful function.

The NI-RIO driver package, for example, is 4 GB in its most recent version, which is comparable to the size of a common operating system. In my opinion that is too much if someone needs only a specific driver for a specific piece of NI hardware. I therefore suggest splitting the driver packages into smaller, more manageable pieces (e.g., 200 MB max).

 

Several other LabVIEW users, including me, wish to have this feature available: the ability to create virtual channels (or even global tasks) for an internal channel of a DAQ or SCXI device.

The implementation should also allow internal channels to be configured and used from the DAQ Assistant (though I personally don't prefer using the DAQ Assistant).

 

Check this post; the feature wish there is along the same lines.
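
For reference, internal channels can already be addressed by their physical-channel names in a programmatic task; the sketch below uses the Python nidaqmx API, and "_ao0_vs_aognd" is only an example (available internal channel names vary by device). The missing piece this idea asks for is being able to do the same thing for a global virtual channel or task in MAX.

    import nidaqmx

    # Internal channels start with an underscore; the exact names depend on the device.
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
        print(task.read())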

Occasionally, I need to create global virtual channels that are used to acquire AC voltage signals. Currently, I just acquire the instantaneous values and take the RMS average in LabVIEW. However, this does not let you calibrate the global virtual channel in MAX (because the acquisition is the instantaneous DC voltage).

 

It would be nice if custom scales allowed user-customizable LabVIEW plug-ins, such as a point-by-point RMS average, so that I could calibrate an AC voltage channel in MAX.
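
To make the request concrete, here is a minimal sketch (Python shown; channel name, rate, and sample count are placeholders) of the computation such an RMS plug-in would perform on the acquired instantaneous samples.

    import math
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        task.timing.cfg_samp_clk_timing(
            rate=10000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=1000)
        samples = task.read(number_of_samples_per_channel=1000)

    # The scaling the requested plug-in would apply: RMS of the acquired window.
    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    print(f"AC RMS: {rms:.4f} V")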

I'm working with some B-class devices and can't work on the project at home because I don't have the hardware and can't simulate it. Can you add the NI USB-6008 and USB-6009 (the whole class would be nice) to the MAX 'Create Simulated Device' list?

 

thanks

 

frank9

 

I could program a whole panel to let the user modify the setup parameters for a DAQmx task, but decided that it's easier to simply stop the task, launch MAX with LaunchExecutableEx, and let the user adjust the task settings there. Unfortunately, there seems to be no way to tell MAX, e.g. through command-line parameters, to open up and display a particular DAQmx task on startup. Might I suggest some facility for doing this, possibly through simple command-line parameters or even an ActiveX utility?
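
To illustrate, a sketch of the current situation and the requested behavior (Python shown; the MAX install path is the typical default, and the "/task" switch does not exist today, it is exactly the kind of parameter being proposed):

    import subprocess

    # Typical default install location; adjust for your system.
    NIMAX = r"C:\Program Files (x86)\National Instruments\MAX\NIMax.exe"

    # Today: MAX can only be launched with no task-related arguments.
    subprocess.Popen([NIMAX])

    # Proposed (hypothetical): open MAX directly on a named DAQmx task, e.g.
    # subprocess.Popen([NIMAX, "/task", "MyDAQmxTask"])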

Have MAX maintain a database of your transducers' calibration due dates that can be monitored in LabVIEW. I currently maintain a lot of transducers that are used throughout my programs and make them selectable through the custom scale input. Unfortunately, I cannot do a quick check within the program to tell whether a transducer is in or out of calibration. It would be nice to have that capability.

For each device, MAX uses a unique device number.

This is no problem with fixed measurement equipment.

With USB devices this may become a problem.

At a school, a student will work with different combinations of computer and device.

 

If the student wants to use his program with a different device, he will get an error.

Even if the device is of the same type but has a different serial number.

New Picture (1).png

To solve this, the student needs to open all DAQ routines and change the device number.

Or he needs to convert the DAQ Assistant routine into a VI and replace the constant device number with a routine, as shown in DAQmx device to use.vi.

 DAQmx device to use.png

This same problem occurs when using NI-IMAQdx devices.

 

Solution:

Make it possible to select a device by type instead of a device by number.

New Picture.png
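
Until something like this exists in MAX and the DAQ Assistant, selection by type can be done in software; a sketch with the Python nidaqmx API is below (the product-type string is only an example).

    import nidaqmx.system

    def first_device_of_type(product_type="USB-6009"):
        """Return the MAX name (e.g. "Dev3") of the first attached device of this type."""
        for dev in nidaqmx.system.System.local().devices:
            if dev.product_type == product_type:
                return dev.name
        raise LookupError(f"No {product_type} device found")

    dev_name = first_device_of_type()
    # Build channel strings from the discovered name, e.g. f"{dev_name}/ai0".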


I recently had a customer create a global virtual channel in Measurement and Automation Explorer (MAX).  They then set the maximum and minimum values for the input range of their signal. 

 

GlobalVirtualChannel.jpg

 

 

 

 

minmax.jpg

 

My customer wanted to access the +2 and -2 values entered above and display them in LabVIEW.  However, the property nodes for global virtual channels only return the limits of the board.  For example, the customer's board may only be able to handle voltages between +/-10 Volts; no matter which property node we chose, all that was returned was the +/-10 Volt range.  Could we please give customers access to this information?
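
For reference, a sketch of the properties involved (Python nidaqmx shown, mirroring the DAQmx AI.Max/AI.Min and AI.Rng.High/AI.Rng.Low channel properties; channel name and limits are placeholders). Whether the configured +/-2 values or the +/-10 device range comes back for a global virtual channel is exactly the gap described above.

    import nidaqmx

    with nidaqmx.Task() as task:
        chan = task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=-2.0, max_val=2.0)
        print(chan.ai_min, chan.ai_max)            # configured limits (-2, +2 here)
        print(chan.ai_rng_low, chan.ai_rng_high)   # hardware input range actually used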

Problem:

  1. Many applications need multiple DAQ chassis synced across hundreds of meters. Ethernet is not used for this because of its indeterminism.
  2. While NI-TimeSync uses special hardware (IEEE 1588), it seems NI could build a way to do time syncing into the Ethernet drivers without any other hardware modules (cDAQ, cRIO, PXIe, etc.). The customized NI Ethernet driver would do the master-slave timing for you; it would be built into the platform. The key may be to use the lower boundary of the histogram distribution of loop times: not the average loop time, but the bounded minimum as a special loop-time statistic. See the image at the bottom.
  3. Ethernet time to send a message and receive an answer is not deterministic. But, if all the Ethernet chassis are on a dedicated subnet with no other traffic, then there should be some deterministic MINIMUM time for one chassis to send a packet to another chassis.

Possible solution. Configuration: suppose you had five Ethernet cDAQ 8-slot chassis. Start by making a simple configuration work first, then extrapolate to more complicated network configurations. So put all the chassis on the same subnet, perhaps a dedicated one. Each cDAQ is hundreds of feet from the others. You want to sample data at 1000 samples per second on all chassis and either lock all the sample clocks, adjust the clocks on the fly, or know how much the sample clocks differ and adjust the times after the data is transferred.

  1. LabVIEW tells each slave chassis that it is to be a slave to a particular master cDAQ chassis (gives the IP address and MAC address).
  2. LabVIEW tells one of the cDAQ chassis to be the master and it gives the IP address (and MAC address) of all the slaves to that master.
  3. The local Ethernet driver on each chassis then handles all further syncing without any more intervention from LabVIEW, avoiding Windows' lack of determinism.
  4. The master chassis sends an Ethernet packet to each slave (one at a time, not broadcast). The slave's Ethernet driver stores the small packet (with a time stamp of when received) and immediately sends a response packet that includes an index to the packet received (and the timestamp when the slave received it). The master stores the response packet and immediately sends a response to the slave response. This last message back to the slave may not be necessary.
  5. The local Ethernet driver for each cDAQ has stored all 1000 loop times and their associated timestamps.
  6. Now each master slave combination has a timestamp of the other's clock with a time offset due to the Ethernet delay. But this Ethernet delay is not a constant (it is indeterminate). If it were constant, then syncing would be easy. BUT
  7. One characteristic of the loop time should be deterministic (very repeatable). On a local subnet the minimum loop time should be very consistent. After these loop messages and time stamps are sent 1000 times, the minimum time should be very repeatable. Example: Suppose we only want 10 us timing (one tenth of a sample period). After sending 1000 time-stamped looped messages, we find that the minimum loop time falls between 875 us and 885 us. We have 127 loop times that fall into this minimum range (like the bottom “bucket” in a histogram plot). If we were to plot the time distribution, we would notice an obvious WALL at the minimum times; we would not have a Gaussian distribution. This 2nd peak in the distribution at the minimum would be another good indication that this lower value is deterministic.
  8. Now the master and slave chassis communicate to make sure they have the same minimum loop times on the same message packet loops. The ones that agree are the ones used to determine the timestamp differences between the master and the slaves. The master then sends to each slave the offsets to use to get the clocks synchronized to one tenth of a sample time.

This continues to go on (in the background) to detect clock drift. Obviously after the data acquisition starts the network traffic will increase, and that will cause the number of minimum packet loop times to be less than 127 out of 1000. But we should be able to determine a minimum number of minimums that will give reliable results, especially since we know beforehand the amount of traffic we will be adding to the network once the data acquisition starts. We may find out that an Ethernet bus utilization of less than 25% will give reliable results. Reliable because we consistently get loop times that are never less than some minimum value (within 10 us).
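
A toy sketch of the arithmetic described above (plain Python, no NI hardware involved): keep only the exchanges whose round-trip time sits against the minimum "wall", then estimate the slave clock offset from those, assuming the path delay is symmetric for that minimum subset.

    def estimate_offset(samples, window=10e-6):
        """samples: list of (t_send, t_slave, t_recv) tuples in seconds.
        t_send/t_recv are master timestamps; t_slave is the slave's receive timestamp."""
        rtts = [t_recv - t_send for t_send, _, t_recv in samples]
        rtt_min = min(rtts)
        # Keep only the loops within `window` of the minimum (the histogram "wall").
        offsets = [
            t_slave - (t_send + rtt / 2.0)   # assume a symmetric path at the minimum
            for (t_send, t_slave, t_recv), rtt in zip(samples, rtts)
            if rtt - rtt_min <= window
        ]
        return sum(offsets) / len(offsets), rtt_min, len(offsets)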

 

Remember, a histogram of the loop times should show a second peak at this minimum loop time. This 2nd peak is an indication of whether this idea will work or not. The “tightness” of this 2nd peak is an indication of the accuracy of the timestamp that is possible. This 2nd peak should have a much smaller standard deviation (or even a WALL at the minimum - see image). Since this standard deviation is so much smaller than the overall average of the loop time, then it is a far better value to use to sync the cDAQ chassis. If it is “tight” enough, then maybe additional time syncing can be done (more accuracy, timed to sample clocks, triggers, etc.).

 

Example: now that the clocks are synced to within 10 us, the software-initiated start trigger could be sent as a start time, not as a start trigger. In other words, the master cDAQ Ethernet driver would get a start trigger from LabVIEW and send all the slaves a packet telling them to start sampling at a specific time (computed as, say, 75 milliseconds from now).

 

Ethernet_Loop_Time_Distribution_Shows_Determinism_at_the-Wall.jpg

 

I mentioned parts of this idea to Chris when I called in today for support (1613269) on the cDAQ Ethernet and LabVIEW 2010 TimeSync. Chris indicated that this idea is not being done on any of your hardware platforms yet. By the way, Chris was very knowledgeable and I am impressed (as usual) with your level of support and talent of your team members.

The DAQ Assistant was presumably created to simplify data acquisition.  The idea seems to be to put all of the needed pieces in one place, so that the low-level 'traditional' DAQ VI functions are not needed.

 

Consider the following simple vi:

 

Demo VI

 

This could be as simple as one analog input channel.

 

The program will compile into an .exe and work just fine, as long as you don't use one of the features of the DAQ Assistant:  Custom scales.

 

Custom scales are not stored with the VI or project, but in a system file that does not automatically get included in an .exe build.  The .exe will work fine on the original PC that built it, but it will not work when the .exe is loaded on a different PC.

 

There is a method that allows the user to port the custom scales to another PC, but it is not automatic.

 

http://digital.ni.com/public.nsf/allkb/12288DEB3C6A185B862572A70043C353

 

 

The fundamental problem is that the DAQ Assistant is intended to make life simple and give you everything you need to make a program.  Custom scales are included in the DAQ Assistant so that the programmer does not need to manually create scaling in their vi.  But what good does that do if they are not included in the .exe build, and there is no obvious clue that this requires extra work or what that work is?

 

The .exe build process needs to be upgraded to automatically include custom scales and possibly other MAX settings that are essential to the operation of a compiled program.  It does not matter whether the build process works out and includes only the specific scales or settings used by the particular program/VI, or whether it simply takes all the settings.

 

These are critical pieces for making the final compiled program run on another machine.  The user should not have to somehow know that these pieces are separate yet required, and then take extra steps to go out and select them so they are used in the build.  That is totally counterintuitive to the simplicity intended by the DAQ Assistant.
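
One interim workaround, until the build process handles this, is to create the custom scale in code at application startup instead of relying on the MAX configuration being present on the target PC. The sketch below uses the Python nidaqmx API (LabVIEW has equivalent DAQmx Create Scale calls); the scale name, slope, and units are placeholders, and the Scale class methods are assumed as documented for nidaqmx.

    import nidaqmx
    from nidaqmx.constants import UnitsPreScaled, VoltageUnits
    from nidaqmx.scale import Scale

    # Create the linear scale programmatically so the .exe does not depend on MAX.
    Scale.create_lin_scale(
        "force_scale", slope=50.0, y_intercept=0.0,
        pre_scaled_units=UnitsPreScaled.VOLTS, scaled_units="N")

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan(
            "Dev1/ai0", min_val=-500.0, max_val=500.0,
            units=VoltageUnits.FROM_CUSTOM_SCALE, custom_scale_name="force_scale")
        print(task.read())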

Background question here:

http://forums.ni.com/t5/Multifunction-DAQ/Channel-Calibration-information/m-p/1295908

 

Channel calibration in MAX is too limited to be very useful. According to an NI applications engineer, from MAX you can only create a table for calibration, not a polynomial fit. I assume intermediate values are linearly interpolated, but that wasn't specified. Programmatic calibration to enable a polynomial cal, as described in the topic linked above, makes traceability problematic.

 

 

I propose that the channel calibration capabilities in MAX be amended to allow the user to select the calibration mapping: either TABLE or POLYNOMIAL FIT.

Of course, the user should be able to select the polynomial order, and the R-value should be clearly indicated.

 

All of these properties should be included in any report generated from MAX. 
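
To make the proposal concrete, here is a toy sketch of the polynomial mapping and the R-value that the MAX report would show (plain Python with NumPy; the data points are made up for illustration).

    import numpy as np

    raw = np.array([0.10, 1.02, 2.05, 3.01, 4.08])   # uncalibrated readings
    ref = np.array([0.00, 1.00, 2.00, 3.00, 4.00])   # reference-standard values
    order = 2                                         # user-selected polynomial order

    coeffs = np.polyfit(raw, ref, order)
    fitted = np.polyval(coeffs, raw)
    ss_res = np.sum((ref - fitted) ** 2)
    ss_tot = np.sum((ref - np.mean(ref)) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    print("coefficients:", coeffs, "R^2:", r_squared)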

 

 

 

 

There is a variety of connection possibilities (link below), but I really miss one for high channel counts, like a SubD25.

http://sine.ni.com/nips/cds/view/p/lang/de/nid/1721