Data Acquisition Idea Exchange


I am not an electrical engineer, so I have no idea whether there is some reason this has not been implemented in existing versions of the cDAQ chassis. But there is a whole host of applications where a user wants to do hardware-timed digital output to different channels using DIFFERENT time bases. It would be nice to have more than one DO timing engine available. I would love to see that in future versions of the cDAQ chassis.
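
To make the request concrete, here is a minimal sketch in the Python nidaqmx API, with hypothetical module names (cDAQ1Mod1, cDAQ1Mod2). On a chassis with a single DO timing engine, starting the second hardware-timed task is expected to fail with a resource reservation error; the idea is for both to run:

```python
# Sketch: two hardware-timed DO tasks on DIFFERENT time bases.
# Assumes the Python nidaqmx API; module names are hypothetical.
import nidaqmx
from nidaqmx.constants import AcquisitionType

fast = nidaqmx.Task()
slow = nidaqmx.Task()
fast.do_channels.add_do_chan("cDAQ1Mod1/port0/line0")
slow.do_channels.add_do_chan("cDAQ1Mod2/port0/line0")

# Two independent sample clocks -- this is what needs a second DO timing engine:
fast.timing.cfg_samp_clk_timing(100000.0, sample_mode=AcquisitionType.CONTINUOUS)
slow.timing.cfg_samp_clk_timing(500.0, sample_mode=AcquisitionType.CONTINUOUS)

fast.write([True, False] * 8)
slow.write([True, False] * 8)
fast.start()
slow.start()  # on current cDAQ chassis this reservation is expected to fail
```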

 

Thanks

Matt

For each device, MAX assigns a unique device number.

This is no problem with fixed measurement equipment.

With USB devices, this may become a problem.

At a school, a student will work with different combinations of computer and device.

 

If the student wants to use his program with a different device, he will get an error - even if the device is of the same type and merely has a different serial number.

[Image: New Picture (1).png]

To solve this, the student needs to open all DAQ routines and alter the device number.

Or he needs to convert the DAQ Assistant routine into a VI and replace the constant device number with a routine as shown in DAQmx device to use.vi.

[Image: DAQmx device to use.png]

This same problem occurs when using NI-IMAQdx devices.

 

Solution:

Make it possible to select a device by type instead of a device by number.

[Image: New Picture.png]
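
In the meantime, something close can be scripted; a hedged sketch with the Python nidaqmx system API ("USB-6009" is an example product type), picking whatever device of that type happens to be plugged in:

```python
# Sketch: select a device by product type instead of by device number.
# Assumes the Python nidaqmx API; "USB-6009" is an example type string.
import nidaqmx.system

system = nidaqmx.system.System.local()
matches = [dev for dev in system.devices if "USB-6009" in dev.product_type]
if not matches:
    raise RuntimeError("No device of the requested type is connected")

device_name = matches[0].name  # e.g. "Dev3", whatever number MAX assigned
print("Using", device_name)
```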


I recently had a customer create a global virtual channel in Measurement and Automation Explorer (MAX).  They then set the maximum and minimum values for the input range of their signal. 

 

[Image: GlobalVirtualChannel.jpg]


[Image: minmax.jpg]

 

My customer wanted to access the +2 and -2 values entered above and display them in LabVIEW. However, the property nodes for global virtual channels only return the limits of the board. For example, the customer's board may only be able to handle voltages between +/-10 V. No matter which property node we chose, all that was returned was the +/-10 V range. Could we please give customers access to this information?
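
For reference, the equivalent query sketched in the Python nidaqmx API (the global channel name is hypothetical); per the behavior described above, the values come back as the board limits rather than the +/-2 configured in MAX:

```python
# Sketch: query the range of a MAX global virtual channel from code.
# Assumes the Python nidaqmx API; the channel name is hypothetical.
import nidaqmx
from nidaqmx.system.storage import PersistedChannel

with nidaqmx.Task() as task:
    task.add_global_channels([PersistedChannel("MyGlobalChannel")])
    chan = task.ai_channels[0]
    # Today these report the board limits (e.g. +/-10 V) instead of the
    # +/-2 V entered in MAX -- the information this idea asks to expose.
    print("Max:", chan.ai_max, "Min:", chan.ai_min)
```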

A customer of mine was trying to read 8 thermocouple readings simultaneously over the course of a week and then store the data. She quickly found out that there are memory limitations with Signal Express: eventually, no matter how she saved or logged the data, she would run out of memory.

My product suggestion is to write code that determines the RAM on a customer's computer as well as the available hard drive space, and shows customers (at the start of a task) exactly how long they can acquire signals without filling up their hard drive or seriously draining their resources. That way, when they start a process, they are aware of what they're working with in terms of space at that instant, rather than when an error pops up three hours or so later. Most customers would figure out these needs in advance of any acquisition, but for their convenience this would be a welcome additional feature.

It would also be nice to have a document that explains how this time frame is calculated. I suggest that when a task is run, a pop-up explains the maximum number of samples that can be acquired and the time that can pass. The document I mentioned could be available via a hyperlink whose text reads "How did we calculate this number?", explaining the process in more depth.
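
The back-of-the-envelope version of that calculation is straightforward; a sketch in plain Python, with the acquisition parameters as assumptions:

```python
# Sketch: estimate how long an acquisition can run before filling the disk.
# The task parameters below are assumptions for illustration.
import shutil

channels = 8             # thermocouple channels
sample_rate = 1000.0     # samples/s per channel (assumed)
bytes_per_sample = 8     # assumed 8-byte (double) samples in the log format

free_bytes = shutil.disk_usage("C:\\").free
bytes_per_second = channels * sample_rate * bytes_per_sample
hours_until_full = free_bytes / bytes_per_second / 3600.0
print(f"Disk fills in roughly {hours_until_full:.1f} hours")
```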

 

I have submitted this idea for Signal Express at the product suggestion page found at www.ni.com/contact (the "feedback" link in the bottom left of the window). I figured this would be a good idea for DAQ in general, for any extended signal acquisition.

 

Shawn Shaw

Applications Engineering Department

National Instruments

Currently it is hard to find out whether a property can be set for a specific channel or only per module. An example is the Strain Gauge Excitation property, which can only be set per module; other properties can differ per channel.

 

Idea: add a device-specific comment, for example in MAX, about the different properties. For example:

 

[Image: MAX idea.png]


If NI already has this product, please let me know; I need it in my project. Otherwise, I would like to suggest that NI develop one based on its NI-9234 and WLS-9163. It could readily be developed by just removing three channels from the NI-9234 and making a smaller, single-channel version of the WLS-9163. I would like the product to be a single unit, to achieve a small size. A one-channel wireless data acquisition module would have two important advantages: 1) a much smaller size, so it can fit into difficult locations; and 2) much lower power consumption, so it can record for much longer. These two advantages are essential for wireless measurement, so the product would have good market potential.

What I would like is a continuous-sampling DAQ task that resets the count to zero every time DAQmx Read.vi is called. This way you see which direction, and by how much, the encoder has moved between samples. If it also provided the delta time, that would be ideal.

 

There are ways to do things like this currently, but I have run up against two applications that would benefit from something like this, and I cannot be the only one.
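
For comparison, here is roughly the software workaround available today, sketched with the Python nidaqmx API (the counter name is hypothetical); the idea is for the driver to do this delta bookkeeping itself:

```python
# Sketch: software-side position and time deltas between encoder reads.
# Assumes the Python nidaqmx API; "cDAQ1Mod3/ctr0" is hypothetical.
import time
import nidaqmx

with nidaqmx.Task() as task:
    task.ci_channels.add_ci_ang_encoder_chan("cDAQ1Mod3/ctr0",
                                             pulses_per_rev=1024)
    task.start()
    last_pos, last_t = task.read(), time.perf_counter()
    for _ in range(100):
        pos, t = task.read(), time.perf_counter()
        delta_pos, delta_t = pos - last_pos, t - last_t  # what the idea wants built in
        last_pos, last_t = pos, t
        print(delta_pos, delta_t)
        time.sleep(0.01)
```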

While in the development environment it is easy to get a reference to a FieldPoint IO channel - just drag and drop it from the target in the project file - things are much more cumbersome if you need to create such a refnum dynamically. (If you are making software that can simply be copied onto different controllers (without involving LabVIEW) and then configured to match the available/required IO, static refnums are not usable.)

 

To dynamically get access to an IO point in a built general-purpose application, you currently have to save and download an .iak file to the controller from Measurement & Automation Explorer, and then open that .iak file in your code (using FP Open) to get the server reference you need as an input to the FP Create Tag function. The need for FP Open and the .iak file to create a server refnum is the main problem.

 

If FP Create Tag could do its job without any .iak file (e.g. just with the IP of the controller as an additional input)... or, even better, if there were a Create FieldPoint IO Refnum VI available (no such thing today!?) that took the controller IP (defaulting to local if not wired), device name (or ID number), and channel name (or ID number), then things would be much more flexible and intuitive.

 

There is one function that allows you to address channels just by the use of slot and channel numbers - the Input Range function (which, by the way, should not be called "input range", as it could just as well apply to an output). So another good solution would be to offer that same functionality for the IO read and write functions.

We really need a hard drive cRIO module for long-term monitoring and reliably storing large amounts of data remotely.

 

[Image: Hard-Drive-Module-Concept.png]

 

Options:

 

1. Solid State Drive: Fast, reliable, and durable, with extremely high data rates. It would be a very high-priced module, but it could be made to handle extreme temperatures and harsh conditions. It should be available in different capacities, varying in price.

 

2. Conventional Hard Drive: This would give any user the ability to store large amounts of data, on the order of hundreds of gigabytes. This type should also come in varying storage capacities.

 

For this to be useable:

 

1. It would need to support a file system other than FATxx. The risk of data corruption due to power loss/cycling during recording makes anything that uses this file system completely unreliable and utterly useless for long-term monitoring. You can record for two months straight, and then something goes wrong and you have nothing but a dead USB drive. Any file system that is not so susceptible to corruption/damage due to power loss would be fine: Reliance, NTFS, etc.

 

2. You should be able to plug in multiple modules and RAID them together for redundancy. This would ensure data security and increase the usability of the cRIO for long-term remote monitoring in almost any situation.

 

 

Current cRIO storage issues:

We use NI products primarily in our lab, and LabVIEW is awesome. I hope that while being very forward about our issues, we will not upset anyone or turn anyone away from any NI products. However, attempting to use a cRIO device for long-term remote monitoring has brought current storage shortfalls to the forefront, and data loss has cost us dearly. These new hard drive modules would solve all the shortfalls of the current storage solutions for the cRIO. The biggest limitation of the cRIO for long-term monitoring at the moment is that it does not support a reliable file system on any external storage. The SD card module has extremely fast data transfer rates, but if power is lost while the SD card is mounted, not only is all the data lost, but the card needs to be physically removed from the device and reformatted with a PC. Even with the best UPS, this module is not suitable for long-term monitoring. USB drives have a much slower data transfer rate and are susceptible to the same corruption due to power loss.

 

When we have brought up these issues in the past, the solution offered is to set up a reliable power backup system. It seems that those suggesting this have never tried to use the device with a large application in a situation where they have no physical access to it, like 500 miles away. Unfortunately, the cRIO is susceptible to freezing or hanging and becoming so completely unresponsive over the network that it cannot be rebooted over the network at all (yes, even with the setting to halt all processes if TCP becomes unresponsive). We would have to send someone all the way out to the device to hit the reset button or cycle power. Programs freeze, OSes freeze or crash, drivers crash, stuff happens. This should not put the data being stored at risk.

 

I would put money on something like this already having been developed by NI. I hope you think the module is a good idea, even if you don't agree with all the problems I brought up. I searched around for an idea like this; my apologies if this is a re-post.

 

 

Problem:

  1. Many applications need multiple DAQ chassis synced across 100s of meters. Ethernet is not used due to its indeterminism.
  2. While NI's TimeSync uses special hardware (IEEE 1588), it seems NI could build into the Ethernet drivers a way to do time syncing without any other hardware modules (cDAQ, cRIO, PXIe, etc.). The customized NI-Ethernet would do the master-slave timing for you; it would be built into the platform. The key may be to use the lower boundary of the histogram distribution of loop-time statistics: not the average loop time, but the bounded minimum as a special loop-time statistic. See the image at the bottom.
  3. The Ethernet time to send a message and receive an answer is not deterministic. But if all the Ethernet chassis are on a dedicated subnet with no other traffic, then there should be some deterministic MINIMUM time for one chassis to send a packet to another.

Possible solution. Configuration: Suppose you had five 8-slot Ethernet cDAQ chassis. Start by making a simple configuration work first, then extrapolate to more complicated network configurations. So put all the chassis on the same subnet, perhaps a dedicated one. Each cDAQ is hundreds of feet from the others. You want to sample data at 1000 samples per second on all chassis and either lock all the sample clocks, or adjust the clocks on the fly, or know how much the sample clocks differ and adjust the times after the data is transferred.

  1. LabVIEW tells each slave chassis that it is to be a slave to a particular master cDAQ chassis (gives the IP address and MAC address).
  2. LabVIEW tells one of the cDAQ chassis to be the master and it gives the IP address (and MAC address) of all the slaves to that master.
  3. The local Ethernet driver on the chassis then handles all further syncing without any more intervention from LabVIEW. Avoids Windows’ lack of determinism.
  4. The master chassis sends an Ethernet packet to each slave (one at a time, not broadcast). The slave's Ethernet driver stores the small packet (with a time stamp of when received) and immediately sends a response packet that includes an index to the packet received (and the timestamp when the slave received it). The master stores the response packet and immediately sends a response to the slave response. This last message back to the slave may not be necessary.
  5. The local Ethernet driver for each cDAQ has stored all 1000 loop times and their associated timestamps.
  6. Now each master slave combination has a timestamp of the other's clock with a time offset due to the Ethernet delay. But this Ethernet delay is not a constant (it is indeterminate). If it were constant, then syncing would be easy. BUT
  7. One characteristic of the loop time should be deterministic (very repeatable): on a local subnet, the minimum loop time should be very consistent. After these loop messages and time stamps are sent 1000 times, the minimum time should be very repeatable. Example: suppose we only want 10 us timing (one tenth of a sample period). After sending 1000 time-stamped looped messages, we find that the minimum loop time falls between 875 us and 885 us, with 127 loop times in this minimum range (like the bottom "bucket" in a histogram plot). If we were to plot the time distribution, we would notice an obvious WALL at the minimum times; we would not have a Gaussian distribution. This 2nd peak in the distribution at the minimum would be another good indication that this lower value is deterministic.
  8. Now the master and slave chassis communicate to make sure they have the same minimum loop times on the same message packet loops. The ones that agree are the ones used to determine the timestamp differences between the master and the slaves. The master then sends to each slave the offsets to use to get the clocks synchronized to one tenth of a sample time.

This continues to go on (in the background) to detect clock drift. Obviously after the data acquisition starts the network traffic will increase, and that will cause the number of minimum packet loop times to be less than 127 out of 1000. But we should be able to determine a minimum number of minimums that will give reliable results, especially since we know beforehand the amount of traffic we will be adding to the network once the data acquisition starts. We may find out that an Ethernet bus utilization of less than 25% will give reliable results. Reliable because we consistently get loop times that are never less than some minimum value (within 10 us).
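
To make the minimum-bucket estimate in steps 7 and 8 concrete, here is a small self-contained simulation (plain Python; the jitter model, the true offset, and the bucket width are all assumptions for illustration):

```python
# Sketch: estimate clock offset from the minimum round-trip "wall".
# Pure-Python simulation; the delay distribution is an assumed model.
import random

TRUE_OFFSET = 0.004321   # slave clock minus master clock (s); unknown in practice
ONE_WAY_MIN = 0.0004375  # deterministic one-way floor -> ~875 us round trip
BUCKET = 0.000010        # 10 us histogram bucket, per the example above

samples = []
for _ in range(1000):
    d1 = ONE_WAY_MIN + random.expovariate(1 / 0.0002)  # master -> slave jitter
    d2 = ONE_WAY_MIN + random.expovariate(1 / 0.0002)  # slave -> master jitter
    t_send = random.random()             # master clock when the request leaves
    t_slave = t_send + d1 + TRUE_OFFSET  # slave clock when the request arrives
    t_recv = t_send + d1 + d2            # master clock when the reply arrives
    samples.append((t_send, t_slave, t_recv))

# Keep only the loops that land in the bottom histogram bucket (the "wall").
floor = min(r - s for s, _, r in samples)
wall = [smp for smp in samples if (smp[2] - smp[0]) < floor + BUCKET]

# For wall loops the path delay is nearly symmetric, so the midpoint of the
# round trip approximates the instant the slave timestamped the request.
est = sum(ts - (s + (r - s) / 2) for s, ts, r in wall) / len(wall)
print(f"true offset {TRUE_OFFSET * 1e6:.1f} us, "
      f"estimated {est * 1e6:.1f} us, {len(wall)} of 1000 loops at the wall")
```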

 

Remember, a histogram of the loop times should show a second peak at this minimum loop time. This 2nd peak is an indication of whether this idea will work or not. The “tightness” of this 2nd peak is an indication of the accuracy of the timestamp that is possible. This 2nd peak should have a much smaller standard deviation (or even a WALL at the minimum - see image). Since this standard deviation is so much smaller than the overall average of the loop time, then it is a far better value to use to sync the cDAQ chassis. If it is “tight” enough, then maybe additional time syncing can be done (more accuracy, timed to sample clocks, triggers, etc.).

 

For example, now that the clocks are synced to within 10 us, the software-initiated start trigger could be sent as a start time, not as a start trigger. In other words, the master cDAQ Ethernet driver would get a start trigger from LabVIEW and send all the slaves a packet telling them to start sampling at a specific time (computed as, say, 75 milliseconds from now).

 

[Image: Ethernet_Loop_Time_Distribution_Shows_Determinism_at_the-Wall.jpg]

 

I mentioned parts of this idea to Chris when I called in today for support (1613269) on the cDAQ Ethernet and LabVIEW 2010 TimeSync. Chris indicated that this idea is not being done on any of your hardware platforms yet. By the way, Chris was very knowledgeable and I am impressed (as usual) with your level of support and talent of your team members.

This question applies to the USB-6008 and USB-6009.

 

From the User Guide and Specs:

"The programmable-gain amplifier provides input gains of 1, 2, 4, 5, 8, 10, 16, or 20 when configured for differential measurements and a gain of 1 when configured for single-ended measurements."

 

Why can't I use the PGA in single-ended modes?

 

best

Niels

The DAQ Assistant was presumably created to simplify data acquisition. The idea seems to be to put all of the needed pieces in one place, so that the low-level 'traditional' DAQ VI functions are not needed.

 

Consider the following simple VI:

 

[Image: Demo VI]

 

This could be as simple as one analog input channel.

 

The program will compile into an .exe and work just fine, as long as you don't use one of the features of the DAQ Assistant:  Custom scales.

 

Custom scales are not stored with the VI or project, but in a system file that does not automatically get included in an .exe build.  The .exe will work fine on the original PC that built it, but it will not work when the .exe is loaded on a different PC.

 

There is a method that allows the user to port the custom scales to another PC, but it is not automatic.

 

http://digital.ni.com/public.nsf/allkb/12288DEB3C6A185B862572A70043C353
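
One programmatic workaround, sketched here with the Python nidaqmx API (scale name and coefficients are made up), is for the application to create its custom scales at startup so the .exe does not depend on what happens to be in MAX:

```python
# Sketch: create the custom scale at application startup rather than
# relying on MAX configuration shipping with the .exe build.
# Assumes the Python nidaqmx API; name and coefficients are examples.
import nidaqmx
from nidaqmx.constants import VoltageUnits
from nidaqmx.scale import Scale

Scale.create_lin_scale("app_lin_scale", slope=2.5, y_intercept=0.0)

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan(
        "Dev1/ai0",
        units=VoltageUnits.FROM_CUSTOM_SCALE,
        custom_scale_name="app_lin_scale",
        min_val=0.0, max_val=25.0)  # limits expressed in scaled units
    print(task.read())
```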

 

 

The fundamental problem is that the DAQ Assistant is intended to make life simple and give you everything you need to make a program. Custom scales are included in the DAQ Assistant so that the programmer does not need to manually create scaling in their VI. But what good does that do if the scales are not included in the .exe build, and there is no obvious clue that this requires extra work, or what that work is?

 

The .exe build process needs to be upgraded to automatically include custom scales, and possibly other MAX settings that are essential to the operation of a compiled program. It does not matter whether the build process determines and includes only the specific scales or settings used by the particular program/VI, or just takes all the settings.

 

These are critical pieces for making the final compiled program run on another machine. The user should not have to somehow know that these pieces are separate but need to be included, and take extra steps to go out and select them for the build. That is totally counter to the simplicity the DAQ Assistant is intended to provide.

Background question here:

http://forums.ni.com/t5/Multifunction-DAQ/Channel-Calibration-information/m-p/1295908

 

Channel calibration in MAX is too limited to be very useful. According to NI AE, from MAX you can only create a table for calibration, not a polynomial fit. I assume intermediate values are linearly interpolated, but that wasn't specified. Programmatic calibration to enable polynomial cal, as described in the topic listed above, makes traceability problematic.

 

 

I propose that the channel calibration capabilities in MAX be amended to allow the user to select the calibration mapping: either TABLE or POLYNOMIAL FIT.

Of course, the user should be able to select the polynomial order, and the R-value should be clearly indicated.

 

All of these properties should be included in any report generated from MAX. 
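
For reference, the fit such an option would perform is only a few lines; a sketch in Python/numpy, with made-up calibration points:

```python
# Sketch: polynomial calibration fit from measured vs. reference points.
# The data values are made up for illustration.
import numpy as np

measured  = np.array([0.02, 1.01, 2.03, 3.02, 4.05])  # device readings
reference = np.array([0.00, 1.00, 2.00, 3.00, 4.00])  # calibrator values

order = 2  # user-selectable polynomial order, per the proposal
coeffs = np.polyfit(measured, reference, order)

# The goodness-of-fit figure the proposal asks MAX to report alongside the fit:
predicted = np.polyval(coeffs, measured)
ss_res = np.sum((reference - predicted) ** 2)
ss_tot = np.sum((reference - reference.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print("coefficients:", coeffs, "R^2:", r_squared)
```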


In MAX, you can open up a test panel for a DAQmx device.

 

I would like to format the numbers on the axes of the graphs. I have a calibration routine that requires the signal to get as close to 5 V as possible. When you get to within 10 mV, the numbers on the vertical axis go from 5.01 to 5, so all you see on the graph is a bunch of 5's. It would be nice to be able to see the values in as much resolution as the channel can deliver; even at the maximum range, it can still do 2 mV per bit. It would be nice to see 5.004 instead of 5.


I suggest NI produce an inexpensive (<$100) USB "stick" that has 2 hardware counters on it for optically isolated measurement of encoders or other high-speed devices. The stick would have a standard connector for easy wiring of differential encoders with ABZ lines. The device would enable measuring two separate encoders, or tracking two sections of a shaftless drive line that needs to position-follow. One or two DIO lines would be a bonus. This would seem to be a good fit for the industrial machine markets (at the very least). Today you need to buy a multifunction DAQ for several hundred dollars if you want two counters.

 

Contact me with any further questions.

 

 

Thank you!

 

Rick Yahn

QuadTech, Inc.

414-566-7938

rick.yahn@quadtechworld.com

 

The NI USB-4432 has 5 inputs, but only 4 channels with software-selectable IEPE signal conditioning (0 or 2.1 mA).

 

 

Make an NI USB-4433 with all 5 channels offering software-selectable IEPE signal conditioning (0 or 2.1 mA).


I would like to see a new line of HW along the lines of the Lego NXT brick:

basically something between the sbRIO and a low-end USB DAQ.

This could be a bare-bones ARM processor with a low- to mid-end DAQ (8-32 DIO, with an FPGA to optionally make them counters, I2C/SPI, PWM, or timed IO, plus a few AI and AO). This line is for stand-alone robotics or data logging: lower cost, power, and expandability than the cRIO, but not requiring a PC (except as a client) like the cRIO does. It would fill in the "LabVIEW everywhere" model and be programmed just like the cRIO, with LabVIEW RT and FPGA. The cost of the cRIO makes it impractical and overkill for some applications, and for the hobbyist/education robotics market something like the NXT brick but higher-end would be nice to see.

I am using DAQmx physical channel controls in the user interface to select particular DAQ modules. I would like to display only the particular type (AI, DI, AO, DO, ...) of module connected in the system. For example, I need to display only DO NI-9477 cDAQ modules in the physical channel control, not other DIO models like the NI 9403. The IO name filtering option cannot filter out other models of the same type.

 

It would be great if NI provided an option to filter module names based on their product type, or on user-configurable naming (for example, if the cDAQ1 device is renamed "DEV1", the user can filter devices based on the string "DEV").
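
Something close can already be approximated in application code; a hedged sketch with the Python nidaqmx system API ("9477" is the example filter) that builds the list a physical channel control would display:

```python
# Sketch: list only the DO lines of modules whose product type matches.
# Assumes the Python nidaqmx API; "9477" is an example filter string.
import nidaqmx.system

system = nidaqmx.system.System.local()
do_lines = []
for dev in system.devices:
    if "9477" in dev.product_type:
        do_lines.extend(line.name for line in dev.do_lines)

print(do_lines)  # feed this list into the UI control instead of all DIO devices
```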

There is a variety of connection possibilities (link below), but I really miss one for high channel counts, like a 25-pin D-SUB.

http://sine.ni.com/nips/cds/view/p/lang/de/nid/1721 

 

We normally have a DAQ system consisting of several elements:

-Sensor

-Custom filtering/attenuation

-Signal conditioning

-NI-DAQ device

 

When we use scales in DAQmx, we have to create a scale for every 'route' we use (sometimes we have to use a 4 kA sensor for a 100 A signal).

If we could define a scale in a task as a composition of multiple scales, we could directly pick the sensor and signal conditioning used for each signal, and a change in one of these elements would be easy to adjust for.
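
Until something like that exists, the chain can be collapsed into a single DAQmx linear scale per route; a hedged sketch with the Python nidaqmx API, where all element gains are made-up values:

```python
# Sketch: emulate a composite scale by folding the per-element gains of
# sensor, filter/attenuation, and conditioning into one linear scale.
# Assumes the Python nidaqmx API; all gain values are examples.
from nidaqmx.scale import Scale

sensor_amps_per_volt = 4000.0 / 5.0  # e.g. 4 kA sensor with 5 V full-scale output
filter_gain = 0.5                    # custom attenuation stage (V/V)
conditioning_gain = 2.0              # signal conditioning stage (V/V)

# Work back from the voltage the DAQ sees to the physical quantity:
slope = sensor_amps_per_volt / (filter_gain * conditioning_gain)
Scale.create_lin_scale("route_100A_signal", slope=slope)  # amps per measured volt
```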

 

Ton