Data Acquisition Idea Exchange


It would be great if NI offered something similar to the PLC world in the form of expandable IP20 I/O slices that don't require a chassis. These could offer the same kind of module functionality that the CompactDAQ line offers, but without locking into a chassis. I've found myself leaning towards PLC I/O for some projects where the end state is a bit murky, but the programming gets really awkward when using third-party APIs like PVI.

The interface to the PC could be similar to that of a cDAQ chassis: USB or Ethernet. Given the applications of most slice I/O users, 24 VDC would be the preferred way to power such a system.

NI offers some Oscilloscope Devices (see here), but they come with a PCI or USB interface only.

These interfaces are good enough for a laboratory device, but they have some limitations if you want to use them in a custom product.

 

I think that an Ethernet interface would be a great improvement: it allows a longer cable and is much more flexible.

In the NI-845x user manual (https://www.ni.com/docs/en-US/bundle/ni-845x-hw-dsw-getting-started/resource/371746e.pdf), most of the minimum timeouts can only be set as low as 1000 ms (1 s). We hope these timeouts can be made settable below 1 s.

Or perhaps there is a workaround to further reduce the timeout on the 8451?

Howdy!

 

I am trying to use a data acquisition system from Python. There are thermocouple modules and voltage modules that I would like to read from. The system was set up and run in LabVIEW 2013, and I am trying to port the test system to Python for easier test changes and user control.

 

I am wondering if the NI-DAQmx Python library is kept up to date and if this is possible. I have been doing a lot of nitty-gritty reading through the documentation for this library, because there are not many examples of data collection using Python to talk to NI sensors. After trial-and-error attempts, I have gone back to basics to see if I can even change settings in the configuration of thermocouple channels. All I am trying to do is take measurements from the thermocouples, changing the units from Fahrenheit to Celsius in separate runs. I can't even get this to work, even after looking at an example from Stack Overflow and the documentation, which specifically says how to configure the thermocouple channel (https://nidaqmx-python.readthedocs.io/en/latest/ai_channel.html, Ctrl-F for "thermocouple").

 

Here is a snippet of the code I'm writing:

import nidaqmx as ni
from nidaqmx.constants import AcquisitionType, ADCTimingMode

sample_rate_hz = 10.0  # placeholder; the original snippet used an undefined "Hz" variable

try:
    with ni.Task() as task:

        # Add the thermocouple channels to read from the NI-9214s
        task.ai_channels.add_ai_thrmcpl_chan("cDAQ1Mod1/ai0:11", name_to_assign_to_channel='',
                                             min_val=0.0, max_val=100.0, units=ni.constants.TemperatureUnits.DEG_F,
                                             thermocouple_type=ni.constants.ThermocoupleType.T)
        task.ai_channels.add_ai_thrmcpl_chan("cDAQ1Mod2/ai0:7", name_to_assign_to_channel='',
                                             min_val=0.0, max_val=100.0, units=ni.constants.TemperatureUnits.DEG_F,
                                             thermocouple_type=ni.constants.ThermocoupleType.T)

        # Add the voltage channels to read from the NI 9209
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai0:7")
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai9:12")
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai19:27")
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai29:31")

        # Set the rate of the Sample Clock and acquire continuously
        task.timing.cfg_samp_clk_timing(rate=sample_rate_hz, sample_mode=AcquisitionType.CONTINUOUS)

        # Set the ADC timing mode on every channel (via .all); use
        # ADCTimingMode.HIGH_SPEED instead to speed up the collection.
        task.ai_channels.all.ai_adc_timing_mode = ADCTimingMode.AUTOMATIC
except ni.DaqError as err:
    print(err)
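
For comparison, here is a minimal sketch of a complete run that reads data back with the units switched to Celsius (the single channel and sample count are illustrative placeholders, not values from the original post):

import nidaqmx as ni

with ni.Task() as task:
    # One NI-9214 thermocouple channel, this time configured in Celsius.
    task.ai_channels.add_ai_thrmcpl_chan(
        "cDAQ1Mod1/ai0",
        min_val=0.0, max_val=100.0,
        units=ni.constants.TemperatureUnits.DEG_C,
        thermocouple_type=ni.constants.ThermocoupleType.T)
    # Finite, on-demand read of 10 samples from the channel.
    data = task.read(number_of_samples_per_channel=10)
    print(data)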

 

It is a little frustrating to work through these problems, because there is not much material out there on using Python for this. And if you search within the documentation, the links in the search results are broken, which is another pitfall of this approach.

 

Any help is greatly appreciated, but I am also curious whether NI will keep the Python library updated and running for future use.

 

NI-DAQmx Python documentation: https://nidaqmx-python.readthedocs.io/en/latest/index.html

Stack Overflow example: https://stackoverflow.com/questions/47479913/python-nidaqmx-to-read-k-thermocouple-value

Thermocouple example: https://www.youtube.com/watch?v=NMMRbPvkzFs

This topic is based on another topic in the PXI Forum (Link).

The main problem is that the simulated range violates the Min/Max limits.

 

I would like to simulate channels in a "nominal" case:

1. The range should be based on the Min/Max limits (see the sketch after this list).

2. I could select the input style (sine wave, triangle wave, random within a sub-range, DC with an error range, based on external data, ...).

3. Near-real-time simulation, including cases of synchronization between different channel rates.
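
For illustration, points 1 and 2 amount to something like this Python sketch (NumPy-based; all names and values are placeholders, not an existing NI API): a selectable waveform that always stays inside the configured Min/Max limits.

import numpy as np

def simulate_channel(min_val, max_val, rate_hz, n_samples, signal_hz=1.0, style="sine"):
    # Nominal case: the generated signal spans exactly [min_val, max_val].
    t = np.arange(n_samples) / rate_hz
    mid = (max_val + min_val) / 2.0
    amp = (max_val - min_val) / 2.0
    if style == "sine":
        wave = np.sin(2 * np.pi * signal_hz * t)
    elif style == "triangle":
        wave = 2 * np.abs(2 * ((signal_hz * t) % 1.0) - 1.0) - 1.0
    else:  # random samples within the configured range
        wave = np.random.uniform(-1.0, 1.0, n_samples)
    return mid + amp * wave

samples = simulate_channel(0.0, 100.0, rate_hz=1000.0, n_samples=1000)
assert 0.0 <= samples.min() and samples.max() <= 100.0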

 

Currently:

1. The range is limited to the Min/Max values, except in the cases shown below.

2. Only a sine wave is available, except in an odd case shown below.

3. I tried a case of synchronization between two channels with different rates; the simulation is not close to the real DAQ behavior.

 

 

Here are the cases from the PXI Forum (Link):

1st problematic case

1. Define a Simulated Device - PXIe-4353 in MAX.

2. Open "Test Panel"

3. Use the default parameters of the test:

  a. Measurement type - thermocouple

  b. Max Input Limit - 100

  c. Min Input Limit - 0

  d. Units - Deg C

  e. Thermocouple type - J

  f. CJC Source - Built In

4. Click Start.

 

The result is a sine wave.

The sample values range over roughly [-1030, -74.7], outside the Max/Min limits.

=> BAD sample results: values lower than -273 °C?!

 

2nd Problematic Case

If I change the Max/Min limits to another valid range, like [-200, 1200],

the sample value range is still BAD,

and the result is some kind of non-sine wave.

 

3rd Non-temperature Case

If I change the measurement type to Voltage,

the sample value range is based on the Max/Min limits.

=> So that case is OK.

 

Tnx,

Raz

 

 

Today, we can create a DAQmx custom scale in MAX or LabVIEW via the "DAQmx Create Scale" VI. This VI converts a pre-scaled value, e.g. 5 volts, to a scaled value in a physical unit, e.g. 100 newtons. The scale can be Linear, Map Ranges, Polynomial, or Table.

For all four of these options, only a single-channel sensor can be used.
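
For concreteness, today's single-channel case looks like this in the nidaqmx Python wrapper (names and values are placeholders; 5 V -> 100 N corresponds to a slope of 20 N/V):

import nidaqmx as ni
from nidaqmx.constants import UnitsPreScaled

# Linear custom scale mapping a pre-scaled voltage to a force:
# scaled = 20.0 * prescaled + 0.0, i.e. 5 V -> 100 N.
force_scale = ni.Scale.create_lin_scale(
    "force_scale", slope=20.0, y_intercept=0.0,
    pre_scaled_units=UnitsPreScaled.VOLTS, scaled_units="Newtons")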

 

However, multi-channel sensors are not so rare, and there is currently no way to scale a six-component load cell, for instance. For this kind of transducer, F = M*U, where F is a six-component vector containing the 3 forces in newtons and the 3 moments in newton-meters, U is a six-component pre-scaled vector containing the 6 input channels in volts, and M is a 6x6 matrix. M is never diagonal, because the forces affect the moments.

 

Finally, what I'd like is an extension of the "DAQmx Create Scale" VI that would enable a multi-channel pre-scaled input to be scaled into physical units through a matrix, along the lines of the sketch below.
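
In the meantime, the matrix scaling can be done after the read, as in this Python sketch with NumPy (the channel string is a placeholder, and the identity matrix stands in for a real, non-diagonal 6x6 calibration matrix M from the transducer's datasheet):

import numpy as np
import nidaqmx as ni

M = np.eye(6)  # placeholder; a real calibration matrix is non-diagonal

with ni.Task() as task:
    # Six pre-scaled voltage channels U of the hypothetical load cell.
    task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai0:5")
    U = np.array(task.read(number_of_samples_per_channel=100))  # shape (6, 100)
    F = M @ U  # rows 0-2: forces [N], rows 3-5: moments [N*m]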

 

Thanks.

Make the hardware capable of having more than one task tied to the analog outputs. Each analog output should have its own hardware timing, so that each output can generate its own separate hardware-timed waveform.

 

In SignalExpress, you can simultaneously acquire new data and view the entire history of your log in a single plot. In DAQExpress, you can drag a recording into a new tab, but it only lets you see the recording's data from its start time up to the time you performed the drag operation. It would be nice if DAQExpress allowed you to somehow view your entire recording history in the same graph that is acquiring new data.

This is something I had to design for our company's test benches because nothing existed off the shelf. (What does exist off the shelf is beyond incredibly expensive.) It's a card chassis holding 4 cards with 8 individual (5 A) relay channels per card. Each relay channel has eight selectable sources or sinks. In our case, we have V+ (from a programmable bench power supply), ground, 4 individual resistive dummy loads (each with an analog DAQ channel), a digital input, and an external header connector. There are four programmable 50 W resistive dummy loads ranging from 1 ohm to 255 ohms. There are also two analog output channels. All of the switches, the power supply, and the dummy loads are controlled from a PC serial port. (I would use USB now if I had a chance to re-design it.) The inputs are all fed to a PCI-6221 and a PCI-6514. Again, if I had a chance to re-design it, I would ditch the PCI cards and use an FPGA with several A/D converters and digital opto-isolators. I would also consider getting rid of the mechanical relays and trying solid-state relays, although we have not had a relay failure in any of the 292 relays (per chassis) in over ten years of daily use.

 

Something like that, only more widely expandable, would be very useful to anyone testing a wide range of devices with different signal connections and high output currents. I would design these myself to sell if I were more ambitious.

Hi,

I need to create a VI with 16 channels of output shown on a waveform graph, and I need to display a list from which the user selects which channels to show on the graph.

For example, the user might want to see a graph for channel 1 and channel 2, or any combination of up to 16 channels. Does anybody have an idea? That is what I was actually asking.

When creating a VI and wiring the terminals, there should be an option that automatically generates a help file. Since the VI knows the names and types of its inputs, it should be able to generate at least a rudimentary template for the VI help. Then let the user fill in the details plus the required/optional connections. A few simple steps, and now we have a functioning help screen for each VI.

As far as I can tell, DAQmx Configure Logging only allows raw data and scaling information to be saved as part of a DAQmx task. This is great for throughput and disk-space considerations, but a problem arises when using DIAdem to analyse TDMS files with raw data + scaling info: it is incredibly slow for large files (~500 MB).

 

Some basic tests show it is around 10 times slower to process raw + scale TDMS files (stored as I16s) than already-scaled TDMS files (stored as SGLs). DIAdem crawls when generating calculation previews, zooming in and out of graphs, and so on.

 

It'd be great if DAQmx logging provided an option to log scaled data (in the user's preferred datatype), and an option to not include the scaling information in the TDMS metadata. Today's behavior is sketched below.
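
For concreteness, here is what configuring logging looks like today via the nidaqmx Python wrapper (channel and file path are placeholders); raw data plus scaling metadata is the only behavior on offer:

import nidaqmx as ni
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

with ni.Task() as task:
    task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai0")
    task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    # Streams raw (e.g. I16) samples plus scaling info to TDMS; there is
    # no option to log already-scaled SGL/DBL values instead.
    task.in_stream.configure_logging(
        "log.tdms",
        logging_mode=LoggingMode.LOG,
        operation=LoggingOperation.CREATE_OR_REPLACE)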

It would be a godsend if the PDF documents for all NI products included the model number or part number.

 

Naming the PDFs after the internal document number surely makes sense within NI. But these documents are for us customers to use, so having a downloaded PDF carry some obscure number that does not relate to the product is irritating.

 

I'm constantly renaming PDFs to include the device model.

 

I know it's common sense...so rare it should be considered a Super Power.

The niRFSA Fetch IQ VI provides access to the absolute time at which the first sample of the IQ recording was acquired, as well as to the IQ samples.

I have two requirements for the data types involved:

  1. I need to access the absolute timestamp t0 in LabVIEW's 128-bit fixed-point timestamp format, because a 64-bit floating-point format simply does not have enough mantissa bits to uniquely identify each sample-clock edge accurately across the potential lifetime of the application, and because using floating-point numbers for timestamps is generally a pretty bad idea (their absolute resolution continuously decreases as time increases, causing difficult-to-test problems late in product life).
  2. I also need the IQ samples in unscaled I16 format, which I assume is the most compact, native format that my NI PXIe-5622 down-converter records internally. (I want to do the scaling myself much later, in offline processing.)

Unfortunately, at present, I can only have one or the other (high-res 128-bit timestamps or native, unscaled 16-bit integer samples), but not both simultaneously. This is because the polymorphic niRFSA Fetch IQ VI offers me either unscaled I16 IQ data along with a wfm info cluster that contains the absolute timestamp in the inappropriate DBL format, or complex WDT records with nice 128-bit timestamps, but then the IQ data comes in scaled complex single or double format, which is not the compact, native, unscaled integer format I would prefer, and which is tedious to scale back into I16 (leading to an unnecessary I16 -> float -> I16 conversion round trip).

 

Feature request: could you please provide in the next RFSA release a variant of the unscaled I16 niRFSA Fetch IQ VIs that outputs the absolute timestamp in a fixed-point type (either scaled, in LabVIEW's 128-bit timestamp type, or unscaled, as a simple sample-clock integer count)?

 

Application: I'm acquiring IQ data in multiple frequency bands, and I need to know exactly (with <1 sample accuracy) the relative timing between these acquisitions. As my NI PXIe-5667 acquires IQ values at 75 megasamples per second, the required timestamp resolution is 13.3 nanoseconds (0.0000000133 s) or better. But as explained here, absolute timestamps in DBL have only about 5 decimal digits of resolution left, so I can only determine the relative timing between multiple recordings to somewhat better than a millisecond.
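
The resolution loss is easy to demonstrate in a few lines of Python (a sketch; the epoch offset below is an illustrative assumption for a date roughly a century after LabVIEW's 1904 epoch):

import math

t = 3.7e9  # seconds since the 1904 epoch, i.e. roughly "now"
# Spacing between adjacent representable DBL values at this magnitude:
print(math.ulp(t))  # ~4.8e-7 s, many orders of magnitude coarser than 13.3 ns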

 

Generally, I would recommend that event timestamps should always be provided in APIs in fixed-point timestamp format, to guarantee uniform resolution. They can always easily be converted into floating-point representation later. Floating-point timestamps are a pretty dangerous engineering practice and should be discouraged by APIs.

I have an application where I need to continuously acquire data, but I want to start logging that data (with file spanning) concurrent with a hardware trigger. Pause logging only aligns to a read block, so that isn't useful in this application. As it stands now (LabVIEW 2016), this type of functionality requires manual buffering of data, use of the TDMS file VIs, and custom logic for spanning TDMS files (the sketch below shows why the built-in pieces don't compose).
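
For reference, a sketch of the pieces DAQmx offers today (shown via the nidaqmx Python wrapper; device names and file path are placeholders): a start trigger gates the entire acquisition, not just the logging, which is exactly the gap.

import nidaqmx as ni
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

with ni.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(rate=10000.0, sample_mode=AcquisitionType.CONTINUOUS)
    # Built-in TDMS logging starts with the task, and the start trigger
    # below delays the whole acquisition, not just the logging:
    task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
    task.in_stream.configure_logging(
        "run.tdms",
        logging_mode=LoggingMode.LOG_AND_READ,
        operation=LoggingOperation.CREATE_OR_REPLACE)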

The NI-9203 noise problem is discussed in http://forums.ni.com/t5/Multifunction-DAQ/NI-9203-generates-noise-pulses-when-acquired/m-p/3546439

 

The attached document describes how the noise measurements were performed with an oscilloscope. The 9203 emits spikes at the same frequency as the acquisition rate, and they affect 4-20 mA measurement accuracy quite a bit. It should not be like that. Probably the 20 pF capacitance on each input inside the module should be increased to 200 pF or 2 nF. I am sure that NI R&D knows better how to improve the 9203 🙂

 

Support reference#: 2703970

I am new to posting ideas, so I posted this someplace else and now see this is the more appropriate forum for my need...

My customer wants to be able to calculate mV/pascal at various pressures to make sure the response is linear over the range. The problem is that I have not been able to find a VI or property node that gives me access to the raw voltage of the input signal when using the DAQmx Pressure Bridge VI; from that I would calculate the actual voltage measured vs. pascals measured. I find it amazing that I can't easily get this raw voltage from a property node. I do NOT want to use the analog measurement (as suggested by my NI service request 2399613) and then do all the translation to pressure, because that would eliminate all the great stuff the Pressure Bridge Measurement VI takes into account.

To help with system recovery in the case of system errors/crashes, it would be useful to have a tool to configure a more frequent backup of the system without having to manually export the system configuration. 

Hi,

 

It would be great if VirtualBench had an option to choose whether 7 or 8 bits are decoded as the I2C address. I understand that the usual convention is a 7-bit address with the eighth bit indicating read/write, but sometimes we need all 8 bits decoded as the I2C address (with the eighth bit still indicating read/write) to suit our work.

 

Thanks,

 

Brian

The minimal range and the test current for resistance measurement on NI's DMMs (e.g. the PXI-4072) are:

 

[NI.jpg: table of the NI DMM's resistance ranges and test currents]

 

A DMM with a lower range and a higher test current (e.g. the Keysight M9183A) would be very useful in some cases.

 

[Keysight.jpg: table of the Keysight M9183A's resistance ranges and test currents]