Data Acquisition Idea Exchange


Several LabVIEW users, myself included, would like to have this feature: the ability to create virtual channels (or even global tasks) for an internal channel of a DAQ or SCXI device.

This feature should also allow internal channels to be configured and used from the DAQ Assistant (though I personally prefer not to use the DAQ Assistant).

 

Check this post here; that feature request is along the same lines.
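
For reference, internal channels can currently only be reached programmatically by typing their name into a create-channel call. Below is a rough sketch of that in the Python nidaqmx API; the internal-channel name is a device-dependent placeholder. The idea is to make the same thing possible for MAX global virtual channels and the DAQ Assistant.

```python
# Hedged sketch using the Python nidaqmx API; "Dev1/_ao0_vs_aognd" is a
# placeholder internal channel -- the available names vary by device.
import nidaqmx

with nidaqmx.Task() as task:
    # Today this only works by typing the internal channel name directly
    # into a create-channel call; it cannot be saved as a MAX global
    # virtual channel, which is what this idea requests.
    task.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
    print(task.read())
```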

By default, DAQmx terminal constants/controls only show a subset of what is really available.  To see everything, you have to right-click the terminal and select "I/O Name Filtering", then check "Include Advanced Terminals":

 

Untitled 1 Block Diagram _2013-06-04_16-16-29.png

AdvancedTerminals.png

 

I guess this is intended to prevent new users from being overwhelmed.  However, what it really does is create a hurdle that prevents them from configuring their device in a more "advanced" manner, since they have no idea that the name-filtering box exists.

 

I am putting "advanced" in quotes because I find the distinction quite arbitrary.


As a more experienced DAQmx user, I change the I/O name filtering literally every time I put down a terminal, without thinking about it (who can keep track of which subset of DAQmx applications is considered "advanced"?).  The worst part is trying to explain how to do something to newer users and having to tell them to change the I/O name filtering every single time (or, if you don't, you'll almost certainly get a response back like this).


Why not make the so-called "advanced" terminals show in the drop-down list by default?

I bought an NI USB-6251 BNC, but support explained to me that it has no Linux support out of the box. I will now have to work out how to use it on Linux systems myself (perhaps with help from the forum). It would be nice if it shipped with Linux support.

The Analog Input test panel in Measurement & Automation Explorer (MAX) provides a quick way to examine a signal and vary acquisition parameters.  It would be useful to be able to zoom the time axis and have a cursor display so that, for example, noise level or rise time could be examined in more detail.  The time axis limits can currently be overwritten manually as a way to zoom, but that is cumbersome.  Assuming the graph used in this test panel is built from a standard NI graph, it should already have zoom and cursor capability, which could thus be added easily.

 

Steve

 

Absolute encoders have been around for some time, but NI's motion hardware still supports only incremental encoders.  I would like to see support for absolute encoders in NI Motion or NI Soft Motion.

 

Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series Multifunction DAQ boards; I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: “The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.”
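
For context, here is a minimal sketch of the generation mode described above in the Python nidaqmx API; device and channel names are placeholders, and whether these settings map one-to-one onto my LabVIEW configuration is an assumption on my part.

```python
# Hardware-timed, buffered, continuous, non-regenerating AO -- the mode in
# which error -200290 occurs when the host cannot keep the buffer fed.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION
    task.timing.cfg_samp_clk_timing(rate=2_000_000,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.write(np.zeros(50_000))   # prime the buffer
    task.start()
    # If the application stops writing new samples fast enough, the task
    # errors out with -200290 and the physical output freezes at the last
    # sample -- the behavior this idea asks to make configurable.
```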

 

I would like to have an option to configure the DAQ hardware to immediately set the analog and digital outputs to a predefined state if a buffer underrun occurs. I would also like an option to immediately set one of the PFI pins on buffer underrun.

 

I believe this could be accomplished by modifying the X Series firmware and exposing the configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write the predefined digital states / analog levels to the outputs and indicate the buffer underrun state on a PFI line. Only then should it report the error to the PC.

 

Doing this in firmware has certain advantages:

  1. It can be done quickly (possibly within the time of the next missing sample – at 2 MS/s that’s 0.5 µs).
  2. It handles all situations (software lockups, excessive CPU loading by other processes, loss of communication due to bus traffic, interface disconnection…).
  3. It does not require any additional hardware (to turn off the outputs externally).
  4. Buffer underrun indication on a PFI line could provide an additional safety measure (it could be used, for example, to immediately disable an external power amplifier connected to the DAQ AO).

Doing this using other methods is just too slow, does not handle all situations, or requires additional external circuitry.

 

Setting the outputs from software once the error occurs is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface. The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
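
A sketch of that software-only reaction is below, under the same placeholder names as the snippet above (the helper and its arguments are hypothetical); as noted, this path is far too slow to serve as real protection.

```python
# Catch the underflow error on the host and force a safe level afterwards.
# By the time this runs, tens of milliseconds have already passed.
import nidaqmx

def write_with_safe_fallback(task, next_chunk):
    """Write the next data block; on underflow, belatedly force 0 V.

    `task` is an already-running non-regenerating AO task and `next_chunk`
    the next block of samples (both hypothetical names)."""
    try:
        task.write(next_chunk)
    except nidaqmx.DaqError as err:
        if err.error_code != -200290:        # only handle the underflow error
            raise
        task.stop()
        task.close()                         # release the device first
        with nidaqmx.Task() as safe:
            safe.ao_channels.add_ao_voltage_chan("Dev1/ao0")
            safe.write(0.0)                  # host-side safe state, too late
```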

 

Using a watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.

 

The only viable solution right now is to route the task sample clock to a PFI line and detect when it stops toggling; it does in fact stop after the last sample is generated. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two samples, the pulse detector needs to be customized for every sampling rate you might want to use. To make this work properly for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.

 

I’m actually quite impressed by how well the USB-6363 handles streaming to outputs. It lets me output waveforms of a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle, and buffer underflows are quite rare even at the highest sampling rate. However, to make my system robust and safe, I need a fast, simple, and reliable method of quickly shutting down the outputs, which only a hardware/firmware solution can provide.

 

Thanks,

Sebastian

I've been told that in order to change module settings, you have to connect a development machine to the network that has the cRIO hardware. Most systems I build have a deployed application and are located somewhere else (i.e., not connected to my machine). It would be nice if the cRIO module settings could be deployed to the cRIO unit without having to connect a development machine and hit "Connect" in the project.

 

For example, I had a cRIO unit in my office and set it up for a project. They installed it, wired it, and tested it. A few months later we needed to add an NI 9213 (16-channel thermocouple module). It defaults to type J and degrees Celsius. In order to switch it to type K, I had to bring my desktop out to the unit, connect to the network, and redeploy the cRIO. I called support twice and was told this is the only way to deploy the module settings. If I'm wrong, someone please correct me.

 
I'm thinking it could be another type of application builder or something you could add to an existing application.

 
I could program a whole panel to allow the user to modify the setup parameters for a DAQmx task, but decided that it's easier to simply stop the task, launch MAX with LaunchExecutableEx, and let the user play with the task settings there. Unfortunately there seems to be no way to tell MAX, e.g. through command-line parameters, to open up and display a particular DAQmx task on startup. Might I suggest some facility for doing this, possibly through simple command-line parameters or even through an ActiveX utility?

The current way to programmatically update the GPIB-ENET/100 is to call firmwareupdate.exe and then use the Windows API to tab through and enter data.  I suggest updating firmwareupdate.exe so that it can be called from the command prompt with parameters, allowing the process to be automated in a batch file.

 

Thanks!

 

Shawn S.

I have an application in which I have to digitize a pulse across a shunt resistor.  The common-mode voltage can be up to around 60 VDC.  The digitizing cards I was able to find cannot perform a differential measurement without digitizing both sides of the resistor and then subtracting, and that method introduces a lot of error because of the voltage ranges required.  I have been able to digitize some of these pulses with the PXI-4072 DMM with great success; however, in those cases I can control when the pulses occur and set up trigger lines as needed.  Other pulses I need to digitize will occur whenever the UUT decides to put them out.  What is really needed is a way to trigger the DMM on a measured voltage level.  Just for reference, Agilent's PXI DMMs can do this; it seems a shame that I haven't found a way to do it with NI's DMM.  As a final thought, some pretrigger data would be needed to properly capture the pulse.  Pretrigger data would be nice in any hardware-triggered acquisition.

When using TDMS on cRIO systems, there are a couple of considerations that don't normally come into play when storing data as TDMS files:

 

* The current version of the file system used on cRIO controllers degrades significantly in performance if -any- folder on the cRIO contains more than ~100 files. (The work-around is more elaborate folder structures, but much of that structuring would exist only to work around this shortcoming of the (old) file system version.)

* The drives are SSDs with limited lifetime, wear-leveling, etc. Writing and re-writing these index files adds unnecessary overhead and wear on the disks.

* They use up space, which is (very) limited on some cRIOs (even if not by much). (People may be quick to point out that you can add a thumb drive, but the downside is that thumb drives, as far as I know, need to be FAT-formatted. The cRIO file system is atomic and fail-safe, so you hardly have to worry about sudden power outages or interruptions mid-write; on a thumb drive you would have all of those issues, which could in the worst case corrupt the whole drive.)

 

I propose adding a boolean (defaulting to false) on TDMS Open called "suppress writing TDMS index to disk", or some smarter name along those lines. This would still allow the TDMS index to be created, but it would remain in memory only and never be written to disk. When TDMS Close is called, the memory is released and the TDMS file is written to disk without the index file. If the same file is opened again, extra time would be needed because the index would have to be re-created (again in memory only, if the boolean says so), but I think in most cases this overhead would be more than acceptable.

 

I'm not sure how "simple" modifying the TDMS open and close functions would be, but I do know that there are many cases where this flag would make sense. 

I am trying to use LabVIEW to make a stress-strain curve for an experiment I am running; however, my experience with LabVIEW is very limited. I have a load cell whose voltage signal comes in through a Transducer Techniques DPM3, and a laser extensometer that also outputs a voltage. My question is about the graph: what is the best way to put the strain input on the x-axis and the load input on the y-axis?
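
Not a LabVIEW answer, but here is a minimal sketch of the idea in the Python nidaqmx API (channel names and the voltage-to-engineering-unit scale factors are placeholders): acquire both voltages, convert them, and plot load against strain as an XY plot. In LabVIEW the equivalent is to bundle the two arrays and wire them to an XY Graph rather than a waveform chart.

```python
# Acquire strain and load voltages, then plot load vs. strain (XY plot).
import nidaqmx
from nidaqmx.constants import AcquisitionType
import matplotlib.pyplot as plt

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # extensometer (strain)
    task.ai_channels.add_ai_voltage_chan("Dev1/ai1")   # load cell via DPM3
    task.timing.cfg_samp_clk_timing(1000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=5000)
    strain_v, load_v = task.read(number_of_samples_per_channel=5000)

strain = [v * 0.01 for v in strain_v]    # placeholder V -> strain scaling
load = [v * 100.0 for v in load_v]       # placeholder V -> N scaling

plt.plot(strain, load)
plt.xlabel("Strain")
plt.ylabel("Load (N)")
plt.show()
```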

If you set up a change detection event like so:

change detection.png

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something in the event data node for this event (like a bit field or a reference to the channel) so the programmer would know which line fired the event.

 

The workaround is to do a DAQmx Read at this point and mask the data against the previous data, but I would prefer not to do this.
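
For anyone stuck with the workaround, here is a rough sketch of it in the Python nidaqmx API (port/line names are placeholders, and I am assuming the signal-event callback behaves like the LabVIEW event): on each change-detection event, read the lines and compare against the previous state to recover which line fired.

```python
# Diff the current line states against the previous ones on every event.
import nidaqmx
from nidaqmx.constants import AcquisitionType, LineGrouping, Signal

LINES = "Dev1/port0/line0:7"
prev_state = [False] * 8

def on_change(task_handle, signal_type, callback_data):
    global prev_state
    state = task.read()                              # one boolean per line
    changed = [i for i, (a, b) in enumerate(zip(prev_state, state)) if a != b]
    print("line(s) that fired the event:", changed)
    prev_state = state
    return 0

task = nidaqmx.Task()
task.di_channels.add_di_chan(LINES, line_grouping=LineGrouping.CHAN_PER_LINE)
task.timing.cfg_change_detection_timing(rising_edge_chan=LINES,
                                        falling_edge_chan=LINES,
                                        sample_mode=AcquisitionType.CONTINUOUS)
task.register_signal_event(Signal.CHANGE_DETECTION_EVENT, on_change)
task.start()
```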

It seems the only indication in MAX that a device is simulated is that, under the Devices and Interfaces section, the tiny glyph to the left of the device name is colored yellow instead of white/transparent.  I end up not remembering which color means what.  It would be useful to add the text "Simulated" next to the device name.  It would also help to distinguish simulated devices by making that glyph green (instead of its current transparent/white) when the device is installed and detected, and red (keeping the existing red X) if a device had been detected and assigned a device number but is currently not installed/detected.  Then simulated devices being yellow may imply "warning/caution" or "not real".  Perhaps also add a help-hint popup ("Detected", "Not Detected", or "Simulated") when the mouse hovers over a device name.

 

MAX Simulated Device.jpg
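
Until something like this exists in MAX, a programmatic check is possible; a small sketch using the Python nidaqmx API, which exposes the flag directly, is shown below.

```python
# List all DAQmx devices and whether each one is simulated.
import nidaqmx.system

for dev in nidaqmx.system.System.local().devices:
    print(dev.name, "-", "simulated" if dev.is_simulated else "real")
```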

If you want to generate low frequencies with AWGs (the 5422), you need to increase the buffer size manually. OK, it would be nice to have an option for the driver to do that automatically... however,

the driver currently (seems to) try to fill the whole buffer at once, allocating huge amounts of memory just to write a standard function.

Ever tried 50 mHz @ 5 MS/s? That is 100 million samples for a single period.

Please improve the memory management of the FGEN driver by writing smaller blocks...

 

OK, maybe the 5422 is a 'little' overkill for creating a simple 50 mHz sine 😉

 

When it comes to documenting a measurement, you need to report ALL settings of the device that affect that measurement.

From a core memory dump written as a hex string to an XML document... anything that reveals a difference in the settings that affect the measurement would be fine for documentation.

Something like a big property-node readout followed by a Format Into String... but make sure not to miss a property... and it gets a bit more complicated when it comes to signal routing...
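
As a rough illustration of that property-readout idea, here is a sketch in the Python nidaqmx API; only a handful of hand-picked properties are shown, which is exactly the problem -- keeping such a list complete by hand is error-prone.

```python
# Dump a few channel settings to a text report (placeholder channel name).
import nidaqmx

with nidaqmx.Task() as task:
    ch = task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    props = ("physical_channel", "ai_min", "ai_max", "ai_term_cfg")
    report = "\n".join(f"{p} = {getattr(ch, p)}" for p in props)
    print(report)   # incomplete by construction -- routing etc. is missing
```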

 

A measurement that isn't sufficiently documented is all for naught. 

or

Just think of a nasty auditor 😉

 

It's so easy to make measurements with LabVIEW; please make it just as easy and consistent to document them.

 

Example:

A quick measurement setup with the DAQ Assistant/Express VIs fills gigabytes, but after a certain time the data is useless because nobody knows how it was taken. A simple checkbox could add all this information to the waveform's variant attributes (or to TDMS, or ...), even if the operator doesn't have a clue about all the settings that affect his measurements.

 

Hi All,

 

In my post on the LabVIEW board I asked if it was possible to have control over the DIO of a simulated DAQ device. Unfortunately it seems this feature is not available; once MAX is closed, the DIOs run through their own sequences.

 

If there were a non-blocking way to control a simulated DAQ device through MAX, it would permit much simpler prototyping of systems before they need to be deployed to hardware. For example, if you want to see how a program responds to a value change, you would simply enter it in the non-blocking MAX UI. Or, as in my original case, it could make an executable usable even when you don't have all the necessary hardware.

 

I think this feature should only be available for simulated devices.

 

Thanks for reading - and hopefully voting,

Dave

 

When a piece of hardware is simulated with MAX, I would like to be able to insert a transfer function or a signal-simulating VI to get a more realistic test of a system. The current default of generating a sine wave for simulated acquisition only lets me test part of the code. If a transfer function, lookup table, or custom VI could be substituted for the sine wave generation, I would be able to test many other facets of a system.

Multiple people have requested a natural way for LabVIEW and SignalExpress to do a rotational speed measurement using a quadrature encoder. An Express VI under "Acquire Signals >> Counter Input >> Rotational Speed" that asks basic quadrature-encoder questions (such as ticks per revolution and decoding type: x1, x2, x4) and computes the rotational speed would be very useful. In addition, it could then be converted into a shipping example for DAQmx relatively easily. I have had multiple people ask this question and believe that, especially within SignalExpress, this would be very useful.

 

 

Rotation.png
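
Under the hood, the proposed step would presumably do something like the following sketch (Python nidaqmx API; the counter name, ticks per revolution, and decoding type are placeholders): create an angular-encoder counter channel, sample the angle twice, and differentiate to get speed.

```python
# Read an angular encoder and estimate rotational speed from two samples.
import time
import nidaqmx
from nidaqmx.constants import AngleUnits, EncoderType

with nidaqmx.Task() as task:
    task.ci_channels.add_ci_ang_encoder_chan(
        "Dev1/ctr0",
        decoding_type=EncoderType.X_4,    # x1 / x2 / x4 decoding
        units=AngleUnits.DEGREES,
        pulses_per_rev=1024)              # ticks per revolution
    task.start()

    angle_0, t_0 = task.read(), time.time()
    time.sleep(0.1)                       # crude measurement interval
    angle_1, t_1 = task.read(), time.time()

    rpm = (angle_1 - angle_0) / 360.0 / (t_1 - t_0) * 60.0
    print(f"rotational speed: {rpm:.1f} RPM")
```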


There is currently no API available to develop applications that will use the functionality of the GPIB Analyzer. There are customers who would like to be able to monitor the GPIB bus from LabVIEW, so this would be helpful.