
Do DAQmx EveryNSamplesEvents poll/run asynchronously?

Hi Trevor,

 

I'm sorry to hear that. Please post it to our ftp site, and drop me a note here once you've done so.

 

Thanks,

Luis

Message 11 of 21

Luis,

 

Code is in MyMultiTaskProject2.zip (MyMultiTaskProject.zip is also mine but probably not a complete transfer... delete both when you've got the second).

 

Also, there is a second post which outlines the memory/link problem and a potential workaround... Re: Out of Memory In LabWindows/CVI 8.1. The problem appears to be in the link step.

 

Trevor

Message 12 of 21

Luis,

 

I just got out of my meeting and was able to test the software I sent you on the development system. On that system my test code, using an async timer to plot generic data at the same rate and buffer size as the DAQmx version, has the same CPU utilization issues and display-freezing behavior as the DAQmx version.

 

Please let me know the outcome when you've completed testing the code.

 

One additional point: it should be clear from the code that you'll need to configure three tasks in MAX; these configure the software the same way for the async-timer version as for the DAQmx version. Also, a single compiler preprocessor directive will toggle the software between DAQmx and async-timer mode (recompile/link required). I have included exported versions of the DAQmx simulated devices/channels/tasks along with the project files.
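
To illustrate, a minimal sketch of that kind of single-directive toggle; the macro and function names (USE_ASYNC_TIMER, FakeDataTimerCB, EveryNSamplesCB, aiTask) are placeholders, not the project's actual symbols:

#include "asynctmr.h"
#include <NIDAQmx.h>

/* Placeholder prototypes/handles, assumed defined elsewhere in the project. */
int CVICALLBACK FakeDataTimerCB (int reserved, int timerId, int event,
                                 void *callbackData, int eventData1,
                                 int eventData2);
int32 CVICALLBACK EveryNSamplesCB (TaskHandle task, int32 eventType,
                                   uInt32 nSamples, void *callbackData);
extern TaskHandle aiTask;

#define USE_ASYNC_TIMER   /* comment out to build the DAQmx version */

static int SetUpDataSource (void)
{
#ifdef USE_ASYNC_TIMER
    /* Generate fake data blocks every 0.25 s with an async timer. */
    return NewAsyncTimer (0.25, -1, 1, FakeDataTimerCB, NULL);
#else
    /* Fire the callback after every 50 samples land in the input buffer. */
    return DAQmxRegisterEveryNSamplesEvent (aiTask,
        DAQmx_Val_Acquired_Into_Buffer, 50, 0, EveryNSamplesCB, NULL);
#endif
}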

 

Trevor

Message 13 of 21

Hi Trevor,

 

Thanks for submitting your test program. I've been able to reproduce the "out of memory" error you've been seeing, so I've been investigating that problem first. I'll try to tackle the DAQ/timer problem tomorrow.

 

I did confirm that this is a previously unknown bug in CVI -- one that has been around for a very long time, undetected. The conditions that cause this bug to manifest are extremely rare, but you were nevertheless "lucky" enough to replicate them. Fortunately, there's a fairly easy workaround that should make those circumstances go away.

 

The problem happens when, at a particular point during the link phase, an internal memory allocation is requested for a list of imported symbols and the size of that list is between 8187 and 8189 symbols. In the case of your project, it happened to be exactly 8188. This list consists of all the symbols that your project defines, plus all the symbols it imports from the libraries (including the CVI libraries) associated with the project. It just so happened that when you added the new source file, the total came to 8188. This total depends on what is in your project, of course, but also on the specific version of CVI that you're using.

 

One workaround consists of adding some gratuitous functions to your source file so that the total number of symbols changes. For example, I added the following code to myAsyncTimer.c, which was enough to no longer trigger the bug:

 

/* Gratuitous empty functions whose only purpose is to shift the
   project's total symbol count out of the 8187-8189 range that
   triggers the linker bug. */
void foo  (void) {}
void foo1 (void) {}
void foo2 (void) {}
void foo3 (void) {}
void foo4 (void) {}
void foo5 (void) {}
 

I'm sorry again for the double inconvenience. I'll look at the other problem next, and I'll let you know when I have some more information.

 

Luis

 

Message 14 of 21

Hi again,

 

With respect to the unresponsiveness of the GUI, there are two issues that contribute to this:

 

1. Data is being generated by the timer (or read from the DAQ device) faster than it can be plotted. Part of the reason for this is the sample rate, of course, but another big culprit is the very large number of points per screen (almost 10000) configured for the chart. This is much higher than the pixel density of the chart (so it probably does not provide useful visual information), yet it dramatically slows down the chart.

 

2. The unresponsiveness of the GUI comes from what is going on inside your thread-safe queue callback (DAM_CORE_ItemsInQueueCB). Even though you have a callback set up for the queue, you're essentially polling the queue inside the callback and not allowing the callback to exit until the queue is empty. But because of the previous issue, that never happens (the other thread is adding data to the queue faster than this thread can remove it). Because the callback never exits, and because the callback does not call ProcessSystemEvents in its main loop, the UI is unresponsive.

What you are doing is mixing two approaches: callback-based and polling. If you are polling for data continuously, it's a bad idea to have a callback, and you don't really need one. Just set up your polling loop instead of calling RunUserInterface, and don't install a callback for the queue. Just be sure to call ProcessSystemEvents somewhere in the loop. But if you choose the callback approach, then just plot once and exit the callback; your callback will be called again, soon enough, if the amount of data in the queue is above your specified threshold. In general, it's a bad idea to stay in a callback for a long time, let alone indefinitely.
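
A minimal sketch of the polling approach just described, assuming a thread-safe queue of doubles and placeholder names (gQueue, gPanel, PANEL_CHART, gDone):

#include <utility.h>
#include <userint.h>

extern CmtTSQHandle  gQueue;  /* created elsewhere with CmtNewTSQ */
extern int           gPanel;  /* panel handle from LoadPanel */
extern volatile int  gDone;   /* set by a Quit handler */
/* PANEL_CHART is assumed to come from the panel's .uir include file. */

static void PollingLoop (void)
{
    double buf[80];   /* e.g. one 8-channel x 10-sample block */
    int    itemsRead;

    while (!gDone) {
        /* Timeout of 0: return immediately with whatever is queued. */
        itemsRead = CmtReadTSQData (gQueue, buf, 80, 0, 0);
        if (itemsRead > 0)
            PlotStripChart (gPanel, PANEL_CHART, buf, itemsRead, 0, 0,
                            VAL_DOUBLE);
        /* Keep the UI alive, since RunUserInterface is never called. */
        ProcessSystemEvents ();
    }
}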

Another thing you should consider, whether you're polling or using a callback, is to plot the entire contents of the queue each time, rather than just one 10-sample block at a time. Because data is being written too fast (or read too slowly), this is your only hope of ever catching up, since it's faster to plot, say, 100 points once, than 10 points x 10 times. And the user will probably not really be able to tell, at these speeds, if you're plotting 10 points at a time, or more than that.

 

Luis

Message 15 of 21

Luis,

I'm back for more...

After having branched the source tree for "the real project" to pre-DAQmx days, remerged the code changes to the GUI and gotten the users up and running on the old USB/Traditional-DAQ version of this application, I'm delving back into attempting to solve these performance issues. I mention this because the "core" code is used in "the real project" to handle data buffering between the DAQ operations, disk streaming and GUI updating. "The real project" uses a similar stripchart configuration (~10K POINTS_PER_SCREEN) and with our required sampling frequencies it works fine on a Pentium-based laptop. The primary goal in switching to DAQmx was to move to newer PCI M-Series hardware that could provide more "real-time-like" performance (USB data transfers are interrupt-based and bandwidth-limited in a way that a PCI-based system SHOULD NOT be) and a similar code base for that system and others we are developing using M-Series DAQ hardware. To reiterate... the code is WORKING on the USB-based system, though at a slower "update rate" (200Hz, 8 channels, WORD, 50 samples per buffer) than the PCI system (200Hz, 8 channels, FLOAT64, 20 samples per buffer).

My responses to your comments follow...

1. I can't believe this is true. I've verified that neither the DAQ acquisition task nor the timer generation task is creating more than the requested amount of data. We are talking about AT MOST 200 samples of data per second, or some divisor of that (i.e. 50 samples every 1/4 second or 20 samples every 1/10 second), for each channel, and I'm only using 8 channels.

In order to support this contention I modified the source of my test project (not of "the real project") so that it downsamples the data to the stripchart's horizontal resolution BEFORE plotting (I expect something similar is happening in the stripchart plotting code somewhere anyway). The problem remains on the target system but is NOT demonstrated on my development system with EITHER simulated DAQ or async-timer-generated data. This CANNOT be due to the number of points per screen really slowing down the chart; the plot is now limited to the horizontal pixel density (500 pixels).
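
For reference, a minimal sketch of this kind of pre-plot decimation; the every-Nth-sample strategy is an assumption about the approach, not the actual test code:

/* Reduce 'count' samples to at most 'maxPoints' by keeping every Nth
   sample, so a plot never exceeds the chart's horizontal pixel count. */
static int Downsample (const double *in, int count, double *out, int maxPoints)
{
    int stride = (count + maxPoints - 1) / maxPoints;   /* ceiling division */
    int i, n = 0;

    if (stride < 1)
        stride = 1;
    for (i = 0; i < count; i += stride)
        out[n++] = in[i];
    return n;   /* number of points written to 'out' */
}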

2. The reason there are two "techniques" implemented in the queue callback is based on "practical use". In our USB version of the software the data displayed in the strip chart was found to "lag" real-time input if/when the user operated the GUI and/or moved the application window. This is "normal" behavior in the sense that GUI updates are "paused" if you click-hold on the title bar. The "spin" within DAM_CORE_ItemsInQueueCB() only gets "activated" in these rare instances and serves to "flush" the queue as you have suggested. Our application hardly ever makes use of this functionality, as most user input is single keystrokes and/or button clicks used to advance through the state machine of our application. In order to quell your concern I added code to "limit" this spinning to a predetermined number of iterations within a single invocation of DAM_CORE_ItemsInQueueCB(). Lowering the allowed "spin count" will force the application to exit the callback after that number of spins (it can be as low as 1). It is clear, at least to me, that DAM_CORE_ItemsInQueueCB() is ONLY called when data is put on the queue. It does NOT get called (prompted by some async timer) if data remains in the queue between writes. This is exactly what happens when our writer (DAQ) continues to write to the queue while our readers (GUI/disk) are "paused". Due to the architecture of the "system", changing the read routine to flush the queue in a single read is less desirable than "spinning" in the reader callback. And... as I've already explained... this WORKS fine (practically) in our USB DAQ implementation. Even reducing the spin count to 1 and lowering the DAQ generation rate to 1 buffer every 0.5 seconds doesn't stop the test program from grabbing huge CPU and making the GUI unresponsive.
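
For illustration, a minimal sketch of the spin-throttled callback described above; MAX_SPINS, the buffer size, and the gPanel/PANEL_CHART identifiers are assumptions, not the actual code:

#include <utility.h>
#include <userint.h>

#define MAX_SPINS 10      /* assumed limit; can be as low as 1 */

extern int gPanel;        /* PANEL_CHART assumed from the .uir include */

/* Drain at most MAX_SPINS blocks per invocation, then return so the
   thread can process UI events before the next callback fires. */
void CVICALLBACK DAM_CORE_ItemsInQueueCB (CmtTSQHandle queue,
    unsigned int event, int value, void *callbackData)
{
    double buf[80];   /* e.g. one 8-channel x 10-sample block */
    int    spins, itemsRead;

    for (spins = 0; spins < MAX_SPINS; spins++) {
        itemsRead = CmtReadTSQData (queue, buf, 80, 0, 0);
        if (itemsRead <= 0)
            break;    /* queue empty: wait for the next callback */
        PlotStripChart (gPanel, PANEL_CHART, buf, itemsRead, 0, 0,
                        VAL_DOUBLE);
    }
}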

 

We need to be looking somewhere else...

Message 16 of 21

Luis,

 

I started the previous post last week and decided it was still important to include, even though subsequent work has since revealed the potential problem.

 

Looking back through my posts to this thread, it seems I've left out information that may be important and that, through a process of elimination, may have uncovered the root of my problem.

 

My primary development system is "LabWindows/CVI  8.5.1 (356) (Microsoft Windows 2000/XP/Vista)" running on Windows Vista Business (32bit). My deployed system is Windows XP Professional SP3 with only the required DAQmx/TraditionalDAQ/MAX/CVI Runtime software installed (ie. no IDE).

 

Out of curiosity I moved my test code (MyMultiTaskProject) over to my "old" Windows XP SP3 development machine, which still has "LabWindows/CVI 8.1.0 (271) (Windows 2000/XP)" installed on it. After making a few modifications to the application (the stripchart control changed between 8.1 and 8.5 and won't compile as is... note: I use the UI to Code Converter with this and other projects so I don't have to ship the .UIR file) and recompiling it on the old system, I ran both the async timer and DAQmx versions of the test application on the Windows XP target machine. The CPU utilization problem and GUI responsiveness issue are GONE. I ONLY copied the exe files from the "old" development system to the target. The SAME CVI runtime files are being used on the target machine with both the Vista/CVI 8.5.1-compiled and the XP/CVI 8.1.0-compiled versions of the exe file.

 

This suggests, to me anyway, that the problem is an issue on the DEVELOPMENT side when using CVI 8.5.1 on Vista and targeting XP (remember, the test application runs fine on Vista when compiled on Vista). Hopefully this additional information can help you figure out what is going on. For the time being I'm going to develop this application with CVI 8.1.0 on Windows XP SP3 until I hear of a solution from you.

 

Thanks,

Trevor

Message 17 of 21

Luis,

 

Just noticed the CVI 9 announcement on the board. The release notes list the following bugs...

 





• Bug 120414: Strip chart scrolling is very slow when many points are plotted and y-axis autoscaling is disabled (fixed: Yes)
• Bug 41422: Plotting a large number of points at once to the strip chart causes the x-scale range to be incorrectly updated (fixed: Yes)

 

Any thoughts on whether either or both of these might be involved?

 

Trevor

Message 18 of 21

Hello Trevor,

 

Luis has been pretty busy this week, so I have taken a look through the code you posted to the ftp site earlier and have a couple of suggestions.

 

The slowdown you are seeing on the strip chart comes from two main sources. First, you are plotting 10000 points per screen, which is at the very upper limit of the stripchart's capability. Second, you are trying to plot to the graph as soon as a new data set is available. When you combine these with the fact that the stripchart must completely redraw each time it plots when it is configured to use VAL_SWEEP as the scroll mode, you see the slowdown/unresponsiveness you noted in the strip chart. There are a couple of things you can do to improve the speed/responsiveness of the strip chart (a short configuration sketch follows the list):

 

  • The biggest improvement you will see is if you use VAL_CONTINUOUS instead of VAL_SWEEP. This mode does not require a redraw with each plot, so each plot occurs much more quickly. If you use this mode, you should be able to disregard both of the performance-improving tips below.
  • If you must use VAL_SWEEP, you can improve the performance in two ways; implementing both will provide the best results.
  1. The first thing you can try is reducing the number of points per screen. Because VAL_SWEEP must redraw each time, the fewer points it needs to redraw, the better. I was able to see relatively good performance with points per screen around 2000.
  2. The second thing you can try is plotting only when you have multiple data sets ready to plot. What this basically means is that for the TSQ you have set up, you would specify an iThresholdValue of 10 or 20 or even iQueueNumItems / 2 instead of 1. This won't give the appearance of continuous drawing, but again, this will cut down on the number of redraws that are necessary for the strip chart, drastically improving performance. I will describe what I feel is the best way to implement this below.
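
For reference, a minimal sketch of the chart configuration being suggested; PANEL_CHART is a placeholder for the actual control ID from the .uir include file:

#include <userint.h>

static void ConfigureChart (int panel)
{
    /* Continuous scrolling avoids a full redraw on every plot... */
    SetCtrlAttribute (panel, PANEL_CHART, ATTR_SCROLL_MODE, VAL_CONTINUOUS);
    /* ...or, if sweep mode must stay, cut the redraw cost instead. */
    SetCtrlAttribute (panel, PANEL_CHART, ATTR_POINTS_PER_SCREEN, 2000);
}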


I know that you have already discussed UI responsiveness with Luis, and I can understand your reluctance to change the callback model that you have implemented.  However, I have a few suggestions that you may find helpful, and hope that you will give them some consideration.  If I understand correctly, the reason that you have implemented the while loop in the callback is in case the callback function is not executed each time a new data set is placed into the TSQ, whether because of user interaction or some other reason.  However, by making a very small change to your callback function, I think that you can achieve this same goal, but retain the callback model that allows for events to be processed by RunUserInterface().  Following this model will even allow you to use an iThresholdValue of 1 without getting too far behind. 

 

To implement this, instead of querying how many items are available in the TSQ as a parameter of your while loop, remove the while loop altogether and use that item count to determine how many items to pull from the TSQ, as well as to size your buffer appropriately. This way, if you are able to process data quickly enough to plot each new piece of data, the graph will update seemingly continuously. However, if the user moves the window, or some other interaction occurs, this will be accounted for, and the next time you plot, you will plot everything that built up in the queue while the user was interacting with the UI. This is a further improvement because the UI will only have to redraw once to catch up, instead of once for each data set that was missed.
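
A rough sketch of that change, assuming a queue of doubles and placeholder gPanel/PANEL_CHART identifiers (the actual control IDs and data layout will differ):

#include <ansi_c.h>
#include <utility.h>
#include <userint.h>

extern int gPanel;   /* PANEL_CHART assumed from the .uir include */

/* No while loop: read everything that has accumulated and plot it in
   a single call; the callback fires again when more data arrives. */
void CVICALLBACK ItemsInQueueCB (CmtTSQHandle queue, unsigned int event,
                                 int value, void *callbackData)
{
    int     itemsInQueue = 0;   /* item count (size_t in newer CVI versions) */
    int     itemsRead;
    double *buf;

    CmtGetTSQAttribute (queue, ATTR_TSQ_ITEMS_IN_QUEUE, &itemsInQueue);
    if (itemsInQueue <= 0)
        return;

    buf = malloc (itemsInQueue * sizeof (double));
    if (buf == NULL)
        return;

    itemsRead = CmtReadTSQData (queue, buf, itemsInQueue, 0, 0);
    if (itemsRead > 0)
        PlotStripChart (gPanel, PANEL_CHART, buf, itemsRead, 0, 0,
                        VAL_DOUBLE);
    free (buf);
}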

 

I have implemented all of these things (albeit rather roughly) in the code that you posted earlier, and if you would like to have a look at it for reference, I would be happy to email it to you or post it back to the ftp site. If you would prefer email, simply post back stating that it's OK for me to ask the board administrator for your email address.

 

If you have any questions about anything I've said, please don't hesitate to let me know, and I'll do my best to clarify.

 

NickB

National Instruments 

Message 19 of 21

Nick,

 

Thanks for helping out. I'm not sure you have the full picture of what's going on, so in addition to responding to your suggestions I'll add some more information.

 

The sample code you have is a very small subset of the functionality of the application/modules that I am migrating from Traditional DAQ to DAQmx. Without going into extensive detail I'll try to explain as best I can... The test application is compiled into a single EXE file and was meant to test the loading/running of multiple DAQmx tasks created in MAX: one analog input task, one analog output task and one digital output task. The "basic" loading and running of these tasks works, but the performance of the analog input task is what has taken me down this long path. Briefly, the "tasks" in the Traditional DAQ and DAQmx versions are the same. The analog output task uses a single voltage update to turn an LED on/off. The digital output task toggles the state of two digital I/O lines which operate pneumatically controlled valves. The analog input task acquires 8 differential inputs at 200 Hz with 50 samples per channel per buffer. In both cases the DAQ hardware is connected to an SC-2345 signal conditioning enclosure that has the inputs and outputs attached.
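
For context, a minimal sketch of loading and starting tasks created in MAX; the task names ("AITask", etc.) are assumptions standing in for the real ones:

#include <NIDAQmx.h>

/* Handles for the three MAX-created tasks. */
static TaskHandle aiTask, aoTask, doTask;

static int LoadTasks (void)
{
    /* DAQmxLoadTask looks a task up by its name in MAX. */
    if (DAQmxLoadTask ("AITask", &aiTask) < 0) return -1;
    if (DAQmxLoadTask ("AOTask", &aoTask) < 0) return -1;
    if (DAQmxLoadTask ("DOTask", &doTask) < 0) return -1;
    /* Start the continuous analog input; outputs are written on demand. */
    if (DAQmxStartTask (aiTask) < 0) return -1;
    return 0;
}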

 

As I've attempted to resolve the various performance issues of the analog input task, I have pulled more and more code in from the "real" application (TSQ, async timer, etc.) in order to solve the suspected problems without having to completely rewrite the "real" application in the test application. The major drawback of this is that you are not seeing how the pieces I've pulled in relate to other pieces of the real application. Some of your and Luis's suggestions are not possible in the "real" application, while others are.

 

All that said, however, the biggest problem I have with the suggestions that you and Luis have provided is that the previous version of the application, running under Traditional DAQ with a DAQPad-6020E, does not have ANY of these performance problems. Something has changed to cause all of this to surface. The problem is finding out which change is the culprit, if it can be blamed on any one single change. The most likely factors are the following...

 

- Development OS changed from Windows XP to Windows Vista (the target, Windows XP, has NOT changed)

- Development environment changed from LabWindows/CVI 8.1.0 to 8.5.1

- DAQ driver changed from Traditional DAQ 7.4.4 to DAQmx (version from August 2008 Dev Suite, don't know number off the top of my head)

 

The last thing I did was move my test application to my OLD Dev OS and OLD Dev IDE. This version of the application runs fine with the same DAQmx driver that the NEW Dev OS and NEW Dev IDE fail on. This is on the exact same Windows XP target machine under identical operating conditions. This suggests that DAQmx is NOT the problem and that it lies in the CVI runtime, the IDE itself, or the process of "cross-compiling" from Vista to Windows XP.

 

I will suggest to the users the possibility of using Continuous-mode plotting instead of Sweep mode, but my guess is they want what they already have. It's possible, as you suggest, that this would improve performance, but unless THIS is the fundamental difference in the runtime between 8.1.0 and 8.5.1, I think it will only mask some other, yet-to-be-discovered, problem.

 

Also, if you look back through my notes you will find that I've already modified MY code (not the version you have), making two of Luis's previously suggested changes: namely, significantly reducing POINTS_PER_SCREEN (from 9001 to 501) and "spin-throttling" the while loop inside the queue callback (changing the buffer size is one of the suggestions that WILL NOT WORK). In my tests neither of these changes solves the performance problems. I can send you the newer versions of my code if you wish to see whether they are similar to the changes you made.

 

So... in summary... I have shelved CVI 8.5.1 development for the time being (OH... and now 9.0 is out!) in favor of 8.1.0, which again WORKS, but I think somebody at NI should be looking at what the REAL problem is. While it's possible (likely?) that it is just me, I HAVE found other obscure bugs in CVI in the past and know I'm not ALWAYS the one making stupid mistakes.

 

Again... thanks for your help.

 

Trevor

Message 20 of 21