06-28-2013 11:29 AM
LV 2010, PXI 8101 controller, NI-DAQ 9.2.2
Starting a new project using the PXI 1050 combo box: part PXI, part SCXI.
I'm trying out things to see how fast I can go.
I have a simple program which configures 8 LVDT channels at 1000 Hz sample rate and enters a loop:
The loop simply reads the AVAILABLE SAMPLES PER CHAN property and then reads one sample, with zero timeout.
I then check for a QUIT button and loop if not clicked.
For now, I am using the SIMULATED DEVICE for a PXI 6251 Controller and the PXI1050 chassis and SCXI 1540 LVDT board.
I have the real PXI box and a real PXI 6251, but I don't have the real SCXI 1540 (the client does).
Here's the code - the other frame of the loop just samples the QUIT button.
The VI at the left does CREATE AI CHANNEL-LVDT 8 times for 8 consecutive channels and appends each to the task.
I'm recording the value of AVAILABLE SAMPLES as a diagnostic.
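In text form, the loop amounts to something like the following nidaqmx-Python sketch (a stand-in for the LabVIEW diagram; the device name "PXI1Slot2", the use of add_ai_voltage_chan in place of the LVDT channel-creation call, and the fixed iteration count in place of the QUIT button are all assumptions):
```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Eight consecutive AI channels; a plain voltage channel stands in here
    # for the SCXI-1540 LVDT channel type created in the LabVIEW code.
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:7")
    # Continuous sampling at 1000 Hz, default (driver-chosen) buffer size.
    task.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()

    avail_log = []
    for _ in range(2000):                       # stand-in for looping until QUIT
        # Diagnostic: how many samples per channel are sitting in the buffer?
        avail_log.append(task.in_stream.avail_samp_per_chan)
        try:
            # Read one sample per channel with a zero timeout.
            task.read(number_of_samples_per_channel=1, timeout=0.0)
        except nidaqmx.DaqError:
            pass                                # nothing there yet; loop again
    task.stop()
```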
What I see is this:
It looks like I joined something in midstream, and the first one is off.
The rest of it looks like somebody DUMPS 20 samples into the buffer periodically.
I pull one out, I pull one out, I pull one out, and it gets empty and stays empty until something dumps another 20 in there.
Sure enough, if I disable the READ part, and just log the SAMPLES AVAILABLE, it goes up in steps of 20:
Where does "20" come from?
If I change my sample rate to 100 Hz (not 1000), then it dumps 2 samples, not 20:
I would expect that second graph to show 0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1
It puts a sample in, and I take it out.
I'm using whatever default buffer happens.
I've used something similar on a different PXI box for years, but that was a real device.
Is this a bug / feature of the simulated thingy-do?
I stumbled around and found a "SCAN ENGINE", but I'm not sure what it's doing for (or to) me. It didn't seem to have an effect on this issue.
Anybody have ideas?
06-28-2013 12:55 PM
Here I changed the code to WAIT on a single sample.
I also recorded the value of a 100 nSec timer, AFTER each DAQmx READ.
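The same change in sketch form (again nidaqmx-Python with assumed device names; time.perf_counter_ns() stands in for the 100 nSec tick counter):
```python
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:7")   # assumed channels
    task.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()

    ticks = []
    for _ in range(2000):
        # Blocking read: WAIT (up to 10 s) for one sample instead of polling.
        task.read(number_of_samples_per_channel=1, timeout=10.0)
        ticks.append(time.perf_counter_ns())    # timestamp AFTER each read
    # Differences between successive entries in 'ticks' give the loop-to-loop
    # times: bursts of reads ~20 uSec apart, then a ~10 mSec gap.
```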
The DUMP number went back to 10 (where does "10" come from?)
The timer values are around 200 ticks apart (20 uSec), until there's a jump of 100,000 ticks (10 mSec)
Where is the clog in the pipeline?
06-28-2013 03:07 PM
So... I stripped out all the other stuff, and changed the code to use the REAL 6251 board, not the simulated one.
I don't have a real LVDT board to go in it, but it should work for the test.
Here, I start a task with one channel, and start a timer.
The 0th time thru the loop, I read all available samples, just to clear out the buffer.
I'm recording AVAIL SAMPLES each time, BEFORE a DAQmx READ with ZERO timeout.
I'm also recording the difference in time (100 nSec ticks) between loops.
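Roughly the same stripped-down test written as a nidaqmx-Python sketch (device name, channel, and iteration count are assumptions; READ_ALL_AVAILABLE does the buffer-clearing read on the 0th pass):
```python
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")    # one channel, real 6251
    task.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()

    avail_log, dt_log = [], []
    last = time.perf_counter_ns()
    for i in range(5000):
        if i == 0:
            # 0th pass: read whatever is there, just to clear out the buffer.
            task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE, timeout=0.0)
            continue
        avail_log.append(task.in_stream.avail_samp_per_chan)  # BEFORE the read
        try:
            task.read(number_of_samples_per_channel=1, timeout=0.0)  # zero timeout
        except nidaqmx.DaqError:
            pass
        now = time.perf_counter_ns()
        dt_log.append(now - last)               # loop-to-loop time, in nSec
        last = now
```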
For the AVAIL SAMPLES, I would expect to see 0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1, and so forth.
I don't.
The AVAIL SAMPLES count gets TWO samples dumped in every now and then; I read one in one loop and the other in the next loop.
The average loop time is 2750 ticks (275 uSec). WHY?
When samples get dumped in, it's like the loop wakes up and processes them about 40 uSec apart.
But what is it doing in the meantime?
And why doesn't NI-DAQ tell me when a sample is ready?
06-28-2013 05:16 PM
So, I went back to the old system. It's a different PXI box (PXI1042 with 8196 controller and PXI 6221 for ADIO, other cards too).
I already had a timer set up there, so I stripped out all the code and did the same testing.
It's running at 100 Hz (not 1000) but that shouldn't matter.
I timed the loop-to-loop interval and it's dead on 10000 uSec (10 mSec), with a couple of glitches.
I put the SAMPLES AVAILABLE property node ahead of the DAQ READ and recorded it too.
That turned out to be a constant zero, which is what I would expect, actually.
There's nothing for the CPU to do except wait on the sample, so it loops and finds no sample available and waits.
So, what's the difference?
1... 100 Hz vs. 1000 Hz - well, I tried the new system at 100 Hz and still saw the problem.
2... 6221 vs. 6251. Hard to see how the hardware would clog up the software this way.
3... 8101 vs. 8196 - maybe not the hardware, but the OS on it?
4... The old system calculated a wait time (1 / Sample Rate), applied a fudge factor of 1.1, and used that as a timeout (see the sketch just after this list).
I don't see how that can affect anything; I used a 10 sec timeout and it's not timing out at all, anyway.
5... The old system did NOT have a SCXI chassis attached to the 6221. The new one DOES.
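For reference, the timeout calculation mentioned in item 4 is just this (shown in Python for concreteness; the variable names are mine):
```python
sample_rate = 100.0                      # Hz
timeout = 1.1 * (1.0 / sample_rate)      # one sample period plus a 10% fudge factor
# At 100 Hz that is 0.011 s (11 ms), which is then wired into the DAQmx Read
# call as its timeout; a read that blocks longer than that would error out.
```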
I tried changing the new one to 100 Hz and a short but adequate timeout. It never times out. No errors occur.
That eliminates #1 and #4 above.
It's still dumping TWO samples at once (or at 1000 Hz, it's dumping ....
Hmmm. It's still dumping TWO. Earlier it was dumping TEN or TWENTY.
Confusing.
06-28-2013 05:17 PM
Oh, right - it was the SIMULATOR that was dumping TEN and TWENTY.
06-29-2013 07:36 AM
I moved to a separate simulated PXI 6251 device WITHOUT a SCXI chassis, on the idea that maybe the SCXI attachment was affecting my results by trying to use the board.
Apparently not.
I changed to simulated PXI 6221 (to match the old system that works) and got no change.
The upper graph shows the number of samples available (Y) vs loop number.
The lower graph shows time between loops (Y - uSec) vs loop number.
The low numbers on the lower graph are around 16-17 uSec - the basic loop time, I suppose.
It still looks like the thing is dumping 10 samples into the buffer every 10-12 mSec. WHY?
Here's the code:
I have set the SCAN ENGINE to 200 mSec and lower priority, to avoid possible interference.
Why doesn't it give me one sample every one mSec instead of 10 samples every 10 mSec?
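As an aside, if there is any doubt about which device a task ended up bound to, the driver can report whether a device is simulated; a small nidaqmx-Python check (the device name is an assumption):
```python
import nidaqmx.system

dev = nidaqmx.system.Device("PXI1Slot2")           # assumed device name
print(dev.product_type, "- simulated:", dev.is_simulated)
```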
06-30-2013 09:01 AM
Well, I've discovered something, but I'm not sure what it means.
Here's the code and the results.
I would expect the SAMPLES IN BUFFER to be continuous 0 (because it's waiting on one sample and then reading it).
I'm thinking it should get back to the READ long before the sample is ready.
Not so:
07-01-2013 06:32 AM
So, stripping all the extraneous stuff out and using HW TIMED sampling, here's the code and the results:
Darn near perfect.
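In sketch form, the hardware-timed single-point configuration looks like this in nidaqmx-Python (device name and rate are assumptions, and the mode only works on hardware that supports it, so treat this as an illustration rather than the LabVIEW code shown here):
```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")    # assumed device/channel
    # Hardware-timed single point: one sample per clock tick, and the read
    # blocks until that tick's sample is available.
    task.timing.cfg_samp_clk_timing(
        1000.0, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    task.start()
    for _ in range(1000):
        sample = task.read()           # exactly one point per loop, clock-paced
    task.stop()
```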
The SIMULATED device has also lost the weird dump-20-samples-in-at-one-time behavior, but can't keep up.
At 4000 Hz, it presents a sample every 2000 uSec (should be 250)
At 2000 Hz, it presents a sample every 2000 uSec (should be 500)
At 1000 Hz, it presents a sample every 4000 uSec (should be 1000)
At 500 Hz, it presents a sample every 4000 uSec (should be 2000)
At 250 Hz, it presents a sample every 6000 uSec (should be 4000)
At 100 Hz, it presents a sample every 12000 uSec (should be 10000)
At 10 Hz, it presents a sample every 101996 uSec (should be 100000).
I've no idea about this. That's not just a matter of being unable to keep up.
07-01-2013 10:21 AM
Hi Steve,
I am not sure if there is a specific question within your posts that you are hoping to have addressed. However, I read through the thread and wanted to see if I could clarify some things for you.
As you can see, whenever you used the real devices, all of the timing and sample reading seemed to work correctly, as indicated by your last post. Most of the discrepancies seemed to appear when you used simulated devices. Section 4 of this document may prove helpful. It discusses how timing is not going to be consistent when using simulated devices, since they obviously do not have hardware timing, on-board buffers, etc.
If there are any specific questions you would like to have answered, feel free to let me know, and I will see what I can do to assist you.
Hopefully this is helpful.
07-01-2013 10:42 AM
@Thomas-B wrote:
As you can see, whenever you used the real devices all of the timing and sample reading seemed to work correctly
I disagree with that, although your definition of "correct" and mine may be different.
It's possible that NI-DAQ has evolved out from under me.
Once upon a time there were TWO choices: FINITE samples and CONTINUOUS samples.
Here's the basic question in a nutshell:
On a real or simulated device, if I use CONTINUOUS SAMPLES, then I see the AVAILABLE SAMPLES property jump up (to 10 or 20 or something on a simulated device, at least 2 on a real device), and DAQmx READ will wait on that batch of 10 or 20 or whatever. It DOES NOT report samples that should already be there.
WHY?
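To make the question concrete, here is a minimal nidaqmx-Python reproduction sketch (device name assumed): with CONTINUOUS SAMPLES, the printed avail_samp_per_chan values climb in batches rather than by one, and the single-sample read blocks until a whole batch lands in the buffer.
```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")    # assumed device name
    task.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()

    for _ in range(100):
        avail = task.in_stream.avail_samp_per_chan   # jumps by 2, 10, 20... not by 1
        task.read(number_of_samples_per_channel=1, timeout=10.0)
        print(avail)
```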