LabVIEW


Error -200141 occurred at Counter - Read Pulse Width and Frequency (Continuous).vi


Let me repeat: I need continuous two-edge-separation data points at a rate of 200k samples per second or greater. I also need to plot this data to verify a smooth transition or identify irregularities during the UUT's 5-10 second dynamic delay profile. I'm almost there with implicit timing; however, I would prefer a sample rate greater than the frequency of the pulse train as measured by the counter (200 kHz).


Unfortunately, counters on DAQmx cards use implicit timing, so the effective sampling rate is defined by how often you read the counter in software. And because of OS and driver limitations, reading more often than once per millisecond may not be reliable or guaranteed. With that said, reading the counter at 1 kHz at best (and not necessarily equally spaced in time) is the reality you may need to work with.

 

I am no expert with counters, but I know for sure that Kevin is one of the most respected DAQmx experts on the forum, and I will go by his words.

Santhosh
Soliton Technologies

Message 21 of 27

@cthunterMC wrote:

Let me repeat: I need continuous two-edge-separation data points at a rate of 200k samples per second or greater. I also need to plot this data to verify a smooth transition or identify irregularities during the UUT's 5-10 second dynamic delay profile. I'm almost there with implicit timing; however, I would prefer a sample rate greater than the frequency of the pulse train as measured by the counter (200 kHz).


This still shows a faulty understanding of implicit timing.  The input signal itself sets the sample rate.  Nothing else.  And take a step back to think about it -- how *could* you measure a time interval between edges any faster than the actual time interval between those edges?  Until you get to the 2nd edge, no measurement is possible because you can't know at that instant when it might arrive in the future.
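The point can be seen in a few lines of plain Python (hypothetical edge timestamps, nothing DAQmx-specific): with implicit timing, each interval between edges *is* one sample, so the effective sample rate is set entirely by the signal.

```python
# Implicit timing in miniature: one sample exists per edge-to-edge interval,
# so no sample rate faster than the signal's own edge rate is possible.
edge_times = [0.0, 5e-6, 10e-6, 15e-6, 20e-6]   # hypothetical edge timestamps (s)

# Each two-edge-separation sample is simply the gap to the next edge.
samples = [t2 - t1 for t1, t2 in zip(edge_times, edge_times[1:])]

# Effective sample rate = number of intervals / total time spanned.
rate = len(samples) / (edge_times[-1] - edge_times[0])
print(rate)   # ~200000 Hz, determined by the signal itself
```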

 

I don't know exactly why you get crashes but I'd place my suspicions around:

1. Your removal of the time delay from the loop.  You're making the CPU do more work than necessary by iterating the loop as fast as it possibly can.

2. The conversion of the data array to a "signal" wire inside the loop.  I wouldn't trust it to be particularly efficient.

 

Further, that signal wire seems to pass out of the loop as a regular output tunnel.  So you'll only pass through the data from the very last loop iteration to your file writing.

 

What happens when you use my msg #10 example *unchanged*?

 

 

-Kevin P

 

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 22 of 27

The msg #10 example runs continuously with no measurement file generated.

Message 23 of 27

What have you done to debug and troubleshoot so far?  Have you used your debug probes, execution highlighting, temporary indicators?

 

I do spot an oversight on my part though.  I think you must be generating an error in your task.  The presence of such an error would prevent the Elapsed Time function from operating normally, so the elapsed time is never reached and the loop never terminates.

 

Add this minimal change so the loop will terminate on such an error and then display it in your error indicator.  Once we know the error, we can figure out what to do about it.

 

(attached image: Kevin_Price_0-1659125071016.png)

 

 

-Kevin P

Message 24 of 27

Good morning, thank you for continuing to work on this with me. Attached is the error code seen.

Message 25 of 27

Surprise!  Not the kind of error I expected, but one that makes sense in retrospect...

 

So I think we've found ourselves in a weird little corner case, and even more coincidentally, right on the razor's edge of it.

 

First, my proposed fix: in the call to DAQmx Timing, wire a very large value, like 1 million, into the 'samples per channel' terminal.

 

Here's what's going on: normally this input can be left unwired for Continuous Sampling tasks, in which case DAQmx chooses a buffer size that generally works across a wide range of conditions.  However, *YOUR* specific condition is right on the edge of that range.

    According to the link, with the sample rate unspecified the default buffer size will be 10k samples.  With a nominal 200 kHz sample rate (5 microsecond intervals), that gives a 50 msec buffer.

    And I went and unthinkingly set the loop delay time to exactly that -- 50 msec!  
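In numbers, the coincidence is easy to check (a quick Python sketch using the values from the thread):

```python
# Buffer-duration arithmetic behind the -200141 overflow (illustrative only).
default_buffer = 10_000        # DAQmx default samples when no rate is specified
sample_rate = 200_000          # Hz, nominal edge rate of the pulse train

buffer_seconds = default_buffer / sample_rate
print(buffer_seconds)          # 0.05 s = 50 msec, exactly the loop delay

requested_buffer = 1_000_000   # the proposed 'samples per channel' value
print(requested_buffer / sample_rate)  # 5.0 s of headroom instead
```

With zero slack between buffer duration and loop period, any scheduling hiccup overflows the buffer; the larger buffer turns a 50 ms margin into 5 seconds.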

 

I usually like to set up my DAQ buffers to hold 1-5 seconds of data, and I usually service them roughly every 0.1 seconds.  So give yourself some slack and set your buffer to 1 million samples (= 5 seconds at 200 kHz).  That should avoid the buffer overflow error you're seeing.
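For readers following along in text form, the same fix can be sketched with NI's nidaqmx Python API (the thread itself is LabVIEW, so this is only an analogous sketch; the device/counter name "Dev1/ctr0" and the read size are placeholder assumptions, and it requires NI hardware and drivers to actually run):

```python
# Hedged sketch: continuous two-edge-separation task with an explicit,
# generous buffer, mirroring the LabVIEW fix described above.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Two-edge-separation channel on a hypothetical counter "Dev1/ctr0".
    task.ci_channels.add_ci_two_edge_sep_chan("Dev1/ctr0")

    # Implicit timing (the signal sets the rate); the large samps_per_chan
    # sizes the buffer explicitly (~5 s at a nominal 200 kHz) instead of
    # letting DAQmx default to 10k samples (~50 ms of headroom).
    task.timing.cfg_implicit_timing(
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=1_000_000,
    )

    task.start()
    # Service the buffer well under its duration, e.g. ~0.1 s of data per read.
    data = task.read(number_of_samples_per_channel=20_000)
```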

 

 

-Kevin P

Message 26 of 27

So far so good. Thank you, sir. Kudos distributed.

Message 27 of 27