Counter/Timer


why does error 200279 occur at high speeds only?

I am using a VI very much like the one attached here, and as my motor speeds up and the period value decreases, the VI fails with error 200279, as shown in the two attached JPG images. The VI measures the period between rising edges of an encoder signal. The error does not show up at low speeds, only at high speeds. The hardware connects to the PC through USB. I am using LabVIEW 2012 on Windows 7.

 

Do I need to specify the samples per channel for the READ in the case structure to eliminate this error?  The error only occurs when the period gets quite short, e.g. around 9 ms.  At longer periods (slower motor speeds) the error does not appear.  I am using the counters built into the cDAQ-9174 chassis and the NI 9401 module to read the period values of my encoder.  What is happening at high speeds to cause this error?  I thought the setting on the DAQmx Timing VI required that 16 periods be read every iteration, so why is it saying that it is trying to read samples that are no longer available?

 

Also, is the "Append Array" building up a large array that is being carried in the SHIFT REGISTER and causing things to slow down?  There are a huge number of periods occurring with an encoder at 120 ticks/revolution.  Should I try to keep this array truncated to reduce the size of the data being handled each iteration?  Can this large array be causing the 200279 error?
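For a sense of scale, here is some back-of-envelope arithmetic (as a Python sketch, since the VI itself is graphical; the RPM values below are made up, not taken from the actual setup). With one period sample per rising edge, a loop that reads 16 samples per iteration has to keep up with the edge rate, or unread samples in the driver's buffer get overwritten, which is what error 200279 reports:

```python
# Hypothetical speeds: how fast period samples arrive from a 120 tick/rev
# encoder, and how often a loop reading 16 samples per iteration must run
# before unread samples are overwritten in the driver's buffer.

TICKS_PER_REV = 120
SAMPLES_PER_READ = 16

def max_loop_period_ms(rpm: float) -> float:
    """Longest tolerable loop iteration time at a given motor speed."""
    edges_per_sec = rpm / 60.0 * TICKS_PER_REV  # one period sample per edge
    return SAMPLES_PER_READ / edges_per_sec * 1000.0

for rpm in (50, 500, 5000):
    print(f"{rpm:5d} RPM -> must read every {max_loop_period_ms(rpm):.1f} ms")
```

At 50 RPM the loop has 160 ms per read, but at 5000 RPM it has only 1.6 ms, so any slowdown in the loop (UI updates, array growth) quickly becomes fatal at speed.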

 

Thanks,

Dave

Message 1 of 5

I think there's a combination of things that could be contributing.  I don't have time for a full explanation right now, but here are some quick mods I made to the code you posted.  The essential changes are:

 

- made a separate loop for collecting data into a big array.  (Maybe you can consider dumping to file instead of growing an array in memory?)

- used a queue to transfer data between loops

- increased the buffer size dramatically while still calculating average of only the most recent periods

- reduced the acquisition loop rate -- expect to retrieve more data points per iteration

 

There are a couple other things I'd probably add or change with more time, but this minimal set of mods should help some.
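Since the block diagram is graphical, here's a rough text-language sketch of the producer/consumer shape of those mods (Python stand-in; the DAQmx read is faked with random period values, and the chunk sizes and iteration counts are arbitrary). The point is that the acquisition loop only enqueues each chunk and immediately reads again, while a separate loop does the slow work:

```python
# Producer/consumer via a queue: the acquisition loop stays fast because
# the slow work (growing an array, file I/O) lives in a separate loop.
# The DAQmx read is faked; all sizes here are arbitrary.

import queue
import random
import threading

data_q: queue.Queue = queue.Queue()
history: list = []              # belongs to the consumer, not the acq loop

def acquisition_loop(iterations: int) -> None:
    for _ in range(iterations):
        chunk = [random.uniform(0.008, 0.010) for _ in range(16)]  # fake periods (s)
        data_q.put(chunk)       # cheap: hand off and immediately read again
    data_q.put(None)            # sentinel: tells the consumer to stop

def logging_loop() -> None:
    while (chunk := data_q.get()) is not None:
        history.extend(chunk)   # growing array / file dump happens here

consumer = threading.Thread(target=logging_loop)
consumer.start()
acquisition_loop(10)
consumer.join()
print(f"logged {len(history)} period samples")
```

In LabVIEW terms, the queue plays the role of the Queue Operations functions between the two while loops.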

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 2 of 5

Oops, in such a hurry I didn't even attach the modified VI.  Here it is:

 

-Kevin P

 

Message 3 of 5

Hello Kevin,

Just to clarify something: it is not my goal to put all the data into a big array and keep it.  I was asking whether my existing code was doing that and causing the error.  My intention is not to save the data but to use it and throw it away each iteration.  Did you add a feature to save the data?  Sorry if my question was a bit confusing.

Thanks,

Dave

Message 4 of 5

What I did was use a queue to pass your data a little at a time over to an independent loop.  There, you're free to do whatever you want with it: keep building up a growing array, save to file, whatever.  By moving that processing over to another loop, LabVIEW's parallelism keeps your main loop from getting bogged down by it.
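And since you only want to use the data and throw it away, the consumer side can keep a fixed-size window of recent periods rather than a growing array. A minimal sketch (Python stand-in again; the window size of 64 is made up, pick whatever span your average should cover):

```python
# Fixed-size window instead of an ever-growing array: a deque with maxlen
# discards the oldest samples automatically, so memory use stays constant.
# The window size (64) and the sample values are arbitrary.

from collections import deque

WINDOW = 64
recent: deque = deque(maxlen=WINDOW)    # old samples fall off the left end

def running_average(chunk) -> float:
    """Fold one iteration's chunk of periods into the window, return its mean."""
    recent.extend(chunk)
    return sum(recent) / len(recent)

avg = running_average([0.009] * 16)     # one read's worth of fake periods (s)
print(f"{avg:.4f} s average over {len(recent)} samples")
```

In LabVIEW terms this corresponds to keeping a fixed-length array in the shift register (e.g. via Rotate/Replace Array Subset) instead of appending forever.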

 

-Kevin P

Message 5 of 5