
RT hardware timed DAQ

If it were me, I'd be suspicious of the function call to the reflective memory driver.  It's a black box, and I would put it through quite a bit of testing before trusting it to execute in deterministic time.  Maybe you've already done this, though.

You can indeed read the single most recent sample using the DAQmx Read property node.  But that doesn't seem to me like the best overall solution.  It isn't clear to me why you'd go to as much trouble as you have with RT if you end up just ignoring / skipping some cycles.  The better answer is to develop the architecture to behave better than that.  This would mean either (a) 100% on-time loop cycles or (b) smart buffering to cover the rare misses.
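Option (b) can be sketched in text form.  LabVIEW diagrams don't paste into a post, so here's a rough Python sketch of the idea: each cycle, drain *everything* the driver has buffered instead of only the newest point, so a late iteration loses nothing.  (`hw_buffer` here is a hypothetical stand-in for the device FIFO, not any DAQmx API.)

```python
from collections import deque

def drain_available(hw_buffer, max_per_cycle=None):
    """Read every sample currently available, not just the newest one.

    If the previous loop iteration ran late, the extra samples are
    still sitting in the buffer and get forwarded now instead of
    being lost.  hw_buffer is a stand-in for the driver's FIFO.
    """
    out = []
    while hw_buffer and (max_per_cycle is None or len(out) < max_per_cycle):
        out.append(hw_buffer.popleft())
    return out

# Simulate a late cycle: two samples piled up in the device FIFO.
fifo = deque([0.11, 0.12])
print(drain_available(fifo))   # -> [0.11, 0.12]  (both samples recovered)
```

The point is that a late cycle then costs you latency on a couple of samples, but never the samples themselves.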

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
0 Kudos
Message 11 of 14
(1,439 Views)

Actually, I don't want to skip even a single sample.  But reading every sample one cycle late would be an even worse situation.

What I need is to read one sample every 1 msec and transfer it to another system using the reflective memory card.

I need to read all samples and transfer them on time (1 msec, with a ±5 msec tolerance).

I'm trying to find the best solution.  As I stated before, when I used a software-timed timed loop, I counted between 0 and 100 "Finished Late" iterations.  In that case the sampling process uses the DAQ onboard clock while the timed loop that reads the samples uses the software clock.  Because they use different clocks, a problem arises: when the loop tries to read a sample, the sample may not be ready yet, which causes the timed loop to finish late.  What I found to solve this is to use the onboard clock for both the sampling and the timed loop doing the reading, so that they run synchronized.

I need hard determinism; I'm open to all advice.
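To see why two free-running clocks produce those "Finished Late" counts, here is a small model of the arithmetic (plain Python, not LabVIEW): even a tiny mismatch between the sample clock and the loop clock accumulates until reads happen before their sample exists.

```python
def finished_late_count(sample_period_us, loop_period_us, iterations):
    """Count iterations where the reader wakes before its sample exists.

    Models two free-running clocks: the DAQ onboard clock producing one
    sample every sample_period_us, and a software-timed loop waking
    every loop_period_us.  Any mismatch accumulates into late reads.
    """
    late = 0
    for i in range(1, iterations + 1):
        wake_time = i * loop_period_us
        samples_ready = wake_time // sample_period_us
        if samples_ready < i:        # the i-th sample isn't there yet
            late += 1
    return late

# Identical clocks: never late.  A 0.1% slower sample clock: always late.
print(finished_late_count(1000, 1000, 1000))   # -> 0
print(finished_late_count(1001, 1000, 1000))   # -> 1000
```

This is exactly why driving both the acquisition and the timed loop from the one onboard clock removes the problem: there is no second clock left to drift.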

0 Kudos
Message 12 of 14
(1,431 Views)

Only time for some scattered thoughts:

Take a step back from the specific code you've been working on and think again about the overall architecture and timing needs.

RT typically wants to run with exactly 1 time-critical thread.  You need this thread to "sleep" when done with its work so the CPU can be released for other threads.  You'll want to be very sure that the time-critical thread is lean, tight code that is highly deterministic.  I'd be wary of a time-critical loop that uses about 50% or more of the CPU.  I'd probably generally want to aim for more like 20-25%.
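That "do the work, then sleep out the rest of the period" pattern looks like this in a generic Python sketch (illustrative only — an RT timed loop does this for you, and the numbers here are made up):

```python
import time

def run_critical_loop(work_fn, period_s, iterations):
    """Do the work once per period, then sleep out the remainder.

    Sleeping is what releases the CPU to the other threads.  Returns
    the worst observed CPU load (execution time / period) as a
    percentage, to compare against a budget like 20-25%.
    """
    worst = 0.0
    next_wake = time.monotonic()
    for _ in range(iterations):
        start = time.monotonic()
        work_fn()                              # the time-critical work
        exec_time = time.monotonic() - start
        worst = max(worst, 100.0 * exec_time / period_s)
        next_wake += period_s
        time.sleep(max(0.0, next_wake - time.monotonic()))
    return worst

# A 10 ms loop doing a trivial bit of work should sit far below 50% load.
load = run_critical_loop(lambda: sum(range(1000)), 0.010, 5)
print(round(load, 1))
```

Measuring worst-case load like this, rather than average, is what tells you whether the loop is really deterministic.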

Now you've got to decide what part of your code is truly time-critical.  Consider each of the different things you're trying to accomplish during each time slice.  Which one would hurt the most to miss?  How would you compensate for that miss?  (For example, if you missed acquiring samples with your data acq loop, you could read the extra samples on the next loop.  Any code depending on the missed samples would need a way to be fed with reasonable stale data, such as the samples that were read on the previous loop.)  It's easy going into an RT app saying that everything is critical.  But I'd venture that the most important architectural decision you make is narrowing down the scope of what is truly considered time-critical.  I'd approach it like, "If this code is truly time-critical, then I'm willing to completely shut down the whole system the very first time I miss an iteration." 
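The "reasonable stale data" idea can be sketched generically in Python (the names here are mine, not any API):

```python
def consume(samples, last_good):
    """Feed downstream code, falling back to stale data on a miss.

    If the acquisition loop missed its slot, `samples` is empty and
    the consumer reuses the previous cycle's value rather than
    stalling.  Returns (value_to_use, new_last_good, was_stale).
    """
    if samples:
        return samples[-1], samples[-1], False
    return last_good, last_good, True

# A missed acquisition: reuse the previous cycle's sample.
value, last, stale = consume([], last_good=3.3)
print(value, stale)   # -> 3.3 True
```

Whether serving a one-cycle-old value is acceptable is exactly the question that separates the truly time-critical code from the rest.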

Next you've got to figure out how to allocate time slices to the different code.  Will you use some kind of rigid time-allocation scheme that "wakes up" 3 different parts of the code at 25%, 50%, and 75% of the time slice?  If so, are there dependencies that can make the later code fail if the earlier code misses its wake-up call?  Alternatively, each portion of code can wake up the next portion.  I think this would be the better scheme because it lets you use as much of the CPU as you happen to need.
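The "each portion wakes up the next" scheme maps naturally onto queues.  Here's a minimal Python sketch, with threads and queues standing in for loops and RT FIFOs (the stage names are invented for illustration):

```python
import queue
import threading

def stage(name, inbox, outbox, results):
    """One portion of the code: sleeps until woken, then wakes the next."""
    item = inbox.get()            # block until the previous stage is done
    results.append(name)          # stand-in for this stage's real work
    if outbox is not None:
        outbox.put(item)          # the wake-up call for the next stage

# Chain three stages: acquire -> process -> transmit.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
order = []
threads = [
    threading.Thread(target=stage, args=("acquire", q1, q2, order)),
    threading.Thread(target=stage, args=("process", q2, q3, order)),
    threading.Thread(target=stage, args=("transmit", q3, None, order)),
]
for t in threads:
    t.start()
q1.put("tick")                    # the timing source fires once per cycle
for t in threads:
    t.join()
print(order)                      # -> ['acquire', 'process', 'transmit']
```

Because each stage starts the moment its predecessor finishes, no time slice is wasted waiting for a fixed 25%/50%/75% wake-up point, and a fast cycle leaves more slack for a slow one.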

-Kevin P.

0 Kudos
Message 13 of 14
(1,426 Views)

I am sorry, I am a new LabVIEW user.

I have a question, please help me as soon as possible!

My question: What is the relation between "sampling time" and "execution time"?

How can I set (optimize) a suitable sampling time for a given system?
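The short version of the relation: execution time is how long your loop's work takes each iteration, sampling time (the sampling period) is how often a new sample arrives, and the loop only keeps up if the execution time fits inside the sampling period with headroom to spare.  A back-of-the-envelope sketch in Python (the 25% headroom figure is an assumption, not a rule):

```python
def min_sampling_period(exec_time_s, headroom=0.25):
    """Smallest sampling period the loop can sustain.

    The per-iteration execution time must fit inside one sampling
    period while leaving `headroom` (assumed 25% here) of the CPU
    free for other threads: period >= exec_time / (1 - headroom).
    """
    return exec_time_s / (1.0 - headroom)

# 0.3 ms of work per sample needs at least a 0.4 ms sampling period,
# i.e. a maximum sampling rate of about 2.5 kHz.
print(min_sampling_period(0.0003))
```

So in practice you measure the worst-case execution time of the loop, then choose a sampling period comfortably larger than it.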

Thank you very much indeed!

My email: khongghegom@gmail.com.

0 Kudos
Message 14 of 14
(625 Views)