LabVIEW

Interrupt servicing in LabVIEW (non-realtime)

Hello,

We are using the NI DAQCard-6062E PCMCIA card with LabVIEW 6.x at the moment (we also have access to the latest LabVIEW revision). Our current application involves measuring the high and low periods of a time-based digital signal. The high-to-low and low-to-high transitions are used as triggers that stop/start the hardware timer on the DAQ card. Our VI then returns two values: (1) the timer value during each LOW period, and (2) the timer value during each HIGH period.

      ________              _____      ___________
_____|        |____________|     |____|           |___________   --> digital signal to timer trigger of DAQ card
 low    high        low     high  low     high      low

So far this non-realtime approach has worked OK. However, we are now investigating the effects of encoding more than two types of data in the same time-based digital stream. This will be done by encoding the signal with distinct high and low time-widths to be interpreted by software. So, since we are now moving to a regime where decoding the signal will require some more complex (software-based) decision-making, I am thinking we may have to move to a real-time system.
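To make the kind of decision-making concrete, here is a rough sketch of the decode step (written in plain Python rather than as a VI, and with made-up width thresholds and symbol names purely for illustration, not our actual encoding):

# Rough sketch of width-based decoding: classify each measured pulse
# width into one of several symbols.  Thresholds and symbol names below
# are placeholders for illustration only.

SYMBOL_TABLE = [
    (0,    200,  "S0"),   # widths in [0, 200) us    -> symbol S0
    (200,  500,  "S1"),   # widths in [200, 500) us  -> symbol S1
    (500,  1000, "S2"),   # widths in [500, 1000) us -> symbol S2
]

def classify(width_us):
    """Map a measured pulse width (in microseconds) to a symbol, or None if out of range."""
    for lo, hi, symbol in SYMBOL_TABLE:
        if lo <= width_us < hi:
            return symbol
    return None  # unexpected width: treat as a decode error

def decode(measurements):
    """Decode a stream of (level, width_us) measurements into (level, symbol) pairs."""
    return [(level, classify(width_us)) for level, width_us in measurements]

if __name__ == "__main__":
    # Example stream: alternating low/high widths as they would come back from the counter.
    stream = [("low", 150), ("high", 320), ("low", 700), ("high", 90)]
    print(decode(stream))

The classification itself is trivial; the real question is whether the width measurements feeding it are captured with tight enough timing, which is where my realtime question below comes in.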

My question is, in *quantitative* terms, what are we truly gaining by moving to realtime/FPGA? That is, in non-realtime LabVIEW, when an interrupt signal is sent from the DAQ card to LabVIEW, there is some downtime because the interrupt is not serviced immediately by the PC's operating system (WinXP/2000). As I understand it, this amount of time is not fixed, but varies with the current system load. What is the typical range of time values to expect before an external interrupt is serviced by the non-realtime LabVIEW software? Conversely, in the realtime system, if an interrupt is given the highest priority, is the "interrupt service routine" executed immediately (as it would be in a properly configured microcontroller environment)?

I also noticed that the "timebase" for non-realtime LabVIEW has a 1 ms granularity (the Tick Count (ms) function). Does this have to do with the fact that the LabVIEW software can only guarantee "realtime" operation to within 1 ms? I would appreciate some more information/clarification on this as well.

Please feel free to email me at nicholas.f.singh@medtronic.com for any clarification. We're on a fast track, so I would greatly appreciate a prompt reply!

Kindest regards,

Nicholas F. Singh
Hello Nicholas,

I'm not much of a realtime or computer architecture expert, but here is what I know.

1) I don't know of a typical range of time values for servicing an interrupt. I imagine that it is highly variable and depends on what other interrupts are occurring at the time. If there are no other interrupts being generated, I would think that the downtime to service the interrupt generated by the DAQ driver is minimal.
2) Realtime systems do not service interrupts. Because realtime systems are deterministic by nature, there can be no interrupts, as this would contradict the determinism. All calls to the driver would execute "immediately" because nothing would interrupt that process. However, do not confuse determinism with execution speed. Just because something is running in a realtime system does not necessarily mean that it will run faster.
3) The "timebase" for non-realtime LabVIEW is based on the resolution of the system clock. Because Windows can only give LabVIEW a time value that resolves to 1 ms, LabVIEW must use a 1 ms "timebase" for software timing. This timebase is also at the mercy of the processor time that the OS allocates to LabVIEW (i.e. it is not deterministic/guaranteed); the small sketch below shows one way to see that for yourself.

If you have more questions about Realtime/FPGA, DAQ, or LabVIEW, please feel free to ask. I'll do what I can, but we might have to get a realtime expert on board for a more detailed discussion.

Eric
DE For Life!