Digital I/O

Burst Mode on 6533 has long periods with ACK deactivated.

I am using a PCI-DIO-32HS card in burst mode to input 2-byte (16-bit) samples. The input is controlled by an external clock running at 1 MHz, and I have a FIFO between the peripheral device and the card (capable of 1.5 MHz). I am using double-buffered acquisition over, typically, 15 s with a half-buffer size of 128*1024 samples. I am using the NI-DAQ functions in Borland C++ Builder under Windows 2000 on a P4 1.8 GHz PC with 512 MB RAM.

I know that without the FIFO in place I lose data every time I transfer a half-buffer. Reducing the size of the half-buffer only appears to change the interval between the periods of data loss, so the FIFO is used in an attempt to stop the data loss. However, I am surprised to find that the 4096-sample capacity of my FIFO is not large enough, and that sometimes the ACK line is deactivated for more than 12 ms.

Is this normal behaviour? Is there anything I can change in my software to minimise such delays? I am using some simple test software that does nothing else whilst acquiring (unlike my final code), and I am allocating (and writing to) all required memory prior to acquisition. I can increase the size of the FIFO, but can I really be sure that it is big enough?

In hindsight it looks as though the 6534 may be better suited to this application but unfortunately we have 3 systems already using the 6533 so I need to fix those first.

(I have tried pattern generation which works fine in my simple test code but loses data in the more complex final software package.)
Message 1 of 3
Hello,


With burst-mode transfers, the card does have the leeway to deassert the ACK line whenever it needs to catch up. The buffer size being set in software adjusts the size of the buffer in RAM that holds transfers from the PCI-6533, so adjusting it does not affect the card's behaviour on the bus much.

Is there any way that you can do this application using pattern generation instead? The PCI-DIO-32HS has been benchmarked to sustain pattern generation at 1.43 MHz with two-byte transfers.

Lastly, a 12 ms delay seems very long. Perhaps the card is using IRQ transfers instead of DMA transfers. This would explain what we are seeing, as IRQ transfers would probably not keep up with a 1 MHz clock. You can use the Get_DAQ_Device_Info function call to check which transfer method you are using.

Best Regards,
Justin Tipton
National Instruments
Message 2 of 3
Hi Justin

Thanks for the advice. I have checked the transfer method for my input group and it is set to 'ND_UP_TO_1_DMA_CHANNEL', so it appears that DMA transfer is being used. Is this value something I should set as a matter of course? I thought it was the default for the PCI-DIO-32HS card?

Pattern generation only appears to work in my test code where I acquire data from this card only. In my final system, where I need to simultaneously monitor inputs from a PCI-6025E card, I appear to lose samples at 1MHz using this method.

At the moment, I am finding that a FIFO of 16k samples appears to be sufficient, although I haven't finished testing this. I tried a 4k FIFO prior to this and found that it often filled whilst the ACK line was inactive.

Any more suggestions would be very welcome as, although the 16k buffer seems okay so far, I am not fully confident that the system will never lose data. I have attached a file containing the NI-DAQ functions I use to configure, start and check my acquisition, in case I have any inappropriate settings. The only value I am a little unsure of is the 'oldDataStop' parameter in 'DIG_DB_Config', although the help system says that this may cause delays for the AT card only.
Message 3 of 3