Measurement Studio for VC++


sampsPerChanRead appears wrong for Digital I/O read 16 Input

Hi,
 
I am using a 6534 Digital I/O board in loopback mode with "burst mode" timing.  I have tried and succeeded with a variety of loopback configurations operating at a 2 MHz internal clock.  The device under test (DUT) receives the digital data, manipulates it, and returns the data to the board.  I am using 8 bits out (Port 0) and 16 bits in (Ports 2-3); Port 1 is unused.  I have an output task and an input task.
 
A problem has cropped up in the input task: after the read, sampsPerChanRead seems wrong, even though exactly the correct data, of the correct length, is in the input read buffer.  It is a lot of data, in excess of 1 MB, up to 16 MB.
 
In the prototype, sampsPerChanRead is type int32.  Viewed in hex, it has the representation:
 
ABCD EFGH, where 'ABCD' is the upper 16-bit word and 'EFGH' the lower word.
 
My DUT outputs 'pqrst' samples (a five-hex-digit count).  When I read the number of samples actually read, returned from the DAQmx Read function, the data itself is correct, but sampsPerChanRead,
 
--> instead of reading 0x000pqrst, reads 0xfffpqrst
 
Note the prepended 'fff' pattern.
 
Either my code is corrupting the value, or something else weird and undetermined is going on.  Any ideas, anyone?  I'd like to use this variable to confirm the number of samples actually read, but it seems to have these prepended 'fff's.
 
Anybody else see this?  Any thoughts/help would be appreciated.   
 
Kip Leitner
0 Kudos
Message 1 of 8
(5,556 Views)
Hi Kip,
 
I'm not sure why that is happening, and I'd like to look into it a little more.  Would it be possible to post the portion of your code that calls that DAQmx Read function?
 
Thanks,
 
Justin M.
National Instruments
0 Kudos
Message 2 of 8
(5,542 Views)
Here's the read.   Also, the task executes the callback on finishing with no errors.  The buffer is full of the correct data.


          int32 Samples_Per_Channel_Actually_Read = 0;

          int32 Read_16_Result = DAQmxReadDigitalU16(
                 h_Task_Port_2_In,                     //  task handle
                 n_Count_From_FPGA_words,              //  number of samples per channel
                 -1,                                   //  timeout  (-1 == infinite wait)
                 DAQmx_Val_GroupByChannel,             //  fill mode
                 p_Data_P2_In,                         //  put data read by the 6534 here
                 n_Count_From_FPGA_words,              //  size of the read array, in samples per channel
                 &Samples_Per_Channel_Actually_Read,   //  samples per channel actually read
                 NULL);                                //  reserved for future use

          // confirm the single-shot data has been read
          do  {
              DAQmxIsTaskDone(h_Task_Port_2_In, &B_Task_is_Done_Port_2);
          } while (B_Task_is_Done_Port_2 == FALSE);

          // hack here to zero out the upper 3 nibbles -- they are getting filled with 'fff'.  Not sure why.
          printf("Number of samples input, single shot:  %d\n",
                 Samples_Per_Channel_Actually_Read & 0x000fffff);

Kip
0 Kudos
Message 3 of 8
(5,535 Views)
I have exactly the same problem described in the post above, "sampsPerChanRead appears wrong for Digital I/O read 16 Input".  See above for an example and a description of the problem.

When using DAQmxReadDigitalU16, sampsPerChanRead returns the wrong value (the correct value preceded by F's in hex) for reads longer than 32768 samples.

It's not always possible to determine how many samples were actually read, since I don't know which F is the "last F" returned by sampsPerChanRead.

Any solutions?



0 Kudos
Message 4 of 8
(5,207 Views)
Hey bymaster,

The bad news is that the NI-6534 does not work correctly in all circumstances.  The good news is that, after pestering NI about this some six months ago, they finally admitted to me that the board and/or their implementation of the API is broken, and told me exactly how to work around the problem.  Unfortunately, I no longer have the lengthy (~5000-word) correspondence with the truly gory details.  But I know this board inside and out, as well as I can from the API.  I have run it up to the 20 MHz sample rate with the 18" cable and a custom termination card, and it works at that rate (on a Win XP box), so I know their PCI/DMA engine works; it can really move data that fast.

Details:

What is broken: the variable sampsPerChanRead is not correct once the count exceeds 0xFFFF (yes, you and I both know this).
How to work around it: read more frequently, in smaller chunks, using the API call DAQmxReadDigitalU16, which you are already using.

One would think that with 32 MB of onboard RAM, and (on my previous XP machine) 1.5 GB of system RAM at 3.2 GHz, their software would support a read block size larger than 32k samples.  Alas, for the 6534, it does not.  Don't look for a fix soon.

Kip

0 Kudos
Message 5 of 8
(5,201 Views)
I have a stream of about 1 million samples (read as 16-bit words) that is piped in all at once at ~5 MHz.  I think I can jam this into DMA memory just fine in one big acquisition.

But if I call DAQmxReadDigitalU16 multiple times, can it keep up with the real-time data stream?  I'd have to call it a lot of times!

Is there a work around for this kind of situation?

Thanks,
Brett
0 Kudos
Message 6 of 8
(5,200 Views)
Bymaster,

I assume you have read some of the sample projects.  There is no workaround other than the one I described above.  (Believe me, I tried; save yourself the trouble.)  You can switch the driver mode to continuous samples, but that brings other issues, and you have less control over what the driver is actually doing.

On my XP box, using the technique I described, I successfully captured and streamed in real time to disk files of about 100 MB.  I used this technique for months, capturing petabytes of data for gigantic video simulations.  I am an experienced real-time programmer and know the NI-DAQmx API on the 6534 well, having used and explored many of its calls.  I no longer have access to my code and the NI libs (they belong to the company), but I will review here, from memory, the basics of how this thing works.

Here's how the memory on board the 6534 works:
http://digital.ni.com/public.nsf/allkb/da52dd262285520686256a470065e589

Basically, if you have only 5 MWords of samples, the 6534 hardware, when configured correctly, will capture all of them.  What NI-DAQmx actually does with the samples as they arrive is invisible to you as the programmer, and I myself do not know.  The driver is free to transfer them over the PCI bus to PC RAM (still not yet "read" via the DAQmx API at the user level); or, if you have just 1 MWord (2 MB), which is less than the 16 MB available for receive on the 6534, the samples may simply remain in the board's onboard RAM until the user executes a read at the DAQmx API level, at which point the driver may move them directly, via a PCI bus transfer, into the user's buffer space.  None of this matters for your capture size, because it is only 1 MWord (less than the 8 MWords available on the 6534).  The only time you have to worry about streaming issues with the 6534 is when your capture size is **larger** than the RAM on the 6534; then the board (out of your control) has to do something as its onboard RAM fills up: either the user must start reading the data, or the driver can start allocating buffer RAM as a cache (up to a point).

Advanced note:
Here are the limits to system performance when streaming to disk with the NI-6534 (relevant only if your total capture size exceeds the RAM of your capture machine, so you must keep chunking the data out to the drive or the machine will run out of RAM; I had to do this because I had **huge** capture sizes, > 1 GB):
http://zone.ni.com/devzone/cda/tut/p/id/4181

My suggestions:

1.  Read carefully the NI driver (DAQmx) API documentation, and all the parameters of DAQmxReadDigitalU16.
2.  Configure the hardware interface for finite samples.
3.  Put your DAQmx code in a loop that reads the samples as they become available.  Note that DAQmxReadDigitalU16, when configured properly, will not return until it has read all the samples of the chunk size you request.

// adjustable setup
#define NI_READ_MAX_SIZE   0xff00   // just under 0xffff -- you **cannot** go over 0xffff (as you know)
#define TOTAL_DATA_POINTS  1000000

typedef uInt16 word;                // one 16-bit sample

// compute number of chunks (reads) required, rounding up so the
// last, partial chunk is not dropped
const unsigned int ui_Chunk_Size      = NI_READ_MAX_SIZE;
const unsigned int ui_Total_Data_Size = TOTAL_DATA_POINTS;
const unsigned int ui_Chunks_Total    = (ui_Total_Data_Size + ui_Chunk_Size - 1) / ui_Chunk_Size;

word* pw_Data_Block = (word*) malloc(ui_Total_Data_Size * sizeof(word));

// note: you may need some DAQmx timeout mechanism in case your device
// under test does not send the full 1M data points; otherwise the loop
// never completes, as the read stalls, starved for samples.  Use a
// Windows timeout or some other timer mechanism.

unsigned int i = 0;

// start at the top of the memory block
word* pw_Data = pw_Data_Block;
while (i < ui_Chunks_Total /* && not timed out */)
     {
     // read a chunk (up to ui_Chunk_Size samples) into memory at pw_Data

     // point to the next spot in the block
     pw_Data += ui_Chunk_Size;
     i++;

     // update the timeout condition
     }

That's it!

Good luck.

 Kip
0 Kudos
Message 8 of 8
(5,193 Views)