LabWindows/CVI

UDP receive buffer in CVI 8.5 (packet loss)

I'm doing some high-speed data capture using UDP in LabWindows/CVI 8.5. I'm trying to receive 5029-byte packets sent at 1 kHz, so about 40 Mbps. This is over a gigabit link using a standard Ethernet frame size.

My problem is that my application can't keep up with the packets. I've launched my UDP channel in a separate thread pool with a priority of THREAD_PRIORITY_TIME_CRITICAL, which has gotten the loss down to about 1%. My receive callback only reads the data in and increments a counter; it is about as bare-bones as you can get, and I'm still losing packets.

I've done some reading, and apparently Windows defaults to a UDP receive buffer size of 8 KB. That would explain why I'm dropping so many packets. Is there a way I can increase the size of this buffer? I see there's the function GetUDPSocketHandle() that can return the system socket handle, but is there anything I can do with this? I'm only using the Base version of CVI, so I don't have access to the Windows SDK.
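For reference, if a compiler with socket headers were available, the usual way to grow the OS-level receive buffer is setsockopt() with SO_RCVBUF on the raw handle that GetUDPSocketHandle() returns. A minimal sketch, with the helper name, the requested size, and the read-back step being my own choices rather than anything CVI provides:

```c
#ifdef _WIN32
#include <winsock2.h>
typedef int socklen_t;           /* Winsock uses plain int for lengths */
#else
#include <sys/types.h>
#include <sys/socket.h>
#endif

/* Try to enlarge the kernel's receive buffer for an existing UDP socket.
   'sock' would come from CVI's GetUDPSocketHandle(); 'bytes' is the
   requested size. Returns the size the OS actually granted, or -1. */
int set_udp_rcvbuf(int sock, int bytes)
{
    socklen_t len = sizeof bytes;

    /* The OS may silently cap the request (e.g. net.core.rmem_max on
       Linux), so read the value back instead of trusting the request. */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&bytes, sizeof bytes) != 0)
        return -1;
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (char *)&bytes, &len) != 0)
        return -1;
    return bytes;
}
```

Whether this works here depends on CVI handing out the socket handle at a point where the option still takes effect; even then, a bigger kernel buffer only buys the reader thread more time to get scheduled.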

Greg
Message 1 of 8

Have you tried implementing a FIFO between your UDP thread and the main application? I have done this successfully in the past with similar applications and it works well at averaging out the inevitable Windows latencies. (My app used a FIFO which could store 2 seconds worth of data while logging high speed telemetry to a disc system.) I understand you can use CVI's own thread safe queues in the same way, but I generally prefer to code these things myself.
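For anyone wanting to try this, a single-producer/single-consumer ring buffer is enough when exactly one UDP thread pushes and one consumer pops. A rough sketch of the idea; the sizes and names are illustrative, not from any NI library:

```c
#include <string.h>

#define FIFO_SLOTS 4096          /* ~4 s of packets at 1 kHz */
#define PKT_MAX    5029          /* matches the packet size in question */

/* Declare instances static (or memset to zero) so head == tail == 0. */
typedef struct {
    unsigned char data[FIFO_SLOTS][PKT_MAX];
    int           len[FIFO_SLOTS];
    volatile int  head;          /* written only by the producer */
    volatile int  tail;          /* written only by the consumer */
} PacketFifo;

/* Producer side: called from the UDP thread. Returns 0 if full. */
int fifo_push(PacketFifo *f, const void *pkt, int len)
{
    int next = (f->head + 1) % FIFO_SLOTS;
    if (next == f->tail || len > PKT_MAX)
        return 0;                /* full: this packet is dropped */
    memcpy(f->data[f->head], pkt, len);
    f->len[f->head] = len;
    f->head = next;
    return 1;
}

/* Consumer side: returns the packet length, or 0 if empty. */
int fifo_pop(PacketFifo *f, void *out)
{
    int len;
    if (f->tail == f->head)
        return 0;
    len = f->len[f->tail];
    memcpy(out, f->data[f->tail], len);
    f->tail = (f->tail + 1) % FIFO_SLOTS;
    return len;
}
```

One writer per index plus `volatile` was the typical pattern in that era; on modern multicore machines, proper memory barriers or CVI's thread-safe queue functions would be the safer choice.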

JR

Message 2 of 8
I will use a FIFO between the UDP thread and my application once I get the loss down to an acceptable level. My UDP callback consists of the following at the moment:

int CVICALLBACK UDPCallback (unsigned channel, int eventType, int errCode, void *callbackData) {
    // msg is a global Message-sized buffer allocated at startup.
    // Read the waiting datagram; UDPRead returns the number of bytes
    // read, or a negative error code.
    if (UDPRead (channel, &msg, sizeof(Message), UDP_DEFAULT_TIMEOUT, NULL, NULL) >= 0)
        packetsRead++;

    return 0;
}

As you can see, even with this pared-down function that does nothing but read the buffer and increment a counter, I'm still losing 1% of the packets. The OS just can't context-switch to the process before the buffer is overrun, which makes sense: the 8 KB buffer overflows on the second 5029-byte packet, and packets arrive every 1 ms. Unless the buffer can be increased, UDP is unusable at these rates.
Message 3 of 8
I was trying to make the point that a FIFO being fed from a time critical thread will help to avoid source data loss in the first place, by effectively giving you huge buffers at the UDP stage. The question as to whether the application at the other end of the FIFO can then process the data fast enough is a separate one.
 
JR
Message 4 of 8
I understand your point, but the problem is the size of the underlying UDP stack's buffer: even the time-critical thread can't retrieve the data fast enough. The rest of my application is sleeping at the moment.

Greg
Message 5 of 8
Instead of using the CVI thread pool, a standard Windows thread might perform better, created via the SDK functions CreateThread() and SetThreadPriority(). That would avoid any overhead there may be in NI's implementation.
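A minimal sketch of that approach, assuming the Windows SDK is available; the receiver function and its contents are placeholders for the real read loop:

```c
#include <windows.h>

/* Placeholder worker: in the real application this would sit in a
   blocking receive loop, pushing packets into the FIFO. */
static DWORD WINAPI ReceiverThread(LPVOID param)
{
    /* ... receive loop ... */
    return 0;
}

/* Spawn the receiver and raise it to the highest non-realtime
   priority within the process's priority class. */
HANDLE StartReceiver(void)
{
    HANDLE h = CreateThread(NULL, 0, ReceiverThread, NULL, 0, NULL);
    if (h != NULL)
        SetThreadPriority(h, THREAD_PRIORITY_TIME_CRITICAL);
    return h;
}
```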
 
JR


Message Edited by jr_2005 on 12-19-2007 04:50 PM
Message 6 of 8
Unfortunately I'm using the Base version of CVI, so I don't have access to the SDK functions.

TCP is probably a better choice for us in this situation anyway, and I think I've made the case to the powers that be to switch our implementation to TCP. I presume the internal buffer is much larger, and we certainly have the bandwidth for the overhead TCP adds. We are not particularly latency-constrained, and lost packets are a bigger deal for us. Nonetheless, it would be advantageous for NI to include a function in the new UDP library to change the internal buffer size, as it would make the library more versatile and useful at higher rates. The default buffer is simply too small for even a high-priority thread to keep up with at high data rates.
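For anyone following along, the switch maps fairly directly onto CVI's TCP Support Library. A rough client-side sketch, where the port and server address are placeholders:

```c
#include <tcpsupp.h>

#define SERVER_PORT 50000            /* placeholder port */

static unsigned int conv;            /* TCP conversation handle */

static int CVICALLBACK TcpCallback (unsigned handle, int event,
                                    int error, void *callbackData)
{
    static char buf[5029];

    if (event == TCP_DATAREADY)
        /* Unlike UDP, any bytes not read now stay queued in the
           stack's larger, flow-controlled receive window instead
           of being dropped. */
        ClientTCPRead (handle, buf, sizeof buf, 0);
    return 0;
}

int main (void)
{
    /* Hypothetical server address; 5000 ms connect timeout. */
    ConnectToTCPServer (&conv, SERVER_PORT, "192.168.0.10",
                        TcpCallback, NULL, 5000);
    /* ... run the UI / event loop ... */
    DisconnectFromTCPServer (conv);
    return 0;
}
```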
Message 7 of 8
Thanks for the suggestion. We'll consider adding such a function. I think I agree that your use case is better suited to TCP, as UDP is most useful when sending small packets and packet loss is tolerable.

Mert A.
National Instruments
Message 8 of 8