Losing packets using winpcap


Hi,

I am working on a LabVIEW application to visualize our telemetry data.
The data is sent over Ethernet using UDP with a payload of 1332 bytes per packet, at a rate of 6720 packets per second.

To cope with this data rate I set the UDP socket receive buffer ( http://digital.ni.com/public.nsf/allkb/D5AC7E8AE545322D8625730100604F2D ) to exactly the number of packets multiplied by the payload size. Increasing or decreasing the buffer leads to errors and the loss of one or more packets.

However, I am still losing some packets!

Now I tried the Ethernet Packet Sniffer Example ( http://zone.ni.com/devzone/cda/epd/p/id/2660 ) and modified the Example.vi to check our internal packet counter in every packet. I was glad to see that no packets were lost! But after adding the Sniffer VIs to my application, massive packet loss occurred. 😞


Next I moved the acquisition job into the wrapper DLL. The function below is called with a pre-initialized 1-D array of strings, so that memory allocation is done by LabVIEW.



/* LabVIEW-created typedefs */
typedef struct {
    int32 dimSize;
    LStrHandle LString[1];
} TD1;
typedef TD1 **TD1Hdl;


EXTERNC EXPORT int32 lvwpcap_read_n_packets(uInt32 pcap,
                                            uInt32 *tv_sec,
                                            uInt32 *tv_usec,
                                            uInt32 *capture_len,
                                            TD1Hdl capture_data)
{
    struct pcap_pkthdr *header;
    const u_char *data;
    int32 array_size, i;
    int32 retval = 1;
    int32 errors = 0;
    LStrHandle TempString;

    array_size = (*capture_data)->dimSize;
    *tv_sec = array_size;   /* debug value, overwritten by the first packet */

    for (i = 0; i < array_size; ++i) {
        retval = pcap_next_ex((pcap_t *)pcap, &header, &data);

        if (retval > 0) {
            *tv_sec      = header->ts.tv_sec;
            *capture_len = header->caplen;

            TempString = (*capture_data)->LString[i];

            /* Avoid costly memory allocation: assume each string holds at
               least 64K (or whatever max size was set in the init call). */
            memcpy(LStrBuf(*TempString), data, header->caplen);
        }
        else {
            i--;        /* on timeout, stay on this array entry */
            errors++;
            if (errors >= array_size)
                return -1;
        }
    }

    *tv_usec = errors;
    return retval;
}



The problem: I still lose packets! Not on every call to this function, but in a regular pattern (roughly every third call).
What am I doing wrong, if I want to acquire, say, 800 packets in one go without losing even one?

Is there someone with the same problems using winpcap?

The UDP packets are sent by our hardware and the PC is directly connected via a crossover cable.
I am using LabVIEW 8.5 and WinPcap 4.1.2.

Thank you in advance for your answer,

Thilo

 

Message 1 of 13

We will need to see your LabVIEW code in order to help. 

 

Some comments:

 


I am working on a LabVIEW application to visualize our telemetry data.
The data is sent over Ethernet using UDP with a payload of 1332 bytes per packet, at a rate of 6720 packets per second.

To cope with this data rate I set the UDP socket receive buffer ( http://digital.ni.com/public.nsf/allkb/D5AC7E8AE545322D8625730100604F2D ) to exactly the number of packets multiplied by the payload size. Increasing or decreasing the buffer leads to errors and the loss of one or more packets.

 


 

Increasing the buffer does not lead to loss of packets. Your problem is most likely that you are not pulling data from the UDP connection as fast as the device is placing them on the Ethernet wire. (The consequences of this sort of action can be seen in this YouTube video)

 

 


Now I tried the Ethernet Packet Sniffer Example ( http://zone.ni.com/devzone/cda/epd/p/id/2660 ) and modified the Example.vi to check our internal packet counter in every packet. I was glad to see that no packets were lost! But after adding the Sniffer VIs to my application, massive packet loss occurred. 😞

 


If the Packet Sniffer example worked, then the issue seems to be with the way you are processing your data.

 

1/6720 of a second is about 149 microseconds. If your telemetry data is in an integer format (or worse, packed binary), it is safe to say that you will not be able to read one packet from the connection, unpack/convert the data to scaled/engineering units and then update a display BEFORE THE NEXT PACKET ARRIVES.

 

Things to consider:

 

Updating the UI is a processor intensive operation. Don't try to display 'the whole message' for every packet received. Start out by reading something simple like your frame counter.

 

Don't transform/calculate ALL of the packet data; only transform what you need. Provide a list of observables and allow the operator to select which packet element(s) to view. Extract the element from the packet and transform only that element.

 

I went through this kind of thing in 2006. You can read about it on the LAVA forums in this thread...

 

 

 

Message 2 of 13

I have had great success with WinPcap and LabVIEW in the past. The trick was to put your sniffing in its own loop and use a producer-consumer architecture.

Loop 1 does the setup, acquires the packets and queues them (as you stated, the example works great); have this loop run as fast as possible so as not to lose packets. Loop 2 should wait on packets and decode the packets from the queue. Since you are using UDP you cannot lose a packet and get a resend. I would assume that in a UDP scheme a packet lost here and there is not the end of the world (i.e. audio/video data can lose some data and keep going).

If you simply place the sniff in a loop you can experience massive packet loss if the loop slows.

 

 

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 3 of 13

Hi Phillip,

 

Thanks for your answer.

 

I attached 3 VIs.

 

The first (SUB_Collect_Data.vi) was my original code without WinPcap. As you can see, I always collect e.g. 800 UDP packets (Number of Frames) and afterwards go on with my data processing/analysis.

As you noted, I will never be able to "continuously" acquire packets and process them.

 

In the other VIs (SUB_Collect_Data_pcap_n.vi, TEST_VI_GETDATA.vi) I moved the acquisition into the wrapper DLL using the C routine quoted above. Even there I lose packets, though, as I mentioned, not on every call. Instead, the array (showing me the number of errors) grows almost continuously, with an almost fixed spacing of about 2.5 calls to this function.

 

 

Thilo

 

PS: Nice video.. 🙂

 

Message 4 of 13

Hi Paul,

 

In general I would agree with your assumption that a packet lost here and there is acceptable.

But our customers have a different view on this matter. They expect that the reference software (a.k.a. my LabVIEW application) will not lose any packets, even if this software is only used to show the customer that our telemetry system works as desired.

 

Thilo

Message 5 of 13

Your code looks fairly clean; I see you set the execution priority 😉

 

Suggestions:

 

In SUB_Collect_Data.vi: do you need to use Clear Errors.vi? This sub-VI has a lower priority and has debugging enabled. It essentially discards an error and returns a default error cluster that you don't use. This runs for every iteration of the loop 😮 LabVIEW should be smart enough to optimize this out, but I'm not sure... if in doubt, leave it out.

 

I'm not a big user/fan of feedback nodes, but if I read your code correctly, you might be able to move your array-initialization case outside the for loop and prevent unnecessary comparisons on each loop iteration. Again, LabVIEW may optimize this out, but I wouldn't depend on it...

 

You might consider changing the TC Stream data output from a cluster of 1D arrays to a single 2D array. The Header and TC Data arrays are the same size and have the same data type, and building a cluster adds overhead.

 

The only other reason that you might lose data is that the caller of SUB_Collect_Data.vi takes more than ~30 ms between calls.

 

 

Message 6 of 13

hi,

Did you try the example from http://decibel.ni.com/content/docs/DOC-11373 ?

In that example a background thread in the DLL handles the acquisition; another thread just gets the packets from a buffer (queue). You have the option to get all the packets from the buffer, or one at a time.

Hope it helps,

Cosmin

Message 7 of 13

Hi Phillip,

 

🙂 It took some time to make it look good. But it's worth it for the better understanding after a few days of not looking at it. 😉

 

I removed the Clear Errors.vi and changed the feedback node into a shift register. The error rate drops a little, but this is only a feeling, not a clear measurement.

 

I did not change the cluster! First, I would have had to change too much in the upper VIs. 😉 Second, the cluster lies outside of the for loop, so I assume it does not affect the data acquisition inside the loop?!

 

As this VI is the fifth subVI, I don't care about the 30 ms to call it! What I'm interested in is getting my selected amount of UDP packets continuously. The processing afterwards includes FFT, triggering and, if the user selects it, upsampling and filtering of the data. So I know I will not get the full stream, only chunks of it.

 

Thilo

 

Message 8 of 13

The main point of my post was not that UDP might allow for lost packets, but that you can switch architectures to a queue-based producer-consumer architecture; this will alleviate timing issues due to packet processing. It also splits your problem into 2 distinct tasks: acquiring and processing. This is what I had done when using WinPcap.

Loop 1 was a state machine with the states: init connection (which also sets up buffers and memory allocation), read winpcap, error handler, and exit.

The second loop just waits on the queue for new data packets (strings) and processes them as needed. As long as the average processing time is less than the rate at which data is received (~6.7 kpackets/s) you will never fall behind. The process loop can also decide, if packets are accumulating in the queue, to remove some level of processing to catch back up.

 

 

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 9 of 13
Solution
Accepted by topic author Thilo Datatel

Paul has the best suggestion at this point: use parallel loops combined with queues to create a producer/consumer architecture.

 

For an example, see here.

 

In the example's upper loop, you would place a UDP Read, or even your SUB_Collect_Data.vi. The output data would be placed into the queue.

 

The lower loop would extract one element (your TC Stream cluster) and perform the FFT and UI update(s).

 

The queue allows the uncoupling of the data acquisition from the processing. The upper loop pulls data from the UDP connection without stopping to process or display it. The lower loop pulls data off the queue as quickly as it can run and performs your FFT/UI presentation.

 

Note that a queue can grow until your system memory is exhausted. Using a queue does not relieve you of processing the data in a timely manner, but it DOES allow you to take greater advantage of LabVIEW's parallel/multiprocessor capabilities.

 

Search the KnowledgeBase and forums for producer/consumer and you will find many examples...

 

 

Message 10 of 13