07-25-2013 08:11 AM
Hi all,
I'm using the UDP protocol to exchange information between a PC and a cRIO; the first is the master, the second is the slave.
On the cRIO I developed a piece of code to realize a loopback, so that the message sent from the PC is sent back to the PC exactly as it was received, without any modification.
First, I tried sending a short message (100 B) from the PC to the cRIO; in this case, no problem at all.
Then, I tried sending a long message (20 kB) from the PC to the cRIO; in the cRIO code, error 113 was raised when trying to send the message back to the PC (calling the UDP Write function).
What can be the origin of the different behaviour between the PC and the cRIO? It seems to be a question of buffer size... Can it depend on the OS? How can I increase it on the cRIO?
Thanks!
aRCo
07-26-2013 12:10 PM
Greetings aRCo,
I believe you are correct in concluding that the source of your difficulty is the size of your message. The small message works fine, but the larger message fails.
If you open up the detailed help window for the LabVIEW "UDP Write" function, you will find this note following the "Data IN" control specifications:
"data in contains the data to write to another UDP socket. In an Ethernet environment, restrict data to 8192 bytes. In a LocalTalk environment, restrict data to 1458 bytes to maintain gateway performance." (Emphasis added to highlight the important point.)
The maximum UDP message size is 8K. If you want or need to send larger messages, you need to break them into smaller chunks (no larger than 8K) before sending them over UDP. Of course, you will also need to restore the original message from the chunks on the receiver side. Ideally you will want to bundle the whole thing in an intelligent transmit/receive driver that transmits the string length at the beginning of the first message and internally reconstructs the string on the receiver side.
(I believe there is a LabVIEW example that illustrates this technique included with the UDP networking examples. If not, I may have an example if you find you are unable to get this working.)
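Since LabVIEW is graphical, here is a minimal sketch of the chunking idea in plain Python instead, just to show the logic (the names `CHUNK_SIZE`, `make_chunks`, and `reassemble` are my own, and a real driver would also need to handle lost or reordered datagrams, which UDP does not guarantee):

```python
import struct

CHUNK_SIZE = 8192  # the documented LabVIEW UDP Write limit

def make_chunks(message: bytes) -> list:
    """Sender side: prefix the total length, then split into <= 8K chunks."""
    stream = struct.pack(">I", len(message)) + message
    return [stream[i:i + CHUNK_SIZE] for i in range(0, len(stream), CHUNK_SIZE)]

def reassemble(chunks: list) -> bytes:
    """Receiver side: read the 4-byte length header, then extract the payload."""
    stream = b"".join(chunks)
    (length,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + length]

# A 20 kB message like the one that triggered error 113:
msg = b"x" * 20_000
chunks = make_chunks(msg)
assert all(len(c) <= CHUNK_SIZE for c in chunks)
assert reassemble(chunks) == msg
```

Each chunk is then small enough to pass to UDP Write individually, and the receiver keeps appending datagrams until it has collected the number of bytes announced in the header.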
-- Dave
07-27-2013 04:19 AM
Thanks Dave, the method you suggested is more or less the workaround I had in mind.
Anyway, I don't understand why the PC and the cRIO behave differently. In the case of the PC, the message size is much bigger than the one specified in the context help. Can the byte limit change depending on the OS of the device?
07-29-2013 12:29 PM
Hi aRCo,
According to Wikipedia (https://en.wikipedia.org/wiki/User_Datagram_Protocol), the max. theoretical limit for a UDP datagram is 8 bytes for header + 65,527 bytes of data (65,535 bytes in total). However, "the practical limit for the data length which is imposed by the underlying IPv4 protocol is 65,507 bytes (65,535 − 8 byte UDP header − 20 byte IP header)". The wiki article goes on to say that larger packets are possible with Jumbograms.
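Spelling out the arithmetic from the article (the names below are just labels for the quoted figures):

```python
# Size limits quoted from the Wikipedia article on UDP.
UDP_TOTAL_MAX = 65_535   # the UDP length field is 16 bits
UDP_HEADER = 8           # fixed UDP header size
IPV4_HEADER = 20         # minimum IPv4 header, no options

theoretical_payload = UDP_TOTAL_MAX - UDP_HEADER                # 65,527 bytes
practical_payload = UDP_TOTAL_MAX - UDP_HEADER - IPV4_HEADER    # 65,507 bytes

print(theoretical_payload, practical_payload)
```

So even the practical IPv4 limit is far above both the 20 kB message and the 8K LabVIEW limit, which supports the idea that the 8K cap is imposed by LabVIEW (or the cRIO's network stack) rather than by UDP itself.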
As for the NI documented limits of 8K, this would appear to be a LabVIEW-imposed limitation. (Perhaps this was related to an earlier limitation of the UDP protocol that has since been upgraded. If I recall correctly, the Windows version of LabVIEW also had this 8K limitation in the past.)
Perhaps NI relaxed this limit, or maybe you have jumbo packets enabled on your PC and this is allowing more throughput (of course, both of these possibilities are only speculation on my part).
In any event, if you limit the UDP packet size to the documented 8K limitation, your code should probably be fine across all LV platforms. (How's that for stating the obvious..?)
Anyway, good luck with your application.
-- Dave