03-01-2007 09:02 AM
I don't remember the exact number, but there was a point where I could send N bytes in a packet and it would work every time, and if I sent N+1, it would fail every time. That point was somewhere around N=1500.
So I ended up breaking everything up into chunks of 1000 or so, and that worked just fine.
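In text form, the break-it-into-pieces approach looks something like this (a Python sketch of the idea, since LabVIEW diagrams don't paste well; the 1000-byte chunk size is the arbitrary workaround value, not anything official):

```python
import socket

def send_in_chunks(sock: socket.socket, data: bytes, chunk_size: int = 1000) -> None:
    """Send data over an already-connected TCP socket in fixed-size chunks.

    Mirrors the workaround described above: instead of one large write,
    issue a series of writes of at most chunk_size bytes each.
    """
    for offset in range(0, len(data), chunk_size):
        sock.sendall(data[offset:offset + chunk_size])
```

The receiver sees one continuous byte stream either way; the chunking only changes how the data is handed to the OS.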
Recently, on a new project, I did not think about that limitation, and ended up having a chunk of over 3500 bytes in a single TCP WRITE, and it worked!
Perfectly. First time, every time.
So what are the rules here? Have the restrictions relaxed over the years? Does it make a difference that I am now on a simple Win2K - D-Link router - PXI box connection (with 3 other computers), versus whatever corporate environment I was in before, with a thousand computers?
Do I still need to break it up, because maybe my client has a different environment, or am I safe as it is?
Can someone explain the rules, or point me to where they are explained?
Blog for (mostly LabVIEW) programmers: Tips And Tricks
03-01-2007 09:49 AM
03-01-2007 09:57 AM
Hi Coastal,
I cannot point at references...
I cannot explain why you ran into the N = 1500 limit.
The TCP part of TCP/IP is responsible for breaking a transmitted message into chunks small enough to be transported (via IP) and for ensuring all of the pieces arrive at the destination and are properly reassembled. It is only after everything is back together at the receiving end that the TCP/IP stack should report the data as received.
So I do not know of an upper limit!
Another factor that comes into play is the Nagle algorithm. Inside the TCP/IP stack (somewhere), an attempt is made to optimize network throughput by avoiding sending many small packets: the data is allowed to pile up so that more of it goes out at once. I seem to remember the optimal Ethernet packet size being about 1500 bytes. The Nagle algorithm affects how long after a transmit request is issued the data packets are put on the wire.
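If the Nagle delay turns out to be a problem, most TCP stacks let you turn it off per socket with the TCP_NODELAY option. A minimal sketch in Python (as far as I know, LabVIEW's TCP primitives do not expose this option directly):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable the Nagle algorithm: small writes go on the wire immediately,
# at the cost of more (smaller) packets on the network.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```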
That is all I can say for now.
Take care,
Ben
03-01-2007 10:10 AM
However, I clearly remember having the problem some years ago, and I broke up file transmissions (for example) into blocks of 1024 bytes to avoid it.
Is the MTU an adjustable number on Win machines? Is it possible that the MTU on the machine I was dealing with was set to 1500 or so?
When I had a satellite Internet link, I remember adjusting the MTU on my Mac to enhance that somehow, but it wasn't permanent.
Is it then your opinion that my break-up-files-into-pieces approach is unnecessary?
Thanks.
03-01-2007 10:19 AM
03-01-2007 10:29 AM
@CoastalMaineBird wrote:
Perhaps "packet" is indeed the wrong word - I was referring to the size of the TCP WRITE string, which would count as "payload", I suppose.
There is no size limitation, and the help does not mention any. Of course you should follow the guidelines in the help so your application on the receiving end knows when the transmission is over (prepend the size to the string, use a constant portion size, use a delimiter, etc.).
You are only dealing with OSI layer 7 (http://en.wikipedia.org/wiki/OSI_model)
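In text form, the prepend-the-size scheme looks something like this (a Python sketch of the idea; the 4-byte big-endian length field is an arbitrary choice for illustration):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prepend a 4-byte big-endian length so the receiver knows
    when the transmission is over (one of the schemes mentioned above)."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes) -> list:
    """Split a received byte stream back into the payloads it carries."""
    payloads = []
    while len(stream) >= 4:
        (size,) = struct.unpack(">I", stream[:4])
        payloads.append(stream[4:4 + size])
        stream = stream[4 + size:]
    return payloads
```

The point is that TCP delivers a byte stream, not messages, so the application layer has to mark the boundaries itself.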
03-01-2007 10:35 AM
So you are saying that it IS necessary to break up the payload...
Then I don't understand why I do NOT have the problem now... I have ethernet from CPU to ROUTER to PXI box.
03-01-2007 10:41 AM
Yes, I am sending a one-byte COMMAND and a two-byte SIZE in front of my current payloads. It works just fine, but my question is will it work on my client's setup?
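For what it's worth, that header layout in text form (a Python sketch; I picked big-endian byte order for illustration, and the command code is made up):

```python
import struct

def make_message(command: int, payload: bytes) -> bytes:
    """Build a message with a 1-byte command and a 2-byte size header.
    Note the 2-byte size field caps a single payload at 65535 bytes."""
    return struct.pack(">BH", command, len(payload)) + payload

def parse_message(data: bytes):
    """Decode one message; returns (command, payload)."""
    command, size = struct.unpack(">BH", data[:3])
    return command, data[3:3 + size]
```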
My client happens to be the very place where I previously worked, and experienced the problem.
I'm wondering where the limit comes from: you say there isn't one, pincpanter says there is. I used to have a limit; now I don't (or at least it's a larger limit; I haven't tested more than 3500 bytes).
So I'm still confused.
03-01-2007 10:43 AM - edited 03-01-2007 10:43 AM
Message Edited by pincpanter on 03-01-2007 05:45 PM
03-01-2007 10:54 AM
So you agree that there should be no problem?
I wonder why I experienced the problem a while back?
I know that I spent enough time on it to verify that I could send N, but not N+1 in a single operation.
If I send a megabyte file with a single TCP WRITE, is it synchronous, or asynchronous? Is it going to hang my program until all the bytes go out the wire, or do they go into a buffer somewhere for the OS to handle?
I guess I can find that out myself...
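One way to check it (a rough Python sketch of the experiment; the 64 KB size is arbitrary): time a large blocking send while the receiver isn't reading anything yet. The OS normally copies the data into a kernel send buffer and returns right away, blocking only once that buffer fills up.

```python
import socket
import time

# Rough experiment: how long does a large send take when the receiver
# hasn't read anything yet? If the call returns almost instantly, the
# bytes went into an OS buffer rather than straight out the wire.
a, b = socket.socketpair()
data = b"\x00" * 65536  # small enough to fit a typical socket send buffer

start = time.monotonic()
a.sendall(data)
elapsed = time.monotonic() - start
print(f"sendall returned after {elapsed * 1000:.1f} ms, "
      "before the receiver read anything")
```

If the payload is much larger than the send buffer, sendall will block partway through until the receiver starts draining it.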