I have a TCP client based loosely on the TCP client example code. The program opens the connection to the TCP server and then enters an infinite loop that does ProcessSystemEvents(). I was relying on the general TCP callback that I registered when I opened the TCP connection to alert me when data arrived; the callback would then call a function to serve the received data. The problem I encountered is that sometimes the callbacks would stop occurring, so the program would sit in ProcessSystemEvents() and I would never know that data had arrived, even though a network protocol analyzer showed that the Windows TCP stack was receiving it. Watching the analyzer, I could see the TCP window for my client getting smaller, indicating that Windows was receiving the packets and buffering them for the program to retrieve.
The quick fix I found is to move the TCP RX handler from the "case TCP_DATAREADY:" statement in the general TCP callback into the infinite loop, alongside ProcessSystemEvents() (see pseudocode).
Ex. Problematic Way
//Main
main {
    OpenTCPConnection()
    do {
        ProcessSystemEvents()
    } while (1)
}

//TCP Callback Function
TCPCallbackFunction {
    switch (event)
    {
        case TCP_DATAREADY:
            ProcessTCPRx()
            break;
        case TCP_DISCONNECT:
            QUIT()
            break;
    }
}
Ex. Working Hack Way
//Main
main {
    OpenTCPConnection()
    do {
        ProcessSystemEvents()
        ProcessTCPRx()
    } while (1)
}

//TCP Callback Function
TCPCallbackFunction {
    switch (event)
    {
        case TCP_DATAREADY:
            break;
        case TCP_DISCONNECT:
            QUIT()
            break;
    }
}
I should mention that all of this is happening inside a thread that my main application launches.
Has anyone else seen this behavior? Is polling the TCP system like this a good or bad idea?
Tyler