What Really Is the "Linger Time" That Can Be Set with SO_LINGER on Sockets

When is TCP option SO_LINGER (0) required?

The typical reason to set a SO_LINGER timeout of zero is to avoid large numbers of connections sitting in the TIME_WAIT state, tying up all the available resources on a server.

When a TCP connection is closed cleanly, the end that initiated the close ("active close") ends up with the connection sitting in TIME_WAIT for several minutes. So if your protocol is one where the server initiates the connection close, and involves very large numbers of short-lived connections, then it might be susceptible to this problem.

This isn't a good idea, though - TIME_WAIT exists for a reason (to ensure that stray packets from old connections don't interfere with new connections). It's a better idea to redesign your protocol to one where the client initiates the connection close, if possible.
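For reference, here is a minimal sketch (POSIX sockets in C++, assuming an already-connected descriptor fd; the helper name is just for illustration) of what setting a zero linger timeout looks like. Closing the socket afterwards aborts the connection with an RST instead of the normal FIN handshake:

    #include <sys/socket.h>
    #include <unistd.h>

    // Enable SO_LINGER with a zero timeout: close() then aborts the
    // connection with an RST rather than a graceful FIN exchange, so
    // the socket never enters TIME_WAIT.
    void set_abortive_close(int fd)
    {
        struct linger lg;
        lg.l_onoff  = 1;  // turn lingering on
        lg.l_linger = 0;  // zero timeout => abortive close (RST)
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
        close(fd);        // returns immediately; connection is reset
    }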

How to apply Linger option with winsock2

You can't really avoid TIME_WAIT when your app is the one closing the TCP connection first (TIME_WAIT does not happen when the peer closes the connection first). No amount of SO_LINGER settings will change that fact, other than performing an abortive socket closure (i.e. sending an RST packet). It is simply part of how TCP works (look at the TCP state diagram). SO_LINGER simply controls how long closesocket() waits before actually closing an active connection.
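To illustrate that last point, here is a hedged Winsock sketch (assuming WSAStartup() has already been called and s is a connected blocking socket): with lingering enabled and a nonzero timeout (5 seconds here, an arbitrary value), closesocket() waits up to that long for unsent data to be delivered before the connection is closed:

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    // Graceful lingering close: on a blocking socket, closesocket()
    // waits up to l_linger seconds for pending data to be delivered
    // to the peer before the connection is closed.
    void lingering_close(SOCKET s)
    {
        LINGER lg;
        lg.l_onoff  = 1;  // enable lingering
        lg.l_linger = 5;  // arbitrary 5-second timeout
        setsockopt(s, SOL_SOCKET, SO_LINGER, (const char*)&lg, sizeof(lg));
        closesocket(s);   // may block for up to 5 seconds
    }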

The only way to prevent the socket from entering the TIME_WAIT state is to enable lingering with a zero l_linger duration, and to not call shutdown(SD_SEND) or shutdown(SD_BOTH) at all (calling shutdown(SD_RECEIVE) is OK). This is documented behavior:

The closesocket call will only block until all data has been delivered to the peer or the timeout expires. If the connection is reset because the timeout expires, then the socket will not go into TIME_WAIT state. If all data is sent within the timeout period, then the socket can go into TIME_WAIT state.

If the l_onoff member of the linger structure is nonzero and the l_linger member is a zero timeout interval on a blocking socket, then a call to closesocket will reset the connection. The socket will not go to the TIME_WAIT state.
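Putting the quoted behavior into code, here is a minimal Winsock sketch of the abortive close (again assuming WSAStartup() has been called and s is a connected socket). Note, per the above, that no shutdown(SD_SEND) or shutdown(SD_BOTH) call precedes it:

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    // Abortive close: l_onoff nonzero with l_linger of 0 makes
    // closesocket() reset the connection (RST), so the socket does
    // not enter TIME_WAIT. Do not call shutdown(SD_SEND) or
    // shutdown(SD_BOTH) first, or a graceful close begins instead.
    void abortive_close(SOCKET s)
    {
        LINGER lg;
        lg.l_onoff  = 1;  // enable lingering
        lg.l_linger = 0;  // zero timeout => hard close / RST
        setsockopt(s, SOL_SOCKET, SO_LINGER, (const char*)&lg, sizeof(lg));
        closesocket(s);   // connection is reset immediately
    }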

The real problem with your code (aside from the lack of error handling) is that your client is bind()'ing the client socket before connect()'ing it to the server. Typically, you should not bind() a client socket at all; you should let the OS choose an appropriate binding for you. However, if you must bind() a client socket, you will likely need to enable the SO_REUSEADDR option on that socket to avoid being blocked when a previous connection bound to the same local IP/port is still in the TIME_WAIT state and you try to connect() shortly after the previous closesocket().
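If you genuinely must bind() the client, a sketch along those lines might look like this (the fixed local port 50000 is a hypothetical value, and error handling is omitted for brevity):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    // Binding a client socket to a fixed local port: SO_REUSEADDR
    // lets bind() succeed even while a previous connection on the
    // same local IP/port is still sitting in TIME_WAIT.
    SOCKET connect_from_fixed_port(const sockaddr_in &server)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

        BOOL reuse = TRUE;
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char*)&reuse, sizeof(reuse));

        sockaddr_in local = {};
        local.sin_family      = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port        = htons(50000);  // hypothetical fixed port
        bind(s, (const sockaddr*)&local, sizeof(local));

        connect(s, (const sockaddr*)&server, sizeof(server));
        return s;
    }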

See How to avoid TIME_WAIT state after closesocket()? for more details. The document you linked to in your question also explains ways to avoid TIME_WAIT without resorting to messing with SO_LINGER.

How to use so-linger to keep a server connection open for some time

Spring Integration performs no conversion on the value, so it's seconds.

The behavior you describe sounds like so-linger is 0 (an immediate RST is sent on close()).

You really don't need so-linger for this purpose; when it's not set, the TCP stack will gracefully close the socket.

You probably need to run a network monitor to figure out what's happening.

If you trust your client to close the socket, you could remove the single-use="true" or set it to false so the server doesn't close the socket after sending the reply.


