Difference between Connection and Read Timeout for Sockets

What is the difference between connection and read timeout for sockets?

  1. What is the difference between connection and read timeout for sockets?

The connection timeout is the timeout in making the initial connection; i.e. completing the TCP connection handshake. The read timeout is the timeout on waiting to read data¹. If the server (or network) fails to deliver any data <timeout> seconds after the client makes a socket read call, a read timeout error will be raised.
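As a rough illustration in Java of where each timeout is applied on a plain Socket (the host, port, and timeout values below are placeholders chosen for the sketch, not anything from the question):

```java
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Connection timeout: applies only to completing the TCP handshake.
            socket.connect(new InetSocketAddress("example.com", 80), 5_000);

            // Read timeout: applies to each subsequent blocking read on the stream.
            socket.setSoTimeout(10_000);

            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // If no data arrives within 10 s of this call, a SocketTimeoutException is raised.
            int firstByte = socket.getInputStream().read();
            System.out.println("first byte of the response: " + firstByte);
        }
    }
}
```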


  2. What does connection timeout set to "infinity" mean? In what situation can it remain in an infinite loop, and what can cause that infinite loop to end?

It means that the connection attempt can potentially block forever. There is no infinite loop, but the attempt to connect can be unblocked by another thread closing the socket. (A Thread.interrupt() call may also do the trick ... not sure.)


  3. What does read timeout set to "infinity" mean? In what situation can it remain in an infinite loop? What can cause the infinite loop to end?

It means that a call to read on the socket stream may block forever. Once again there is no infinite loop, but the read can be unblocked by a Thread.interrupt() call, by closing the socket, and (of course) by the other end sending data or closing the connection.


¹ It is not ... as one commenter thought ... the timeout on how long a socket can be open, or idle.
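To make the "no infinite loop" point above concrete, here is a minimal sketch in which a second thread closes the socket to unblock a read that has no timeout set. The host, port, and three-second delay are arbitrary choices for illustration only:

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketException;

public class UnblockReadDemo {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress("example.com", 80), 5_000);

        // No setSoTimeout() call: the read timeout is "infinity" (the default),
        // so the read() below may block indefinitely if no data ever arrives.
        Thread closer = new Thread(() -> {
            try {
                Thread.sleep(3_000);   // give the read a few seconds, then pull the plug
                socket.close();
            } catch (Exception ignored) { }
        });
        closer.start();

        try {
            socket.getInputStream().read();   // blocks until data arrives or the socket is closed
        } catch (SocketException e) {
            System.out.println("read unblocked by close: " + e.getMessage());
        }
    }
}
```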

ConnectionTimeout versus SocketTimeout

A connection timeout occurs only when starting the TCP connection. This usually happens if the remote machine does not answer: the server may have been shut down, you used the wrong IP/DNS name or port, or the network connection to the server is down.

A socket timeout monitors the continuous incoming data flow. If the data flow is interrupted for the specified timeout, the connection is regarded as stalled/broken. Of course this only works with connections where data is received all the time.

Setting the socket timeout to 1 would require that new data is received every millisecond (assuming that you read the data block-wise and the block is large enough)!

If the incoming stream stalls for more than a millisecond, you run into a timeout.
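A small sketch of what such an aggressive timeout looks like in Java; the 1 ms value mirrors the example above, while the host, port, and buffer size are arbitrary:

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TinyTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("example.com", 80), 5_000);

            // 1 ms socket timeout: data must keep arriving essentially continuously.
            socket.setSoTimeout(1);

            byte[] buffer = new byte[8192];
            try {
                socket.getInputStream().read(buffer);
            } catch (SocketTimeoutException e) {
                // Raised as soon as the incoming stream stalls for more than ~1 ms.
                System.out.println("stream stalled: " + e.getMessage());
            }
        }
    }
}
```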

URLConnection, why two different timeouts? (connect and read)

Let's say the server is busy, is configured to accept only 'N' connections, and all of those connections are long-running. All of a sudden you send in a request. What should happen? Should you wait indefinitely, or should you time out? That's the connect timeout.

Now let's say your server turns into a brain-dead service that just accepts connections and does nothing with them (or, say, the server synchronously goes to the database, does some time-consuming work, and ends up in a deadlock), while the client keeps waiting for the response. In this case, what should the client do? Should it wait indefinitely for the response, or should it time out? That's the read timeout.
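With URLConnection the two timeouts are configured separately, which is what makes the distinction visible in code. A minimal sketch, with a placeholder URL and made-up timeout values:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlConnectionTimeouts {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");   // placeholder URL for the sketch

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5_000);  // give up if the server does not accept the connection within 5 s
        conn.setReadTimeout(15_000);    // give up if the server accepts but then sends nothing for 15 s

        try (InputStream in = conn.getInputStream()) {
            System.out.println("response code: " + conn.getResponseCode());
        }
    }
}
```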

Is there any way to distinguish a connection timeout from a socket timeout?

I haven't gone through the relationship between the classes in the Java API, but I think what you need is:

  1. java.net.ConnectException : packet loss due to the wrong network, network overload, too many requests to the server, or a firewall.

  2. java.net.SocketTimeoutException : the socket timeout is the amount of time to keep the server socket open while data is being transferred back to the caller. It could even be that the server is still processing and writing back data, but it is taking rather long and the client has simply timed out waiting for it (see the sketch after this list).
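A rough sketch of how the two failure modes surface in client code; the host, port, and timeout values are arbitrary. Note that on typical JDKs an expired connect timeout is also reported as a java.net.SocketTimeoutException, and the exception message ("connect timed out" vs "Read timed out") is usually what tells the two cases apart:

```java
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class DistinguishTimeouts {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("example.com", 80), 3_000);
            socket.setSoTimeout(10_000);
            socket.getInputStream().read();
        } catch (SocketTimeoutException e) {
            // Thrown both when connect() and when read() exceed their timeouts;
            // inspect the message to see which one fired.
            System.out.println("timed out: " + e.getMessage());
        } catch (ConnectException e) {
            // Connection actively refused or otherwise failed before any timeout expired.
            System.out.println("could not connect: " + e.getMessage());
        } catch (Exception e) {
            System.out.println("other I/O problem: " + e);
        }
    }
}
```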

What is a connection timeout during an HTTP request

I will try to answer it a little bit more informally.

Connection timeout - the time period within which a connection between a client and a server must be established. Suppose that you navigate your browser (client) to some website (server). Your browser starts waiting for a response message from that server, but the response may never arrive for various reasons (e.g. the server is offline). So if there is still no response from the server after X seconds, your browser will 'give up' waiting; otherwise it might get stuck waiting for eternity.

Request timeout - just as in the previous case the client wasn't willing to wait too long for a response from the server, the server is not willing to keep an unused connection alive for too long either. Once the connection between server and client has been established, the client must periodically inform the server that it is still there by sending it information. If the client fails to send anything to the server within a specified time, the server simply drops the connection, as it assumes the client is no longer there to communicate with (why waste resources needlessly?).

Time to live (TTL) - a value inside a packet, set when the packet is created (usually to 255), that tells how long the packet may stay alive in the network. As the packet travels through the network, it passes routers that sit on the path between the packet's origin and its destination. Each time a router forwards the packet, it decrements the TTL value by 1; if that value drops to 0, instead of forwarding the packet the router simply drops it, as the packet is not supposed to live any longer. This mechanism prevents the network from being flooded with data, since each packet can only live inside it for a limited amount of 'time'.

Connection timeout and socket timeout advice

There's no reason whatsoever why they should be equal. Considering each condition separately, you want to set it high enough that a timeout will indicate a genuine problem rather than just a temporary overload, and low enough that you maintain responsiveness of the application.

As a rule, 40 seconds is far too long for a connect timeout. I would view anything in double figures with suspicion. A server should be able to accept tens or hundreds of connections per second.

Read timeouts are a completely different matter. The 'correct' value, if there is such a thing, depends entirely on the average service time for the request and on its variance. As a starting point, you might want to set it to double the expected service time, or to the average service time plus two or three standard deviations, depending on your service level requirements and on how variable your server's performance is. There is no hard and fast rule about this. Many contractual service level agreements (SLAs) specify a 'normal' response time of two seconds, which may inform your deliberations. But it's your decision.
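If you want to turn that rule of thumb into a number, here is a trivial sketch; the mean and standard deviation below are made-up figures for illustration, not measurements from any real service:

```java
public class ReadTimeoutHeuristic {
    /**
     * One way to apply the rule of thumb above: take the observed mean service
     * time plus k standard deviations, rounded up to whole milliseconds.
     */
    static int suggestedReadTimeoutMillis(double meanMillis, double stdDevMillis, double k) {
        return (int) Math.ceil(meanMillis + k * stdDevMillis);
    }

    public static void main(String[] args) {
        double mean = 800;    // assumed average service time in ms
        double stdDev = 250;  // assumed standard deviation in ms
        System.out.println("suggested read timeout: "
                + suggestedReadTimeoutMillis(mean, stdDev, 3) + " ms");
    }
}
```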


