SO_RCVTIMEO and SO_SNDTIMEO not affecting Boost.Asio operations

Using SO_RCVTIMEO and SO_SNDTIMEO socket options with Boost.Asio will rarely produce the desired behavior. Consider using either of the following two patterns:

Composed Operation With async_wait()

One can compose an asynchronous read operation with a timeout by combining a Boost.Asio timer's async_wait() operation with an async_receive() operation. This approach is demonstrated in the Boost.Asio timeout examples; it looks something like this:

// Start a timeout for the read.
boost::asio::deadline_timer timer(io_service);
timer.expires_from_now(boost::posix_time::seconds(1));
timer.async_wait(
    [&socket, &timer](const boost::system::error_code& error)
    {
        // On error, such as cancellation, return early.
        if (error) return;

        // The timer has expired, but the read operation's completion handler
        // may have already run, setting the expiration to be in the future.
        if (timer.expires_at() > boost::asio::deadline_timer::traits_type::now())
        {
            return;
        }

        // The read operation's completion handler has not run.
        boost::system::error_code ignored_ec;
        socket.close(ignored_ec);
    });

// Start the read operation.
socket.async_receive(buffer,
    [&socket, &timer](const boost::system::error_code& error,
                      std::size_t bytes_transferred)
    {
        // Update the timeout state to indicate this handler has run. This
        // will cancel any pending timeout.
        timer.expires_at(boost::posix_time::pos_infin);

        // On error, such as cancellation, return early.
        if (error) return;

        // At this point, the read was successful and buffer is populated.
        // However, if the timeout occurred and its completion handler ran
        // first, then the socket is closed (!socket.is_open()).
    });

Be aware that it is possible for both asynchronous operations to complete in the same event loop iteration, making both completion handlers ready to run with success. This is why both completion handlers need to update and check state. See this answer for more details on how to manage state.

Use std::future

Boost.Asio provides support for C++11 futures. When boost::asio::use_future is provided as the completion handler to an asynchronous operation, the initiating function returns a std::future that will be fulfilled once the operation completes. As std::future supports timed waits, one can leverage it for timing out an operation. Note that as the calling thread will be blocked waiting on the future, at least one other thread must be processing the io_service so that the async_receive() operation can progress and fulfill the promise:

// Use an asynchronous operation so that it can be cancelled on timeout.
std::future<std::size_t> read_result = socket.async_receive(
    buffer, boost::asio::use_future);

// If timeout occurs, then cancel the read operation.
if (read_result.wait_for(std::chrono::seconds(1)) ==
    std::future_status::timeout)
{
    socket.cancel();
}
// Otherwise, the operation completed (with success or error).
else
{
    // If the operation failed, then read_result.get() will throw a
    // boost::system::system_error.
    auto bytes_transferred = read_result.get();
    // Process buffer.
}

Why SO_RCVTIMEO Will Not Work

System Behavior

The SO_RCVTIMEO documentation notes that the option only affects system calls that perform socket I/O, such as read() and recvmsg(). It does not affect event demultiplexers, such as select() and poll(), that only watch the file descriptors to determine when I/O can occur without blocking. Furthermore, when a timeout does occur, the I/O call fails, returning -1 with errno set to EAGAIN or EWOULDBLOCK.

Specify the receiving or sending timeouts until reporting an error. [...] if no data has been transferred and the timeout has been reached then -1 is returned with errno set to EAGAIN or EWOULDBLOCK [...] Timeouts only have effect for system calls that perform socket I/O (e.g., read(), recvmsg(), [...]; timeouts have no effect for select(), poll(), epoll_wait(), and so on.

When the underlying file descriptor is set to non-blocking, system calls performing socket I/O will return immediately with EAGAIN or EWOULDBLOCK if resources are not immediately available. For a non-blocking socket, SO_RCVTIMEO will not have any effect, as the call returns immediately with success or failure. Thus, for SO_RCVTIMEO to affect system I/O calls, the socket must be blocking.
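
To make the man-page behavior concrete, here is a minimal POSIX sketch (illustrative, not from the original answer) of a blocking socket timing out when no data arrives:

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main()
{
    // A blocking UDP socket with a one-second receive timeout.
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    timeval tv{};
    tv.tv_sec = 1;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    // With no data available, recv() blocks for about one second, then
    // fails with -1 and errno set to EAGAIN or EWOULDBLOCK.
    char buf[64];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        std::printf("recv timed out: %s\n", std::strerror(errno));

    close(fd);
}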

Boost.Asio Behavior

First, asynchronous I/O operations in Boost.Asio will use an event demultiplexer, such as select() or poll(). Hence, SO_RCVTIMEO will not affect asynchronous operations.

Next, Boost.Asio's sockets have the concept of two non-blocking modes (both of which default to false):

  • native_non_blocking() mode that roughly corresponds to the file descriptor's non-blocking state. This mode affects system I/O calls. For example, if one invokes socket.native_non_blocking(true), then recv(socket.native_handle(), ...) may fail with errno set to EAGAIN or EWOULDBLOCK (see the sketch after this list). Any time an asynchronous operation is initiated on a socket, Boost.Asio will enable this mode.
  • non_blocking() mode that affects Boost.Asio's synchronous socket operations. When set to true, Boost.Asio will set the underlying file descriptor to be non-blocking and synchronous Boost.Asio socket operations can fail with boost::asio::error::would_block (or the equivalent system error). When set to false, Boost.Asio will block, even if the underlying file descriptor is non-blocking, by polling the file descriptor and re-attempting system I/O operations if EAGAIN or EWOULDBLOCK are returned.
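
The distinction can be observed directly. Below is a small POSIX-only sketch of the first mode (illustrative; the UDP socket setup is arbitrary):

#include <cerrno>
#include <cstdio>
#include <sys/socket.h>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(
        io_service, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 0));

    // Put the underlying descriptor into non-blocking mode. Raw system
    // calls now fail immediately when no data is available...
    socket.native_non_blocking(true);
    char buf[8];
    if (recv(socket.native_handle(), buf, sizeof(buf), 0) == -1)
        std::printf("recv failed immediately, errno=%d\n", errno);

    // ...but non_blocking() is still false, so Boost.Asio's synchronous
    // socket.receive() would continue to block by polling and retrying.
    std::printf("non_blocking() == %s\n", socket.non_blocking() ? "true" : "false");
}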

The behavior of non_blocking() prevents SO_RCVTIMEO from producing desired behavior. Assuming socket.receive() is invoked and data is neither available nor received:

  • If non_blocking() is false, the system I/O call will time out per SO_RCVTIMEO. However, Boost.Asio will then immediately block, polling on the file descriptor to become readable, which is not affected by SO_RCVTIMEO. The net result is the caller blocked in socket.receive() until either data has been received or a failure occurs, such as the remote peer closing the connection.
  • If non_blocking() is true, then the underlying file descriptor is also non-blocking. Hence, the system I/O call will ignore SO_RCVTIMEO, immediately return with EAGAIN or EWOULDBLOCK, causing socket.receive() to fail with boost::asio::error::would_block.

Ideally, for SO_RCVTIMEO to function with Boost.Asio, one needs native_non_blocking() set to false so that SO_RCVTIMEO can take effect, but also non_blocking() set to true to prevent polling on the descriptor. However, Boost.Asio does not support this:

socket::native_non_blocking(bool mode)

If the mode is false, but the current value of non_blocking() is true, this function fails with boost::asio::error::invalid_argument, as the combination does not make sense.
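
A minimal sketch of the rejected combination (illustrative, not from the documentation):

#include <cassert>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket(io_service);
    socket.open(boost::asio::ip::tcp::v4());

    // Asio-level non-blocking mode; this also makes the descriptor non-blocking.
    socket.non_blocking(true);

    // Requesting a blocking descriptor while non_blocking() is true is
    // rejected, so the ideal combination for SO_RCVTIMEO is unavailable.
    boost::system::error_code ec;
    socket.native_non_blocking(false, ec);
    assert(ec == boost::asio::error::invalid_argument);
}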

boost asio timeout

First of all, I believe that you should ALWAYS use the async methods, since they are better and your design will only benefit from a reactor-pattern approach.
In the bad case that you're in a hurry and you're kind of prototyping, the sync methods can be useful. In this case I do agree with you that, without any timeout support, they cannot be used in the real world.

What I did was very simple:

void HttpClientImpl::configureSocketTimeouts(boost::asio::ip::tcp::socket& socket)
{
#if defined OS_WINDOWS
    int32_t timeout = 15000;  // milliseconds
    setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeout, sizeof(timeout));
    setsockopt(socket.native_handle(), SOL_SOCKET, SO_SNDTIMEO, (const char*)&timeout, sizeof(timeout));
#else
    struct timeval tv;
    tv.tv_sec = 15;  // seconds
    tv.tv_usec = 0;
    setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    setsockopt(socket.native_handle(), SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
#endif
}

The code above works on Windows, Linux, and Mac OS, switching on the OS_WINDOWS macro.

std::future not working in Boost UDP socket async receive operation

I found the solution. To wrap this up, here is what needs to be done: my initial code needs two modifications.

(1) Adding two lines at the beginning to launch a separate thread running the io_service, to monitor the time-out (as suggested by Tanner Sansbury):

boost::asio::io_service::work work(io_service);
std::thread thread([&io_service](){ io_service.run(); });

(2) Calling socket.cancel() when the socket operation times out. If the operation is not cancelled, the socket will keep blocking despite the renewed calls to wait_for() (solution received on Boost's mailing list).

Here is the amended code for reference:

#include <chrono>
#include <cstdio>
#include <cstring>
#include <future>
#include <thread>
#include <boost/asio.hpp>
#include <boost/asio/use_future.hpp>

using boost::asio::ip::udp;

int main()
{
    try
    {
        boost::asio::io_service io_service;
        // Keep io_service::run() from returning while there is no pending work.
        boost::asio::io_service::work work(io_service);
        std::thread thread([&io_service](){ io_service.run(); });

        udp::socket socket(io_service, udp::endpoint(udp::v4(), 10000));

        char recv_buf[8];

        for (;;)
        {
            std::memset(recv_buf, 0, sizeof(recv_buf)); // portable stand-in for ZeroMemory
            udp::endpoint remote_endpoint;

            std::future<std::size_t> recv_length = socket.async_receive_from(
                boost::asio::buffer(recv_buf),
                remote_endpoint,
                0,
                boost::asio::use_future);

            if (recv_length.wait_for(
                    std::chrono::seconds(5)) == std::future_status::timeout)
            {
                std::printf("time out. Nothing received.\n");
                socket.cancel();
            }
            else
            {
                // get() returns the byte count, or throws on error.
                auto n = recv_length.get();
                std::printf("received something: %.*s\n",
                    static_cast<int>(n), recv_buf);
            }
        }
    }
    catch (std::exception& e)
    {
        std::printf("Error: %s\n", e.what());
    }
    return 0;
}

Thank you all for your help.

Scalability of Boost.Asio

We are using 1.39 on several Linux flavors for timers, network (both TCP and UDP), serial (20+ lines, two of which run at 500 kbps), and inotify events, and while we don't have many socket connections, we do have a few hundred async timers at any time. They are in production and they work well for us. If I were you, I'd put together a quick prototype and performance-test it.

Boost 1.43 claims a number of Linux-specific performance improvements in ASIO, but I have yet to benchmark them for our product.

Waiting with timeout on boost::asio::async_connect fails (std::future::wait_for)

This appears to be a bug, as answered here by Stephan T. Lavavej.

I wasn't able to find the original bug, but it's fixed in "the RTM version" (assuming VS2013).

This is affected by internal bug number DevDiv#255669 "wait_for()/wait_until() don't block". Fortunately, I've received a fix for this from one of our Concurrency Runtime developers, Hong Hong. With my current build of VC11, this works:

C:\Temp>type meow.cpp
#include <stdio.h>
#include <chrono>
#include <future>
#include <thread>
#include <windows.h>
using namespace std;

long long counter() {
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    return li.QuadPart;
}

long long frequency() {
    LARGE_INTEGER li;
    QueryPerformanceFrequency(&li);
    return li.QuadPart;
}

int main() {
    printf("%02d.%02d.%05d.%02d\n", _MSC_VER / 100, _MSC_VER % 100, _MSC_FULL_VER % 100000, _MSC_BUILD);

    future<int> f = async(launch::async, []() -> int {
        this_thread::sleep_for(chrono::milliseconds(250));

        for (int i = 0; i < 5; ++i) {
            printf("Lambda: %d\n", i);
            this_thread::sleep_for(chrono::seconds(2));
        }

        puts("Lambda: Returning.");
        return 1729;
    });

    for (;;) {
        const auto fs = f.wait_for(chrono::seconds(0));

        if (fs == future_status::deferred) {
            puts("Main thread: future_status::deferred (shouldn't happen, we used launch::async)");
        } else if (fs == future_status::ready) {
            puts("Main thread: future_status::ready");
            break;
        } else if (fs == future_status::timeout) {
            puts("Main thread: future_status::timeout");
        } else {
            puts("Main thread: unknown future_status (UH OH)");
        }

        this_thread::sleep_for(chrono::milliseconds(500));
    }

    const long long start = counter();

    const int n = f.get();

    const long long finish = counter();

    printf("Main thread: f.get() took %f microseconds to return %d.\n",
        (finish - start) * 1000000.0 / frequency(), n);
}

C:\Temp>cl /EHsc /nologo /W4 /MTd meow.cpp
meow.cpp

C:\Temp>meow
17.00.50419.00
Main thread: future_status::timeout
Lambda: 0
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Lambda: 1
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Lambda: 2
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Lambda: 3
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Lambda: 4
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Main thread: future_status::timeout
Lambda: Returning.
Main thread: future_status::ready
Main thread: f.get() took 2.303971 microseconds to return 1729.

I inserted timing code to prove that when wait_for() returns ready, f.get() returns instantly without blocking.

Basically, the workaround is to loop while wait_for() reports future_status::deferred.

Matching boost::deadline_timer callbacks to corresponding wait_async

You can boost::bind additional parameters to the completion handler, which can then be used to identify the source.
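
For example, a minimal sketch where a bound integer identifies which timer fired (the handle_timeout name and timer_id parameter are illustrative):

#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// The extra timer_id argument identifies which timer's wait completed.
void handle_timeout(const boost::system::error_code& error, int timer_id)
{
    if (!error)
        std::cout << "timer " << timer_id << " expired\n";
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::deadline_timer timer1(io_service, boost::posix_time::seconds(1));
    boost::asio::deadline_timer timer2(io_service, boost::posix_time::seconds(2));

    // asio supplies the error_code; boost::bind supplies the bound id.
    timer1.async_wait(boost::bind(&handle_timeout,
        boost::asio::placeholders::error, 1));
    timer2.async_wait(boost::bind(&handle_timeout,
        boost::asio::placeholders::error, 2));

    io_service.run();
}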

Setting ASIO timeout for stream

The socket options you've set don't apply to connect, AFAIK.
Connecting with a timeout can be accomplished by using the asynchronous asio API, as in the following asio example.

The interesting parts are setting the timeout handler:

deadline_.async_wait(boost::bind(&client::check_deadline, this));

Starting the timer:

void start_connect(tcp::resolver::iterator endpoint_iter)
{
    if (endpoint_iter != tcp::resolver::iterator())
    {
        std::cout << "Trying " << endpoint_iter->endpoint() << "...\n";

        // Set a deadline for the connect operation.
        deadline_.expires_from_now(boost::posix_time::seconds(60));

        // Start the asynchronous connect operation.
        socket_.async_connect(endpoint_iter->endpoint(),
            boost::bind(&client::handle_connect,
                this, _1, endpoint_iter));
    }
    else
    {
        // There are no more endpoints to try. Shut down the client.
        stop();
    }
}

And closing the socket, which should cause the connect operation's completion handler to run:

void check_deadline()
{
    if (stopped_)
        return;

    // Check whether the deadline has passed. We compare the deadline against
    // the current time since a new asynchronous operation may have moved the
    // deadline before this actor had a chance to run.
    if (deadline_.expires_at() <= deadline_timer::traits_type::now())
    {
        // The deadline has passed. The socket is closed so that any outstanding
        // asynchronous operations are cancelled.
        socket_.close();

        // There is no longer an active deadline. The expiry is set to positive
        // infinity so that the actor takes no action until a new deadline is set.
        deadline_.expires_at(boost::posix_time::pos_infin);
    }

    // Put the actor back to sleep.
    deadline_.async_wait(boost::bind(&client::check_deadline, this));
}

