How Does Boost Asio's Hostname Resolution Work on Linux? How to Use NSS

How does Boost Asio's hostname resolution work on Linux? Is it possible to use NSS?

The problem was that the constructor for query sets the address_configured flag by default, which won't return an address if the loopback device is the only device with an address. By setting the flags to 0, or to anything other than address_configured, the problem is fixed.
Here is what I'm successfully using now:

tcp::resolver::query query(host, PORT, boost::asio::ip::resolver_query_base::numeric_service);
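
For context, here is a minimal sketch of the full resolve flow with that flag. The host "localhost" and port "80" are illustrative placeholders, not the poster's actual values:

#include <boost/asio.hpp>
#include <iostream>

// Resolve a host without the default address_configured flag, so machines
// whose only configured interface is loopback still get results.
int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::resolver resolver(io_service);

    // numeric_service (or flags(0)) avoids the default address_configured behaviour.
    boost::asio::ip::tcp::resolver::query query(
        "localhost", "80",
        boost::asio::ip::resolver_query_base::numeric_service);

    for (boost::asio::ip::tcp::resolver::iterator it = resolver.resolve(query), end;
         it != end; ++it)
        std::cout << it->endpoint() << std::endl;
}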

Hope this helps anyone with this problem in the future.

boost asio: host not found (authoritative)

You should use:

tcp::resolver::query query(host, PORT, boost::asio::ip::resolver_query_base::numeric_service);

The problem was that the constructor for query sets the address_configured flag by default, which won't return an address if the loopback device is the only device with an address. By setting the flags to 0, or to anything other than address_configured, the problem is fixed.

How does Boost Asio's hostname resolution work on Linux? Is it possible to use NSS?

I could help you more if you pasted the whole code. For a starting point, there is a really useful piece of code here:

http://boost.2283326.n4.nabble.com/Simple-telnet-client-demonstration-with-boost-asio-asynchronous-I-O-td2583017.html

It works, and you can test it with your local telnet client.

How do I convert a host name into a Boost address or endpoint?

You need to use a tcp::resolver to do name resolution (i.e. DNS lookup):

boost::asio::io_service io_service;
boost::asio::ip::tcp::resolver resolver(io_service);
boost::asio::ip::tcp::resolver::query query("example.com", "80");
boost::asio::ip::tcp::resolver::iterator iter = resolver.resolve(query);

Dereferencing the iterator gives you a resolver entry that has a tcp::endpoint:

boost::asio::ip::tcp::endpoint endpoint = iter->endpoint();
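
If the goal is to connect, the iterator can also be handed to boost::asio::connect (available since Boost 1.47), which tries each resolved endpoint in turn. A minimal sketch, assuming the same io_service and resolver setup as above:

boost::asio::ip::tcp::socket socket(io_service);

// connect() walks the endpoint list, returns an iterator to the endpoint
// that actually connected, and throws boost::system::system_error on failure.
boost::asio::connect(socket, iter);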

What type of asio resolver object should I use?

Use the resolver that has the same protocol as the socket. For example, tcp::socket::connect() expects a tcp::endpoint, and the endpoint type provided via udp::resolver::iterator is udp::endpoint. Attempting to directly use the result of the query from a different protocol will result in a compilation error:

boost::asio::io_service io_service;  
boost::asio::ip::tcp::socket socket(io_service);
boost::asio::ip::udp::resolver::iterator iterator = ...;
socket.connect(iterator->endpoint());
// ~~~^~~~~~~ no matching function for call to `tcp::socket::connect(udp::endpoint)`
// no known conversion from `udp::endpoint` to `tcp::endpoint`

Neither tcp::resolver nor udp::resolver dictates the transport-layer protocol that name resolution will use. The DNS client will use TCP either when it becomes necessary or when it has been explicitly configured to do so.

On systems where service-name resolution is supported, resolving a descriptive service name can produce results that depend on the type of resolver. For example, in the IANA Service Name and Transport Protocol Port Number Registry:

  • the daytime service uses port 13 on UDP and TCP
  • the shell service uses port 514 only on TCP
  • the syslog service uses port 514 only on UDP

Hence, one can use tcp::resolver to resolve the daytime and shell services, but not syslog. On the other hand, udp::resolver can resolve daytime and syslog, but not shell. The following example demonstrates this distinction:

#include <boost/asio.hpp>
#include <cassert>

int main()
{
    boost::asio::io_service io_service;

    using tcp = boost::asio::ip::tcp;
    using udp = boost::asio::ip::udp;

    boost::system::error_code error;
    tcp::resolver tcp_resolver(io_service);
    udp::resolver udp_resolver(io_service);

    // daytime is 13/tcp and 13/udp
    tcp_resolver.resolve(tcp::resolver::query("daytime"), error);
    assert(!error);
    udp_resolver.resolve(udp::resolver::query("daytime"), error);
    assert(!error);

    // shell is 514/tcp
    tcp_resolver.resolve(tcp::resolver::query("shell"), error);
    assert(!error);
    udp_resolver.resolve(udp::resolver::query("shell"), error);
    assert(error);

    // syslog is 514/udp
    tcp_resolver.resolve(tcp::resolver::query("syslog"), error);
    assert(error);
    udp_resolver.resolve(udp::resolver::query("syslog"), error);
    assert(!error);

    // A numeric service string resolves with either resolver.
    tcp_resolver.resolve(tcp::resolver::query("514"), error);
    assert(!error);
    udp_resolver.resolve(udp::resolver::query("514"), error);
    assert(!error);
}

How to run boost asio resolver service on more threads?

What you need are two io_service objects, because each one is run by its own thread. By that I mean that calling io_service::run blocks the normal execution of the thread that calls it.

I think that the code itself is correct.
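
A minimal sketch of that arrangement, assuming one io_service per thread so a slow resolve does not stall the other loop. The host name, port, and handler bodies below are illustrative placeholders:

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service main_io_service;
    boost::asio::io_service resolver_io_service;

    boost::asio::ip::tcp::resolver resolver(resolver_io_service);
    boost::asio::ip::tcp::resolver::query query("example.com", "80");

    // The resolve runs on whichever thread calls resolver_io_service.run().
    resolver.async_resolve(query,
        [](const boost::system::error_code& error,
           boost::asio::ip::tcp::resolver::iterator iter)
        {
            if (!error)
                std::cout << "resolved: " << iter->endpoint() << std::endl;
        });

    // Each io_service is run by its own thread; run() blocks that thread.
    boost::thread resolver_thread([&]() { resolver_io_service.run(); });

    main_io_service.post([]() { std::cout << "other work keeps running" << std::endl; });
    main_io_service.run();

    resolver_thread.join();
}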

Socket hostname lookup timeout: how to implement it?

First of all: don't use gethostbyname(), it's obsolete. Use getaddrinfo() instead.

What you want is asynchronous name resolution. It's a common requirement, but unfortunately there is no "standard" way to do it. Here are my hints for finding the best solution for you:

  1. Don't implement a DNS client. Name resolution is more than just DNS. Think of mDNS, hosts files and so on. Use a system function like getaddrinfo() that abstracts the different name resolution mechanisms for you.

  2. Some systems offer asynchronous versions of the resolution functions, like glibc offers getaddrinfo_a().

  3. There are asynchronous resolution libraries that wrap the synchronous system resolver functions; libasyncns is the first that comes to mind.

  4. Boost.Asio supports running the resolver on a thread pool. See here. Alternatively, an asynchronous resolve can be raced against a timer, as in the sketch after this list.
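
A minimal, hedged sketch of that last idea with Boost.Asio's older io_service API (the host name "example.com" and the 2-second timeout are illustrative): a deadline_timer cancels the outstanding async_resolve if it fires first. Note that cancel() only aborts the handler; the internal worker running the underlying getaddrinfo() may still block in the background.

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::resolver resolver(io_service);
    boost::asio::deadline_timer timer(io_service);

    // Start the timeout clock; if it fires first, cancel the outstanding resolve.
    timer.expires_from_now(boost::posix_time::seconds(2));
    timer.async_wait([&](const boost::system::error_code& error)
    {
        if (!error)            // the timer expired (it was not cancelled)
            resolver.cancel(); // the resolve handler gets operation_aborted
    });

    boost::asio::ip::tcp::resolver::query query("example.com", "80");
    resolver.async_resolve(query,
        [&](const boost::system::error_code& error,
            boost::asio::ip::tcp::resolver::iterator iter)
        {
            timer.cancel();    // resolution finished; stop the timeout clock
            if (error)
                std::cout << "resolve failed: " << error.message() << std::endl;
            else
                std::cout << "resolved: " << iter->endpoint() << std::endl;
        });

    io_service.run();
}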

How does libuv compare to Boost/ASIO?

Scope

Boost.Asio is a C++ library that started with a focus on networking, but its asynchronous I/O capabilities have been extended to other resources. Additionally, with Boost.Asio being part of the Boost libraries, its scope is slightly narrowed to prevent duplication with other Boost libraries. For example, Boost.Asio will not provide a thread abstraction, as Boost.Thread already provides one.

On the other hand, libuv is a C library designed to be the platform layer for Node.js. It provides an abstraction for IOCP on Windows, kqueue on macOS, and epoll on Linux. Additionally, it looks as though its scope has increased slightly to include abstractions and functionality, such as threads, threadpools, and inter-thread communication.

At their core, each library provides an event loop and asynchronous I/O capabilities. They have overlap for some of the basic features, such as timers, sockets, and asynchronous operations. libuv has a broader scope, and provides additional functionality, such as thread and synchronization abstractions, synchronous and asynchronous file system operations, process management, etc. In contrast, Boost.Asio's original networking focus surfaces, as it provides a richer set of network related capabilities, such as ICMP, SSL, synchronous blocking and non-blocking operations, and higher-level operations for common tasks, including reading from a stream until a newline is received.


Feature List

Here is a brief side-by-side comparison of some of the major features. Since developers using Boost.Asio often have other Boost libraries available, I have opted to consider additional Boost libraries if they are either directly provided or trivial to implement.


                           libuv         Boost
Event Loop:                yes           Asio
Threadpool:                yes           Asio + Threads
Threading:
  Threads:                 yes           Threads
  Synchronization:         yes           Threads
File System Operations:
  Synchronous:             yes           FileSystem
  Asynchronous:            yes           Asio + Filesystem
Timers:                    yes           Asio
Scatter/Gather I/O[1]:     no            Asio
Networking:
  ICMP:                    no            Asio
  DNS Resolution:          async-only    Asio
  SSL:                     no            Asio
  TCP:                     async-only    Asio
  UDP:                     async-only    Asio
Signal:
  Handling:                yes           Asio
  Sending:                 yes           no
IPC:
  UNIX Domain Sockets:     yes           Asio
  Windows Named Pipe:      yes           Asio
Process Management:
  Detaching:               yes           Process
  I/O Pipe:                yes           Process
  Spawning:                yes           Process
System Queries:
  CPU:                     yes           no
  Network Interface:       yes           no
Serial Ports:              no            yes
TTY:                       yes           no
Shared Library Loading:    yes           Extension[2]

1. Scatter/Gather I/O.

2. Boost.Extension was never submitted for review to Boost. As noted here, the author considers it to be complete.

Event Loop

While both libuv and Boost.Asio provide event loops, there are some subtle differences between the two:

  • While libuv supports multiple event loops, it does not support running the same loop from multiple threads. For this reason, care needs to be taken when using the default loop (uv_default_loop()), rather than creating a new loop (uv_loop_new()), as another component may be running the default loop.
  • Boost.Asio does not have the notion of a default loop; every io_service is its own loop that allows multiple threads to run it. To support this, Boost.Asio performs internal locking at the cost of some performance. Boost.Asio's revision history indicates that there have been several performance improvements to minimize the locking.

Threadpool

  • libuv provides a threadpool through uv_queue_work. The threadpool size is configurable via the environment variable UV_THREADPOOL_SIZE. The work will be executed outside of the event loop and within the threadpool. Once the work is completed, the completion handler will be queued to run within the event loop.
  • While Boost.Asio does not provide a threadpool, the io_service can easily function as one, since io_service allows multiple threads to invoke run. This places the responsibility for thread management and behavior on the user, as can be seen in this example and in the sketch below.
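
A minimal sketch of that pattern, assuming the pre-1.66 io_service API used throughout this answer (the pool and task counts are arbitrary):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
#include <memory>

int main()
{
    boost::asio::io_service io_service;

    // Keep run() from returning while the pool is idle.
    std::unique_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io_service));

    // Each thread in the pool invokes run() on the same io_service.
    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread([&io_service]() { io_service.run(); });

    // Posted work is executed by whichever pool thread is free.
    for (int i = 0; i < 8; ++i)
        io_service.post([i]() { std::cout << "task " << i << std::endl; });

    work.reset();      // let run() return once the queue drains
    pool.join_all();
}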

Threading and Synchronization

  • libuv provides an abstraction to threads and synchronization types.
  • Boost.Thread provides threads and synchronization types. Many of these types closely follow the C++11 standard, but also provide some extensions. Because Boost.Asio allows multiple threads to run a single event loop, it provides strands as a means to create sequential invocation of event handlers without explicit locking mechanisms; see the sketch after this list.
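
A minimal sketch of a strand, again assuming the pre-1.66 io_service API (the counter and loop counts are arbitrary): handlers posted through the same strand never run concurrently, even with several threads running the io_service.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    int counter = 0;   // only ever touched from strand-wrapped handlers

    // The strand serializes these handlers, so no mutex is needed.
    for (int i = 0; i < 100; ++i)
        strand.post([&counter]() { ++counter; });

    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread([&io_service]() { io_service.run(); });
    pool.join_all();

    std::cout << "counter = " << counter << std::endl;   // prints 100
}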

File System Operations

  • libuv provides an abstraction to many file system operations. There is one function per operation, and each operation can either be synchronous blocking or asynchronous. If a callback is provided, then the operation will be executed asynchronously within an internal threadpool. If a callback is not provided, then the call will be synchronous blocking.
  • Boost.Filesystem provides synchronous blocking calls for many file system operations. These can be combined with Boost.Asio and a threadpool to create asynchronous file system operations, as sketched below.
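
A minimal, hedged sketch of that combination (the directory "/tmp" is an arbitrary example): a synchronous Boost.Filesystem traversal is posted to an io_service run by a worker thread, so the calling thread is not blocked.

#include <boost/asio.hpp>
#include <boost/filesystem.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;

    // Queue the blocking file system work before starting the worker thread.
    io_service.post([]()
    {
        boost::uintmax_t total = 0;
        for (boost::filesystem::directory_iterator it("/tmp"), end; it != end; ++it)
            if (boost::filesystem::is_regular_file(it->status()))
                total += boost::filesystem::file_size(it->path());
        std::cout << "total size: " << total << " bytes" << std::endl;
    });

    // The worker thread executes the queued work; run() returns when it is done.
    boost::thread worker([&io_service]() { io_service.run(); });

    std::cout << "the calling thread is free to do other things" << std::endl;
    worker.join();
}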

Networking

  • libuv supports asynchronous operations on UDP and TCP sockets, as well as DNS resolution. Application developers should be aware that the underlying file descriptors are set to non-blocking. Therefore, native synchronous operations should check return values and errno for EAGAIN or EWOULDBLOCK.
  • Boost.Asio is a bit richer in its networking support. In addition to many of the features libuv's networking provides, Boost.Asio supports SSL and ICMP sockets. Furthermore, Boost.Asio provides synchronous blocking and synchronous non-blocking operations, in addition to its asynchronous operations. There are numerous free standing functions that provide common higher-level operations, such as reading a set amount of bytes, or until a specified delimiter character is read.

Signal

  • libuv provides an abstraction to kill and signal handling with its uv_signal_t type and uv_signal_* operations.
  • Boost.Asio does not provide an abstraction to kill, but its signal_set provides signal handling; a minimal sketch follows.
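
A minimal sketch of signal_set with the pre-1.66 io_service API (the choice of SIGINT and SIGTERM is arbitrary):

#include <boost/asio.hpp>
#include <csignal>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;

    // Wait asynchronously for either SIGINT or SIGTERM.
    boost::asio::signal_set signals(io_service, SIGINT, SIGTERM);
    signals.async_wait(
        [](const boost::system::error_code& error, int signal_number)
        {
            if (!error)
                std::cout << "received signal " << signal_number << std::endl;
        });

    io_service.run();   // blocks until one of the signals arrives
}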

IPC

  • libuv abstracts Unix domain sockets and Windows named pipes through a single uv_pipe_t type.
  • Boost.Asio separates the two into local::stream_protocol::socket or local::datagram_protocol::socket, and windows::stream_handle.

API Differences

While the APIs are different based on the language alone, here are a few key differences:

Operation and Handler Association

Within Boost.Asio, there is a one-to-one mapping between an operation and a handler. For instance, each async_write operation will invoke the WriteHandler once. This is true for many of libuv's operations and handlers. However, libuv's uv_async_send supports a many-to-one mapping: multiple uv_async_send calls may result in the uv_async_cb being called once.

Call Chains vs. Watcher Loops

When dealing with tasks such as reading from a stream/UDP socket, handling signals, or waiting on timers, Boost.Asio's asynchronous call chains are a bit more explicit. With libuv, a watcher is created to designate interest in a particular event. A loop is then started for the watcher, where a callback is provided. Upon receiving the event of interest, the callback will be invoked. On the other hand, Boost.Asio requires an operation to be issued each time the application is interested in handling the event.

To help illustrate this difference, here is an asynchronous read loop with Boost.Asio, where the async_receive call will be issued multiple times:

void start()
{
    socket.async_receive( buffer, handle_read );  --.
}                                                   |
    .-----------------------------------------------'
    |      .---------------------------------------------.
    V      V                                             |
void handle_read( ... )                                  |
{                                                         |
    std::cout << "got data" << std::endl;                 |
    socket.async_receive( buffer, handle_read );  --------'
}

And here is the same example with libuv, where handle_read is invoked each time the watcher observes that the socket has data:

uv_read_start( socket, alloc_buffer, handle_read );  --.
                                                        |
    .---------------------------------------------------'
    |
    V
void handle_read( ... )
{
    fprintf( stdout, "got data\n" );
}

Memory Allocation

As a result of the asynchronous call chains in Boost.Asio and the watchers in libuv, memory allocation often occurs at different times. With watchers, libuv defers allocation until after it receives an event that requires memory to handle. The allocation is done through a user callback, invoked internal to libuv, and defers the deallocation responsibility to the application. On the other hand, many of the Boost.Asio operations require that the memory be allocated before issuing the asynchronous operation, such as the case of the buffer for async_read. Boost.Asio does provide null_buffers, which can be used to listen for an event, allowing applications to defer memory allocation until memory is needed, although this approach is deprecated.

This memory allocation difference also presents itself within the bind->listen->accept loop. With libuv, uv_listen creates an event loop that will invoke the user callback when a connection is ready to be accepted. This allows the application to defer the allocation of the client until a connection is being attempted. On the other hand, Boost.Asio's listen only changes the state of the acceptor. The async_accept listens for the connection event, and requires the peer to be allocated before being invoked.
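
A minimal sketch of the Boost.Asio accept pattern just described, with the pre-1.66 io_service API (port 8080 is an arbitrary example): the peer socket must exist before async_accept is issued.

#include <boost/asio.hpp>
#include <iostream>
#include <memory>

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // The acceptor constructor opens, binds, and listens.
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 8080));

    // Allocate the peer socket up front, then issue the asynchronous accept.
    auto peer = std::make_shared<tcp::socket>(io_service);
    acceptor.async_accept(*peer,
        [peer](const boost::system::error_code& error)
        {
            if (!error)
                std::cout << "accepted " << peer->remote_endpoint() << std::endl;
        });

    io_service.run();
}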


Performance

Unfortunately, I do not have any concrete benchmark numbers to compare libuv and Boost.Asio. However, I have observed similar performance using the libraries in real-time and near-real-time applications. If hard numbers are desired, libuv's benchmark test may serve as a starting point.

Additionally, while profiling should be done to identify actual bottlenecks, be aware of memory allocations. For libuv, the memory allocation strategy is primarily limited to the allocator callback. On the other hand, Boost.Asio's API does not allow for an allocator callback, and instead pushes the allocation strategy to the application. However, the handlers/callbacks in Boost.Asio may be copied, allocated, and deallocated. Boost.Asio allows applications to provide custom memory allocation functions in order to implement a memory allocation strategy for handlers.


Maturity

Boost.Asio

Asio's development dates back to at least OCT-2004, and it was accepted into Boost 1.35 on 22-MAR-2006 after undergoing a 20-day peer review. It also served as the reference implementation and API for the Networking Library Proposal for TR2. Boost.Asio has a fair amount of documentation, although its usefulness varies from user to user.

The API also has a fairly consistent feel. Additionally, the asynchronous operations are explicit in the operation's name. For example, accept is synchronous blocking and async_accept is asynchronous. The API provides free functions for common I/O tasks, for instance, reading from a stream until a \r\n is read. Attention has also been given to hide some network-specific details, such as ip::address_v4::any() representing the "all interfaces" address of 0.0.0.0.

Finally, Boost 1.47+ provides handler tracking, which can prove to be useful when debugging, as well as C++11 support.

libuv

Based on their github graphs, Node.js's development dates back to at least FEB-2009, and libuv's development dates to MAR-2011. The uvbook is a great place for a libuv introduction. The API documentation is here.

Overall, the API is fairly consistent and easy to use. One anomaly that may be a source of confusion is that uv_tcp_listen creates a watcher loop. This is different from other watchers, which generally have a uv_*_start and uv_*_stop pair of functions to control the life of the watcher loop. Also, some of the uv_fs_* operations have a decent amount of arguments (up to 7). With the synchronous or asynchronous behavior being determined by the presence of a callback (the last argument), the visibility of the synchronous behavior can be diminished.

Finally, a quick glance at the libuv commit history shows that the developers are very active.


