Linux IPC - Multiple Writers, Single Reader

Linux IPC - Multiple writers, single reader

The main issue you should consider is what kind of data you are passing, as this will in part determine your options. This comes down to whether your data is bounded or not. If it isn't bounded, then something stream-oriented like FIFOs or sockets is appropriate; if it is bounded, then you might make better use of things like MQs or shared memory. Since you mention both strings and structs it is hard to say what is appropriate in your case, though if your strings are bounded within some reasonable maximum you can use anything with some minor fiddling.

The second is speed. There is never a completely correct answer for this, but generally, from fastest to slowest, it goes something like: shared memory, MQs, FIFOs, domain sockets, network sockets.

The third is ease of use. Shared memory is the biggest PITA since you have to handle your own synchronization. Pipes are easy so long as your message lengths stay below PIPE_BUF. The OS handles most of your headaches with MQs. Sockets are easy enough, but you have the setup boilerplate.

Lastly, several of the IPC mechanisms have both POSIX and SysV variants. Generally POSIX is the way to go unless the SysV type has some feature you really need or want.
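For instance, a minimal POSIX MQ round trip might look like this sketch (the queue name /demo_mq and the sizes are made up; mq_maxmsg must stay within the system limit, and you may need to link with -lrt on older glibc):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Bounded messages: at most 8 queued messages of at most 128 bytes. */
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };

    /* Any number of writers can mq_open() the same name and mq_send();
       the single reader mq_receive()s the messages one at a time. */
    mqd_t q = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    if (mq_send(q, msg, strlen(msg) + 1, 0) == -1)
        perror("mq_send");

    char buf[128];                      /* must be >= mq_msgsize */
    ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
    if (n >= 0)
        printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_mq");
    return 0;
}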

EDIT: Count0's answer reminded me that you might be interested in something more abstract and higher level. In addition to ACE you can look at Poco. And, of course, no SO answer is complete if it doesn't mention Boost somewhere.

Linux IPC with single writer, multiple readers

If the writer process is a server, it might fork the client processes and use plain pipe(2) for communication. If there is no parent/child relationship, consider named pipes made with mkfifo(3), or AF_UNIX sockets (see unix(7) and socket(2) ...), which are bidirectional (AF_UNIX sockets are much faster than TCP/IP or UDP/IP on the same machine).
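A sketch of the server-side AF_UNIX setup (the path and the helper name make_listener are made up; error handling kept minimal):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Create a listening AF_UNIX socket; each connecting client
   gives the reader one more file descriptor to poll. */
int make_listener(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return -1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);

    unlink(path);  /* remove a stale socket left by a previous run */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) == -1 ||
        listen(fd, SOMAXCONN) == -1) {
        perror("bind/listen");
        close(fd);
        return -1;
    }
    return fd;
}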

Notice that your writer process is reading data from your hardware device and is writing or sending data to several reader clients. So your writer process deals with many file descriptors simultaneously (the hardware device to be read, and the sockets or pipes to be written to the clients, at least one file descriptor per client).

However, the important thing is to have some event loop (notably on the server side, and probably also inside the clients). This means that you call some multiplexing syscall like poll(2) in the loop and "decide", on each iteration, whether you are reading, writing, or connecting (and which file descriptor should be read, written, or connected). See also read(2), write(2), connect(2), send(2), recv(2), etc. Notice that you should buffer data within an event loop (since read and write can operate on "partial" or "incomplete" messages).

Notice that poll does not eat CPU while waiting for I/O. You could, but should not anymore, use an older multiplexing syscall such as the obsolete select(2). Use poll(2).
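A skeleton of such a poll(2) loop might look like this (dev_fd and listen_fd are assumed to be an already-opened device fd and a listening socket, e.g. from the sketch above):

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 64

void event_loop(int dev_fd, int listen_fd)
{
    struct pollfd fds[MAX_FDS] = {
        { .fd = dev_fd,    .events = POLLIN },  /* hardware data ready   */
        { .fd = listen_fd, .events = POLLIN },  /* new client connecting */
    };
    int nfds = 2;

    for (;;) {
        if (poll(fds, nfds, -1) == -1) { perror("poll"); break; }

        if (fds[0].revents & POLLIN) {
            char buf[4096];
            ssize_t n = read(dev_fd, buf, sizeof buf);
            if (n > 0) {
                /* Fan the n bytes out to every connected client. */
                for (int i = 2; i < nfds; i++)
                    write(fds[i].fd, buf, (size_t)n);  /* real code: handle short writes */
            }
        }
        if ((fds[1].revents & POLLIN) && nfds < MAX_FDS) {
            int c = accept(listen_fd, NULL, NULL);
            if (c != -1)
                fds[nfds++] = (struct pollfd){ .fd = c, .events = POLLIN };
        }
        /* fds[2..nfds) are the clients: a real loop also checks POLLHUP
           and drops disconnected ones (omitted in this sketch). */
    }
}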

You may want to use a library to provide the event loop, e.g. libevent or libev. That event loop should also (on the server side) poll, then read, the hardware device.
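With libev, for instance, the hand-rolled loop shrinks to registering watchers and calling ev_run; a hedged sketch, watching stdin as a stand-in for the device fd:

#include <ev.h>
#include <stdio.h>
#include <unistd.h>

/* Called by libev whenever the watched fd becomes readable. */
static void device_cb(struct ev_loop *loop, ev_io *w, int revents)
{
    (void)loop; (void)revents;          /* unused in this sketch */
    char buf[4096];
    ssize_t n = read(w->fd, buf, sizeof buf);
    if (n > 0) {
        /* forward the n bytes to the connected clients */
    }
}

int main(void)
{
    int dev_fd = 0;                     /* stand-in: watch stdin */
    struct ev_loop *loop = EV_DEFAULT;

    ev_io watcher;
    ev_io_init(&watcher, device_cb, dev_fd, EV_READ);
    ev_io_start(loop, &watcher);

    ev_run(loop, 0);                    /* the event loop itself */
    return 0;
}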

If some of the programs use a GUI toolkit (e.g. on the client side) like Qt or GTK, they should take advantage of the existing event loop provided by that toolkit...

You should read Advanced Linux Programming and know about the C10K problem.

If signals or timers are important (read carefully signal(7) and time(7)), the Linux-specific signalfd(2) and timerfd_create(2) can be very helpful, since they play nicely with event loops. These Linux-specific syscalls (signalfd & timerfd_create ...) are too recent to be mentioned in Advanced Linux Programming.
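A sketch of how both could be wired up (the helper names are made up; the resulting fds would simply be two more entries in the poll set):

#include <signal.h>
#include <sys/signalfd.h>
#include <sys/timerfd.h>
#include <time.h>

/* Returns a pollable fd that becomes readable on SIGINT/SIGTERM;
   read(2) on it then yields a struct signalfd_siginfo. */
int make_signal_fd(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigaddset(&mask, SIGTERM);
    /* Block normal delivery so the signals are only seen via the fd. */
    sigprocmask(SIG_BLOCK, &mask, NULL);
    return signalfd(-1, &mask, 0);
}

/* Returns a pollable fd that becomes readable once per second;
   read(2) on it yields a uint64_t count of expirations. */
int make_timer_fd(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec spec = {
        .it_value    = { .tv_sec = 1 },   /* first expiry  */
        .it_interval = { .tv_sec = 1 },   /* then periodic */
    };
    timerfd_settime(fd, 0, &spec, NULL);
    return fd;
}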

BTW, you could study the source code of existing free software similar to yours, and/or use strace(1) to understand the exact syscalls they make.

If you have no loop around a multiplexing syscall (à la poll(2)), then you have no event loop, and your design is buggy and cannot work reliably (since you need to react to several file descriptors at once).

You could also use a multi-threaded approach, but it is much more complex and not worth the effort in your particular case.

Unix pipe multiple writers

http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html

Atomic/non-atomic: A write is atomic if the whole amount written in one operation is not interleaved with data from any other process. This is useful when there are multiple writers sending data to a single reader. Applications need to know how large a write request can be expected to be performed atomically. This maximum is called {PIPE_BUF}. This volume of IEEE Std 1003.1-2001 does not say whether write requests for more than {PIPE_BUF} bytes are atomic, but requires that writes of {PIPE_BUF} or fewer bytes shall be atomic.
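An application can query that limit at run time; a small sketch using fpathconf(3) on an already-open pipe or FIFO descriptor:

#include <limits.h>
#include <stdio.h>
#include <unistd.h>

/* Print the atomic-write limit for the pipe/FIFO behind fd. */
void show_pipe_buf(int fd)
{
    long max = fpathconf(fd, _PC_PIPE_BUF);
    if (max == -1)
        max = PIPE_BUF;        /* fall back to the compile-time value */
    printf("atomic writes up to %ld bytes\n", max);
}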

Are there repercussions to having many processes write to a single reader on a named pipe in POSIX?

The FIFO write should be atomic as long as it's no larger than PIPE_BUF, so there shouldn't be an issue with 100-byte messages. On Linux, PIPE_BUF is 4096 bytes; the total pipe capacity has grown (64 KiB since kernel 2.6.11), but the atomicity guarantee is still limited to PIPE_BUF. I've used this technique on a few systems for message passing, since the writes end up atomic.

You can end up with an issue if you use a series of writes, since buffering could split the message and let another writer's data interleave with yours. So make sure the whole message is written in one operation: e.g. build the string, then print it once; don't print it piece by piece.

s="This is a message"
echo $s

NOT

echo "This "
echo "is "
echo " a message"

