Simple IPC Between C++ and Python (Cross Platform)

Simple IPC between C++ and Python (cross platform)

ZeroMQ -- and nothing else. Encode the messages as strings.

However, if you want to get serialization from a library, use protobuf: it will generate classes for Python and C++. You use the SerializeToString() and ParseFromString() functions on either end, and then pipe the strings via ZeroMQ.
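
On the Python side the wiring is only a few lines with pyzmq. This is a minimal sketch, assuming a hypothetical Sample message with a value field generated by protoc from sample.proto, and an arbitrary port; the C++ peer would bind a matching PULL socket and call ParseFromString() on the received bytes.

  import zmq
  from sample_pb2 import Sample          # hypothetical module generated by protoc

  ctx = zmq.Context()
  sock = ctx.socket(zmq.PUSH)
  sock.connect("tcp://127.0.0.1:5555")   # port chosen for illustration

  msg = Sample()
  msg.value = 42                         # assumed field from the .proto definition
  sock.send(msg.SerializeToString())     # protobuf wire format is just bytes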

Problem solved, as I doubt any other solution is faster, and no other solution will be as easy to wire up or as simple to understand.

If you want to use specific system primitives for RPC, such as named pipes on Windows and Unix domain sockets on Unix, then you should look at Boost.Asio. However, unless you have (a) a networking background, and (b) a very good understanding of C++, this will be very time consuming.

Cross platform IPC

In terms of speed, the best cross-platform IPC mechanism will be pipes. That assumes, however, that you want cross-platform IPC on the same machine. If you want to be able to talk to processes on remote machines, you'll want to look at using sockets instead. Luckily, if you're talking about TCP at least, sockets and pipes behave in pretty much the same way. While the APIs for setting them up and connecting them are different, they both just act like streams of data.

The difficult part, however, is not the communication channel, but the messages you pass over it. You really want to look at something that will perform verification and parsing for you. I recommend looking at Google's Protocol Buffers. You basically create a spec file that describes the object you want to pass between processes, and there is a compiler that generates code in a number of different languages for reading and writing objects that match the spec. It's much easier (and less bug prone) than trying to come up with a messaging protocol and parser yourself.
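
As a rough illustration, here is a sketch of the receiving side in Python, assuming a length-prefixed framing over TCP and a hypothetical Telemetry message generated by protoc; the framing convention is an assumption, not part of protobuf itself.

  import socket
  import struct

  from telemetry_pb2 import Telemetry    # hypothetical module generated by protoc

  def recv_exact(sock: socket.socket, n: int) -> bytes:
      """Read exactly n bytes from a TCP stream (recv may return fewer)."""
      buf = b""
      while len(buf) < n:
          chunk = sock.recv(n - len(buf))
          if not chunk:
              raise ConnectionError("peer closed the connection")
          buf += chunk
      return buf

  def read_message(sock: socket.socket) -> Telemetry:
      """Read one length-prefixed protobuf message."""
      (length,) = struct.unpack(">I", recv_exact(sock, 4))   # 4-byte big-endian length prefix
      msg = Telemetry()
      msg.ParseFromString(recv_exact(sock, length))          # parsing/validation handled for you
      return msg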

IPC between C application and Python

Regarding fork() and the need to execute a different process: it is true that fork() creates a copy of the current process, but this is usually coupled with exec() (in one of its various forms) to make the copy execute a different program.

As for IPC, you have several choices. Someone mentioned a queue, but something like ZeroMQ is overkill. You can do IPC with one of several mechanisms:

  1. Pipes (named pipes or anonymous)
  2. Unix domain sockets
  3. TCP or UDP via the sockets API
  4. Shared memory
  5. Message queues

The pipe approach is the easiest. Note that when you pass data back and forth between the C program and Python, you will need to worry about the transfer syntax of the data. If you choose to use C structs (which can be non-portable), you will need to unpack the data on the Python side. Otherwise you can use some textual format, such as a combination of sprintf/sscanf, JSON, etc.
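
For instance, if the C side writes a packed struct over the pipe, Python's struct module can unpack it; the field layout below is purely illustrative.

  import struct
  import sys

  # Assumed layout written by the C side: a packed struct { int32_t id; double value; }
  # in little-endian byte order, streamed over an anonymous pipe (read from stdin here).
  RECORD_FMT = "<id"                          # int32 followed by double, no padding
  RECORD_SIZE = struct.calcsize(RECORD_FMT)   # 12 bytes

  raw = sys.stdin.buffer.read(RECORD_SIZE)
  msg_id, value = struct.unpack(RECORD_FMT, raw)
  print(msg_id, value)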

Fast communication between C++ and python using shared memory

So I spent the last few days implementing shared memory using mmap, and the results are quite good in my opinion. Here are the benchmark results comparing my two implementations: pure TCP and a mix of TCP and shared memory.

Protocol:

The benchmark consists of moving data from the C++ world to the Python world (as a numpy.ndarray), then sending the data back to the C++ process. No further processing is involved, only serialization, deserialization and inter-process communication (IPC).

Case A:

  • One C++ process implementing TCP communication using Boost.Asio
  • One Python3 process using standard Python TCP sockets

Communication is done with TCP {header + data}.

Case B:

  • One C++ process implementing TCP communication using Boost.Asio and shared memory (mmap) using Boost.Interprocess
  • One Python3 process using standard TCP sockets and mmap

Communication is hybrid: synchronization is done through the socket (only the header is passed) and data is moved through shared memory. I think this design is great because I have suffered in the past from synchronization problems using condition variables in shared memory, and TCP is easy to use in both C++ and Python environments.
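
To make the design concrete, here is a rough sketch of the Python consumer side; the file name, port and 4-byte length header are illustrative assumptions, not the exact protocol used in the benchmark.

  import mmap
  import socket
  import struct

  # Assumed setup: the C++ producer creates the file backing the shared region,
  # writes the payload into it, then sends a 4-byte length header over TCP as a
  # "data ready" signal.
  SHM_PATH = "shm_example.bin"

  with open(SHM_PATH, "r+b") as f:
      shm = mmap.mmap(f.fileno(), 0)                        # map the whole file
      with socket.create_connection(("127.0.0.1", 5555)) as sock:
          header = sock.recv(4)                             # blocks until the producer signals
          (length,) = struct.unpack("<I", header)
          payload = bytes(shm[:length])                     # copy the payload out of shared memory
          sock.sendall(header)                              # ack so the producer can reuse the buffer
      shm.close()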

Results:

Small data at high frequency

200 MBytes/s total: 10 MByte sample at 20 samples per second

Case   Global CPU consumption   C++ part   Python part
A      17.5 %                   10 %       7.5 %
B      6 %                      1 %        5 %

Communication between C++ and Python

You have two basic options:

  • Run the C++ code and the Python code as two separate programs, in two separate processes, and use an IPC mechanism
  • Link the C++ code against your code, as grc suggested.

The first option is probably better if you already have a complete complex C++ program written. Also, it's generally easier to debug and maintain.

As for a specific IPC mechanism, sockets are commonly used because they have somewhat standardized cross-platform APIs at the OS level, and still work if you need the two programs running on different machines. Sockets should be more than enough for transferring three coordinates 30 times each second, if you're dealing with a modern desktop machine.
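
For scale, three float coordinates at 30 Hz is well under a kilobyte per second. A minimal sketch of the Python sender, assuming the C++ side listens on an agreed UDP port and expects little-endian floats:

  import socket
  import struct
  import time

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  while True:
      payload = struct.pack("<fff", 1.0, 2.0, 3.0)   # x, y, z (placeholder values)
      sock.sendto(payload, ("127.0.0.1", 9999))      # address and port are assumptions
      time.sleep(1 / 30)                             # roughly 30 updates per second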

If you really need more performance, you could look into (named or anonymous) pipes, but you'll probably need some extra work on the C++ side to make it cross-platform.

OS-independent Inter-program communication between Python and C

If you want truly OS-independent, language-independent inter-process communication, sockets are probably the best option.

This will allow the two programs to communicate across machines, as well (without code changes).

For reading material, here's a Python Socket Programming How To.
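
A minimal sketch of what the Python end might look like with the standard socket module; the port is arbitrary, and the same code runs unchanged on Windows, Linux and macOS:

  import socket

  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.bind(("127.0.0.1", 5000))      # port chosen for illustration
  srv.listen(1)
  conn, addr = srv.accept()          # the C program connects here
  data = conn.recv(1024)             # bytes sent by the C side
  conn.sendall(data)                 # echo them back
  conn.close()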

Shared-memory IPC solution for both Linux and Windows

The easiest way is to use Python version >= 3.8, which added a built-in abstraction for shared memory. It works on both Windows and Linux:
https://docs.python.org/3.10/library/multiprocessing.shared_memory.html

The code will look something like this:

Process #1:

  from multiprocessing import shared_memory

  # create=True creates a new shared memory block; if one with the same name
  # already exists, FileExistsError is raised
  shm_a = shared_memory.SharedMemory(name="example", create=True, size=10)
  shm_a.buf[:3] = bytearray([1, 2, 3])
  while True:
      do_smt()   # placeholder for the producer's actual work
  shm_a.close()

Process #2:

  from multiprocessing import shared_memory

  # create defaults to False, so this attaches to the existing block of the same name
  shm_a = shared_memory.SharedMemory(name="example", size=10)
  print(bytes(shm_a.buf[:3]))
  # b'\x01\x02\x03'
  while True:
      do_smt()   # placeholder for the consumer's actual work
  shm_a.close()

Otherwise, I think there are no common good solutions and you will need to reinvent the wheel :)

Fast Cross Platform Inter Process Communication in C++

boost::asio is a cross-platform library handling asynchronous I/O over sockets. You can combine this with, for instance, Google Protocol Buffers for your actual messages.

Boost also provides you with boost::interprocess for interprocess communication on the same machine, but asio lets you do your communication asynchronously and you can easily have the same handlers for both local and remote connections.


