Communication Between Linked Docker Containers

Communication between Docker containers in different networks

Your Angular frontend application will not run on the same machine as your server. The frontend is shipped to the client's browser (served through Nginx, for example), and from there it needs to communicate with the server (backend application) over the network.

You will have to make your API calls using your server's IP address (or domain name), and of course you need to expose your backend application on the server and publish it on the correct port.
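For example, assuming the backend listens on port 8000 inside its container (the image name and domain below are placeholders):

```shell
# Publish the backend's port on the host so it is reachable from outside:
docker run -d --name backend -p 8000:8000 web_api_image

# The Angular app running in a user's browser then calls the public address,
# e.g. this.http.get('http://your-server.example.com:8000/...'):
curl http://your-server.example.com:8000/
```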

Communicate between two docker containers

Let's say you run the first container with the following command:

docker run -d --name my_service web_api_image

Then you can use the --link flag to run the second:

docker run -d -P --name web --link my_service:my_service website_image

Then, within the website container, you can refer to the web API using the my_service hostname.

Please note:
--link is deprecated.
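Since --link is deprecated, the same setup is usually done today with a user-defined bridge network, whose built-in DNS resolves container names (a sketch, reusing the image names above):

```shell
# Create a user-defined bridge network:
docker network create app_net

# Attach both containers to it; "my_service" then resolves by name
# from inside the "web" container:
docker run -d --name my_service --network app_net web_api_image
docker run -d -P --name web --network app_net website_image
```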

You can also use Docker Compose:

version: "2"
services:
  web_api:
    image: web_api_image
    container_name: web_api
    ports:
      - "8000:8000"
    expose:
      - "8000"
  website:
    image: website_image
    container_name: website
    ports:
      - "80:80"
    links:
      - "web_api:web_api"

Replace the image names with your own and run with docker-compose up.

Communication between linked docker containers

As mentioned in Docker links:

Docker also defines a set of environment variables for each port exposed by the source container.

Each variable has a unique prefix in the form:

<name>_PORT_<port>_<protocol>

The components in this prefix are:

  • the alias specified in the --link parameter (for example, webdb)
  • the <port> number exposed
  • a <protocol> which is either TCP or UDP

That means you need to make sure that Container1 exposes the right port with the right protocol (in your case, UDP): see "How do I expose a UDP Port on Docker?"
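As a sketch of how an entrypoint script might consume those variables — the alias webdb, port 5000, and the address below are made-up examples, not values from the original question:

```shell
# Values Docker would inject for `--link my_service:webdb` with UDP port 5000
# exposed (the address is an illustrative placeholder):
WEBDB_PORT_5000_UDP_ADDR=172.17.0.2
WEBDB_PORT_5000_UDP_PORT=5000
WEBDB_PORT_5000_UDP_PROTO=udp

# An entrypoint script can then build the target address from them:
echo "sending datagrams to ${WEBDB_PORT_5000_UDP_ADDR}:${WEBDB_PORT_5000_UDP_PORT} (${WEBDB_PORT_5000_UDP_PROTO})"
```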

Allow communication on specific ports between two Docker containers on different bridge networks

My case has a looser restriction: I open up certain port numbers in all containers, and the containers communicate with each other using the host IP and the exposed port number.

In my case, on top of connecting to the custom network, I also connect the containers to the default bridge network. The default bridge network does not allow communication between the containers.

Then, in iptables, I create a new chain and route traffic on docker0 (the bridge network) through it:

-F FILTERS
-A DOCKER-USER -i docker0 -o docker0 -j FILTERS

And allow the whitelisted port numbers:

-A FILTERS -p tcp --dport 1234 -m state --state NEW -j ACCEPT -m comment --comment container1
-A FILTERS -p tcp --dport 5678 -m state --state NEW -j ACCEPT -m comment --comment container2
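Applied as live iptables commands rather than an iptables-restore fragment, this might look like the sketch below. The ESTABLISHED,RELATED accept and the final DROP are my additions: the fragment above leaves the default behavior implicit, but without a DROP nothing is actually blocked, and without the conntrack accept, reply packets of allowed connections would be dropped too.

```shell
# Create the chain and send bridge-internal traffic through it (run as root):
iptables -N FILTERS
iptables -A DOCKER-USER -i docker0 -o docker0 -j FILTERS

# Let replies of already-accepted connections through:
iptables -A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT

# Accept only the whitelisted ports...
iptables -A FILTERS -p tcp --dport 1234 -m state --state NEW -j ACCEPT -m comment --comment container1
iptables -A FILTERS -p tcp --dport 5678 -m state --state NEW -j ACCEPT -m comment --comment container2

# ...and drop any other new connections between containers on docker0:
iptables -A FILTERS -j DROP
```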

You can try tightening the restriction by:

  • not connecting to the default bridge network
  • finding the network interface of net1 and net2 via ip link show and ifconfig
  • changing the chains to:
-F CONTAINER1-CONTAINER2
-F CONTAINER2-CONTAINER1
-A DOCKER-USER -i br-xxxx -o br-yyyy -j CONTAINER1-CONTAINER2
-A DOCKER-USER -i br-yyyy -o br-xxxx -j CONTAINER2-CONTAINER1
  • modifying the port list to:
-A CONTAINER2-CONTAINER1 -p tcp --dport 1234 -m state --state NEW -j ACCEPT -m comment --comment container1
-A CONTAINER1-CONTAINER2 -p tcp --dport 5678 -m state --state NEW -j ACCEPT -m comment --comment container2

How to communicate between Docker containers via hostname

Edit: Since Docker 1.9, the docker network command (see https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.


My solution is to set up dnsmasq on the host and keep its DNS records automatically updated: the "A" records carry the container names and point to the containers' IP addresses, refreshed every 10 seconds. The updating script is pasted here:

#!/bin/bash

# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}

# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}

# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}

declare -A service_map

while true
do
    changed=false
    while read line
    do
        # the container name is the last column of `docker ps` output
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP address changed
        then
            service_map[$name]=$ip
            # write to file
            echo "$name has a new IP address $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)

    # an IP address changed, so restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi

    ${SLEEP} "$INTERVAL"
done

Make sure your dnsmasq service is available on docker0. Then, start your container with --dns HOST_ADDRESS to use this mini dns service.
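Concretely, if dnsmasq is listening on the docker0 gateway address (commonly 172.17.0.1, though yours may differ; the image and container names are placeholders), a client container would be started like this:

```shell
# Point the container's resolver at the dnsmasq instance on docker0:
docker run -d --name client --dns 172.17.0.1 my_image

# Names written by the update script should now resolve inside the container:
docker exec client getent hosts my_service
```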

Reference: http://docs.blowb.org/setup-host/dnsmasq.html

Communication between docker containers on different servers

You'd set up your containers in exactly the same way as if the database were running on the remote host but not in Docker: configure them with the other host's name as the database host, plus the appropriate database name, login credentials, etc. You need to make sure the database container is configured to allow external connections (the docker run -p 3306:3306 or -p 5432:5432 options as appropriate); that wouldn't necessarily be needed in a single-host Docker Compose setup.
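As a sketch — the host name, credentials, image names, and environment variable names below are placeholders, not a specific application's configuration:

```shell
# On the database host: publish the port so other machines can connect.
docker run -d --name db -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres

# On the application host: configure the client with the database host's
# name, exactly as you would for a database running outside Docker.
docker run -d --name app \
  -e DATABASE_HOST=db-host.example.com \
  -e DATABASE_PORT=5432 \
  my_app_image
```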

As a more general point, Docker on its own doesn't give you much specific help when a given client and server are on separate hosts, even if both are in Docker. You can configure things to use only physical host names, ignoring Docker space, and redeploy if you move workloads around; you can deploy a service registry like Consul that knows where services are running and can tell the clients; or you can set up an overlay network (which also means buying into Docker's Swarm container orchestrator). This is also related to how you deploy the workloads: your choice might differ depending on whether you're using Ansible, can tolerate installing a full-scale cluster manager, or insist on using only docker run or docker-compose commands.

Unable to communicate between docker containers on localhost

I had a similar issue with an angular app and an API both running in separate Docker containers.

Both apps were individually working fine with the following setup:

Angular running at http://localhost:4200

API running at http://localhost:8080

Problem

The Angular app couldn't reach the API.

The following code gave me network-related errors all the time.

this.http.get('http://localhost:8080').subscribe(console.log);

Solution

Links
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name. Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.

When a container needs to reach another container via the network, we need to create a link.

I ended up creating a link to the api service in the angular service definition in the docker-compose.yml

version: "2"
services:
  api:
    build:
      context: .
      dockerfile: ./api/Dockerfile
    volumes:
      - ./api:/usr/src/app
    ports:
      - "8080:8080"

  angular:
    build:
      context: .
      dockerfile: ./angular/Dockerfile
    volumes:
      - ./angular:/usr/src/app
    ports:
      - "4200:4200"
    links:
      - api

Then I fixed the network error by replacing localhost with api in the Angular app.

this.http.get('http://api:8080').subscribe(console.log);

You don't need a proxy. However, you might have to tweak your config to make it work.
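To sanity-check the link, you can exec into the angular container and hit the api service by name (assuming curl is available in that image):

```shell
# Should return the API's response if the link and hostname resolution work:
docker compose exec angular curl -s http://api:8080
```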


