Dynamic Listening Ports Inside Docker Container

The --net=host option for the docker run command should enable the behavior you are seeking. Note that it is considered insecure, but I really don't see any other means of doing this.

See the docker run man page:

   --net="bridge"
Set the Network mode for the container
'bridge': create a network stack on the default Docker bridge
'none': no networking
'container:<name|id>': reuse another container's network stack
'host': use the Docker host network stack. Note: the host mode gives the container full access to local system services such as D-bus
and is therefore considered insecure.
'<network-name>|<network-id>': connect to a user-defined network
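
A minimal sketch of such a run (nginx here is only a stand-in for your own image): with host networking, any port the process opens -- fixed or dynamically chosen -- is bound directly on the host, with no -p/--publish needed.

docker run --rm --net=host nginx
# nginx's port 80 is now bound on the host itself; a dynamically
# chosen port would be reachable the same way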

Docker containers on same host listening to the same 2 ports

Binding to multiple addresses

You cannot bind ports from multiple containers to the same host ports when listening on the same host address. The only way to make a configuration like that work is to bind the ports to different addresses on the host. For example, if I have multiple addresses associated with eth0 on my host:

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.1.175/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
       valid_lft 49625sec preferred_lft 49625sec
    inet 192.168.1.200/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 192.168.1.201/24 scope global secondary eth0
       valid_lft forever preferred_lft forever

Then I can bind each of my containers to a specific address, like this:

sensor1:
  build: ./sensor1
  image: sensor1:latest
  ports:
    - "192.168.1.175:8086:80"
    - "192.168.1.175:1883:80"
  [...]

sensor2:
  build: ./sensor2
  image: sensor2:latest
  ports:
    - "192.168.1.200:8086:80"
    - "192.168.1.200:1883:80"
  [...]

Then a connection to http://192.168.1.175:8086 will go to the sensor1 container, while a connection to http://192.168.1.200:8086 will go to the sensor2 container.
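
If your host doesn't already carry the secondary addresses, you can add them with iproute2 (a sketch reusing the addresses above; these assignments don't survive a reboot unless you persist them in your distribution's network configuration):

ip addr add 192.168.1.200/24 dev eth0
ip addr add 192.168.1.201/24 dev eth0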

Hostname and path based routing

If you want everything hosted at the same address, then you need another strategy for differentiating between the containers. Your options are effectively:

  • Hostname -- you configure multiple hostnames to point to the same IP address, and a load balancer like Traefik will use the hostname to direct incoming connections to the appropriate container.

  • Path -- each container is exposed at a different path (e.g., http://myhost/sensor1 goes to sensor1, http://myhost/sensor2 goes to sensor2, etc). The load balancer uses the path contained in incoming requests to route traffic.

Path example

I'll start with the path example, because that's often easiest. It
doesn't require setting up DNS entries or mucking about with
/etc/hosts on multiple machines.

The following docker-compose.yaml demonstrates a path-based routing configuration:

version: '3'

services:

  # This is the load balancer. To match the configuration you show in
  # your question, I have it listening on ports 8086 and 1883 in
  # addition to port 80.
  #
  # The default configuration of Traefik is to expose a management
  # interface on port 8080; if you don't want that, you can remove
  # the corresponding `ports` entry.
  reverse-proxy:
    image: traefik:v2.7
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      - "8086:80"
      - "1883:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  # In our container configuration, we use labels to configure
  # Traefik. Here, we're declaring that requests prefixed by `/sensor1`
  # will be routed to this container, and then we strip the `/sensor1`
  # prefix from the request (so that the service running inside the
  # container doesn't see the prefix).
  #
  # Note that we're not publishing any ports here: only the load
  # balancer has ports published on the host.
  sensor1:
    hostname: sensor1
    labels:
      - traefik.enable=true
      - traefik.http.routers.sensor1.rule=PathPrefix(`/sensor1`)
      - traefik.http.services.sensor1.loadbalancer.server.port=80
      - traefik.http.middlewares.strip-sensor1.stripprefix.prefixes=/sensor1
      - traefik.http.routers.sensor1.middlewares=strip-sensor1
    build: ./sensor1

  sensor2:
    hostname: sensor2
    labels:
      - traefik.enable=true
      - traefik.http.routers.sensor2.rule=PathPrefix(`/sensor2`)
      - traefik.http.services.sensor2.loadbalancer.server.port=80
      - traefik.http.middlewares.strip-sensor2.stripprefix.prefixes=/sensor2
      - traefik.http.routers.sensor2.middlewares=strip-sensor2
    build: ./sensor2

If each container is running a service that includes the hostname in
/hostname.txt, I will see the following behavior:

$ curl myhost/sensor1/hostname.txt
sensor1
$ curl myhost/sensor2/hostname.txt
sensor2

Hostname example

A host-based configuration looks pretty much identical, except the
rule uses a Host match instead of a PathPrefix match (and we no
longer need the prefix-stripping logic):

version: '3'

services:
  reverse-proxy:
    image: traefik:v2.7
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      - "8086:80"
      - "1883:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  sensor1:
    hostname: sensor1
    labels:
      - traefik.enable=true
      - traefik.http.routers.sensor1.rule=Host(`sensor1`)
      - traefik.http.services.sensor1.loadbalancer.server.port=8080
    build:
      context: web

  sensor2:
    hostname: sensor2
    labels:
      - traefik.enable=true
      - traefik.http.routers.sensor2.rule=Host(`sensor2`)
      - traefik.http.services.sensor2.loadbalancer.server.port=8080
    build:
      context: web

  sensor3:
    hostname: sensor3
    labels:
      - traefik.enable=true
      - traefik.http.routers.sensor3.rule=Host(`sensor3`)
      - traefik.http.services.sensor3.loadbalancer.server.port=8080
    build:
      context: web

For this to work, the hostnames need to resolve to the Docker host. You
can accomplish this by setting up appropriate DNS entries, or by adding
an appropriate entry to /etc/hosts on any machine that needs to contact
these services.
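
For instance, assuming the Docker host is reachable at 192.168.1.175 (substitute your own address), a single line in /etc/hosts on each client machine is enough:

192.168.1.175   sensor1 sensor2 sensor3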

We can demonstrate the configuration by setting an explicit Host
header in our requests:

$ curl -H 'Host: sensor1' myhost/hostname.txt
sensor1
$ curl -H 'Host: sensor2' myhost/hostname.txt
sensor2

Exposing a port on a live Docker container

You cannot do this via Docker, but you can access the container's unexposed port from the host machine.

If you have a container with something running on its port 8000, you can run

wget http://container_ip:8000

To get the container's IP address, run these two commands:

docker ps
docker inspect container_name | grep IPAddress
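
If you'd rather skip the grep, docker inspect also takes a Go template; this one-liner prints the address for each network the container is attached to:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name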

Internally, Docker shells out to call iptables when you run an image, so maybe some variation on this will work.

To expose the container's port 8000 on your localhost's port 8001:

iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000

One way you can work this out is to set up another container with the port mapping you want, and compare the output of the iptables-save command (though I had to remove some of the other options that force traffic to go via the docker proxy).
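
A sketch of that comparison (the probe name, image, and port mapping are placeholders):

docker run -d --name probe -p 8001:8000 alpine sleep 600
iptables-save -t nat > with-probe.rules
docker rm -f probe
iptables-save -t nat > without-probe.rules
diff without-probe.rules with-probe.rules
# the extra DNAT/MASQUERADE lines in the diff are the template to imitate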

NOTE: this is subverting docker, so should be done with the awareness that it may well create blue smoke.

OR

An alternative is to look at the (new? post 0.6.6?) -P option, which will use random host ports, and then wire those up.
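
For example (the output shown is illustrative; the host port is picked at random):

docker run -d -P --name web nginx
docker port web
# 80/tcp -> 0.0.0.0:49153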

OR

With 0.6.5, you could use the LINKs feature to bring up a new container that talks to the existing one, with some additional relaying to that container's -p flags? (I have not used LINKs yet.)

OR

With docker 0.11? you can use docker run --net host ... to attach your container directly to the host's network interfaces (i.e., the network is not namespaced) and thus all ports you open in the container are exposed.

Access a docker container with random ports from host ip

Docker does not have the functionality to map a port after container creation, or to modify an existing port mapping.

The usual solution is to configure your application to use a set port or range of ports. If that's not possible, then there are a few options to work around the issue.

Host network

Use docker run --network=host. The container shares the host's network stack, so it is available on the host's IP. Note that this gives the container access to the host's network, so it could interfere with host services or expose more of your host to the container than normal.
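
You can see the sharing directly: a container on the host network reports the host's own interfaces, for example:

docker run --rm --network=host alpine ip addr show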

Routable user-defined network

Create a user-defined network for your containers and assign it an IP range that your network can route to the Docker host. Services listening on ports in containers are then directly addressable.

docker network create \
  --subnet=10.1.3.0/24 \
  -o com.docker.network.bridge.enable_ip_masquerade=false \
  routable

A route for the new Docker network will need to be added to your network gateway(s) so they can route the traffic via your Docker host. On Linux this would be something like:

ip route add 10.1.3.0/24 via $DOCKER_HOST_IP

Then you should be able to transfer data as normal:

docker run --net=routable --rm -it alpine ping $DOCKER_HOST_GATEWAY_IP

Macvlan bridge

Docker has a macvlan network driver that allows you to map a host interface into a container, kind of like a bridged interface in a VM. The container can then have an interface on the same network as the host.

docker network create -d macvlan \
  --subnet=10.1.2.0/24 \
  -o macvlan_mode=bridge \
  -o parent=enp3s0 macvlan

docker run --net=macvlan --ip=10.1.2.128 --rm -it alpine ping 10.1.2.1

Note that you can't communicate with the Docker host's IP address over this bridge. You can add a macvlan sub-interface on the host and move the host's IP address onto it to allow traffic.
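
A sketch of that sub-interface setup (enp3s0 and the 10.1.2.0/24 range come from the example above; the 10.1.2.2 address is an assumption, and remember the host's existing address and routes need to move over as well):

ip link add macvlan0 link enp3s0 type macvlan mode bridge
ip addr add 10.1.2.2/24 dev macvlan0
ip link set macvlan0 up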

Some VMs and virtual networks get finicky about additional MAC addresses they don't know about generating traffic. AWS EC2, for example, will reject the container's traffic.

Container iptables

It's possible to create iptables NAT rules in a container's network namespace. To do this from inside the container, the container needs the NET_ADMIN capability. Some form of script could look up the application's ports once it has started and forward traffic with a DNAT rule from your static, externally mapped ports to the dynamic application ports.

docker run -p 5000:5000 --cap-add=NET_ADMIN debian:9
# port=$(ss -lntpH | awk '/"app-bin"/ { split($4,a,":"); print a[length(a)]; exit}')
# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 5000 -j DNAT --to-destination 127.0.0.1:$port

You could similarly add the iptables rules from the Docker host for the container's network namespace if you don't want to add the NET_ADMIN capability to the container. The host needs a little help to use container namespaces:

pid=$(docker inspect -f '{{.State.Pid}}' ${container_id})
mkdir -p /var/run/netns/
ln -sfT /proc/$pid/ns/net /var/run/netns/$container_id
ip netns exec "${container_id}" iptables -t nat -vnL
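
From the host side you can then add the same kind of DNAT rule inside the container's namespace, for example (reusing the $port lookup from above; note that DNAT to 127.0.0.1 may additionally need route_localnet=1 set in the namespace):

ip netns exec "${container_id}" iptables -t nat -A PREROUTING -p tcp \
  --dport 5000 -j DNAT --to-destination 127.0.0.1:$port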

How can kubernetes dynamically expose my docker port?

In your deployment.yaml you actually don't have to specify the containerPort; all ports are exposed. From the docs:

ports ContainerPort array

List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
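
If you then want a stable entry point, a Service can target the port by number even though the Deployment never declared it. A minimal sketch (the sensor-app name, label, and port 8086 are assumptions, not from your manifest):

apiVersion: v1
kind: Service
metadata:
  name: sensor-app
spec:
  selector:
    app: sensor-app      # must match your pod labels
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8086   # port the container actually listens on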

Dynamic Ports with Docker and Oracle CQN

It works outside of Docker because you're being more liberal with your host's ports ("a wide range") than you are with the container image.

If you're willing to let your host present the range of ports, there's little difference in permitting a container running on that host to accept the same range.

One way to effect this for the container is --net=host, which directly presents the host's networking to the container. You don't need to --publish ports, and the container can then use the port defined by Oracle's service.
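
A sketch of such a run (my-cqn-client is a placeholder for your own image):

docker run --rm --net=host my-cqn-client
# no -p/--publish needed: whatever port the Oracle service negotiates
# is bound directly on the host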

docker host and port info from the container

You should pass the complete externally-visible callback URL to the application.

ports:
  - "1234:9876"
environment:
  - CALLBACK_URL=http://physical-host.example.com:1234/path

You can imagine an interesting variety of scenarios where the host IP address isn't directly routable either. As a basic example, say you're running the container, on your laptop, at home. The laptop's IP address might be 192.168.1.2/24 but that's still a "private" address; you need your router's externally-visible IP address, and there's no easy way to discover that.

xx.xx.xx.xx /--------\ 192.168.1.1   192.168.1.2 /----------------\
------------| Router |---------------------------| Laptop         |
            \--------/                           |  Docker        |
                                                 |   172.17.1.2   |
Callback address must be                         |   Server       |
http://xx.xx.xx.xx/path                          \----------------/

In a cloud environment, you can imagine a similar setup using load balancers. Your container might run on some cloud-hosted instance. The container might listen on port 11111, and you remap that to port 22222 on the instance. But then in front of this you have a load balancer that listens on the ordinary HTTPS port 443, does TLS termination, and then forwards to the instance, and you have a DNS name connected to that load balancer; the callback address would be https://service.example.com/path, but without explicitly telling the container this, there's no way it can figure this out.


