Running as a Host User Within a Docker Container

You can share the host's passwd file:

docker run -ti -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local -p 8000:8000 python:3-slim ./manage.py runserver

Or, create the user inside the container with useradd, using a named volume for /etc in the same way you use one for /usr/local:

docker run -v etcvol:/etc python..... useradd -u `id -u` $USER

(Both id -u and $USER are resolved by the host shell before docker receives the command.)
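
For example, a sketch of that two-step flow; the etcvol volume name and the python:3-slim image are just the ones from the commands above, so adjust them to your setup:

# one-off: populate the /etc volume with an entry for your host user (runs as root in the container)
docker run --rm -v etcvol:/etc python:3-slim useradd -u `id -u` $USER

# afterwards: run the application container as that user, reusing the same /etc volume
docker run -ti -v etcvol:/etc -u `id -u`:`id -g` -v `pwd`:`pwd` -w `pwd` -p 8000:8000 python:3-slim ./manage.py runserver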

From inside of a Docker container, how do I connect to the localhost of the machine?

Edit:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.
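
For example, a quick way to verify the mapping works (a sketch; any small image with ping would do):

docker run --rm --add-host host.docker.internal:host-gateway alpine ping -c 1 host.docker.internal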

Otherwise, read below



TLDR

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

Note: This mode only works on Docker for Linux, per the documentation.
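
For instance, assuming something is already listening on port 8000 of the Docker host, a quick sanity check might look like this (the alpine image and wget are just for illustration):

docker run --rm --network="host" alpine wget -qO- http://127.0.0.1:8000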



Note on docker container networking modes

Docker offers different networking modes when running containers. Depending on the mode you choose, you connect to a MySQL database running on the docker host in different ways.

docker run --network="bridge" (default)

Docker creates a bridge named docker0 by default. Both the docker host and the docker containers have an IP address on that bridge.

On the Docker host, run sudo ip addr show docker0 and you will get output that looks like this:

[vagrant@docker:~] $ sudo ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::5484:7aff:fefe:9799/64 scope link
       valid_lft forever preferred_lft forever

So here my docker host has the IP address 172.17.42.1 on the docker0 network interface.
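
If you prefer not to parse ip output, the same gateway address can usually be read from Docker itself; this is a sketch assuming the default bridge network:

docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'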

Now start a new container and get a shell in it with docker run --rm -it ubuntu:trusty bash. Within the container, run ip addr show eth0 to discover how its main network interface is set up:

root@e77f6a1b3740:/# ip addr show eth0
863: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 66:32:13:f0:f1:e3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.192/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6432:13ff:fef0:f1e3/64 scope link
       valid_lft forever preferred_lft forever

Here my container has the IP address 172.17.1.192. Now look at the routing table:

root@e77f6a1b3740:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.42.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      *               255.255.0.0     U     0      0        0 eth0

So the IP Address of the docker host 172.17.42.1 is set as the default route and is accessible from your container.

root@e77f6a1b3740:/# ping 172.17.42.1
PING 172.17.42.1 (172.17.42.1) 56(84) bytes of data.
64 bytes from 172.17.42.1: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 172.17.42.1: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 172.17.42.1: icmp_seq=3 ttl=64 time=0.116 ms

docker run --network="host"

Alternatively, you can run a docker container with its network settings set to host. Such a container shares the network stack with the docker host, and from the container's point of view, localhost (or 127.0.0.1) refers to the docker host.

Be aware that any port opened in your docker container will also be open on the docker host, without requiring the -p or -P docker run option.
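
For example (a sketch; the python:3-slim image and port 8000 are just illustrative):

# inside a host-mode container, start a simple HTTP server...
docker run --rm --network="host" python:3-slim python -m http.server 8000
# ...and from another terminal on the docker host it is immediately reachable, with no -p mapping:
curl http://127.0.0.1:8000/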

IP config on my docker host:

[vagrant@docker:~] $ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe98:dcaa/64 scope link
       valid_lft forever preferred_lft forever

and from a docker container in host mode:

[vagrant@docker:~] $ docker run --rm -it --network=host ubuntu:trusty ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe98:dcaa/64 scope link
       valid_lft forever preferred_lft forever

As you can see, both the docker host and the docker container share the exact same network interface and, as such, have the same IP address.



Connecting to MySQL from containers

bridge mode

To access MySQL running on the docker host from containers in bridge mode, you need to make sure the MySQL service is listening for connections on the 172.17.42.1 IP address.

To do so, make sure you have either bind-address = 172.17.42.1 or bind-address = 0.0.0.0 in your MySQL config file (my.cnf).
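
A minimal sketch of that setting; the file path varies by distribution, and /etc/mysql/my.cnf is just a common default:

# /etc/mysql/my.cnf (or a drop-in file under /etc/mysql/conf.d/)
[mysqld]
bind-address = 172.17.42.1    # the docker0 address from above; use 0.0.0.0 to listen on all interfaces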

If you need to set an environment variable with the IP address of the gateway, you can run the following command in a container:

export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')

then in your application, use the DOCKER_HOST_IP environment variable to open the connection to MySQL.
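
Putting the two together, a sketch of how a container might discover the gateway and connect (the mysql client invocation mirrors the host-mode example further down; your application would do the equivalent):

# discover the docker host's bridge IP from inside the container...
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
# ...and point the MySQL client (or your application) at it
mysql -h "$DOCKER_HOST_IP" -uroot -p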

Note: if you use bind-address = 0.0.0.0, your MySQL server will listen for connections on all network interfaces. That means it could be reached from the Internet; make sure to set up firewall rules accordingly.

Note 2: if you use bind-address = 172.17.42.1, your MySQL server won't listen for connections made to 127.0.0.1. Processes running on the docker host that want to connect to MySQL will have to use the 172.17.42.1 IP address.

host mode

To access MySQL running on the docker host from containers in host mode, you can keep bind-address = 127.0.0.1 in your MySQL configuration and all you need to do is to connect to 127.0.0.1 from your containers:

[vagrant@docker:~] $ docker run --rm -it --network=host mysql mysql -h 127.0.0.1 -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 36
Server version: 5.5.41-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Note: do use mysql -h 127.0.0.1 and not mysql -h localhost; otherwise the MySQL client would try to connect using a unix socket.

Using current user when running container in docker-compose

This question is pretty interesting. Let me begin with a short explanation:

Understanding the problem

In fact, a user that exists inside a container is only valid inside that container. What you're trying to do is use a user that exists outside the container, i.e. on your docker host, inside a container. Unfortunately, that can't be done directly.

For instance, let me try to change to my user in order to get this container:

$ docker run -it --rm --user jon ubuntu whoami
docker: Error response from daemon: unable to find user jon: no matching entries in passwd file.

I tried to run a standard ubuntu container on my docker host; although the user exists on my local machine, the Docker image reports that it can't find the user.

$ id -a
uid=1000(jon) gid=1001(jon) groups=1001(jon),3(sys),90(network),98(power),108(vboxusers),962(docker),991(lp),998(wheel),1000(autologin)

The command above was executed on my computer, proving that the "jon" user exists.

Making my username available inside a container: a docker trick

I suppose that you didn't create a user inside your container. For demonstration I'm going to use the ubuntu docker image.

The trick is to mount both files responsible for your user and group definitions inside the container, enabling the container to see you inside of it.

$ docker run -it --rm --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro --user $(id -u) ubuntu whoami
jon

For a more complete example:

$ docker run -it --rm --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro --user $(id -u):$(id -g) ubuntu "id"
uid=1000(jon) gid=1001(jon) groups=1001(jon)
  • Notice that I used two volumes pointing to two files, /etc/passwd and /etc/group. Both are mounted read-only (appending ":ro") just for safety.

  • Also notice that I used id -u, which returns my user id (1000 in my case), forcing the container's user id to be the same as the one defined in my /etc/passwd file.

Caveat

If you try to set the username to jon rather than the UID, you're going to run into an issue:

$ docker run -it --rm --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro --user jon ubuntu whoami
docker: Error response from daemon: unable to find user jon: no matching entries in passwd file.

This happens because the docker engine tries to resolve the username before mounting the volumes, so the user has to exist inside the image before the container runs. If you provide a numeric user id instead, it doesn't need to exist within the container, which is why the trick works.

https://docs.docker.com/engine/reference/run/#user

I hope this helps. Be safe!

Allow Docker Container & Host User To Write on Bind Mounted Host Directory

Problem: if I set "ubuntu" as owner, the container can't write (PHP does the writing); if I set "nobody" as owner, VSCode SSH can't write. I am looking for a way to allow both to write without changing the directory owner over and over, or something similarly easy.

First, I'd recommend that the container image create a new username for the files inside the container, rather than reusing nobody, since that user may also be used for other OS tasks that shouldn't have any special access.

Next, as Triet suggests, an entrypoint that adjusts the container's user/group to match the volume is preferred. My own version of these scripts can be found in this base image that includes a fix-perms script that makes the user id and group id of the container user match the id's of a mounted volume. In particular, the following lines of that script where $opt_u is the container username, $opt_g is the container group name, and $1 is the volume mount location:

# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi

# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi

Then I start the container as root, and the container runs the fix-perms script from the entrypoint, followed by a command similar to:

exec gosu ${container_user} ${orig_command}

This replaces the entrypoint that's running as root with the application running as the specified user. I've got more examples of this in:

  • DockerCon presentation
  • Similar SO questions
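
Putting those pieces together, here is a minimal entrypoint sketch under the same assumptions; the "app" username, the /data mount point, and the fix-perms flags simply mirror the script above and are illustrative, not a definitive implementation:

#!/bin/sh
# entrypoint.sh: runs as root, aligns the container user's uid/gid with the
# bind-mounted volume, then drops privileges before starting the application.
set -e
fix-perms -r -u app -g app /data
exec gosu app "$@"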

What I tried: In Container, I added user "nobody" to group "ubuntu".
On host, directory (used as mount) was set "sudo chown -R
ubuntu:ubuntu directory", user "ubuntu" was already added to group
"ubuntu". VSCode did edit, container was unable to edit.

I'd avoid this and create a new user. The nobody user is designed to be as unprivileged as possible, so there could be unintended consequences of giving it more access.

Edit: the container already created without Dockerfile also ran and
maybe edited with important changes, so maybe I can't use Dockerfile
or entrypoint.sh way to solve problem. Can It be achieved through
running commands inside container or without creating container again?
This container can be stopped.

This is a pretty big code smell in containers. They should be designed to be ephemeral. If you can't easily replace them, you lose the ability to upgrade to a newer image and you accumulate state drift that you'll eventually need to clean up. Changes that should be preserved need to be in a volume. If there are other changes that would be lost when the container is deleted, they will be visible in docker diff, and I'd recommend fixing this now rather than increasing the size of the technical debt.
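
For example (the container name is just a placeholder):

docker diff my-container   # lists files added (A), changed (C), or deleted (D) relative to the image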

Edit: I am wondering, in Triet Doan's answer, an option is to modify
UID and GID of already created user in the container, will doing this
for the user and group "nobody" can cause any problems inside
container, I am wondering because probably many commands for settings
already executed inside container, files are already edited by php on
mounted directory & container is running for days

I would build a newer image that doesn't depend on this username. Within the container, if there's data you need to preserve, it should be in a volume.

Edit: I found that alpine has no usermod & groupmod.

I use the following in the entrypoint script to install it on the fly, but the shadow package should be included in the image you build rather than doing this on the fly for every new container:

if ! type usermod >/dev/null 2>&1 || \
   ! type groupmod >/dev/null 2>&1; then
  if type apk >/dev/null 2>&1; then
    echo "Warning: installing shadow, this should be included in your image"
    apk add --no-cache shadow
  else
    echo "Commands usermod and groupmod are required."
    exit 1
  fi
fi
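
Better, a minimal Dockerfile sketch that bakes the shadow package into an Alpine-based image up front (the base tag is just an example):

FROM alpine:3
RUN apk add --no-cache shadow
# ... the rest of your image: application files, entrypoint, USER, etc.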

Create a user on the docker host from inside a container

When you mount an individual file, you end up mounting the inode of that file with the bind mount. And when you write to the file, many tools create a new file, with a new inode, and replace the existing file with that. This avoids partial reads, and other file corruption risks if you were to modify the file in place.

What you are attempting to do is likely a very bad idea; it's the very definition of a container escape, allowing the container to set up credentials on the host. If you really need host access, I'd mount the folder in a different location, because containers have other files that are automatically mounted in /etc. For example, you could mount /etc:/host/etc and access the files in the container under /host/etc. Just realize that's an even larger security hole.

Note, if the entire goal is to avoid permission issues between the host and the container, there are much better ways to do this, but that would be an X-Y problem.

Should I run things inside a docker container as non root for safety?

Running the container as root brings a lot of risks. Although being root inside the container is not the same as root on the host machine (some more details here) and you're able to deny a lot of capabilities during container startup, it is still the recommended approach to avoid being root.

Usually it is a good idea to use the USER directive in your Dockerfile after you install some general packages/libraries. In other words - after the operations that require root privileges. Installing sudo in a production service image is a mistake, unless you have a really good reason for it. In most cases - you don't need it and it is more of a security issue. If you need permissions to access some particular files or directories in the image, then make sure that the user you specified in the Dockerfile can really access them (setting proper uid, gid and other options, depending on where you deploy your container). Usually you don't need to create the user beforehand, but if you need something custom, you can always do that.

Here's an example Dockerfile for a Java application that runs under user my-service:

FROM alpine:latest
RUN apk add openjdk8-jre
COPY ./some.jar /app/
ENV SERVICE_NAME="my-service"

RUN addgroup --gid 1001 -S $SERVICE_NAME && \
    adduser -G $SERVICE_NAME --shell /bin/false --disabled-password -H --uid 1001 $SERVICE_NAME && \
    mkdir -p /var/log/$SERVICE_NAME && \
    chown $SERVICE_NAME:$SERVICE_NAME /var/log/$SERVICE_NAME

EXPOSE 8080
USER $SERVICE_NAME
CMD ["java", "-jar", "/app/some.jar"]

As you can see, I create the user beforehand, set its gid, and disable its shell and password login, as it is going to be a 'service' user. The user also becomes the owner of /var/log/$SERVICE_NAME, assuming it will write some files there. Now we have a much smaller attack surface.


