How to Mount a Directory Inside a Docker Container on Linux Host


If your goal is to provide a ready-to-go LAMP stack, you should use the VOLUME declaration inside the Dockerfile.
VOLUME volume_path_in_container
The problem is that Docker will not expose the files that were already present in the image at the path you are creating the volume on. You can either go with what @Grif-fin said in his comment, or modify the entrypoint of the container so that it copies the files you want to expose into the volume at run time.

You have to insert your data using the COPY or ADD command in the Dockerfile so the base files will be present in the container.

Then create an entrypoint that copies the files from the COPY path to the volume path.

Then run the container using the -v flag, like -v local_path:volume_path_in_container. This way, you should have the container's files mounted on the local path. (At least, that is what I had.)
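For example, something along these lines (the image name and host path here are placeholders, not taken from the example repo linked below):

docker run -d --name lamp -v /home/user/lamp-data:/var/www/html my-ready-to-go-lamp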

Find an example here: https://github.com/titouanfreville/Docker/tree/master/ready_to_go_lamp.

It avoids having to rebuild every time, and you can provide it from a finished image.

To make it nicer, it would be good to add user support so that you own the mounted files (if you are not root).

Hope it was useful to you.

Mount directory in Container and share with Host

After countless hours of research, I decided to extend my image with the following Dockerfile:

FROM sphinxsearch

# declare the directories that will be exposed as volumes
VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var

# keep a pristine copy of etc and var so the entrypoint can restore them into an empty volume
RUN mkdir -p /sphinx && cd /sphinx && cp -avr /usr/local/sphinx/etc . && cp -avr /usr/local/sphinx/var .

ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]

Extending it benefited me in that I didn't have to build the entire image from scratch as I was testing; I only rebuilt the parts that were relevant.

I created an ENTRYPOINT to execute a bash script that copies the files back to the required destination for Sphinx to run properly. Here is that code:

#!/bin/sh
set -e

target=/usr/local/sphinx/etc

# check if the directory exists
if [ -d "$target" ]; then
    # check if we have files
    if find "$target" -mindepth 1 -print -quit | grep -q .; then
        # the volume already has files, don't do anything
        # we may use this condition for something else later
        echo "not empty, don't do anything..."
    else
        # the volume is empty, so copy the
        # files from etc and var to the right locations
        cp -avr /sphinx/etc/* /usr/local/sphinx/etc && cp -avr /sphinx/var/* /usr/local/sphinx/var
    fi
else
    # directory doesn't exist, we will have to do something here
    echo "need to create the directory..."
fi

exec "$@"

Having access to the /etc & /var directories on the host allows me to adjust the files while keeping them preserved on the host between restarts and so forth... I also have the data saved on the host, which should survive restarts.
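For reference, the container is wired up with something along these lines (the image tag and host paths are just examples, not my exact setup):

docker build -t my-sphinxsearch .
docker run -d --name sphinx \
    -v /srv/sphinx/etc:/usr/local/sphinx/etc \
    -v /srv/sphinx/var:/usr/local/sphinx/var \
    my-sphinxsearch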

I know data containers vs. storing on the host is a debated topic; at the moment I am leaning towards storing on the host, but I will try the other method later. If anyone has any tips, advice, etc. to improve what I have, or a better way, please share.

Thank you @h3nrik for suggestions and for offering help!

Dockerfile: How to mount a host directory in a container path?

You cannot mount a volume in a Dockerfile.

Because:

A Dockerfile builds an image, and an image is independent of any particular host machine.

An image should run anywhere on the same platform; for example, on Linux it can run on Fedora, CentOS, Ubuntu, Red Hat, etc.

So you can only mount a volume into the container, because a container runs on a specific host machine.
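For example, the Dockerfile can only declare the mount point; the host path is chosen when the container is started (paths and image name below are placeholders):

# In the Dockerfile: VOLUME /data   (no host path allowed here)
# At run time, the host directory is supplied on the command line:
docker run -d -v /path/on/host:/data my-image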

Hope you understand it. Sorry for my bad English.

How to mount a directory in a Docker container to the host?

First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, the mount is defined from the host into the container, not the other way around: when you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the container's directory, but because of the nature of containers, it is effectively hidden for the lifetime of the container.
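A quick way to see that behaviour, as a throwaway sketch (assuming a stock ubuntu image and an empty host directory):

mkdir /tmp/empty
# the empty host directory hides whatever the image had under /var for the life of the container
docker run --rm -v /tmp/empty:/var ubuntu ls -A /var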

So, you're left with two options (maybe three). You could mount a host directory into your container and then copy the files you need into it from your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files into that host-directory volume mount).
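A minimal sketch of that first option (the paths are placeholders; the copy source is wherever your app writes inside the container):

#!/bin/sh
# /shared is a host directory mounted with: docker run -v /path/on/host:/shared ...
cp -a /path/inside/container/. /shared/
exec "$@"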

You could also use docker cp to move files from your container to your host. That is kind of hacky and definitely not something you should use in your infrastructure automation, but it works very well for exactly that purpose: one-off copies or debugging.
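For example, reusing the container named foo from above (the file path is a placeholder):

docker cp foo:/path/on/container/app.log ./app.log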

You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.

Allow Docker Container & Host User To Write on Bind Mounted Host Directory

Problem: if I set "ubuntu" as the owner, the container can't write (PHP is doing the writing); if I set "nobody" as the owner, VSCode (over SSH) can't write. I'm looking for a way to allow both to write without changing the directory owner again and again, or something similarly easy.

First, I'd recommend that the container image create a new user for the files inside the container, rather than reusing nobody, since that user may also be used for other OS tasks that shouldn't have any special access.

Next, as Triet suggests, an entrypoint that adjusts the container's user/group to match the volume is preferred. My own version of these scripts can be found in this base image, which includes a fix-perms script that makes the user id and group id of the container user match the ids of a mounted volume. In particular, see the following lines of that script, where $opt_u is the container username, $opt_g is the container group name, and $1 is the volume mount location:

# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi

# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi

Then I start the container as root, and the container runs the fix-perms script from the entrypoint, followed by a command similar to:

exec gosu ${container_user} ${orig_command}

This replaces the entrypoint that's running as root with the application running as the specified user. I've got more examples of this in:

  • DockerCon presentation
  • Similar SO questions
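Putting the pieces together, the entrypoint ends up looking roughly like this (user, group, and path names are illustrative, not the exact script from that base image):

#!/bin/sh
set -e
# started as root: align the container user's uid/gid with the mounted volume
fix-perms -r -u app_user -g app_group /path/to/volume
# drop privileges and hand off to the original command
exec gosu app_user "$@"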

What I tried: In the container, I added user "nobody" to group "ubuntu". On the host, the directory (used as the mount) was set with "sudo chown -R ubuntu:ubuntu directory"; user "ubuntu" was already added to group "ubuntu". VSCode could edit, but the container was unable to edit.

I'd avoid this and create a new user. The nobody user is designed to be as unprivileged as possible, so there could be unintended consequences from giving it more access.
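For example, when building the image, a dedicated account can be created with something like the following (the name and ids are arbitrary):

groupadd -g 1000 app
useradd -u 1000 -g app -m app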

Edit: the container was already created without a Dockerfile, has been running, and may have been edited with important changes, so maybe I can't use the Dockerfile or entrypoint.sh approach to solve the problem. Can it be achieved by running commands inside the container, or without creating the container again? This container can be stopped.

This is a pretty big code smell in containers. They should be designed to be ephemeral. If you can't easily replace them, you're missing the ability to upgrade to a newer image, and you're creating a lot of state drift that you'll eventually need to clean up. Changes that should be preserved need to be in a volume. If there are other changes that would be lost when the container is deleted, they will be visible in docker diff, and I'd recommend fixing this now rather than increasing the size of the technical debt.
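For example (the container name is a placeholder):

docker diff my_container    # lists files added (A), changed (C), or deleted (D) relative to the image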

Edit: I am wondering, in Triet Doan's answer, one option is to modify the UID and GID of an already-created user in the container. Could doing this for the user and group "nobody" cause any problems inside the container? I ask because many setup commands have probably already been executed inside the container, files have already been edited by PHP on the mounted directory, and the container has been running for days.

I would build a newer image that doesn't depend on this username. Within the container, if there's data you need to preserve, it should be in a volume.

Edit: I found that alpine has no usermod & groupmod.

I use the following in the entrypoint script to install it on the fly, but the shadow package should be included in the image you build rather than doing this on the fly for every new container:

if ! type usermod >/dev/null 2>&1 || \
   ! type groupmod >/dev/null 2>&1; then
  if type apk >/dev/null 2>&1; then
    echo "Warning: installing shadow, this should be included in your image"
    apk add --no-cache shadow
  else
    echo "Commands usermod and groupmod are required."
    exit 1
  fi
fi
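
To bake it into the image instead, the build just needs the equivalent of:

apk add --no-cache shadow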

