Running Docker on Ubuntu: Mounted Host Volume Is Not Writable from Container

If your UID on the host (check with id -u) isn't the same as the UID of the user inside the Docker container (often "docker"), you can run into this problem. You can try:

  1. Making the UIDs the same between your host user and the user in the Docker container (see the sketch after this list).
  2. Setting the group permissions on the directory so it is writable by a group that both you and the container user belong to.
  3. Using the nuclear option:

chmod -R a+rwx project-dir/

The nuclear option will make your git workspace filthy, which will annoy you greatly, so it isn't the best long-term solution. It stops the bleeding, though.
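
For option 1, a minimal sketch (assuming your image can run as an arbitrary user, and that project-dir is the bind-mounted directory) is to start the container with your host UID and GID so that files created inside it are owned by you on the host:

docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/project-dir:/app" \
  ubuntu:22.04 \
  touch /app/created-by-container.txt

For option 2, the equivalent is a shared group plus group-write permissions, e.g. chgrp -R somegroup project-dir && chmod -R g+w project-dir, where somegroup is a group containing both your host user and the container's user.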

To understand the problem further, you might find these useful:

  1. https://github.com/docker/docker/issues/7906
  2. https://github.com/docker/docker/issues/7198

Allow Docker Container & Host User To Write on Bind Mounted Host Directory


Problem: if I set "ubuntu" as the owner, the container can't write (PHP does the writing); if I set "nobody" as the owner, VSCode over SSH can't write. I'm looking for a way to allow both to write without changing the directory owner back and forth, or something similarly easy.

First, I'd recommend that the container image create a new user for the files inside the container, rather than reusing nobody, since that user may also be used for other OS tasks that shouldn't have any special access.

Next, as Triet suggests, an entrypoint that adjusts the container's user/group to match the volume is preferred. My own version of these scripts can be found in this base image, which includes a fix-perms script that makes the user ID and group ID of the container user match the IDs of a mounted volume. In particular, see the following lines of that script, where $opt_u is the container username, $opt_g is the container group name, and $1 is the volume mount location:

# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi

# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi

Then I start the container as root, and the container runs the fix-perms script from the entrypoint, followed by a command similar to:

exec gosu ${container_user} ${orig_command}

This replaces the entrypoint that's running as root with the application running as the specified user. I've got more examples of this in:

  • DockerCon presentation
  • Similar SO questions
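
As a rough sketch of how that wiring looks (this is not the actual script from the base image above; "appuser", /data, and the fix-perms flags are assumptions inferred from the excerpt's variable names), the image keeps USER root and runs an entrypoint along these lines:

#!/bin/sh
# entrypoint.sh: runs as root, aligns the container user's uid/gid with the
# bind-mounted directory, then drops privileges to run the real command
fix-perms -r -u appuser -g appuser /data
exec gosu appuser "$@"

With that in place, docker run -v "$PWD:/data" ... ends up running the application as a user whose IDs match the volume owner, so both the host user and the container can write.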

What I tried: in the container, I added user "nobody" to group "ubuntu". On the host, the directory (used as the mount) was set with "sudo chown -R ubuntu:ubuntu directory", and user "ubuntu" was already added to group "ubuntu". VSCode could edit, but the container could not.

I'd avoid this and create a new user instead. The nobody user is designed to be as unprivileged as possible, so giving it more access could have unintended consequences.

Edit: the container was already created without a Dockerfile, and it has been running and may have been edited with important changes, so maybe I can't use the Dockerfile or entrypoint.sh approach to solve the problem. Can it be achieved by running commands inside the container, or without creating the container again? The container can be stopped.

This is a pretty big code smell in containers. They should be designed to be ephemeral. If you can't easily replace them, you lose the ability to upgrade to a newer image and create a lot of state drift that you'll eventually need to clean up. Changes that should be preserved need to be in a volume. Any other changes that would be lost when the container is deleted will be visible in docker diff, and I'd recommend fixing this now rather than letting the technical debt grow.
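
For reference, docker diff lists the filesystem changes a container has made relative to its image, prefixed A (added), C (changed), or D (deleted); the container name and paths below are only illustrative:

docker diff my-app
C /etc
A /etc/my-app.conf
C /var/log
A /var/log/my-app.log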

Edit: I am wondering about Triet Doan's answer, where an option is to modify the UID and GID of an already-created user in the container. Could doing this for the user and group "nobody" cause any problems inside the container? I ask because many setup commands have probably already been executed inside the container, files on the mounted directory have already been edited by PHP, and the container has been running for days.

I would build a newer image that doesn't depend on this username. Within the container, if there's data you need to preserve, it should be in a volume.
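
A minimal sketch of such an image, assuming an Alpine base and an illustrative user name "app" (pick the UID/GID to match the owner of your bind-mounted directory):

FROM alpine:3.19
# dedicated user instead of reusing nobody; 1000:1000 assumed to match the host user
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D -H app
USER app
WORKDIR /srv/app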

Edit: I found that Alpine has no usermod or groupmod.

I use the following in the entrypoint script to install them on the fly, but the shadow package should really be baked into the image you build rather than installed for every new container:

if ! type usermod >/dev/null 2>&1 || \
   ! type groupmod >/dev/null 2>&1; then
  if type apk >/dev/null 2>&1; then
    echo "Warning: installing shadow, this should be included in your image"
    apk add --no-cache shadow
  else
    echo "Commands usermod and groupmod are required."
    exit 1
  fi
fi
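
The image-side fix is a single line in the Dockerfile (sketch for an Alpine-based image):

# include usermod/groupmod up front instead of installing them at runtime
RUN apk add --no-cache shadow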

Mounted host volume is not writable from host in Azure Pipelines

You could try this to make a folder on a pipeline:

- task: CmdLine@2
  inputs:
    script: 'mkdir .foo'
    workingDirectory: $(System.DefaultWorkingDirectory)

Docker (compose) mounted volume not writable

After two days, the solution is:

RUN apk add shadow && usermod -u 1000 www-data && groupmod -g 1000 www-data

in the php Dockerfile.
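
For context, a sketch of how that line might sit in the PHP service's Dockerfile; the base image tag is an assumption, and 1000 should match the owner of the bind-mounted directory on the host:

FROM php:8.2-fpm-alpine
# remap www-data to the host user's IDs so writes to the bind mount succeed
RUN apk add --no-cache shadow \
 && usermod -u 1000 www-data \
 && groupmod -g 1000 www-data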

How does volume mount from container to host and vice versa work?

Volumes are used for persistent storage, and they persist independently of the container's lifecycle.

We can go through a demo to understand it clearly.

First, let's create a container using the named volumes approach as:

docker run -ti --rm -v DataVolume3:/var ubuntu

This will create a Docker volume named DataVolume3, which can be seen in the output of docker volume ls:

docker volume ls
DRIVER    VOLUME NAME
local     DataVolume3

Docker stores the information about these named volumes in the directory /var/lib/docker/volumes/ (*):

ls /var/lib/docker/volumes/
1617af4bce3a647a0b93ed980d64d97746878564b141f30b6110d0818bf32b76 DataVolume3

Next, let's write some data from the ubuntu container at the mounted path /var:

echo "hello" > var/file1
root@2b67a89a0050:/# cat /var/file1
hello

We can see this data with cat even after deleting the container:

cat /var/lib/docker/volumes/DataVolume3/_data/file1
hello

Note: although we are able to access the volume's data as shown above, it is not a recommended practice to access it directly like this.

Now, the next time another container uses the same volume, the data from the volume gets mounted at the container directory specified with the -v flag.
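
For instance, a second container (alpine here is just an arbitrary image) sees the same data:

docker run --rm -v DataVolume3:/data alpine cat /data/file1
hello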

(*) The location may vary based on OS, as pointed out by David, and can be seen with the docker volume inspect command.
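
For example (output abbreviated to the relevant fields):

docker volume inspect DataVolume3
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/DataVolume3/_data",
        "Name": "DataVolume3"
    }
]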

Can I have a writable Docker volume mounted under a read-only volume?

Answering my own question: The mount point must exist in the Read-Only volume, even if it won't be used. Docker was trying to create the uploads directory in the RO volume before mounting it.

When I created an empty directory at /dev/workspace/wp-content/uploads, the error disappeared and everything worked as expected.
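
A sketch of the resulting mount layout (the image name and container paths are illustrative); the key point is that the uploads directory exists inside the read-only source before the writable volume is mounted on top of it:

# read-only bind mount of the workspace, with a writable named volume
# mounted beneath it
docker run \
  -v /dev/workspace:/var/www/html:ro \
  -v uploads:/var/www/html/wp-content/uploads \
  wordpress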


