Docker in Docker Cannot Mount Volume

A Docker container started from inside another Docker container uses the parent HOST's Docker daemon, and hence any volumes mounted in the "docker-in-docker" case are still resolved from the HOST, not from the container.

Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of this, the daemon creates a new, empty directory at that path and mounts it into the "docker-in-docker" container. The same applies whenever a directory is mounted into a new Docker container from inside a container.

Very basic and obvious thing which I missed, but I realized it as soon as I typed the question.
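To make the distinction concrete, here is a sketch (all paths are hypothetical) assuming the Jenkins container was started with its home bind-mounted from the host:

```shell
# Jenkins itself was started with:
#   docker run -v /srv/jenkins:/var/jenkins_home ... jenkins/jenkins

CONTAINER_PATH=/var/jenkins_home/workspace/job
# Translate the container path into the host path the daemon can resolve:
HOST_PATH="/srv/jenkins${CONTAINER_PATH#/var/jenkins_home}"
echo "$HOST_PATH"   # /srv/jenkins/workspace/job

# WRONG: the host daemon cannot see $CONTAINER_PATH, so it silently
# creates an empty directory at that path on the host and mounts that:
#   docker run --rm -v "$CONTAINER_PATH":/data busybox ls /data
# RIGHT: mount the translated host path instead:
#   docker run --rm -v "$HOST_PATH":/data busybox ls /data
```

The translation only works because the Jenkins workspace actually lives on the host via its own bind mount; if Jenkins's data is inside the container filesystem, there is no host path to mount.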

Cannot mount volume when running Docker inside Docker

Solved the issue! The problem was the order of the command arguments: all options, including --mount, must be written before the Docker image name, because anything after the image name is passed to the container as its command.
I ended up with this command:

    docker run -d -h my-app --name my-app --privileged --rm --mount type=bind,source=/usr/share/rec/myApp,target=/usr/share/myApp <ECR repo end point>/image:latest

Volume bind mounting from docker in docker?

When you mount the docker socket, it's not really docker in docker, but just a client making requests to the daemon on the host over the API, and that daemon doesn't know where the requests come from. So you can simplify this question to "can you mount files from one container into another container". Unfortunately there's no easy answer to that without using volumes that are external to both containers. This is because the container filesystems depend on the graph driver being used to assemble the various image and container layers, so even a solution that might work for overlay2 would break on other drivers, and it would depend on internals of docker that could change without warning.

Once you get into an external volume, there are several possible solutions I can think of.

Option A: a common host directory. I use this fairly often with what I consider transparent containers on my laptop, hiding the fact that I'm running commands inside of a container. I mount a common directory with the full path in my container, e.g. -v $HOME:$HOME. This same technique could be used from inside of container "A" and "B" if you mounted the same host directories in each. If you use a volume mount like the above for container "A", this would work with a compose file since the path is the same inside the container as it is on the host.
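As an example of this pattern in a compose file (service names and the directory are hypothetical), both services mount the same host path at the same location, so any file path produced in one container is valid in the other and on the host:

```yaml
# docker-compose.yml sketch: both services share /home/me at the same path.
services:
  a:
    image: busybox
    command: sleep 300
    volumes:
      - /home/me:/home/me
  b:
    image: busybox
    command: sleep 300
    volumes:
      - /home/me:/home/me
```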

Option B: volumes_from. I hesitate to even mention this as an option because it is being phased out as users adopt swarm mode, but there is an option to mount all volumes from container "A" into container "B". This still requires that you define a volume in container "A", but you no longer care about the source of that volume; it could be a host, named, or anonymous volume.
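A sketch of the option (container names and paths are hypothetical); note that volumes_from applies only to the classic docker run syntax, not swarm services:

```shell
# "A" defines an anonymous volume at /shared; where it comes from doesn't matter.
docker run -d --name A -v /shared busybox sleep 300
docker exec A sh -c 'echo hello > /shared/data.txt'

# "B" mounts every volume that "A" defines, at the same paths.
docker run --rm --volumes-from A busybox cat /shared/data.txt
```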

Option C: shared named volume. Named volumes let docker manage the storage of the data, by default under /var/lib/docker/volumes on the host. You can run both containers with the same named volume, which lets you pass data between them. You do need to know the volume's name in container "A" to run your command for container "B" with the same name. A named volume is also initialized from the image's contents the first time it is used, which can be beneficial, especially for file ownership and permissions. Just be aware that on subsequent uses of the same named volume it will not reinitialize over existing data; the previous data persists. With a compose file, you would need to define the named volume as external.
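In a compose file, that external definition would look like this (service and volume names are hypothetical):

```yaml
# Sketch: "external: true" tells compose the volume already exists and
# should not be created or namespaced by this project, so any other
# container or project can mount the same data by name.
services:
  b:
    image: busybox
    command: sleep 300
    volumes:
      - shared-data:/data
volumes:
  shared-data:
    external: true
```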

Option D: manually created named volume. If you are only trying to inject some files into container "B" from container "A", there are a variety of ways to inject that over the docker API. I've seen files saved into environment variables on "A" and then the environment variable written back out to a file in the entrypoint for "B". For larger files, or to avoid changing the entrypoint of "B", you can create a named volume and populate it by passing the data over docker's stdin/stdout pipes to a running container, and packing/unpacking that data with tar to send over the I/O pipes. This will work from inside of container "A" since one half of the tar command runs inside of that container's filesystem. Then container "B" would mount that named volume. To import data from container "A" to a named volume, that looks like:

    tar -cC source_dir . | \
      docker run --rm -i -v target_vol:/target busybox tar -xC /target

And to get data back out of a named volume, the process is reversed:

    docker run --rm -v source_vol:/source busybox tar -cC /source . | \
      tar -xC target_dir

Similar to option C, you would need to define this named volume as external in your compose file.

Docker Volume not mounting any files

Docker & VirtualBox seem to have an issue with mounting a volume from outside the /Users directory. The only way to fix the issue is to delete the docker-machine image, properly set the /Users/yourname directory as the shared folder in VirtualBox, and create a new docker-machine image.

Steps to fix the issue:

  1. docker-machine stop dev
  2. docker-machine rm dev
  3. docker-machine create --driver virtualbox dev
  4. eval "$(docker-machine env dev)"
  5. docker build -t davesrepo/dynamo -f ./Dockerfile .
  6. docker run -v $(pwd):/var/dynamo -d -t -p 8001:8001 --env-file ./dynamo.env --name dynamo davesrepo/dynamo
  7. docker exec -it dynamo /bin/bash
  8. ls

    root@42f9e47fa2de:/var/dynamo# ls
    Dockerfile  README.md  __init__.py  __pycache__  bin  config.ini  requirements.txt  seed.sql  tests

Files!


