Strange File Permission in Docker Container (Question Marks on Permission Bit and User Bit)


This problem is related to a storage-driver bug, see https://github.com/moby/moby/issues/28391 and https://github.com/moby/moby/issues/20240. Currently I can only work around it by changing the storage driver to overlay; using the default aufs or the recommended overlay2 breaks.
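If you hit the same bug, the storage driver can be switched in the daemon configuration. The snippet below is only a sketch, assuming the default config path /etc/docker/daemon.json on Linux:

{
  "storage-driver": "overlay"
}

Restart the daemon afterwards (for example sudo systemctl restart docker) for the new driver to take effect; note that switching drivers hides images and containers created under the old driver until you switch back.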

Incorrect permissions for file with docker compose volume? 13: Permission denied

I think it was an SELinux thing; appending :z to the volume fixed it:

volumes:
  - ../nginx/nginx.conf:/etc/nginx/nginx.conf:z

SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host

From this issue by thaJeztah on GitHub:

That warning was added because the Windows filesystem does not have an option to mark a file as 'executable'. Building a Linux image from a Windows machine would therefore break the image if a file has to be marked executable.
For that reason, files are marked executable by default when building from a Windows client; the warning is there so that you are notified of that and can (if needed) modify the Dockerfile to change/remove the executable bit afterwards.
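As a rough illustration of that last point (the paths and file names here are made up, not from the question), you can make the bit explicit in the Dockerfile after the file is copied in:

COPY scripts/entrypoint.sh /usr/local/bin/entrypoint.sh
# building from Windows marks everything executable, so set the mode explicitly
RUN chmod 0755 /usr/local/bin/entrypoint.sh
# or strip the bit if the file should not be executable:
# RUN chmod a-x /usr/local/bin/entrypoint.sh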

Docker: file permissions with --volume bind mount

After more searching I found the answer to my problem here: Permission denied on accessing host directory in Docker and here: http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/.

In short, the problem was the SELinux default labels on the volume mount blocking access to the mounted files. The solution was to add a :Z suffix to the -v command-line argument, which forces Docker to set the appropriate labels on the mounted files to allow access.

The command line therefore became:

sudo docker run -it -e LOCAL_USER_ID=`id -u` -v `realpath ../..`:/ws:Z django-runtime /bin/bash

Worked like a charm.

Permission denied: 'dbt_modules'

dbt creates a dbt_modules directory (renamed to dbt_packages in version 1.0) inside your dbt project directory when you run dbt deps (which installs dbt packages in your project).

It looks like you're mounting your dbt project directory as a volume. Most likely the user that runs dbt deps (as an airflow task) is not authorized to write to that volume.

You may be able to configure the modules-path (packages-install-path after 1.0) in your dbt_project.yml file to write to a local directory instead of the protected volume (see the dbt docs).
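A sketch of what that could look like in dbt_project.yml (the target path is just an example; any directory the Airflow task user can write to works):

# dbt >= 1.0
packages-install-path: /tmp/dbt_packages

# dbt < 1.0
# modules-path: /tmp/dbt_modules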

Docker configs/secrets are always user/group writable

You need a leading 0, otherwise the YAML parser will read the number as decimal instead of octal:

mode: 0400

Without the leading zero, the value is read as decimal 400, which is octal 620 (rw--w----), hence the group-writable files.

See the yaml spec: https://yaml.org/type/int.html

Also the examples in the compose documentation: https://docs.docker.com/compose/compose-file/#long-syntax-2
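For instance, a minimal compose file using the long syntax (the service and secret names are placeholders, not from the question):

services:
  app:
    image: myapp:latest
    secrets:
      - source: db_password
        target: db_password
        mode: 0400   # leading 0 keeps this octal; plain 400 would be decimal, i.e. octal 620

secrets:
  db_password:
    file: ./db_password.txt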

Permission issues with Apache inside Docker

I just ran into this after posting a similar question at Running app inside Docker as non-root user.

My guess is you can't chmod/chown files that were added via the ADD command. – thom_nic Jun 19 at 14:14

Actually you can. You just need to issue a RUN command after the ADD, targeting the file's location inside your container. For example:

ADD extras/dockerstart.sh /usr/local/servicemix/bin/
RUN chmod 755 /usr/local/servicemix/bin/dockerstart.sh

Hope that helps. It worked for me.

denied: requested access to the resource is denied: docker

You may need to switch your docker repo to private before docker push.

Thanks to the answer provided by Dean Wu and this comment by ses: before pushing, remember to log out, then log back in to your Docker Hub account from the command line.

# you may need to log out first with `docker logout`, ref. https://stackoverflow.com/a/53835882/248616
docker login

According to the docs:

You need to include the namespace for Docker Hub to associate it with your account.
The namespace is the same as your Docker Hub account name.
You need to rename the image to YOUR_DOCKERHUB_NAME/docker-whale.

So, this means you have to tag your image before pushing:

docker tag firstimage YOUR_DOCKERHUB_NAME/firstimage

and then you should be able to push it.

docker push YOUR_DOCKERHUB_NAME/firstimage

Jenkins Docker image, to use bind mounts or not?

As commented, the syntax used is for a volume:

docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...

That defines a Docker volume named jenkins_home, which will be created in:

/var/lib/docker/volumes/jenkins_home.
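You can check the exact location with docker volume inspect:

$ docker volume inspect jenkins_home
# the "Mountpoint" field in the output points at the directory under /var/lib/docker/volumes/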

The idea being that you can easily backup said volume:

$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."

And reload it into another Docker instance.
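A restore on the other instance would look roughly like this (the volume and image names mirror the backup example above and are only illustrative):

$ docker volume create jenkins_home
$ docker run --rm -v jenkins_home:/var/jenkins_home -v ~/backup:/backup ubuntu \
    bash -c "cd /var/jenkins_home && tar xvf /backup/jenkins_home.tar"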

This differs from a bind mount, which does involve building a new Docker image in order to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image, 1000:1000):

FROM jenkins/jenkins:lts-jdk11

USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log

RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins

RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins

USER jenkins

Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.

The advantage of that approach would be to see the content of Jenkins home without having to use Docker.

You would run it with:

docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins

