Container Running in Privileged Mode

Privileged containers and capabilities

Running in privileged mode does indeed give the container all capabilities. However, it is good practice to grant a container only the minimum set of capabilities it needs.

The Docker run command documentation refers to this flag:

Full container capabilities (--privileged)

The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.

You can grant specific capabilities using the --cap-add flag. See man 7 capabilities for more information on each capability. The literal names can be used, e.g. --cap-add CAP_FOWNER.

Docker containers in Azure pipelines in privileged mode possible?

If the Docker image is available on Docker Hub or on the local machine, you can run the container in privileged mode.

You could run the following command: docker run --privileged [image_name]

steps:
- task: Docker@2
  inputs:
    containerRegistry: 'DockerServiceConnectionName'
    command: 'login'
- script: docker run --privileged [image_name]

You could refer to this ticket about Azure Container Instances:

Azure Container Instances does not expose direct access to the underlying infrastructure that hosts container groups. This includes running privileged containers and thus it is not supported currently.

Container running in privileged mode

I solved the problem by opening a new shell inside the running container with this command: sudo docker exec -i -t a6f7c25afbbf /bin/bash

How to know if a docker container is running in privileged mode

From the docker host

Use the docker inspect command:

docker inspect --format='{{.HostConfig.Privileged}}' <container id>

And within a bash script you could have a test:

if [[ $(docker inspect --format='{{.HostConfig.Privileged}}' <container id>) == "false" ]]; then
    echo "not privileged"
else
    echo "privileged"
fi
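As a variant, you could wrap the check in a small helper function. This is a sketch; the container name "web" is hypothetical, and docker must be installed on the host for the check to return anything:

```shell
# Sketch: reusable wrapper around docker inspect.
# Prints "true" or "false", or nothing if the container does not exist
# (or docker is not installed); errors are suppressed.
is_privileged() {
  docker inspect --format='{{.HostConfig.Privileged}}' "$1" 2>/dev/null
}

# "web" is a hypothetical container name.
if [ "$(is_privileged web)" = "true" ]; then
  echo "privileged"
else
  echo "not privileged (or no such container)"
fi
```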

From inside the container itself

You have to try running a command that requires the --privileged flag and see whether it fails.

For instance, ip link add dummy0 type dummy is a command that requires the --privileged flag to succeed:

$ docker run --rm -it ubuntu ip link add dummy0 type dummy
RTNETLINK answers: Operation not permitted

while

$ docker run --rm -it --privileged ubuntu ip link add dummy0 type dummy

runs fine.

In a bash script you could do something similar to this:

ip link add dummy0 type dummy >/dev/null 2>&1
if [[ $? -eq 0 ]]; then
    PRIVILEGED=true
    # clean up the dummy0 link
    ip link delete dummy0 >/dev/null 2>&1
else
    PRIVILEGED=false
fi
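If you prefer not to attempt a privileged operation at all, a read-only heuristic is to inspect the effective capability mask in /proc/self/status. This is a sketch assuming a Linux container; the "full set" values shown are examples and depend on how many capabilities the kernel defines:

```shell
# Heuristic sketch: a default Docker container has a small effective
# capability set (e.g. CapEff: 00000000a80425fb), while --privileged
# grants the full set (e.g. 0000003fffffffff or 000001ffffffffff,
# depending on the kernel version).
cap_eff=$(awk '/^CapEff:/ {print $2}' /proc/self/status 2>/dev/null)
case "$cap_eff" in
  0000003fffffffff|000001ffffffffff)
    echo "full capability set (consistent with --privileged)";;
  *)
    echo "restricted capability set (CapEff=$cap_eff)";;
esac
```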

Difference between docker privileged mode and kubernetes privilege container

Edit:

I see you have --pid=host in the docker run command and hostPID: true in the Kubernetes pod spec. In that case, the number of visible processes should be similar in both, provided the containers are running on the same host. Check whether they are: Kubernetes might have scheduled the pod to a different node.


Prev answer

sudo docker run -d --privileged --pid=host alpine:3.8 tail -f /dev/null

In the above command, the --pid=host argument runs the container in the host PID namespace, which is why you can see all the processes on the host. You can achieve the same with the hostPID option in the pod spec in Kubernetes.


Running a container in privileged mode means the processes in the container are essentially equal to root on the host. By default a container is not allowed to access any devices on the host, but a “privileged” container is given access to all devices on the host.

$ kubectl exec -it no-privilege ls /dev
core null stderr urandom
fd ptmx stdin zero
full pts stdout
fuse random termination-log
mqueue shm tty
$ kubectl exec -it privileged ls /dev
autofs snd tty46
bsg sr0 tty47
btrfs-control stderr tty48
core stdin tty49
cpu stdout tty5
cpu_dma_latency termination-log tty50
fd tty tty51
full tty0 tty52
fuse tty1 tty53
hpet tty10 tty54
hwrng tty11 tty55
...

The container still runs in its own PID, IPC, and network namespaces. So you will not see host processes inside the container even when running in privileged mode. Use the hostPID, hostNetwork, and hostIPC fields of the pod spec in Kubernetes if you want to run in the corresponding host namespaces.
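For illustration, a minimal pod spec using those fields might look like the following (pod and container names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-ns-demo          # hypothetical name
spec:
  hostPID: true               # share the host PID namespace (like --pid=host)
  hostNetwork: true           # share the host network namespace
  hostIPC: true               # share the host IPC namespace
  containers:
  - name: main
    image: alpine:3.8
    command: ["tail", "-f", "/dev/null"]
    securityContext:
      privileged: true        # like docker run --privileged
```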

Openshift containers running in privileged mode

By default, pods use the Restricted SCC. The pod's SCC is determined by its User/ServiceAccount and/or Group. You also have to consider that an SA may or may not be bound to a Role, which can set the list of available SCCs.

To find out what SCC a pod runs under:

oc get pod $POD_NAME -o yaml | grep openshift.io/scc

The following commands can also be useful:

# get pod's SA name
oc get pod $POD_NAME -o yaml | grep serviceAccount:
# list service accounts that can use a particular SCC
oc adm policy who-can use scc privileged
# list users added by the oc adm policy command
oc get scc privileged -o yaml
# check roles and role bindings of your SA
# you need to look at rules.apiGroups: security.openshift.io
oc get rolebindings -o wide
oc get role $ROLE_NAME -o yaml

Difference between `--privileged` and `--cap-add=all` in docker

Setting privileged should modify:

  • capabilities: removing any capability restrictions
  • devices: the host devices will be visible
  • seccomp: removing restrictions on allowed syscalls
  • apparmor/selinux: policies aren't applied
  • cgroups: the device cgroup controller restrictions are lifted (resource limits such as CPU and memory can still apply)

That's from memory; I might be able to find more by digging in the code if this doesn't point you to your issue.
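One of these differences, the seccomp profile, can be observed from inside a container without attempting any privileged operation, which helps distinguish --privileged from --cap-add=all. A hedged sketch, assuming a Linux kernel that exposes the Seccomp field in /proc/self/status:

```shell
# With Docker's default seccomp profile, /proc/self/status reports
# "Seccomp: 2" (filter mode); under --privileged, seccomp is disabled
# and the field reads "Seccomp: 0".
mode=$(awk '/^Seccomp:/ {print $2}' /proc/self/status 2>/dev/null)
case "$mode" in
  0) echo "seccomp disabled (consistent with --privileged)";;
  2) echo "seccomp filter active (default container)";;
  *) echo "seccomp mode unknown: $mode";;
esac
```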


