How Does Docker Share Resources

How does Docker share resources?

Strictly speaking, Docker no longer has to use LXC, the userland tools. It still relies on the same underlying kernel technologies through its in-house container library, libcontainer. In fact, Docker can use any of several execution drivers for the abstraction between process and kernel (libcontainer, LXC, libvirt, and so on).
The kernel need not differ between distributions, but you cannot run a non-Linux OS in a container. The host and the containers share the same kernel; namespaces and cgroups provide the context separation that keeps them apart.

Each container contains a separate OS in every way except the kernel: it has its own user-space applications and libraries, and for all intents and purposes it behaves as though it has its own kernel.
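
A quick way to see this kernel sharing for yourself (the alpine image is just an illustrative choice):

# The kernel version reported inside a container matches the host's
uname -r
docker run --rm alpine uname -r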

Can docker share memory and CPU between containers as needed?

Docker containers are not VMs. They run as isolated processes on the host OS kernel, so there is no hypervisor magic behind them.

From the kernel's point of view, processes running inside a container are not much different from host processes; they are just more heavily isolated.

Memory and CPU scheduling are handled by the host kernel. What you set in Docker are resource constraints such as CPU shares and memory limits, which give relative priority and upper bounds to particular containers.

So yes, containers with sleeping processes won't consume much CPU, and they won't hold much memory provided the memory used during a processing spike is freed afterwards; otherwise that memory can be swapped out, usually without much performance impact.

Instantiating a Docker container consumes only memory. As long as its process is idle, you will see essentially zero CPU usage from it.
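
You can check this yourself with docker stats (the container name and image are illustrative):

# An idle container: memory is allocated, but CPU % stays around 0.00
docker run -d --name idle-test alpine sleep 3600
docker stats --no-stream idle-test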

Do Docker Containers allow for changing of resources [CPU, Memory, Storage] while running?

Yes. Look into the docker update command: you can update memory and CPU limits on a running container, but storage is not among the options docker update accepts; see the Storage section further down for options there.

Update a container with cpu-shares and memory

To update multiple resource configurations with a single command (here, CPU shares and a memory limit for one container):

$ docker update --cpu-shares 512 -m 300M abebf7571666

Extended description

The docker update command dynamically updates container configuration.
You can use this command to prevent containers from consuming too many
resources from their Docker host. With a single command, you can place
limits on a single container or on many. To specify more than one
container, provide a space-separated list of container names or IDs.

Warning:
The docker update and docker container update commands are not supported for Windows containers.

docker-update-command
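
To confirm that an update took effect, you can inspect the container's HostConfig (the container ID matches the example above):

# Prints the CPU shares and the memory limit in bytes
docker inspect --format '{{.HostConfig.CpuShares}} {{.HostConfig.Memory}}' abebf7571666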

Is it possible to share memory between docker containers?

The --ipc=host and --ipc=container:<id> options have since been added to the Docker create and run commands to share IPC resources.

--ipc=""  : Set the IPC mode for the container,
'container:<name|id>': reuses another container's IPC namespace
'host': use the host's IPC namespace inside the container

IPC with the host

docker run --ipc=host <image>

IPC with another container

docker run --ipc=container:<id> <image>

Sharing IPC with another container may require the shareable option to be set on the initial container (if dockerd's default IPC mode is private):

docker run --ipc=shareable <image>
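
Putting these together, a minimal sketch (container names are illustrative):

# Start a container whose IPC namespace (including /dev/shm) can be shared
docker run -d --name ipc-owner --ipc=shareable alpine sleep 3600
# A second container joins the first one's IPC namespace
docker run -d --name ipc-client --ipc=container:ipc-owner alpine sleep 3600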

How do I set resources allocated to a container using docker?

Memory/CPU

Docker now supports more resource allocation options:

  • CPU shares, via the -c / --cpu-shares flag
  • Memory limit, via the -m / --memory flag
  • Specific CPU cores, via the --cpuset-cpus flag (formerly --cpuset)

Have a look at docker run --help for more details.
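
For example, the flags can be combined in a single run (the image name and values are illustrative):

# Relative CPU weight, hard memory limit, and CPU pinning in one command
docker run -d --name limited \
  --cpu-shares 512 \
  -m 300m \
  --cpuset-cpus "0,1" \
  nginx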

If you use the LXC backend (docker -d --exec-driver=lxc), more fine-grained resource allocation schemes can be specified, e.g.:

docker run --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" \
           --lxc-conf="lxc.cgroup.cpu.shares = 1234" <image>

Storage

Limiting storage is a bit trickier at the moment. Please refer to the following links for more details:

  • Resizing Docker containers with the Device Mapper plugin
  • Question on Resource Limits?
  • devicemapper - a storage backend based on Device Mapper

How do Docker images and layers work?

How exactly does a container work on top of a base image?
Does the base image get loaded into the container?

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server.

Like FreeBSD Jails and Solaris Zones, Linux containers are self-contained execution environments -- with their own isolated CPU, memory, block I/O, and network resources (using the cgroups kernel feature) -- that share the kernel of the host operating system. The result is something that feels like a virtual machine, but sheds all the weight and startup overhead of a guest operating system.

That said, each distribution has its own official Docker image (in the Docker library), which ships with minimal binaries, follows Docker's best practices, and is ready to build on.

What I am confused about is: the image is immutable, right? Where is the image running - is it inside the Docker Engine in the VM - and how does the container actually come into play?

Docker originally used AUFS, still uses it on Debian, and uses AUFS-like union filesystems such as OverlayFS on other distributions. AUFS provides layering: each image consists of layers, and these layers are read-only. Each container has a read/write layer on top of its image layers. Read-only layers are shared between containers, so you get storage space savings. A container sees the union mount of all its image layers plus its own read/write layer.

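You can see these layers and the storage driver in use on any host (the image name is illustrative):

# Each line is a read-only layer contributed by one Dockerfile instruction
docker history ubuntu:22.04
# Reports the union filesystem driver in use (overlay2, aufs, ...)
docker info --format '{{.Driver}}'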

How is Docker different from a virtual machine?

Docker originally used LinuX Containers (LXC), but later switched to runC (formerly known as libcontainer), which runs in the same operating system as its host. This allows it to share a lot of the host operating system resources. Also, it uses a layered filesystem (AuFS) and manages networking.

AuFS is a layered file system, so you can have a read only part and a write part which are merged together. One could have the common parts of the operating system as read only (and shared amongst all of your containers) and then give each container its own mount for writing.

So, let's say you have a 1 GB container image; if you wanted to use a full VM, you would need to have 1 GB x number of VMs you want. With Docker and AuFS you can share the bulk of the 1 GB between all the containers and if you have 1000 containers you still might only have a little over 1 GB of space for the containers OS (assuming they are all running the same OS image).
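
You can observe this sharing on any Docker host (the image names are placeholders; any two images built from the same base will do):

# Layers already present locally are reported as "Already exists" on the second pull
docker pull <image-a>
docker pull <image-b>
# Summarizes images, containers, and how much of their space is shared or reclaimable
docker system df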

A full virtualized system gets its own set of resources allocated to it, and does minimal sharing. You get more isolation, but it is much heavier (requires more resources). With Docker you get less isolation, but the containers are lightweight (require fewer resources). So you could easily run thousands of containers on a host, and it won't even blink. Try doing that with Xen, and unless you have a really big host, I don't think it is possible.

A full virtualized system usually takes minutes to start, whereas Docker/LXC/runC containers take seconds, and often even less than a second.

There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then Docker/LXC/runC seems to be the way to go.

For more information, check out this set of blog posts which do a good job of explaining how LXC works.

Why is deploying software to a docker image (if that's the right term) easier than simply deploying to a consistent production environment?

Deploying a consistent production environment is easier said than done. Even if you use tools like Chef and Puppet, there are always OS updates and other things that change between hosts and environments.

Docker gives you the ability to snapshot the OS into a shared image, and makes it easy to deploy on other Docker hosts. Locally, dev, qa, prod, etc.: all the same image. Sure you can do this with other tools, but not nearly as easily or fast.

This is great for testing; let's say you have thousands of tests that need to connect to a database, and each test needs a pristine copy of the database and will make changes to the data. The classic approach to this is to reset the database after every test either with custom code or with tools like Flyway - this can be very time-consuming and means that tests must be run serially. However, with Docker you could create an image of your database and run up one instance per test, and then run all the tests in parallel since you know they will all be running against the same snapshot of the database. Since the tests are running in parallel and in Docker containers they could run all on the same box at the same time and should finish much faster. Try doing that with a full VM.
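
A minimal sketch of that pattern (image, container names, ports, and password are illustrative):

# One throwaway database per test, all started from the same image snapshot
docker run -d --name testdb-1 -e POSTGRES_PASSWORD=test -p 54321:5432 postgres:16
docker run -d --name testdb-2 -e POSTGRES_PASSWORD=test -p 54322:5432 postgres:16
# ...run the tests in parallel against the mapped ports, then clean up
docker rm -f testdb-1 testdb-2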

From comments...

Interesting! I suppose I'm still confused by the notion of "snapshot[ting] the OS". How does one do that without, well, making an image of the OS?

Well, let's see if I can explain. You start with a base image, and then make your changes, and commit those changes using docker, and it creates an image. This image contains only the differences from the base. When you want to run your image, you also need the base, and it layers your image on top of the base using a layered file system: as mentioned above, Docker uses AuFS. AuFS merges the different layers together and you get what you want; you just need to run it. You can keep adding more and more images (layers) and it will continue to only save the diffs. Since Docker typically builds on top of ready-made images from a registry, you rarely have to "snapshot" the whole OS yourself.
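
As a small illustration of commit-based layering (the image and names are illustrative):

# Start from a base image, make a change, and commit it as a new image layer
docker run --name demo ubuntu:22.04 bash -c "apt-get update && apt-get install -y curl"
docker commit demo my-image-with-curl
# The base layers appear unchanged underneath the newly committed diff layer
docker history my-image-with-curl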

How are the resources distributed for several replicas for docker?

The limits are applied to each replica individually. To get the overall cluster utilization limit, multiply the per-replica limit by the number of replicas.
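
For example, with a Docker Swarm service (the service name and image are illustrative):

# Each of the 3 replicas gets its own 0.5-CPU / 256M limit,
# so the service as a whole may use up to 1.5 CPUs and 768M
docker service create --name web --replicas 3 \
  --limit-cpu 0.5 --limit-memory 256M \
  nginx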


