How to Increase the Size of Ephemeral Storage in a Kubernetes Worker Node

How can we increase the size of ephemeral storage in a Kubernetes worker node?

After a lot of searching, I decided to extend the size of /dev/sda1. It is not pleasant to do, but it is the only way I could find. Now the ephemeral storage of the worker is increased.

Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  151M  1.5G  10% /run
/dev/sda1       118G   24G   89G  22% /

$ kubectl describe node worker1

Capacity:
  attachable-volumes-azure-disk:  16
  cpu:                            4
  ephemeral-storage:              123729380Ki
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         16432464Ki
  pods:                           110
Allocatable:
  attachable-volumes-azure-disk:  16
  cpu:                            4
  ephemeral-storage:              114028996420
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         16330064Ki
  pods:                           110
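
If you only need the ephemeral-storage figure rather than the full describe output, a jsonpath query works too (assuming the same node name, worker1):

$ kubectl get node worker1 -o jsonpath='{.status.capacity.ephemeral-storage}'
123729380Ki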

Kubernetes: How to increase ephemeral-storage

Could it be connected to the fact that all the volumes appear to have a limit of 7.4 GB?

You actually have a single volume, /dev/vda1, with multiple mount points, not several volumes of 7.4 GB each.

I'm not sure where you are running Kubernetes, but that looks like a virtual volume (in a VM). You can increase its size in the VM configuration or at the cloud provider, then run one of the following to grow the filesystem:

  • ext4:

    $ resize2fs /dev/vda1
  • xfs:

    $ xfs_growfs /dev/vda1

Other filesystems will have their own commands too.
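
For example, on a cloud VM where the partition itself also has to grow before the filesystem can, a typical sequence might look like this sketch (assuming an ext4 root filesystem on /dev/sda1 and the growpart tool from cloud-guest-utils; adjust device names to your setup):

$ sudo growpart /dev/sda 1   # grow partition 1 to fill the enlarged disk
$ sudo resize2fs /dev/sda1   # grow the ext4 filesystem online
$ df -h /                    # verify the new size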

The most common cause of running out of disk space on the master(s) is log files, so if that's the case you can set up a cleanup job for them or adjust the log size and rotation configs.

What does Kubelet use to determine the ephemeral-storage capacity of the node?

Some theory

By default, Capacity and Allocatable for ephemeral-storage in a standard Kubernetes environment are sourced from the filesystem mounted at /var/lib/kubelet, which is the default location of the kubelet directory.

The kubelet supports the following filesystem partitions:

  1. nodefs: The node's main filesystem, used for local disk volumes, emptyDir, log storage, and more. For example, nodefs contains /var/lib/kubelet/.
  2. imagefs: An optional filesystem that container runtimes use to store container images and container writable layers.

Kubelet auto-discovers these filesystems and ignores other
filesystems. Kubelet does not support other configurations.
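
To see which filesystem backs nodefs on a particular node, you can check what is mounted at the kubelet root dir (assuming the default /var/lib/kubelet):

$ df -h /var/lib/kubelet
$ findmnt --target /var/lib/kubelet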

From Kubernetes website about volumes:

The storage media (such as Disk or SSD) of an emptyDir volume is
determined by the medium of the filesystem holding the kubelet root
dir (typically /var/lib/kubelet).

The location of the kubelet directory can be configured by providing:

  1. Command line parameter during kubelet initialization

--root-dir string
Default: /var/lib/kubelet


  2. Via kubeadm with a config file, e.g.:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    root-dir: "/data/var/lib/kubelet"

Customizing kubelet:

To customize the kubelet you can add a KubeletConfiguration next to
the ClusterConfiguration or InitConfiguration separated by ---
within the same configuration file. This file can then be passed to
kubeadm init.
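
As a sketch, a combined config file (named kubeadm-config.yaml here for illustration; the root-dir and eviction values are examples, not recommendations) could look like:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    root-dir: "/data/var/lib/kubelet"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"

$ kubeadm init --config kubeadm-config.yaml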

When bootstrapping a Kubernetes cluster with kubeadm, the Capacity reported by kubectl get node equals the capacity of the disk mounted at /var/lib/kubelet.

However, Allocatable will be reported as:

Allocatable = Capacity - 10% nodefs

using the standard kubeadm configuration, since the kubelet has the following default hard eviction threshold:

  • nodefs.available<10%

It can be configured during kubelet initialization with:

--eviction-hard mapStringString
Default: imagefs.available<15%,memory.available<100Mi,nodefs.available<10%
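
Plugging the numbers from the worker shown earlier into this rule checks out (up to the kubelet's internal rounding):

123729380Ki × 1024 = 126,698,885,120 bytes  (Capacity)
126,698,885,120 × 0.9 ≈ 114,028,996,608 bytes ≈ 114028996420  (Allocatable)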


Example

I set up a test environment for Kubernetes with a master node and two worker nodes (worker-1 and worker-2).

Both worker nodes have volumes of the same capacity: 50 GB.

Additionally, for the worker-1 node I mounted a second volume with a capacity of 20 GB at the path /var/lib/kubelet.
Then I created a cluster with kubeadm.

Result

From worker-1 node:

skorkin@worker-1:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        49G  2.8G   46G   6% /
...
/dev/sdb         20G   45M   20G   1% /var/lib/kubelet

and

Capacity:
  cpu:                2
  ephemeral-storage:  20511312Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4027428Ki
  pods:               110

The size of ephemeral-storage matches the volume mounted at /var/lib/kubelet.

From worker-2 node:

skorkin@worker-2:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        49G  2.7G   46G   6% /

and

Capacity:
  cpu:                2
  ephemeral-storage:  50633164Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4027420Ki
  pods:               110

How to free storage on a node when its status is "Attempting to reclaim ephemeral-storage"?

Run docker system prune to free up space on the node; refer to the command below:

$ docker system prune -a --volumes

WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N] y
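
Note that docker system prune only helps on nodes whose container runtime is Docker. On containerd-based nodes, a rough equivalent (assuming crictl is installed and configured for the containerd socket) is to prune unused images:

$ sudo crictl rmi --prune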

Changing the memory allocation of a Kubernetes worker node

It's a long shot, but you can try restarting the kubelet via systemctl restart kubelet. The containers should not be restarted by this, and there is a chance that once the kubelet comes back up it will notice the increased memory configuration.
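
A minimal sketch of that approach, assuming the node is named worker1:

$ sudo systemctl restart kubelet                       # on the worker; running containers stay up
$ kubectl describe node worker1 | grep -A 7 Capacity   # re-check what the node reports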

Kubernetes ephemeral-storage of containers

You can see the allocated resources by using kubectl describe node <insert-node-name-here> on the node that is running the pod of the deployment.

You should see something like this:

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                       Requests      Limits
  --------                       --------      ------
  cpu                            1130m (59%)   3750m (197%)
  memory                         4836Mi (90%)  7988Mi (148%)
  ephemeral-storage              0 (0%)        0 (0%)
  hugepages-1Gi                  0 (0%)        0 (0%)
  hugepages-2Mi                  0 (0%)        0 (0%)
  attachable-volumes-azure-disk  0             0

Since you requested 50Mi of ephemeral-storage, it should show up under Requests.
When your pod tries to use more than the limit (100Mi), the pod will be evicted and restarted.
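
For reference, a minimal pod spec carrying those request/limit values (the name and image are illustrative) would look like:

apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo                 # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        ephemeral-storage: "50Mi"
      limits:
        ephemeral-storage: "100Mi"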

On the node side, any pod that uses more than its requested resources is subject to eviction when the node runs out of resources. In other words, Kubernetes never provides any guarantees of availability of resources beyond a Pod's requests.

You can find more details on how ephemeral storage consumption management works in the Kubernetes documentation.

Note that running df via kubectl exec might not show the actual storage use.

According to kubernetes documentation:

The kubelet can measure how much local storage it is using. It does this provided that:

  • the LocalStorageCapacityIsolation feature gate is enabled (the feature is on by default), and
  • you have set up the node using one of the supported configurations for local ephemeral storage.

If you have a different configuration, then the kubelet does not apply resource limits for ephemeral local storage.

Note: The kubelet tracks tmpfs emptyDir volumes as container memory use, rather than as local ephemeral storage.


