Where Are All My Inodes Being Used

Where are all my inodes being used?

So basically you're looking for which directories have a lot of files? Here's a first stab at it:

find . -type d -print0 | xargs -0 -n1 count_files | sort -n

where "count_files" is a shell script that does (thanks Jonathan)

echo $(ls -a "$1" | wc -l) $1
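If you'd rather not put a separate count_files script on your PATH, an equivalent self-contained one-liner (a sketch, assuming a POSIX sh plus GNU or BSD find/xargs for -print0/-0) is:

find . -type d -print0 | xargs -0 -n1 sh -c 'echo $(ls -a "$1" | wc -l) "$1"' sh | sort -n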

How to fix all inodes being in use?

I had a similar issue; I found the files piling up under one specific directory using this command:

find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

Repeat that one level down inside the biggest offender (as sketched below), and eventually you get to the path where one directory is holding a huge number of files.
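A sketch of that drill-down, where worst-offender is only a placeholder for whichever directory topped the previous output:

cd worst-offender   # placeholder: the directory with the highest count so far
find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n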

One solution is to just delete all the files under that directory (a tmp directory, in my case) to free up inodes, but you will probably want to triage further first.

Hope it helps!

How to Free Inode Usage?

It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.

An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
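You can see that for yourself with a quick throwaway test (a sketch, assuming GNU coreutils df and a writable /tmp):

mkdir /tmp/inode-demo && cd /tmp/inode-demo
touch file{1..10000}    # 10,000 empty files: almost no data blocks, ~10,000 inodes
df -h .                 # block usage barely moves
df -i .                 # IUsed jumps by roughly 10,000
cd / && rm -rf /tmp/inode-demo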

It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
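You can watch that with ls -i (which prints inode numbers) and stat (which prints the link count); a minimal sketch in a scratch directory:

cd "$(mktemp -d)"
echo data > original
ln original copy        # hard link: two directory entries, one inode
ls -li original copy    # both lines show the same inode number
rm original
stat -c '%h' copy       # link count is now 1, so the inode is still in use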

Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
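If lsof is installed, it can show exactly those unlinked-but-still-open files, so you can find the processes pinning them:

lsof +L1         # open files whose link count is below 1, i.e. deleted but still open
lsof +L1 /var    # the same, restricted to one mounted filesystem (here /var, if it's a mount point)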

My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.

If you do that and you still have a problem, let us know.

By the way, if you're looking for the directories that contain lots of files, this script may help:

#!/bin/bash
# count_em - count files in all subdirectories under current directory.

# Write a throwaway helper script that prints "<entry count> <directory>"
# for the directory it is given as $1.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$

# Run the helper for every directory on this filesystem (-mount keeps find
# from crossing onto other mounts), then sort so the biggest counts come last.
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$

How to get all the inodes under the Linux filesystem with Python?

How about this?

import os
# Recursively list everything under / with inode numbers (ls -R -l -i), keep the
# first column, then drop non-numeric lines (blanks, "total" lines, directory headers).
inodes = os.popen("sudo ls -Rli / | awk '{ print $1 }'").read().split('\n')
inodes = [int(i) for i in inodes if i.isnumeric()]

For my home folder, this returns a list of the inode numbers:

[11666512, 10223622, 10234894, 10223641, 10223637, 10617011, 10254828, 10249545, 10223642, 10223643, 10487015, 10223640, 11929556, 10223639, 10223644, 10486989]

To clarify, the ls command is given three flags: -R lists the contents of / and of every subdirectory recursively, -l uses the long listing format so each entry gets its own line, and -i prefixes each entry with its inode number. We pipe the output to awk to keep the first column, which holds the inode numbers, and then do some simple cleaning of that data.

Docker is full, all inodes are used

Found the error: this seems to be a Docker 17.06.1-ce bug. That version does not correctly delete images and leaves files behind in /var/lib/docker/aufs/mnt/, so just upgrade to a newer Docker version and this will be fine. Now df shows me:

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1       51558236 3821696  45595452   8% /
udev               10240       0     10240   0% /dev
tmpfs            1398308   57696   1340612   5% /run
tmpfs            3495768       0   3495768   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            3495768       0   3495768   0% /sys/fs/cgroup

This is better :)

Getting the percentage of used space and used inodes in a mount

Your comment expression isn't valid Go, so I can't really interpret it without guessing. With guessing, I read it as correct, but have I guessed what you actually mean, or merely what I think you mean? In other words, without seeing actual code, I can only imagine what your final code will be, and if the code I imagine isn't the code you actually write, the correctness of the code I imagine is irrelevant.

That aside, I can answer your question here:

(what's 'unprivileged' user got to do with filesystem blocks?)

The Linux statfs call uses the same fields as 4.4BSD. The default 4.4BSD file system (the one called the "fast file system") uses a blocks-with-fragmentation approach to allocate blocks in a sort of stochastic manner. This allocation process works very well on an empty file system, and continues to work well, without extreme slowdown, on somewhat-full file systems. Computerized modeling of its behavior, however, showed pathological slowdowns (amounting to linear search, more or less) were possible if the block usage exceeded somewhere around 90%.

(Later, analysis of real file systems found that the slowdowns generally did not hit until the block usage exceeded 95%. But the idea of a 10% "reserve" was pretty well established by then.)

Hence, if a then-popular large-size disk drive of 400 MB¹ gave 10% for inodes and another 10% for reserved blocks, that meant that ordinary users could allocate about 320 MB of file data. At that point the drive was "100% full", but it could go to 111% by using up the remaining blocks. Those blocks were reserved to the super-user though.

These days, instead of a "super user", one can have a capability that can be granted or revoked. However, these days we don't use the same file systems either. So there may be no difference between bfree and bavail on your system.
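As a shell-level cross-check of the same statfs fields (not Go code; a sketch assuming GNU coreutils stat and bash), the df-style percentages fall straight out of that bfree/bavail distinction:

# total blocks, free blocks, blocks available to unprivileged users,
# total inodes, free inodes -- all for the filesystem holding /
read -r blocks bfree bavail files ffree < <(stat -f -c '%b %f %a %c %d' /)
echo "space used:  $(( 100 * (blocks - bfree) / (blocks - bfree + bavail) ))%"   # like df's Use%
echo "inodes used: $(( 100 * (files - ffree) / files ))%"                        # like df -i's IUse%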


¹ Yes, the 400 MB Fujitsu Eagle was a large (in multiple senses: it used a 19 inch rack mount setup) drive back then. People are spoiled today with their multi-terabyte SSDs.

