Human Readable, Recursive, Sorted List of Largest Files

How can I sort du -h output by size

As of GNU coreutils 7.5 (released in August 2009), sort supports a -h option, which accepts the human-readable numeric suffixes produced by du -h:

du -hs * | sort -h

If you are using a sort that does not support -h, you can install GNU coreutils, e.g. on an older Mac OS X:

brew install coreutils
du -hs * | gsort -h

From the sort manual:

-h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G)
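A quick sketch of the behavior, using throwaway files of known size in a temporary directory (the file names are invented for the demo):

```shell
# Create two files of known size, then sort their human-readable sizes
# with sort -h (GNU coreutils >= 7.5; use gsort -h on macOS/Homebrew).
tmp=$(mktemp -d)
head -c 1024    /dev/zero > "$tmp/small"   # 1 KiB
head -c 1048576 /dev/zero > "$tmp/big"     # 1 MiB
sorted=$(du -h "$tmp"/* | sort -h)
printf '%s\n' "$sorted"                    # smallest first, "big" last
rm -rf "$tmp"
```

Reversing with sort -rh puts the largest entries first, which pairs well with head for a top-N list.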

command to print out large files, sorted, with sizes in human readable format

find ... | sort -rn | cut -d\  -f2 | xargs du -h

for instance :) or

find $dir -type f -size +$size -print0 | xargs -0 ls -1hsS

(with a little inspiration borrowed from olibre).
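As a concrete sketch of that pipeline, with the directory, file names, and size threshold all made up for the demo:

```shell
tmp=$(mktemp -d)
head -c 65536 /dev/zero > "$tmp/large"   # 64 KiB
head -c 4096  /dev/zero > "$tmp/mid"     # 4 KiB
head -c 10    /dev/zero > "$tmp/tiny"    # rounds to 1k, excluded by +1k
# -size +1k keeps files strictly larger than 1 KiB;
# ls -S then lists the matches biggest first, -s shows their sizes.
listing=$(find "$tmp" -type f -size +1k -print0 | xargs -0 ls -1hsS)
printf '%s\n' "$listing"
rm -rf "$tmp"
```

Note that find's -size rounds sizes up to whole units, so +1k means "more than one 1 KiB block", which is why the 10-byte file is excluded.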

Searching directories recursively for largest files in C

You said,

For some reason though, the program does not seem to be adding the information for the subdirectory files to the array.

In your function getentries, you have a recursive call for dealing with sub-directories:

     getentries(num_entries, path, level + 1);

The returned value from that recursive call is being ignored. That's why you don't see any entries corresponding to the sub-directories in your output.

Update

Here's the updated function that deals with the entries of the sub-directories.

ML_ENTRY **getentries (int *num_entries, const char *name, int level)
{
    DIR *dir;
    struct dirent *entry;
    struct stat st;
    int rv;
    int n;
    int size;
    char path[K];
    ML_ENTRY **li;

    // Variables for collecting the entries from sub-directories.
    ML_ENTRY **subDirectoryLi;
    int subDirectoryNumEntries;
    int i;

    if (!(dir = opendir (name))) {
        // Return NULL, not a bare return -- the function returns a pointer.
        *num_entries = 0;
        return NULL;
    }

    size = CHUNK;
    n = 0;
    li = malloc (size * sizeof (ML_ENTRY *));

    while ((entry = readdir (dir))) {
        if (n >= size) {
            size <<= 1;
            li = realloc (li, size * sizeof (ML_ENTRY *));
        }
        // snprintf NUL-terminates, so no manual termination is needed.
        snprintf (path, sizeof (path), "%s/%s", name, entry->d_name);
        rv = lstat (path, &st);
        if (entry->d_type == DT_DIR) {
            if (strcmp (entry->d_name, ".") == 0
                || strcmp (entry->d_name, "..") == 0)
                continue;

            // Capture the return value of the recursive call and merge
            // its entries into the current list.
            subDirectoryLi = getentries (&subDirectoryNumEntries, path, level + 1);
            if (subDirectoryLi != NULL) {
                for (i = 0; i < subDirectoryNumEntries; ++i, ++n) {
                    if (n >= size) {
                        size <<= 1;
                        li = realloc (li, size * sizeof (ML_ENTRY *));
                    }
                    li[n] = subDirectoryLi[i];
                }
                free (subDirectoryLi);  // the entries themselves now live in li
            }
        }
        if (rv < 0)
            continue;
        li[n] = malloc (sizeof (ML_ENTRY));
        li[n]->ml_name = strdup (entry->d_name);
        li[n++]->ml_size = st.st_size;
    }
    closedir (dir);
    *num_entries = n;
    return li;
}

Shell script to recursively find and list largest files, ask confirmation to remove them and, if confirmed, remove them

Try

rm -i $(find . -name "*.log" -type f -exec du -sh {} + | sort -rh | head -n 10 | cut -f2-)

Basically, this takes the output of your find command and passes it to rm -i. The -i flag makes rm interactive: it prompts before removing each file.
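One caveat: the command substitution splits on whitespace, so paths containing spaces break it. A null-delimited sketch using GNU find/sort/head/cut (the directory and pattern are placeholders, and it prints the candidates here instead of removing them):

```shell
tmp=$(mktemp -d)
head -c 9000 /dev/zero > "$tmp/big one.log"   # name with a space
head -c 500  /dev/zero > "$tmp/other.log"
head -c 100  /dev/zero > "$tmp/small.log"
# %s\t%p\0 = size, tab, path, NUL; sort/head/cut all run null-delimited
# (head -z needs GNU coreutils >= 8.25), so spaces in names are safe.
largest=$(find "$tmp" -name "*.log" -type f -printf '%s\t%p\0' |
    sort -z -rn | head -z -n 2 | cut -z -f2- | tr '\0' '\n')
printf '%s\n' "$largest"                      # two biggest .log files
rm -rf "$tmp"
```

Piping a null-delimited list straight into xargs -0 rm -i would lose the interactive prompt (rm's stdin is the pipe, not the terminal); newer GNU xargs offers --open-tty for that case, or you can loop with read -r -d ''.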

Hope this helps.

Using ls to list directories and their total sizes

Try something like:

du -sh *

short version of:

du --summarize --human-readable *

Explanation:

du: Disk Usage

-s: Display a summary for each specified file. (Equivalent to -d 0)

-h: "Human-readable" output. Use unit suffixes: Byte, Kibibyte (KiB), Mebibyte (MiB), Gibibyte (GiB), Tebibyte (TiB) and Pebibyte (PiB). (BASE2)
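A small sanity check that the short and long forms are equivalent (GNU du; the paths are throwaway):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/sub"
head -c 2048 /dev/zero > "$tmp/sub/data"
short=$(du -sh "$tmp"/*)                         # one summary line per argument
long=$(du --summarize --human-readable "$tmp"/*)
printf '%s\n' "$short"
test "$short" = "$long" && echo "same output"
rm -rf "$tmp"
```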

How do I find the 10 largest files in a directory structure

Try this script

Get-ChildItem -re -in * |
?{ -not $_.PSIsContainer } |
sort Length -descending |
select -first 10

Breakdown:

The filter block "?{ -not $_.PSIsContainer }" removes directories. The sort command then orders the remaining entries by Length in descending order, and the select clause lets only the first 10 through, i.e. the 10 largest files.

How to list the size of each file and directory and sort by descending size in Bash?

Simply navigate to the directory and run the following command:

du -a --max-depth=1 | sort -n

Or add -h for human-readable sizes and -r to print the bigger directories/files first:

du -a -h --max-depth=1 | sort -hr
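For example, on a throwaway tree (the directory and file names are invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/logs"
head -c 8192 /dev/zero > "$tmp/logs/app.log"
head -c 512  /dev/zero > "$tmp/readme"
# -a lists files as well as directories; sort -hr puts biggest first,
# so the grand total for the directory itself comes out on top.
ranked=$(du -a -h --max-depth=1 "$tmp" | sort -hr)
printf '%s\n' "$ranked"
rm -rf "$tmp"
```

Because --max-depth=1 limits the listing to one level, app.log is counted inside the logs line rather than listed separately.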

