Argument List Too Long Error For Rm, Cp, Mv Commands

Argument list too long error for rm, cp, mv commands

This occurs because bash expands the asterisk to every matching file, producing a very long command line.
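
To get a feel for how close a glob is to the limit, you can measure the expansion with a shell builtin, which is itself immune to the limit (a rough sketch; the real accounting also counts the environment and per-argument pointer overhead):

getconf ARG_MAX                # kernel limit, in bytes, for argv plus environment
printf '%s\0' *.pdf | wc -c    # bytes the expanded glob would occupy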

Try this:

find . -name "*.pdf" -print0 | xargs -0 rm

Warning: this is a recursive search and will find (and delete) files in subdirectories as well. Tack on -f to the rm command only if you are sure you don't want confirmation.

You can do the following to make the command non-recursive:

find . -maxdepth 1 -name "*.pdf" -print0 | xargs -0 rm

Another option is to use find's -delete flag:

find . -name "*.pdf" -delete

Argument list too long error rm command

As already stated in the comment on the answer you linked, you need to put -maxdepth directly after the path, like so:

find . -maxdepth 1 -name "*.txt" -print0 | xargs -0 rm
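
For comparison, here are the wrong and the right placements side by side (with GNU find, the wrong order still works but prints a warning that global options are not positional):

find . -name "*.txt" -maxdepth 1 -print0 | xargs -0 rm    # wrong: find warns
find . -maxdepth 1 -name "*.txt" -print0 | xargs -0 rm    # right: global option first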

Circumvent Argument list too long in script (for loop)

Argument list too long workaround

The argument list length is limited by your system configuration:

getconf ARG_MAX
2097152
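
If you have GNU findutils, xargs can report the effective limits on your system directly:

# prints ARG_MAX, the size of your current environment, and the buffer size
# xargs will actually use per command invocation
xargs --show-limits < /dev/null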

But after some discussion in the comments about the differences between bash specifics and system (OS) limitations, the question's premise appears to be wrong:

As discussed in the comments, the OP tried something like:

ls "/simple path"/image*.{jpg,png} | wc -l
bash: /bin/ls: Argument list too long

This happens because of an OS limitation, not bash!

But tested with the OP's code, this works fine:

for file in ./"simple path"/image*.{jpg,png}; do echo -n a; done | wc -c
70980

Or equivalently, since printf "%c" prints exactly one character per argument:

 printf "%c" ./"simple path"/image*.{jpg,png} | wc -c

Reduce line length by reducing the fixed part:

As a first step, you could reduce the argument length by changing into the directory first:

cd "/drive1/"
ls images*.{jpg,png} | wc -l

But when the number of files grows, you'll hit the limit again...

More general workaround:

find "/drive1/" -type f \( -name '*.jpg' -o -name '*.png' \) -exec myscript {} +

If you want this to NOT be recursive, you may add -maxdepth as the first option:

find "/drive1/" -maxdepth 1 -type f \( -name '*.jpg' -o -name '*.png' \) \
-exec myscript {} +

There, myscript will be run with filenames as arguments. The command line for myscript is built up until it reaches a system-defined limit:

myscript /drive1/file1.jpg '/drive1/File Name2.png' /drive1/...

From man find:

   -exec command {} +
          This variant of the -exec action runs the specified command on
          the selected files, but the command line is built by appending
          each selected file name at the end; the total number of
          invocations of the command will be much less than the number
          of matched files. The command line is built in much the same
          way that xargs builds its command lines. Only one instance of
          `{}' is allowed within the command.

In-script sample

You could create your script like this:

#!/bin/bash

target=( "/drive1" "/Drive 2/Pictures" )

# quote "$0" so re-invoking the script works even if its path contains spaces
[ "$1" = "--run" ] && exec find "${target[@]}" -type f \( -name '*.jpg' -o \
    -name '*.png' \) -exec "$0" {} +

for file; do
    echo Process "$file"
done

Then you have to run this with --run as argument.
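
For example, assuming the script is saved as myscript and is executable:

chmod +x myscript
./myscript --run    # finds every .jpg/.png under the targets, fed back to the script in batches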

  • works with any number of files! (Recursively! See the -maxdepth option.)

  • permits many targets

  • permits spaces and special characters in file and directory names

  • you can run the same script directly on files, without --run:

     ./myscript hello world 'hello world'
    Process hello
    Process world
    Process hello world

Using pure bash

Using arrays, you could do things like:

allfiles=( "/drive 1"/images*.{jpg,png} )
# "$allfiles" expands to the first array element: a cheap check that the glob matched
[ -f "$allfiles" ] || { echo No file found.; exit ;}

echo Number of files: ${#allfiles[@]}

for file in "${allfiles[@]}";do
echo Process "$file"
done

bash: removing many files of the same extension

This means there are so many .log files in that directory that the argument list for the rm call gets too long. (The *.log gets expanded to file1.log, file2.log, file3.log, ... during the actual rm call, and there's a limit on the length of this argument list.)

A quick solution could be using find, like this:

find "${results}"/ -type f -name '*.log' -delete

Here, the find command will list all regular files (-type f) in your ${results} dir whose names end with .log (because of -name '*.log') and delete them, because you issue -delete as the last parameter.
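
If find is not an option, a pure-bash sketch does the same job: the glob expansion happens inside bash (no exec is involved, so ARG_MAX does not apply), and the deletion is done in batches small enough for each rm call:

files=( "${results}"/*.log )
# delete in slices of 1000 names so every rm invocation stays under the limit
for (( i = 0; i < ${#files[@]}; i += 1000 )); do
    rm -f "${files[@]:i:1000}"
done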

Error when transferring huge files: Argument list too long

Your command is not correct; try the one below instead. Adding -print0 and xargs -0 also keeps it safe for filenames containing spaces:

find ./ -name "sm20180416*" -print0 | xargs -0 -I {} mv -f {} /ora_arch/ssmfep_backup/

Argument too long error while copying thousands of files from one dir to another

There was a similar problem here: Error Argument too long; this should solve your problem.

You should look into xargs. This will run a command several times, with as many arguments as can be passed in one go.
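
A toy example makes the batching visible (-n caps the number of arguments per invocation; here echo ends up being called three times):

printf '%s\n' a b c d e f g | xargs -n 3 echo
a b c
d e f
g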

The solution basically boils down to this:

On Linux:

ls dir1 | xargs -I {} cp {} dir2/

On OS X:

ls dir1 | xargs -J {} cp {} dir2/

This lists all of the files in dir1 and then uses xargs to catch the lines output by ls. Then each file is copied over. (Tested this locally successfully.)
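
Note that parsing ls output breaks on filenames containing spaces or newlines. If that is a concern, a more robust sketch (assuming GNU find and xargs) passes the names NUL-delimited instead:

find dir1 -maxdepth 1 -type f -print0 | xargs -0 -I {} cp {} dir2/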

If you are curious as to why there is a limit, David posted a link under your question (link here for UNIX cp limit).

You can find the limit on your system with:

getconf ARG_MAX

If the files in your directory exceed the value of ARG_MAX, the error you saw will be generated.

The link that David mentioned above explains this in great detail and is well worth reading. The summary is that traditionally the Unix limit (as reported by getconf ARG_MAX) is more or less the cumulative size of:

  • The length of the argument strings (including the terminating '\0')
  • The length of the array of pointers to those strings, so typically 8 bytes per argument on a 64-bit system
  • The length of the environment strings (including the terminating '\0'), an environment string being by convention something like var=value
  • The length of the array of pointers to those strings, so typically 8 bytes per environment string on a 64-bit system
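
Under those rules you can approximate the footprint of a glob yourself with the builtin printf (which never hits the limit) and compare it against getconf ARG_MAX. A rough sketch for a 64-bit system, using dir1/* as an illustrative glob:

args_bytes=$(printf '%s\0' dir1/* | wc -c)            # string bytes, incl. NULs
n_args=$(printf '%s\0' dir1/* | tr -cd '\0' | wc -c)  # one NUL per argument
env_bytes=$(env | wc -c)                              # var=value strings
echo "approx. $(( args_bytes + 8 * n_args + env_bytes )) of $(getconf ARG_MAX) bytes"

This ignores the pointer array for the environment and kernel rounding, so treat it as an estimate only.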

