How to use xargs properly when argument list is too long
Expanding on CristopheD's answer and assuming you're using bash:
tar -cf - --files-from <(find "$dir/temp" -maxdepth 1 -name "*.parse") | lzma -9 > "$dir/backup/$(date '+%Y-%m-%d')-archive.tar.lzma"
The reason xargs doesn't help you here is that it performs multiple invocations of the command until all arguments have been consumed. That would produce several separate tar archives, which is not what you want.
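The splitting behaviour is easy to see with a toy command (a sketch using echo as a stand-in for tar):

```shell
# xargs runs the command repeatedly, here at most 2 arguments per invocation
printf '%s\n' a b c d e | xargs -n 2 echo
# prints three lines ("a b", "c d", "e") - three separate echo invocations
```

With tar in place of echo, each of those invocations would overwrite or append to the archive independently, which is why the process-substitution approach above is preferable.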
I'm using xargs, but the argument list is too long
You don't need xargs here; you can do:
find . -type f -exec dos2unix '{}' +
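The trailing + makes find batch as many paths as fit into one invocation (much like xargs, but without the pipe), whereas \; spawns the command once per file. A sketch with echo standing in for dos2unix, using a throwaway demo directory:

```shell
mkdir -p demo && touch demo/a.txt demo/b.txt demo/c.txt
# one invocation with all paths appended:
find demo -type f -exec echo batch: {} +
# one invocation per path:
find demo -type f -exec echo single: {} \;
```

For thousands of files, the + form saves thousands of process launches.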
Argument list too long error for rm, cp, mv commands
The reason this occurs is that bash expands the asterisk to every matching file, producing a command line too long for the kernel to accept.
Try this:
find . -name "*.pdf" -print0 | xargs -0 rm
Warning: this is a recursive search, so it will find (and delete) files in subdirectories as well. Tack -f onto the rm command only if you are sure you don't want confirmation.
You can do the following to make the command non-recursive:
find . -maxdepth 1 -name "*.pdf" -print0 | xargs -0 rm
Another option is to use find's -delete flag:
find . -name "*.pdf" -delete
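Since -delete acts immediately and silently, it can be worth previewing the match list first (a cautious sketch; demo is a throwaway directory used for illustration):

```shell
mkdir -p demo && touch demo/a.pdf demo/b.pdf demo/keep.txt
find demo -name "*.pdf" -print    # dry run: shows exactly what would be removed
find demo -name "*.pdf" -delete   # same expression, now actually deleting
```

Because -delete processes each match internally, it never builds a long argument list at all.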
Error when transferring huge files: Argument list too long
Your command is not correct; try the one below instead. It will work for you:
find ./ -name "sm20180416*" -print0 | xargs -0 -I {} mv -f {} /ora_arch/ssmfep_backup/
Getting argument list too long error
Try this (note that -maxdepth must come before the other tests, and -print0/-0 keeps filenames with whitespace safe):
$ find /file/collection/*/logs/ -maxdepth 1 -type f -name "*.log" -print0 | xargs -0 grep hello
How can I move many files without having Argument list too long?
If you use find, I would recommend using the -exec action. Your command becomes: find . -name "*.jpg" -exec mv {} /home/new/location \;
However, I would first check what the find command returns, by replacing the -exec part with: -exec ls -lrt {} \;
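If your mv is from GNU coreutils, the -t option (target directory named first) lets find batch many files into one invocation instead of running mv once per file. A sketch, where src/ and dst/ stand in for the question's current directory and /home/new/location:

```shell
mkdir -p src dst && touch src/a.jpg src/b.jpg
# GNU mv only: -t names the destination up front, so `+` can append many sources
find src -maxdepth 1 -name "*.jpg" -exec mv -t dst {} +
```

With plain POSIX mv you must stick to the per-file \; form, since the destination has to come last.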
(Argument list too long) While opening a large list of files using *cat*
The Argument list too long error is documented in errno(3) (as E2BIG) and is related to the execve(2) system call made by your GNU bash shell. Use sysconf(3) with ARG_MAX to query that limit.
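You can also check the limit from the shell without writing any C (getconf is specified by POSIX):

```shell
# Query the execve() argument-size limit the kernel enforces, in bytes
getconf ARG_MAX
```

On typical Linux systems this prints a value in the megabyte range (often 2097152), since the limit is derived from the stack size rather than a fixed constant.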
You have several approaches:
- recompile your Linux kernel to raise that limit.
- write a small C program that uses the appropriate syscalls(2) directly, or write a Python script, a GNU guile script, ... that does the same
- increase some limits by using setrlimit(2) appropriately (perhaps via the shell's ulimit builtin).
See also the documentation and the source code of GNU bash.
Does argument list too long restriction apply to shell builtins?
In bash, the OS-enforced limit on command-line length that causes the error "argument list too long" does not apply to shell builtins.
This error is triggered when the execve() syscall returns the error code E2BIG. There is no execve() call involved when invoking a builtin, so the error cannot take place.
Thus, both of your proposed operations are safe: cmd <<< "$string" writes $string to a temporary file, which does not require that it be passed as an argv element (or as an environment variable, which is stored in the same pool of reserved space); and printf '%s\n' "$cmd" takes place internal to the shell, unless the shell's configuration has been modified (as with enable -n printf) to use an external printf implementation.
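A quick way to convince yourself (a sketch; the 300000-byte string comfortably exceeds the per-argument cap that Linux enforces on top of ARG_MAX, yet the builtin handles it fine):

```shell
big=$(head -c 300000 /dev/zero | tr '\0' 'x')   # build a ~300 KB string in the shell
printf '%s\n' "$big" | wc -c                    # builtin printf: no execve(), no E2BIG
# By contrast, /bin/echo "$big" would fail on Linux: a single argv element is
# capped (MAX_ARG_STRLEN, 128 KiB with 4 KiB pages) before ARG_MAX even matters.
```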
Argument list too long when concatenating lots of files in a folder
Your full code is:
rm -f /tmp/temp.files
ls -1 /var/log/processing/*.log | xargs -n1 basename > /tmp/temp.files
cat /tmp/temp.files | sed -r "s~(.*)-[0-9]{4}(-[0-9]{2})+\.log~cat /var/log/processing/\1* >> /var/log/processing/\1$(date +"-%Y-%m-%d-%H-%M").log~" | uniq | sh
cd /var/log/processing
xargs rm -rf < /tmp/temp.files
rm -f /tmp/temp.files
But the problem lies in the ls -1 /var/log/processing/*.log part, so I am skipping the rest.
The glob /var/log/processing/*.log expands to so many paths that the shell cannot even execute ls with all of them, hence the "Argument list too long" message.
You can use a find
statement like this:
find /var/log/processing -name "*.log" -exec basename {} \; > /tmp/temp.files
Note that I am not parsing the output of ls (see the interesting Why you shouldn't parse the output of ls).
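With GNU find you can skip spawning basename entirely, since -printf '%f\n' prints just the filename component of each match. An alternative sketch, where demo_logs/ stands in for /var/log/processing:

```shell
mkdir -p demo_logs && touch demo_logs/a.log demo_logs/b.log
# GNU find: %f is the basename of each match; one process, no per-file exec
find demo_logs -name "*.log" -printf '%f\n' > /tmp/temp.files
```

On systems without GNU find, the -exec basename {} \; form above remains the portable choice.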
Help Editing Code to Fix Argument list too long Error
The code below reads the content of the file whose name is given as the first command-line parameter and places it in a std::string buffer. Then, instead of calling the function UnicodeString with argv[1], use that buffer instead.
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main(int argc, char **argv)
{
    std::string buffer;
    if (argc > 1) {
        std::ifstream t(argv[1]);
        std::string line;
        // Test the stream state inside the loop condition, so a failed
        // read does not append a spurious trailing newline to the buffer.
        while (std::getline(t, line)) {
            buffer += line + '\n';
        }
    }
    cout << buffer;
    return 0;
}
Update:
Input to UnicodeString should be a char*; the function GetFileIntoCharPointer below returns one.
Note that only the most rudimentary error checking is implemented below!
#include <iostream>
#include <cstdio>
using namespace std;
char *GetFileIntoCharPointer(char *pFile, long &lRet)
{
    FILE *fp = fopen(pFile, "rb");
    if (fp == NULL) return NULL;

    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    char *pData = new char[size + 1];
    lRet = fread(pData, sizeof(char), size, fp);
    pData[lRet] = '\0';   // null-terminate so the buffer can be printed safely
    fclose(fp);

    return pData;
}
int main(int argc, char **argv)
{
    if (argc < 2) return 1;   // a file name argument is required

    long Len;
    char *Data = GetFileIntoCharPointer(argv[1], Len);

    if (Data != NULL)
    {
        std::cout << Data << std::endl;
        delete [] Data;
    }
    return 0;
}