Redirecting Output of Bash for Loop

Redirecting output of bash for loop

Remove the semicolon after done, so the redirection applies to the loop:

for i in `seq 2`; do echo "$i"; done > out.dat

SUGGESTIONS

Also, as suggested by Fredrik Pihl, try not to use external binaries when they are not needed, or at least when it's practical to avoid them:

for i in {1..2}; do echo "$i"; done > out.dat
for ((i = 1; i <= 2; ++i )); do echo "$i"; done > out.dat
for i in 1 2; do echo "$i"; done > out.dat

Also, be careful with unquoted expansions in the word list, as they may undergo pathname expansion.

for a in $(echo '*'); do echo "$a"; done

Would show your files instead of just a literal *.

$() is also recommended as a clearer syntax for command substitution in Bash and POSIX shells than backticks (`), and it supports nesting.
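
For example, nesting with backticks requires escaping the inner pair, while $() nests cleanly:

echo "`basename \`pwd\``"     # backticks: the inner pair must be escaped
echo "$(basename "$(pwd)")"   # $(): nests and quotes naturally

Both print the name of the current directory.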

Cleaner solutions for reading a command's output into variables are

while read var; do
    ...
done < <(do something)

And

read ... < <(do something)  ## Could be done in a loop or with readarray.

for a in "${array[@]}"; do
    :
done
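
For instance, a minimal sketch that fills the array with readarray (bash 4+) and then loops over it; the printf here is just a stand-in for any command that produces lines:

readarray -t array < <(printf '%s\n' one two three)
for a in "${array[@]}"; do
    echo "$a"
done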

Using printf can also be a simpler alternative for the intended task:

printf '%s\n' {1..2} > out.dat

bash stdout redirection in a for loop

When you start my_tool, three file descriptors are normally available to the tool:

  • STDIN
  • STDOUT
  • STDERR

STDIN is used for input and is therefore irrelevant to this question. STDOUT is used for standard output; it is file descriptor 1. If you do

ls 1> /dev/null

the STDOUT of ls is written to /dev/null. If you do not add the 1, as in ls > /dev/null, it is assumed that you mean STDOUT.

STDERR is used for error messages, in the broadest sense of the word. Its file descriptor number is 2.

Using ls instead of your my_command: ls > file will put the listing in the file. ls /non_existing_dir > file will redirect the STDOUT of ls to the file, but there is no output on STDOUT, and because STDERR is not redirected, the error message will be sent to the terminal.

So, to conclude,

ls . /non_existing_dir 2>stderr >stdout

will put the directory listing of . in the file stdout and the error for the non-existing directory in stderr.

With 2>&1 you redirect the output of file descriptor 2 (STDERR) to file descriptor 1 (STDOUT).
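
Note that redirections are processed left to right, so the order matters. A small illustration, with my_command as a placeholder:

my_command > out.log 2>&1    # both STDOUT and STDERR end up in out.log
my_command 2>&1 > out.log    # STDERR still goes to the terminal; only STDOUT lands in out.log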

To complicate things a bit, you can add other file descriptor numbers. For example:

exec 3>file

will put the output of file descriptor 3 (which is newly created) in file. And

ls 1>&3

will then redirect the output of file descriptor 1 to file descriptor 3, effectively putting the output of ls in file.
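
Putting those two steps together, and closing the descriptor when done:

exec 3>file    # open file descriptor 3, writing to file
ls 1>&3        # the STDOUT of ls goes to file descriptor 3, i.e. into file
exec 3>&-      # close file descriptor 3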

Output for loop to a file

You are using > redirection inside the loop, which wipes out the existing contents of the file and replaces them with the command's output. So this wipes out the previous contents 10 times, replacing them with one line each time.

Without the >, or with >/dev/tty, it goes to your display, where > cannot wipe anything out so you see all ten copies.

You could use >>, which will still open the file ten times, but will append (not wipe out previous contents) each time. That's not terribly efficient though, and it retains data from a previous run (which may or may not be what you want).

Or, you can redirect the entire loop once:

for ... do cmd; done >file

which runs the entire loop with output redirected, creating (or, with >>, opening for append) only once.
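
For example:

for i in 1 2 3; do echo "$i"; done > out.dat     # truncates out.dat once, then writes three lines
for i in 4 5 6; do echo "$i"; done >> out.dat    # appends, so out.dat now holds six lines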

Bash redirect stderr to file after for loop

To redirect stderr to stdout, use:

command 2>&1

Demonstration:

ls unexisting-path 2>&1 | cat > /dev/null

Here, ls will produce an error output. This output is redirected to stdout, so it gets caught by the pipe | and sent to cat, which outputs it to stdout too. To prove it, > /dev/null is added, and as expected, nothing is displayed.
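
In Bash 4 and later, |& is shorthand for 2>&1 |, so the same demonstration can be written as:

ls unexisting-path |& cat > /dev/null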

How can I loop over the output of a shell command?

Never use a for loop over the results of a shell command if you want to process it line by line, unless you change the value of the internal field separator $IFS to a newline. Otherwise the lines will be subject to word splitting, which leads to the results you are seeing. For example, if you have a file like this:

foo bar
hello world

The following for loop

for i in $(cat file); do
    echo "$i"
done

gives you:

foo
bar
hello
world

Even if you set IFS=$'\n', the lines may still be subject to filename expansion.


I recommend using while + read instead, because read reads input line by line.
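
The canonical line-by-line loop over a file looks like this; IFS= preserves leading and trailing whitespace, and -r stops read from interpreting backslashes:

while IFS= read -r line; do
    printf '%s\n' "$line"
done < file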

Furthermore, I would use pgrep if you are searching for PIDs belonging to a certain binary. However, since python might appear under different binary names, like python2.7 or python3.4, I suggest passing -f to pgrep, which makes it match against the whole command line rather than just binaries called python. But this will also find processes that were started like cat foo.py. You have been warned! You can refine the regex passed to pgrep as you wish.

Example:

pgrep -f python | while read -r pid ; do
    echo "$pid"
done

or if you also want the process name:

pgrep -af python | while read -r line ; do
    echo "$line"
done

If you want the process name and the pid in separate variables:

pgrep -af python | while read -r pid cmd ; do
    echo "pid: $pid, cmd: $cmd"
done

You see, read offers a flexible and stable way to process the output of a command line-by-line.


Btw, if you prefer your ps .. | grep command line over pgrep, use the following loop:

ps -ewo pid,etime,cmd | grep python | grep -v grep | grep -v sh \
| while read -r pid etime cmd ; do
    echo "$pid $cmd $etime"
done

Note how I changed the order of etime and cmd, so that cmd, which can contain whitespace, can be read into a single variable. This works because read breaks the line into as many fields as there are variables; the remaining part of the line - possibly including whitespace - is assigned to the last variable specified on the command line.
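
A quick illustration of that last-variable behavior:

read -r first rest <<< "one two three"
echo "$first"    # one
echo "$rest"     # two three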

redirecting output in a nested for loop

You are overwriting the contents of hostname-test.txt each time, and when the last user is a system account or another user without a crontab, the file will be empty.

Meanwhile, you'll be getting the error output for crontab, "no crontab for ...", for all users since this is printed to stderr.

The fix depends on what you want to do. Here's a small fix that redirects the entire loop, along with error information:

servers=`cat hosts.txt`
for i in $servers; do
    echo $i
    users=`ssh $i cut -d ":" -f1 /etc/passwd`
    for n in $users; do
        echo "Crontab for $n:"
        ssh "$i" crontab -u $n -l
        echo
    done > "$i-test.txt" 2>&1
done

This would give you a file myhost-test.txt containing something like

Crontab for user:
0 * * * * foo
0 4 * * * bar

Crontab for mysql:
no crontab for mysql

(This ignores spacing, quoting and efficiency concerns in favor of sticking closely to your code.)

Improve bash script to echo redirect within loop

When the redirection is inside the loop the output file is opened and closed every single iteration. If you redirect the entire loop it's opened for the duration of the loop, which is much, much faster.

for ((i = 0; i < 1000000; ++i)); do
    echo "test statement redirected into file a million times"
done > /tmp/output.txt

(Note also the updated loop logic.)

If you want to make it even faster, you can simplify this to one or two commands that loop internally rather than looping in the shell script. yes will output a fixed string over and over. Combine it with head to control how many lines are printed.

yes 'test statement redirected into file a million times' | head -1000000 > /tmp/output.txt

Will mounting the tmp directory in RAM make it faster?

Maybe, but I wouldn't bother: that's a system-wide change for a local problem. And if you don't need the file on disk, it raises the question of whether you need to create the file at all. Why do you need a file with the same line repeated a million times? Maybe this is an XY problem and you don't need the file in the first place.

For instance, if you're creating it and immediately passing it to a single subsequent command, you could try using process substitution instead to avoid needing a temp file.

yes 'test statement redirected into file a million times' | head -1000000 > /tmp/output.txt
command /tmp/output.txt

becomes

command <(yes 'test statement redirected into file a million times' | head -1000000)
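
Process substitution works because <(...) expands to the path of a file descriptor that the command can open like an ordinary file name, typically something like /dev/fd/63 on Linux:

echo <(true)    # prints e.g. /dev/fd/63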

Redirection to file inside Bash loop + command works randomly

Reading from a file while simultaneously overwriting it is a recipe for disaster. But with cmd $f > $f bash empties (= truncates) the file before cmd even runs. cmd $f | tee $f may work for short files because cmd and tee run in parallel and the output of cmd is buffered. If you are lucky, your system executes cmd's read operations before tee's truncate operation. The bigger the file, the less chance you have of reading all data before tee truncates it.

If you want to see this race condition between cmd's read operation and tee's truncate operation yourself, have a look at

head -c1M /dev/zero > f; LC_ALL=C strace -f -e execve,openat,read,write bash -c 'cat f | tee f' >/dev/null; wc -c f

My tee implementation from GNU coreutils 8.32 truncates the file by calling openat(… "f" … O_TRUNC …). After that operation succeeds, cat's next read returns 0, signaling end of file.

In your case, there are three possible solutions:

  • Use a temporary file which you rename afterwards

    awk ... "$f" > "$f.tmp"; mv "$f.tmp" "$f"
  • Use GNU awk's inplace option

    gawk -i inplace ... "$f"
  • Use sponge from moreutils

    awk ... "$f" | sponge "$f"
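
For instance, a minimal sketch of the first approach inside a loop; the awk program and the glob are placeholders:

for f in *.csv; do
    awk -F, '{ print $1 }' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done

The && ensures the original file is only replaced when awk succeeds.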

