Redirecting Stdout with Find -Exec and Without Creating New Shell

A simple solution would be to put a wrapper around your script:

#!/bin/sh

myscript "$1" > "$1.stdout"

Make it executable, call it myscript2, and invoke it with find:

find . -type f -exec myscript2 {} \;

Note that although most implementations of find allow you to do what you have done, technically the behavior of find is unspecified if you use {} more than once in the argument list of -exec.
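If you'd rather not keep a wrapper script on disk, the same per-file redirection can be sketched with an inline sh -c script. This is a self-contained demo: plain cat stands in for the hypothetical myscript from the question, and a temporary directory is used so it runs anywhere.

```shell
# cat stands in for "myscript"; each file's stdout goes to "<name>.stdout".
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/a.txt"
printf 'world\n' > "$tmp/b.txt"

# "_" fills $0 of the inline script; each filename arrives as "$1",
# so {} appears only once in the -exec argument list.
find "$tmp" -type f -name '*.txt' \
    -exec sh -c 'cat "$1" > "$1.stdout"' _ {} \;
```

Because the redirection happens inside the inline script, each file gets its own output file without spawning a wrapper process per result.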

Redirecting stdout with find -exec

Here's a simple way...

## WARNING: this only works for simple cases...
cat $(find ! -path ./total.txt -type f) > total.txt

Here's another way, which is more robust to various special cases, such as spaces in file names or a very large number of files...

find ! -path ./total.txt -type f -print0 | xargs -0 cat > total.txt
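As a quick check that the NUL-delimited pipeline really survives awkward names, here is a self-contained sketch using filenames that contain spaces (the paths and contents are made up for the demo):

```shell
tmp=$(mktemp -d)
printf 'one\n' > "$tmp/file one.txt"
printf 'two\n' > "$tmp/file two.txt"

# NUL-delimited names survive spaces (and even newlines) in filenames.
find "$tmp" ! -path "$tmp/total.txt" -type f -print0 \
    | xargs -0 cat > "$tmp/total.txt"

wc -l < "$tmp/total.txt"
```

The word-splitting variant with $(find ...) would have broken both filenames apart at the spaces.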

Redirect STDOUT and STDERR in exec()... without shell

Redirection via < and > is a shell feature, which is why it does not work in this usage. You are essentially calling /bin/ls and passing >/tmp/stdout as just another argument, which is easily visible when replacing the command by echo:

exec ('/bin/echo', '/etc', '>/tmp/stdout');

prints:

/etc >/tmp/stdout

Normally, your shell (/bin/sh) would have parsed the command, spotted the redirection attempts, opened the proper files, and also pruned the argument list going in to /bin/echo.

However, a program started with exec() (or system()) inherits the STDIN, STDOUT, and STDERR filehandles of its calling process. So the proper way to handle this is to:

  • close each special filehandle,
  • re-open them, pointing at your desired logfile, and
  • finally call exec() to start the program.

Rewriting your example code above, this works fine:

close STDOUT;
open (STDOUT, '>', '/tmp/stdout');
exec ('/bin/ls', '/etc');

...or, using the indirect-object syntax recommended by perldoc:

close STDOUT;
open (STDOUT, '>', '/tmp/stdout');
exec { '/bin/ls' } ('ls', '/etc');

(in fact, according to the documentation, this final syntax is the only reliable way to avoid instantiating a shell in Windows.)

Unix Find Passing Filename to Exec and In Output Redirect

This is very close to a dupe of How do I include a pipe | in my linux find -exec command?, but that question does not cover the case you are dealing with.

To get the filename, you can run a -exec sh -c loop

find path/to/images -name "*.png" -exec sh -c '
    for file; do
        base64 "$file" | tr -d "\n" > "${file}.base64.txt"
    done' _ {} +

Using find -exec with + passes the search results from find to the little script inside sh -c '..' in large batches rather than one at a time. The _ fills the $0 slot of that inline script, so the filenames collected by find become its positional parameters ($1, $2, ...), which the for file loop then iterates over.

An alternate version uses xargs, although note that -I {} forces one sh invocation per file, so it is more expensive than the batched loop above. This version separates filenames with the NUL character (-print0) and xargs reads them back delimiting on the same character; both are GNU extensions (also supported on BSD). Substituting {} directly into the sh -c script is unsafe for filenames containing quotes or other shell syntax, so pass the name as an argument instead:

find path/to/images -name "*.png" -print0 |
xargs -0 -I {} sh -c 'base64 "$1" | tr -d "\n" > "$1.base64.txt"' _ {}

After using `exec 1>file`, how can I stop this redirection of the STDOUT to file and restore the normal operation of STDOUT?


Q1

You have to prepare for the recovery before you do the initial exec:

exec 3>&1 1>file

To recover the original standard output later:

exec 1>&3 3>&-

The first exec copies the original file descriptor 1 (standard output) to file descriptor 3, then redirects standard output to the named file. The second exec copies file descriptor 3 to standard output again, and then closes file descriptor 3.
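The full round trip can be sketched in a self-contained script (the temporary file stands in for the question's named file):

```shell
tmp=$(mktemp)

exec 3>&1 1>"$tmp"   # save the original stdout on fd 3, redirect fd 1 to the file
echo "captured in file"
exec 1>&3 3>&-       # restore stdout from fd 3, then close fd 3

echo "back on the original stdout"
cat "$tmp"
```

The first echo lands in the file; the second appears on the original standard output again.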

Q2

This is a bit open ended. It can be described at a C code level or at the shell command line level.

exec 1>file

simply redirects the standard output (1) of the shell to the named file. File descriptor one now references the named file; any output written to standard output will go to the file. (Note that prompts in an interactive shell are written to standard error, not standard output.)

exec 1>&-

simply closes the standard output of the shell. Now there is no open file for standard output. Programs may get upset if they are run with no standard output.
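The effect of a closed standard output is easy to observe; in this sketch the close happens in a subshell so the surrounding shell keeps its own stdout intact:

```shell
# echo's write to the closed fd 1 fails, so the subshell exits non-zero.
( exec 1>&-; echo hi ) 2>/dev/null
status=$?
echo "write to closed stdout exited with status $status" >&2
```

This is the "programs may get upset" case: the write simply fails rather than going anywhere.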

Q3

If you close all three of standard input, standard output and standard error, an interactive shell will exit as soon as standard input is closed (it gets EOF when it reads the next command). A shell script will continue running, but the programs it runs may misbehave: by convention they are started with three open file channels — standard input, standard output, standard error — and when your shell runs them with no other I/O redirection, they do not get the channels they expect and all hell may break loose. Often the only symptom is that the command's exit status is not zero (success).

Send `exec()` output to another stream without redirecting stdout

If you're dead set on having a single process, then depending on how willing you are to dive into obscure C-level features of the CPython implementation, you might try looking into subinterpreters. Those are, as far as I know, the highest level of isolation CPython provides in a single process, and they allow things like separate sys.stdout objects for separate subinterpreters.

Redirecting stdout and stderr to variable within bash script

If you want to redirect for the entirety of your code, instead of using blocks, use the exec command. That is:

mkdir -p "/some/path/$1"                             # Creating job-id specific folder
stdout_file="/some/path/$1/timestamp.stdout"
stderr_file="/some/path/$1/timestamp.stderr"
exec >"$stdout_file" 2>"$stderr_file"

# ...all code below this point has stdout going to stdout_file, and stderr to stderr_file

How to redirect output to a file and stdout

The command you want is named tee:

foo | tee output.file

For example, if you only care about stdout:

ls -a | tee output.file

If you want to include stderr, do:

program [arguments...] 2>&1 | tee outfile

2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both streams are written to stdout; tee then copies that combined stream to the given output file as well.

Furthermore, if you want to append to the log file, use tee -a as:

program [arguments...] 2>&1 | tee -a outfile
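A self-contained sketch of the combined-and-appended case (the brace group stands in for your program, and the terminal copy is discarded only to keep the demo output tidy):

```shell
tmp=$(mktemp -d)

# Merge stdout and stderr, append both to a log file via tee -a.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee -a "$tmp/run.log" >/dev/null
{ echo "second run"; } 2>&1 | tee -a "$tmp/run.log" >/dev/null

cat "$tmp/run.log"
```

After the second invocation the log holds the output of both runs, because -a appends instead of truncating.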

How to redirect output of an entire shell script within the script itself?

Addressing the question as updated.

#...part of script without redirection...

{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...

#...residue of script without redirection...

The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)

You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.

The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
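A runnable sketch of the brace-group scoping described above (file names are placeholders chosen for the demo):

```shell
tmp=$(mktemp -d)

echo "before: not redirected"

{
    echo "body stdout"
    echo "body stderr" >&2
} > "$tmp/out.log" 2> "$tmp/err.log"

echo "after: not redirected"
```

Only the output produced inside the braces lands in the two log files; the lines before and after go to the script's normal stdout.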

How to redirect and append both standard output and standard error to a file with Bash


cmd >>file.txt 2>&1

Bash executes the redirects from left to right as follows:

  1. >>file.txt: Open file.txt in append mode and redirect stdout there.
  2. 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
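Because the redirects are processed left to right, the order matters. This self-contained sketch contrasts the correct order with the reversed one (the wrong-order group runs in a subshell with stdout discarded, purely to keep the demo quiet):

```shell
tmp=$(mktemp -d)

# Right order: open the file for append first, then point stderr at it.
{ echo out; echo err >&2; } >>"$tmp/both.log" 2>&1

# Wrong order: 2>&1 duplicates the *old* stdout before the file is
# opened, so the stderr line never reaches the file.
( { echo out2; echo err2 >&2; } 2>&1 >>"$tmp/only-stdout.log" ) >/dev/null
```

The first log contains both lines; the second contains only the stdout line.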

