In Bash, How to Not Create The Redirect Output File Once The Command Fails

In Bash, how to not create the redirect output file once the command fails

Fixed output file names are bad news; don't use them.

You should probably redesign the processing so that you have a date-stamped file name. Failing that, use the mktemp command to create a temporary file, have the command write to that, and move the temporary to the 'final' name only when the command succeeds; the temporary can be cleaned up automatically on failure.

# Date-stamped final name; the temporary is created in the same
# directory so the later mv is a cheap rename, not a data copy.
outfile="./output-$(date +%Y-%m-%d.%H:%M:%S).txt"
tmpfile="$(mktemp ./gadget-maker.XXXXXXXX)"

# Clean up the temporary on exit (0) and on HUP, INT, QUIT, PIPE, TERM.
trap "rm -f '$tmpfile'; exit 1" 0 1 2 3 13 15

if cat a.txt > "$tmpfile"
then mv "$tmpfile" "$outfile"
else rm -f "$tmpfile"
fi

# Cancel the exit trap; the temporary is already gone.
trap 0

You can simplify the outfile to output.txt if you insist (but that reintroduces the fixed-name problem). You can use any prefix you like with the mktemp command. Note that by creating the temporary file in the current directory, where the final output file will be created too, you avoid copying data across file systems at the mv stage: the move costs only a rename() system call (or a link() plus an unlink() on systems old enough to lack rename()).
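A minimal sketch of the same idea when the final file lives somewhere other than the current directory (outdir is a placeholder path):

# Create the temporary next to the final file so the mv is a rename,
# not a copy across file systems.
outdir="/var/data"   # placeholder
tmpfile="$(mktemp "$outdir/gadget-maker.XXXXXXXX")"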

How to stop redirecting stdout when one program fails in the chain?

When bash executes program1 | program2 | program3 > file.out, it creates file.out before program1 is started. If you want to ensure that it is never created, you'll need to buffer the output, either in memory or in a temporary file. I find the cleanest syntax for that to be something like:

if v=$( set -o pipefail; program1 | program2 | program3 ); then
    echo "$v" > file.out
fi

or (this has different semantics: it ignores the exit status and instead checks whether any output was produced, which may be acceptable depending on your use case):

v=$( program1 | program2 | program3 )
test -n "$v" && echo "$v" > file.out
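One caveat with both command-substitution forms: $( ) holds the whole output in memory and strips trailing newlines (the echo then adds a single one back). If the output is large or must be preserved byte for byte, a temporary file is safer; a minimal sketch with the same placeholder names:

set -o pipefail
tmpfile=$(mktemp) || exit 1
if program1 | program2 | program3 > "$tmpfile"; then
    mv "$tmpfile" file.out      # publish only on success
else
    rm -f "$tmpfile"            # leave nothing behind on failure
fi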

If you're okay with creating the file and then deleting it, you can do

set -o pipefail
program1 | program2 | program3 > file.out || rm file.out
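If you would rather not leave pipefail enabled for the rest of the script, it can be confined to a subshell:

# pipefail applies only inside the ( ... ) subshell
( set -o pipefail
  program1 | program2 | program3 > file.out ) || rm -f file.out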

If you don't want to use pipefail (e.g., because you want the script to be portable), you can do something like:

# Each stage that fails writes a marker dot to fd 3; grep then checks
# whether any marker arrived and sets the exit status accordingly.
{
  { { program1 || echo . >&3; } |
    { program2 || echo . >&3; } |
    { program3 || echo . >&3; } } 3>&1 >&4 |
  if grep -q .; then exit 1; else exit 0; fi
} 4>&1
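For instance (same placeholder names), it can stand in for the pipefail version above; like that version, it creates file.out up front and removes it on failure:

{ { { program1 || echo . >&3; } |
    { program2 || echo . >&3; } |
    { program3 || echo . >&3; } } 3>&1 >&4 |
  if grep -q .; then exit 1; else exit 0; fi
} > file.out 4>&1 || rm file.out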

Bash error redirection creating file when there is no error

The shell must provide the command with a file that is ready for writing, which means it must open the file (creating it if necessary) before starting the command. One option is to use a separate process that reads from the command's standard error and only writes to the file if it gets some input.

mkfifo err
# Reader: copy lines from the fifo into ssh.error; the log file is
# only created if at least one line actually arrives.
while read -r line; do
    echo "$line" >> ssh.error
done < err & log_pid=$!
service ssh stop 2> err
wait "$log_pid"   # the reader exits on EOF once the writer closes the fifo
rm err

This is less efficient than simply removing the empty file.
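For comparison, the simpler approach it alludes to lets the shell create the file and then removes it if it stayed empty:

service ssh stop 2> ssh.error
[ -s ssh.error ] || rm -f ssh.error   # -s: true if the file exists and is non-empty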

How to redirect output to file, not creating it if it does not exist?

The following works for me:

/* Printer device file must not be created if it does not
   already exist. This is like `cat >', but the open() syscall
   is issued without O_CREAT. */

#include <fcntl.h>  /* open */
#include <stdio.h>  /* fprintf */
#include <unistd.h> /* read, write, close */

int main(int argc, char **argv)
{
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    /* O_WRONLY without O_CREAT: fails with ENOENT if the file is absent. */
    if ((fd = open(argv[1], O_WRONLY)) == -1) {
        fprintf(stderr, "open: %m\n"); /* %m is a glibc extension */
        return 1;
    }

    char buf[8192];
    ssize_t n, m;

    while ((n = read(0, buf, sizeof buf)) > 0) {
        m = write(fd, buf, n);
        if (m == -1) {
            fprintf(stderr, "write: %m\n");
            break;
        }
        if (m != n) {
            fprintf(stderr, "TODO: write remaining bytes in a loop\n");
            break;
        }
    }
    if (n == -1) fprintf(stderr, "read: %m\n");

    close(fd);
    return 0;
}
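A shell-only approximation, for comparison, is to test for the file before redirecting. Unlike the open() call above it is not atomic (the file could disappear between the test and the redirection), and /dev/usb/lp0 is just a placeholder path:

# Racy: the file could vanish between the test and the redirect.
if [ -e /dev/usb/lp0 ]; then
    cat > /dev/usb/lp0
fi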

How can I use a file in a command and redirect output to the same file without truncating it?

You cannot do that, because bash processes the redirections first and then executes the command: by the time grep looks at file_name, the shell has already truncated it to zero length.
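To make the failure mode concrete (pattern is a placeholder):

# DON'T do this: the shell truncates file_name before grep ever reads it
grep -v 'pattern' file_name > file_name

You can use a temporary file instead, though: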

#!/bin/sh
tmpfile=$(mktemp)
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "$tmpfile"
cat "$tmpfile" > file_name   # overwrite in place, keeping file_name's inode and permissions
rm -f "$tmpfile"

Note that mktemp, used here to create the temporary file, is not specified by POSIX, although it is widely available.
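Alternatively, if the moreutils package is available, its sponge utility soaks up all of its input before writing to the file, which avoids managing a temporary file by hand:

grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | sponge file_name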

redirect bash script output to log file excluding the menu

It is possible to duplicate the initial stdout to a different file descriptor (e.g., 3) and send the menu to that descriptor. After the setup (exec 3>&1), add >&3 to any command whose output should NOT go to the log file.

...

# Keep a handle to the original stdout on fd 3
exec 3>&1
# Capture stdout/stderr to the log file
exec > >(tee -a "${log_file}")
exec 2> >(tee -a "${log_file}" >&2)

...

function show_menu {
...
}

...
# Send the clear sequence and the menu to the terminal only
clear >&3
show_menu >&3

# Rest of the code
while [ "$opt" != '' ]
...
done
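Once the menu handling is finished, the saved descriptor can be closed (an optional addition, assuming nothing later writes to fd 3):

# Close the saved copy of the original stdout
exec 3>&-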

Redirect all output to file in Bash

That part is written to stderr; use 2> to redirect it. For example:

foo > stdout.txt 2> stderr.txt

or if you want both streams in the same file:

foo > allout.txt 2>&1

Note: this works in (ba)sh; check your shell's documentation for the proper syntax.
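Note also that the order of the duplications matters; a quick sketch (foo is a placeholder command):

foo > allout.txt 2>&1   # stdout and stderr both land in the file
foo 2>&1 > allout.txt   # stderr goes to the ORIGINAL stdout; only stdout is redirected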

redirect error output to function in bash?

It's not entirely clear what you mean (in a comment), but perhaps you are looking for something like:

logit(){
    printf "error occurred -- ";
    cat
} >> "$file"

exec 3>&1
{
    mkdir /tmp/pop.txt
    chown ...
    chmod ...
} 2>&1 1>&3 | logit

This routes the stdout of all the commands in the block to the original stdout of the script while directing the error streams of all of them to the logit function. Rather than simply printing error occurred -- once as a header (with the first line of input run onto it), it might be better to implement logit so that every line is tagged:

logit(){ sed 's/^/ERROR: /' >> "$file"; }

or maybe even add a timestamp:

logit(){ perl -ne 'printf "%s: ERROR: %s", scalar gmtime, $_'; } >> "$file"
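If perl is not available, bash itself can stamp each line (a sketch assuming bash 4.2 or later for the %(...)T printf format, which prints local time):

logit(){
    # Prefix each incoming line with a timestamp; -1 means "now"
    while IFS= read -r line; do
        printf '%(%FT%T)T: ERROR: %s\n' -1 "$line"
    done >> "$file"
}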

bash redirect output to file but result is incomplete

That is an odd problem; I've never seen it happen before. I am going to go out on a limb here and suggest this; see how it works:

sudo ./bin/sc --doctor 2>&1 | tee -a alloutput.txt

