Displaying or Redirecting a Shell's Job Control Messages

You can get zsh to print the segmentation fault message from the job if you start it as a background job and then immediately bring it to the foreground.

"$exepath" "$@" &
fg

This will cause zsh to print out messages for signals on the job started for $exepath.
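For context, a crun helper built on this trick might look something like the following sketch (the function name crun matches the transcript below, but the compiler invocation and temp-file handling here are assumptions):

crun() {
    # Hypothetical sketch: compile the C file, then run the binary as a
    # background job and immediately foreground it, so that zsh's job
    # control reports fatal signals such as a segmentation fault.
    local exepath
    exepath=$(mktemp) || return 1
    gcc -o "$exepath" "$1" || return 1
    shift
    "$exepath" "$@" &
    fg
}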

The downside is that you will get a little bit more than you bargained for:

% crun faulty.c
faulty.c:1:5: warning: ‘main’ is usually a function [-Wmain]
int main=0;
^~~~
[2] 2080
[2] - running "$exepath" "$@"
zsh: segmentation fault (core dumped) "$exepath" "$@"

But as shown, you will get the segfault messages printed in the terminal.

Because the messages are printed by the interactive shell, not by the failing process, the job messages won't be captured if you try to redirect the process's stdout or stderr.

So on the one hand, if you redirect useful output from your running process somewhere else, you don't need to worry about the job messages getting in the way. On the other hand, you can't capture the segfault message by redirecting stderr.

Here's a demonstration of this effect, with blank lines added between commands for clarity:

% crun good.c > test
[2] 2071
[2] - running "$exepath" "$@"

% cat test
Hello, world!

% crun faulty.c 2>test
[2] 2092
[2] - running "$exepath" "$@"
zsh: segmentation fault (core dumped) "$exepath" "$@"

% cat test
faulty.c:1:5: warning: ‘main’ is usually a function [-Wmain]
int main=0;
^~~~

For more information, see zsh's documentation on jobs and signals. You can also poke around the C file where the magic happens.

Is there a way to make bash job control quiet?

You can use parentheses to run a background command in a subshell, and that will silence the job control messages. For example:

(sleep 10 & )
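Compare the two forms interactively (job number and PID are illustrative):

$ sleep 10 &
[1] 12345
$ (sleep 10 &)
$

The background job belongs to the subshell, so the interactive shell has no job of its own to report.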

Why can't I use job control in a bash script?

What he meant is that job control is turned off by default in non-interactive mode (i.e., in a script).

From the bash man page:

JOB CONTROL
    Job control refers to the ability to selectively stop (suspend) the
    execution of processes and continue (resume) their execution at a later
    point. A user typically employs this facility via an interactive
    interface supplied jointly by the system’s terminal driver and bash.

and

set [--abefhkmnptuvxBCHP] [-o option] [arg ...]
    ...
    -m      Monitor mode. Job control is enabled. This option is on by
            default for interactive shells on systems that support it (see
            JOB CONTROL above). Background processes run in a separate
            process group and a line containing their exit status is
            printed upon their completion.

When he said "is stupid" he meant that not only:

  1. is job control meant mostly for facilitating interactive control (whereas a script can work directly with the PIDs), but also
  2. as he put it in his original answer, it "... relies on the fact that you didn't start any other jobs previously in the script, which is a bad assumption to make." Which is quite correct.

UPDATE

In answer to your comment: yes, nobody will stop you from using job control in your bash script -- there is no hard case for forcefully disabling set -m (i.e. yes, job control from the script will work if you want it to.) Remember that in the end, especially in scripting, there is always more than one way to skin a cat, but some ways are more portable, more reliable, make it simpler to handle error cases, parse the output, etc.
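A minimal sketch of that (sleep stands in for real work):

#!/bin/bash
set -m        # job control is off by default in scripts; turn it on

sleep 60 &    # now runs in its own process group
jobs          # job-control builtins work: [1]+  Running  sleep 60 &
kill %1       # and so do job specs
wait          # reap it; with -m on, bash prints its status line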

Your particular circumstances may or may not warrant a way different from what lhunath (and other users) deem "best practices".

Background Process stdout Displays After Next Command

The message is printed by bash itself, not by the kill command. So you can't suppress it by redirecting any output in your script.

What you can do is to disable the monitor mode of job control by executing

set +m

Quote from the bash man page:

-m      Monitor mode. Job control is enabled. This option is on by default
        for interactive shells on systems that support it (see JOB CONTROL
        above). All processes run in a separate process group. When a
        background job completes, the shell prints a line containing its
        exit status.

With +m, monitor mode is disabled.
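For example, a script can kill its own background process without the shell announcing it (a minimal sketch; sleep stands in for the real process, and the redirection on wait is an extra belt-and-braces step):

set +m                    # no asynchronous job status messages
sleep 60 &                # stand-in background process
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null   # reap it; 2>/dev/null hides any remaining status line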

Redirecting the output of my bash script to a log file

Basic Techniques

There are multiple ways to achieve it:

exec >outputfile.txt
command1
command2
command3
command4

This changes the standard output of the entire script to the log file.
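If you need the original standard output back later in the script, one approach (a small sketch using a spare file descriptor) is to save it first:

exec 3>&1               # save the original stdout on file descriptor 3
exec > outputfile.txt   # stdout now goes to the log file
command1
command2
exec 1>&3 3>&-          # restore stdout and close the spare descriptor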

My generally preferred way to do it is:

{
command1
command2
command3
command4
} > outputfile.txt

This does I/O redirection for all the commands within the scope of the braces. Be careful, you have to treat both { and } as if they were commands; they cannot appear just anywhere. This does not create a sub-shell — which is the main reason I favour it.

You can replace the braces with parentheses:

(
command1
command2
command3
command4
) > outputfile.txt

You can be more cavalier about the placement of the parentheses than the braces, so:

(command1
command2
command3
command4) > outputfile.txt

would also work (but try that with braces and the shell will fail to find a command named {command1, unless you happen to have an executable file of that name lying around). This creates a sub-shell. Any variable assignments done within the parentheses will not be seen/accessible outside the parentheses. This can be a show-stopper sometimes (but not always). The incremental cost of a sub-shell is pretty much negligible; it exists, but you're likely to be hard-pressed to measure it.
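A quick illustration of the variable-scoping point:

count=0
(
count=42                 # assigned inside the sub-shell
) > outputfile.txt
echo "$count"            # prints 0: the sub-shell's assignment has vanished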

There's also the long-hand way:

command1 >>outputfile.txt
command2 >>outputfile.txt
command3 >>outputfile.txt
command4 >>outputfile.txt

If you wish to demonstrate that you're a neophyte shell programmer, by all means use this technique. If you wish to be considered as a more advanced shell programmer, do not.

Note that all the commands above redirect just standard output to the named file, leaving standard error going to the original destination (usually the terminal). If you want to get standard error to go to the same file, simply add 2>&1 (meaning, send file descriptor 2, standard error, to the same place as file descriptor 1, standard output) after the redirection for standard output.
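For example, with the brace technique, both streams end up in the file:

{
command1
command2
} > outputfile.txt 2>&1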


Applying the Techniques

Addressing questions raised in the comments.

By using the 2>&1 >> $_logfile (as per my answer below) I got what I need, but now I also have my echo ... in the output file. Is there a way to print them on screen as well as to the file at the same time?

Ah, so you don't want everything to go to the file... that complicates things a bit. There are ways, of course; not necessarily straightforward. I'd probably use exec 3>&1; to set file descriptor 3 going to the same place as 1 (standard output — or use 3>&2 if I wanted the echoes to go to standard error) before the main redirection. Then I'd create a function echoecho() { echo "$*"; echo "$*" >&3; } and I'd use echoecho Whatever to do the echoing. When you're done with file descriptor 3 (if you're not about to exit, at which point the system will close it anyway), you can close it with exec 3>&-.

When you refer to exec that's supposed to be the command I'm executing in the individual script file I created and that I will execute in between the cycle right? (just have a look at my answer below to see how I have evolved the script). For the rest of the suggestion I completely lost you.

No; I'm referring to the Bash (shell) built-in command exec. It can be used to do I/O redirection permanently (for the rest of the script), or to replace the current script with a new program, as in exec ls -l — which is probably a bad example.

I guess I'm now even more confused than when I started :) Would it be possible to create a small example … so I can understand it better?

The disadvantage of comments is that they're hard to format and limited in size. Those limitations are also benefits, but there comes a time when the answer has to be extended. Said time has arrived.

For the sake of discussion, I'm going to restrict myself to 2 commands instead of 4 as in the question (but this doesn't lose any generality). Those commands will be cmd1 and cmd2, and in fact those are two different names for the same script:

#!/bin/bash
for i in {01..10}
do
    echo "$0: stdout $i - $*"
    echo "$0: stderr $i - error message" >&2
done

As you can see, this script writes messages to both standard output and standard error. For example:

$ ./cmd1 trying to work
./cmd1: stdout 1 - trying to work
./cmd1: stderr 1 - error message
./cmd1: stdout 2 - trying to work
./cmd1: stderr 2 - error message

./cmd1: stdout 9 - trying to work
./cmd1: stderr 9 - error message
./cmd1: stdout 10 - trying to work
./cmd1: stderr 10 - error message
$

Now, from the answer posted by Andrea Moro we find:

#!/bin/bash

_logfile="output.txt"

# Delete the output file if it exists
if [ -f $_logfile ]
then
    rm $_logfile
fi

for file in ./shell/*
do
    $file 2>&1 >> $_logfile
done

I don't like the variable name starting with _; there's no need for it that I can see. This redirects errors to where standard output is (currently) going, and then redirects standard output to the log file. So, if the sub-directory shell contains cmd1 and cmd2, the output is:

$ bash ex1.sh
./shell/cmd1: stderr 1 - error message
./shell/cmd1: stderr 2 - error message

./shell/cmd1: stderr 9 - error message
./shell/cmd1: stderr 10 - error message
./shell/cmd2: stderr 1 - error message
./shell/cmd2: stderr 2 - error message

./shell/cmd2: stderr 9 - error message
./shell/cmd2: stderr 10 - error message
$

To get both standard output and standard error to the file, you have to use one of:

2>>$_logfile >>$_logfile
>>$_logfile 2>&1

I/O redirections are generally processed from left to right, except that piping controls where standard output (and standard error if you use |&) goes to before the I/O redirections are handled.
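For instance, these two pipelines are equivalent (|& is shorthand for 2>&1 | and needs bash 4 or later; tee -a appends instead of clobbering):

$file trying to work 2>&1 | tee -a $_logfile
$file trying to work |& tee -a $_logfile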

Adapting this script to generate information to standard output as well as logging to the log file, there are a variety of ways of working. I'm assuming the shebang line is #!/bin/bash from here on.

logfile="output.txt"
rm -f $logfile

for file in ./cmd1 ./cmd2
do
$file trying to work >> $logfile 2>&1
done

This removes the log file if it exists (but less verbosely than before). Everything on standard output and standard error goes to the log file. We could also write:

logfile="output.txt"
{
for file in ./cmd1 ./cmd2
do
$file trying to work
done
} >$logfile 2>&1

Or the code could use parentheses in place of the braces with only minor differences in functionality that wouldn't affect this script materially. Or, indeed, in this case, we could use:

logfile="output.txt"
for file in ./cmd1 ./cmd2
do
$file trying to work
done >$logfile 2>&1

And it is not clear that the variable is necessary, but we'll leave it in place. Note that both of these use 'clobbering' I/O redirection because they create the log file just once, which in turn means there is no need to remove it first (though there might be reasons to do so — say, another user ran the command beforehand and left a non-writable file behind; but then you should probably be using a date-stamped log file anyway, so that isn't a problem after all).

Clearly, if we want to echo something to the original standard output as well as to the log file, we have to do something different as both standard error and standard output are going to the log file.

One option is:

logfile="output.txt"
rm -f $logfile

for file in ./cmd1 ./cmd2
do
echo $file $(date +'%Y-%m-%d %H:%M:%S')
$file trying to work >> $logfile 2>&1
done

Another option is:

exec 3>&1

logfile="output.txt"
for file in ./cmd1 ./cmd2
do
    echo $file $(date +'%Y-%m-%d %H:%M:%S') >&3
    $file trying to work
done >$logfile 2>&1

exec 3>&-

Now file descriptor 3 goes to the same place as the original standard output. Inside the loop, both standard output and standard error go to the log file, but the echo … >&3 sends the standard output of echo to file descriptor 3.

If you want the same echoed output to go to both the redirected standard output and the original standard output, then you can use:

exec 3>&1
echoecho()
{
    echo "$*"
    echo "$*" >&3
}

logfile="output.txt"
for file in ./cmd1 ./cmd2
do
    echoecho $file $(date +'%Y-%m-%d %H:%M:%S')
    $file trying to work
done >$logfile 2>&1

exec 3>&-

The output from this was:

$ bash ex3.sh
./cmd1 2014-01-07 14:57:13
./cmd2 2014-01-07 14:57:13
$ cat output.txt
./cmd1 2014-01-07 14:57:13
./cmd1: stdout 1 - trying to work
./cmd1: stderr 1 - error message
./cmd1: stdout 2 - trying to work
./cmd1: stderr 2 - error message

./cmd1: stdout 9 - trying to work
./cmd1: stderr 9 - error message
./cmd1: stdout 10 - trying to work
./cmd1: stderr 10 - error message
./cmd2 2014-01-07 14:57:13
./cmd2: stdout 1 - trying to work
./cmd2: stderr 1 - error message
./cmd2: stdout 2 - trying to work
./cmd2: stderr 2 - error message

./cmd2: stdout 9 - trying to work
./cmd2: stderr 9 - error message
./cmd2: stdout 10 - trying to work
./cmd2: stderr 10 - error message
$

This is roughly what I was saying in my comments, written out in full.

Redirecting command output to variable as well as console in bash not working

You have an unnecessary redirect on that tee command. Use:

VAR1=$(ps -u "${USER}" | awk 'NR>1 {print $NF}' | tee /proc/$$/fd/1)

The way tee works is that it copies its input to its output, and also to any files whose names you give as arguments. The extra redirection just messes up its pass-through behavior.
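In its simplest form:

$ echo hello | tee copy.txt
hello
$ cat copy.txt
hello

In the command above, /proc/$$/fd/1 (a Linux-specific path) names the invoking shell's standard output, so the command substitution captures tee's pass-through copy while the file copy still reaches the terminal.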

Something else you could do - since we're not talking about some long-running command here - is first set the variable, then print its value:

VAR1=$(ps -u "${USER}" | awk 'NR>1 {print $NF}' )
echo "$VAR1"

... much simpler :-)

What does 2>&1 mean?

File descriptor 1 is the standard output (stdout).

File descriptor 2 is the standard error (stderr).

At first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as "redirect stderr to a file named 1".

& indicates that what follows is a file descriptor, not a filename. Thus, we use 2>&1. Consider >& to be a redirect-merger operator.
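Order also matters, because redirections are processed from left to right (out.txt is just an illustrative name):

ls /nonexistent > out.txt 2>&1    # both streams end up in out.txt
ls /nonexistent 2>&1 > out.txt    # stderr goes to the terminal; out.txt gets stdout only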

ZSH Function Behavior? (Running C++)

As mentioned in John's link (Displaying or redirecting a shell's job control messages), it seems that adding & fg suffices.

It only displays a number 2 when using while read line to redirect the content of one file to another

[] has a special meaning for the shell: it means "a single character taken from any of the characters between the brackets". So when you run

echo [H323H]

the shell looks for a file named H, or 3, or 2. If at least one file matches, [H323H] is replaced with all the matching file names in the output; otherwise it's reproduced as is.

source: https://unix.stackexchange.com/a/259385
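A quick demonstration in an otherwise empty directory:

$ echo [H323H]
[H323H]
$ touch H 3
$ echo [H323H]
3 H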

Using quotes around $line would solve your problem without the need to check for files matching those characters (which would make the script not very robust).

#!/bin/bash

tempconf="/tmp/test.file"
while read -r line
do
    echo "$line"
done < test.conf > "$tempconf"

