Redirecting Stdout & Stderr from Background Process

How to redirect the output of an application running in the background to /dev/null

You use:

yourcommand  > /dev/null 2>&1

If it should run in the background, add an &:

yourcommand > /dev/null 2>&1 &

> /dev/null 2>&1 means: redirect stdout to /dev/null, AND redirect stderr to wherever stdout points at that moment (so stderr also ends up in /dev/null).

If you want stderr to appear on the console and only stdout to go to /dev/null, you can use:

yourcommand 2>&1 > /dev/null

In this case stderr is first redirected to the current stdout (e.g. your console), and only afterwards is the original stdout redirected to /dev/null.
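
The order matters because redirections are processed left to right. A minimal sketch to see the difference (yourcommand here is a stand-in shell function that writes one line to each stream):

yourcommand() { echo "to stdout"; echo "to stderr" >&2; }

yourcommand > /dev/null 2>&1   # prints nothing: both streams end up in /dev/null
yourcommand 2>&1 > /dev/null   # prints "to stderr": stderr was copied to the console
                               # before stdout was pointed at /dev/null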

If the program should not be terminated when you log out (i.e. it should survive the hangup signal), you can use:

nohup yourcommand &

Without any redirection, all output lands in nohup.out
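
The two combine naturally; a common pattern (the log file name is just an example):

nohup yourcommand > yourcommand.log 2>&1 &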

Bash - redirect stdout and stderr to files with background process


Short Answer:

Yes: add -u to the python command and it should work.

python -u run.py  >> "$stdout_log" 2>> "$stderr_log" &

Long Answer:

It's a buffering issue (from man python):

   -u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it
          matters, also put stdin, stdout and stderr in binary mode. Note that there is
          internal buffering in xreadlines(), readlines() and file-object iterators
          ("for line in sys.stdin") which is not influenced by this option. To work
          around this, you will want to use "sys.stdin.readline()" inside a "while 1:"
          loop.
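
If you cannot (or prefer not to) change the command line, the PYTHONUNBUFFERED environment variable has the same effect as -u; a sketch using the same hypothetical run.py and log variables as above:

PYTHONUNBUFFERED=1 python run.py >> "$stdout_log" 2>> "$stderr_log" &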

How to redirect stderr and stdout into the /var/log directory from a background process?

sudo vi /etc/systemd/system/ss.service

[Unit]
Description=ss

[Service]
TimeoutStartSec=0
ExecStart=/bin/bash -c '/python sslocal -c /etc/ss.json > /var/log/ss.log 2>&1'

[Install]
WantedBy=multi-user.target

To start it after editing the unit file:

sudo systemctl daemon-reload
sudo systemctl enable ss.service
sudo systemctl start ss.service
sudo systemctl status ss -l

1. ss runs as a service and starts automatically on reboot.

2. ss can write its log to /var/log/ss.log without permission problems.
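
A few quick checks once it is running (standard systemctl/journalctl commands, using the unit name and log path from above):

sudo systemctl is-enabled ss.service    # prints "enabled" if it will start on boot
sudo tail -f /var/log/ss.log            # follow the redirected stdout/stderr
sudo journalctl -u ss.service -e        # systemd's own messages about the unit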

What happens to a background process's stdout and stderr when I log out?

This is the crucial loop where tee sends output to stdout and the opened files:

  while (1)
    {
      bytes_read = read (0, buffer, sizeof buffer);
      if (bytes_read < 0 && errno == EINTR)
        continue;
      if (bytes_read <= 0)
        break;

      /* Write to all NFILES + 1 descriptors.
         Standard output is the first one.  */
      for (i = 0; i <= nfiles; i++)
        if (descriptors[i]
            && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
          {
            error (0, errno, "%s", files[i]);
            descriptors[i] = NULL;
            ok = false;
          }
    }

Pay closer attention to this part:

        if (descriptors[i]
            && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
          {
            error (0, errno, "%s", files[i]);
            descriptors[i] = NULL;
            ok = false;
          }

It shows that when a write error occurs, tee does not exit; it just unsets that stream (descriptors[i] = NULL) and keeps reading until EOF, or until a read error other than EINTR occurs on its input.

The date command, or anything else whose output goes to the pipe connected to tee, is therefore not terminated, since tee keeps reading its data; the data simply no longer goes anywhere besides the file foo. Even if no file argument had been provided, tee would still keep reading.

This is what /proc/**/fd looks like on tee when disconnected from a terminal:

0 -> pipe:[431978]
1 -> /dev/pts/2 (deleted)
2 -> /dev/pts/2 (deleted)

And this one's from the process that connects to its pipe:

0 -> /dev/pts/2 (deleted)
1 -> pipe:[431978]
2 -> /dev/pts/2 (deleted)

You can see that tee's stdout and stderr point at a terminal device that no longer exists, yet it is still running.
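
You can reproduce that inspection yourself; a minimal sketch (assuming Linux procfs, and that the newest tee process is the one you just started):

# start a writer piped into tee, e.g. in a session you will later log out of
( while true; do date; sleep 1; done | tee foo ) &

# later, possibly from another session, look at tee's descriptors
tee_pid=$(pgrep -n -x tee)      # -n: newest matching process
ls -l "/proc/$tee_pid/fd"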

Separately redirecting and recombining stderr/stdout without losing ordering

Preserving perfect order while performing separate redirections is not even theoretically possible without some ugly hackery. Ordering is only preserved in writes (in O_APPEND mode) directly to the same file; as soon as you put something like tee in one process but not the other, ordering guarantees go out the window and can't be retrieved without keeping information about which syscalls were invoked in what order.

So, what would that hackery look like? It might look something like this:

# eat our initialization time *before* we start the background process
sudo sysdig-probe-loader

# now, start monitoring syscalls made by children of this shell that write to fd 1 or 2
# ...funnel content into our logs.log file
sudo sysdig -s 32768 -b -p '%evt.buffer' \
"proc.apid=$$ and evt.type=write and (fd.num=1 or fd.num=2)" \
> >(base64 -i -d >logs.log) \
& sysdig_pid=$!

# Run your-program, with stderr going both to console and to errors.log
./your-program >/dev/null 2> >(tee errors.log)

That said, this remains ugly hackery: It only catches writes direct to FDs 1 and 2, and doesn't track any further redirections that may take place. (This could be improved by performing the writes to FIFOs, and using sysdig to track writes to those FIFOs; that way fdup() and similar operations would work as-expected; but the above suffices to prove the concept).


Making Separate Handling Explicit

Here we demonstrate how to use this to colorize only stderr, and leave stdout alone -- by telling sysdig to generate a stream of JSON as output, and then iterating over that:

exec {colorizer_fd}> >(
  jq --unbuffered --arg startColor "$(tput setaf 1)" --arg endColor "$(tput sgr0)" -r '
    if .["fd.filename"] == "stdout" then
      ("STDOUT: " + .["evt.buffer"])
    else
      ("STDERR: " + $startColor + .["evt.buffer"] + $endColor)
    end
  '
)

sudo sysdig -s 32768 -j -p '%fd.filename %evt.buffer' \
"proc.apid=$$ and evt.type=write and proc.name != jq and (fd.num=1 or fd.num=2)" \
>&$colorizer_fd \
& sysdig_pid=$!

# Run your-program, with stdout and stderr going to two separately-named destinations
./your-program >stdout 2>stderr

Because we're keying off the output filenames (stdout and stderr), these need to be constant for the above code to work -- any temporary directory desired can be used.


Obviously, you shouldn't actually do any of this. Update your program to support whatever logging infrastructure is available in its native language (Log4j in Java, the Python logging module, etc) to allow its logging to be configured explicitly.

Bash: Silence a process put into background

One option is:

  1. Bringing the job to the foreground (see Job control).
  2. Redirecting output (see below).
  3. Sending it to the background again.

How to redirect output of an already running process

https://superuser.com/questions/473240/redirect-stdout-while-a-process-is-running-what-is-that-process-sending-to-d

Redirect STDERR / STDOUT of a process AFTER it's been started, using command line?

https://etbe.coker.com.au/2008/02/27/redirecting-output-from-a-running-process/
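
Both links describe the same underlying trick: attach a debugger to the running process, close the file descriptor you want to move, and reopen it on a new file. A minimal sketch, assuming gdb is installed, you are allowed to ptrace the target, $pid is the process in question, and fd 0 is still open so the new descriptor lands at 1 (the log path is just an example):

gdb -p "$pid" --batch \
    -ex 'call (int)close(1)' \
    -ex 'call (int)open("/tmp/new-stdout.log", 1089, 0644)' \
    -ex 'detach'
# /tmp/new-stdout.log is only an example destination.
# 1089 = O_WRONLY|O_CREAT|O_APPEND on Linux; open() reuses the lowest free fd, i.e. 1.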

Background Process stdout Displays After Next Command

The message is printed by bash itself, not by the kill command. So you can't suppress it by redirecting any output in your script.

What you can do is to disable the monitor mode of job control by executing

set +m

Quote from the bash man page:

   -m     Monitor mode. Job control is enabled. This option is on by default for
          interactive shells on systems that support it (see JOB CONTROL above).
          All processes run in a separate process group. When a background job
          completes, the shell prints a line containing its exit status.

With a +m the monitor mode is disabled.
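
In a script, a common pattern for killing a background job without the shell's termination notice looks roughly like this (a sketch; sleep stands in for the real job, and redirecting wait's stderr hides the notice the shell prints when it reports the job's status):

#!/usr/bin/env bash
set +m                      # make sure monitor mode is off

sleep 300 &                 # stand-in for the real background job
pid=$!

kill "$pid"
wait "$pid" 2>/dev/null     # reap the job quietly; without the redirection bash
                            # may print a "Terminated" line here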

Bash won't redirect output to a file and run in the background

Thanks @KamilCuk for the help!

Adding a -u option fixed the issue:

nohup python3 -u GUI.py > GUI.log 2>&1
nohup python3 -u main.py > main.log 2>&1

But why did this work?

Because by default, when stdout/stderr are redirected to a file, the output is block-buffered and only gets flushed once the buffer fills (typically a few kilobytes) or the program exits. The -u option disables Python's buffering, so output is written to the destination immediately.

NOTE: this question is actually a duplicate of nohup-is-not-writing-log-to-output-file


