fork() and STDOUT/STDERR to the console from child processes

Writes to a filehandle are NOT atomic for STDOUT and STDERR. There are special cases for things like FIFOs, but that's not your situation here.

When it says to re-open STDOUT, what that means is "create a new STDOUT instance". This new instance isn't the same as the one in the parent. It's how you can have multiple terminals open on your system without all their STDOUT going to the same place.

The pipe solution would connect the child to the parent via a pipe (like | in the shell); the parent then has to read from the pipe and multiplex the output itself, taking care not to interleave output read from the pipe with output destined for its own STDOUT. There's a fuller example and writeup of pipes elsewhere.

A snippet:

use IO::Handle;

# Two pipes, one for each direction: the child writes to PARENTWRITE
# and the parent reads from PARENTREAD; CHILDREAD/CHILDWRITE would
# carry messages the other way.
pipe(PARENTREAD, PARENTWRITE);
pipe(CHILDREAD,  CHILDWRITE);

PARENTWRITE->autoflush(1);
CHILDWRITE->autoflush(1);

if ($child = fork) {    # parent code
    chomp($result = <PARENTREAD>);
    print "Got a value of $result from child\n";
    waitpid($child, 0);
} else {                # child code
    print PARENTWRITE "FROM CHILD\n";
    exit;
}

See how the child doesn't write to STDOUT, but rather uses the pipe to send a message to the parent, which does the writing with its own STDOUT. Be sure to look at the full writeup, as I omitted things like closing unneeded file handles.

Redirecting STDOUT and STDERR for all child processes that will be started by the parent process

When you open a file handle, it uses the next available file descriptor. STDIN, STDOUT and STDERR are normally associated with fd 0, 1 and 2 respectively. If there are no other open handles in the process, the next handle created by open will use file descriptor 3.

If you associate fd 3 with STDOUT, many things will keep working. That's because Perl code usually deals with Perl file handles rather than file descriptors. For example, print LIST is effectively print { select() } LIST, which is the same as print STDOUT LIST by default. So your change mostly works within Perl.

However, when you execute a program, all that it gets are the file descriptors. It gets fd 0, 1 and 2. It might even get fd 3, but it doesn't care about that. It will output to fd 1.
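To see the same behaviour at the C level, here is a minimal sketch (the file name log.txt is purely illustrative): the process opens a file, which lands on fd 3, then execs echo. The exec'd program inherits fd 3 but ignores it and writes to fd 1, so "foo" appears on the terminal, not in the file.

#include <cstdio>
extern "C" {
#include <fcntl.h>
#include <unistd.h>
}

int main()
{
    // With fds 0-2 already taken, this open() lands on fd 3.
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    std::printf("log file is on fd %d\n", fd);  // typically prints 3
    std::fflush(stdout);                        // flush before exec replaces us

    // The exec'd program inherits fd 3 but doesn't care about it:
    // it writes its output to fd 1, i.e. the terminal, not log.txt.
    execlp("echo", "echo", "foo", (char *)NULL);
    return 1;  // only reached if execlp fails
}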


A simple solution is to remove local *STDOUT; local *STDERR;.

*STDOUT is a glob, a structure that contains *STDOUT{IO}, the handle in question.

By using local *STDOUT, you are replacing the glob with an empty one. The original one isn't destroyed (it will be restored when the local goes out of scope), so the Perl file handle associated with the now-anonymous glob won't be closed; since that handle stays open, the fd associated with it stays open too, and the subsequent open can't reuse that fd.

If you avoid doing local *STDOUT, you are passing an already-open handle to open. open behaves specially in that circumstance: it will "reopen" the fd already associated with the Perl handle rather than creating a new fd.

$ perl -e'
open( local *STDOUT, ">", "a" ) or die;
open( local *STDERR, ">&", STDOUT ) or die;
print(fileno(STDOUT), "\n");
system("echo foo");
'
foo

$ cat a
3

$ perl -e'
open( STDOUT, ">", "a" ) or die;
open( STDERR, ">&", STDOUT ) or die;
print(fileno(STDOUT), "\n");
system("echo foo");
'

$ cat a
1
foo

If you want the redirection to be temporary, you have to play with the file descriptors.

$ perl -e'
open( local *SAVED_STDOUT, ">&", STDOUT) or die;
open( STDOUT, ">", "a" ) or die;
print(fileno(STDOUT), "\n");
system("echo foo");

open( STDOUT, ">&", SAVED_STDOUT) or die;
print(fileno(STDOUT), "\n");
system("echo bar");
'
1
bar

$ cat a
1
foo
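
Under the hood, those ">&" opens are doing the usual POSIX save-and-restore dance with dup and dup2. A rough C++ equivalent (same file name "a" as above; error handling omitted):

#include <cstdlib>
extern "C" {
#include <fcntl.h>
#include <unistd.h>
}

int main()
{
    // Save the current fd 1, then point fd 1 at the file "a".
    int saved_stdout = dup(STDOUT_FILENO);
    int fd = open("a", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    dup2(fd, STDOUT_FILENO);
    close(fd);

    std::system("echo foo");            // goes into "a"

    // Put the saved descriptor back on fd 1 and drop the copy.
    dup2(saved_stdout, STDOUT_FILENO);
    close(saved_stdout);

    std::system("echo bar");            // goes to the terminal again
    return 0;
}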

Prepend child process console output

You could create a second child process that performs

execlp("sed", "sed", "s/^/PREFIX: /", (char *)NULL);

Connect the first child's stdout to this process's stdin with a pipe.
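
A minimal sketch of the whole arrangement, assuming the first child runs some placeholder program (ls here); error handling is mostly omitted:

#include <cstdlib>
extern "C" {
#include <sys/wait.h>
#include <unistd.h>
}

int main()
{
    int fds[2];
    if (pipe(fds) < 0) return 1;

    if (fork() == 0) {
        // First child: its stdout becomes the pipe's write end.
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(1);  // only reached if execlp fails
    }

    if (fork() == 0) {
        // Second child: its stdin becomes the pipe's read end, then it
        // becomes sed, which prefixes every line it reads.
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        close(fds[1]);
        execlp("sed", "sed", "s/^/PREFIX: /", (char *)NULL);
        _exit(1);
    }

    // The parent must close both ends, or sed never sees EOF.
    close(fds[0]);
    close(fds[1]);
    while (wait(NULL) > 0) {}
    return 0;
}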

How to pass messages as well as stdout from child to parent in node.js child process module?

You need to set the silent property on the options object when you pass it to fork() in order for the child's stdin, stdout and stderr to get piped back to the parent process.

e.g. var n = cp.fork('./child.js', [], { silent: true });

The parent can then consume that output from the streams on the returned object, e.g. via n.stdout.on('data', ...).

Is it possible to redirect child process's stdout to another file in parent process?

The key piece here is the POSIX function dup2, which lets you essentially replace one file descriptor with another. And if you use fork (not system), you actually have control of what happens in the child process between the fork and the exec* that loads the other executable.

#include <cstdio>
#include <cstdlib>
extern "C" {
#include <fcntl.h>
#include <unistd.h>
}
#include <stdexcept>
#include <iostream>

pid_t start_child(const char* program, const char* output_filename)
{
    pid_t pid = fork();
    if (pid < 0) {
        // fork failed!
        std::perror("fork");
        throw std::runtime_error("fork failed");
    } else if (pid == 0) {
        // This code runs in the child process.
        // O_CREAT requires a third (mode) argument; 0644 gives rw-r--r--.
        int output_fd = open(output_filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (output_fd < 0) {
            std::cerr << "Failed to open log file " << output_filename << ":"
                      << std::endl;
            std::perror("open");
            std::exit(1);
        }
        // Replace the child's stdout and stderr handles with the log file handle:
        if (dup2(output_fd, STDOUT_FILENO) < 0) {
            std::perror("dup2 (stdout)");
            std::exit(1);
        }
        if (dup2(output_fd, STDERR_FILENO) < 0) {
            std::perror("dup2 (stderr)");
            std::exit(1);
        }
        // The original descriptor is no longer needed once it has been
        // duplicated onto fds 1 and 2.
        close(output_fd);
        if (execl(program, program, (char*)nullptr) < 0) {
            // These messages will actually go into the file.
            std::cerr << "Failed to exec program " << program << ":"
                      << std::endl;
            std::perror("execl");
            std::exit(1);
        }
    }
    return pid;
}
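
A possible call site, assuming hypothetical program and file names (waitpid comes from <sys/wait.h>):

pid_t child = start_child("/bin/ls", "ls-output.txt");
int status;
waitpid(child, &status, 0);  // reap the child when it finishes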

