Catching a direct redirect to /dev/tty
I can't quite determine whether the screen program mentioned by @flolo will do what you need or not. It may, but I'm not sure whether it has a built-in logging facility, which appears to be what you need.
There is probably a program out there already that does what you need. I'd nominate sudosh as a possibility.
If you end up needing to write your own, you'll probably need to use a pseudo-tty (pty) and have your application controller sit in between the user's real terminal connection and the pty device, where it can log whatever you need it to log. That's not trivial. You can find information about this in Rochkind's "Advanced UNIX Programming, 2nd Edn" book, and no doubt other similar books (Stevens' "Advanced Programming in the UNIX Environment" book is a likely candidate, but I don't have a copy to verify that).
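If logging is all you need, the script(1) utility (assuming the util-linux version) already implements this pty-in-the-middle pattern: it allocates a pty, runs the command inside it, and records everything the command writes, including writes made directly to /dev/tty. A minimal sketch:

```shell
# script allocates a pseudo-tty, so even output the program sends
# straight to /dev/tty passes through it and lands in the log file:
script -q -c 'echo hello > /dev/tty' session.log
cat session.log
```

Here the echo stands in for whatever program you need to monitor; session.log is an arbitrary file name.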
How to redirect a program that writes to tty?
I'll answer the second question first: as a design choice, module is an eval, and its authors made the (questionable) decision to use stderr/tty instead of stdout/stderr to keep their side of the design simpler. See here.
My solution, since I couldn't use any of the other recommended tools (e.g. script, expect) is the following python mini-wrapper:
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # In the child process: execute the other command on the pty's slave side
    os.execv('./my-progr', ['./my-progr'])
    # execv never returns on success :-)
else:
    # In the parent: relay everything the child writes to its terminal
    while True:
        try:
            data = os.read(fd, 65536)
        except OSError:
            break
        if not data:
            break
        os.write(1, data)
Pipe direct tty output to sed
I found out script has a -c option which runs a command, and all of the output is printed to stdout as well as to a file.
My command ended up being:
script -c "buildAndStartApp" /dev/null | colorize
Capturing /dev/tty0 output to file
For those who may be interested... after a couple of weeks of experimenting, I'll share what I've learnt:
Force-preloading method (LD_PRELOAD) - easy and quick, but with a major drawback in my case: it doesn't actually intercept syscalls, only libc standard-library calls. That matters because an application can issue a syscall directly, bypassing the library, and such a call won't be intercepted. That is what happened in my case.
Later on I tried a kernel module. Relatively quickly I could create a basic one that intercepts syscalls by replacing function addresses in the syscall_table (this requires some dirty hacks, like unprotecting writes to that memory via CR0 register manipulation, etc.), but the more I tried to progress, the more issues I faced, especially around intercepting a process that forks children, since I want to intercept the syscalls of the forked processes as well. The big plus, of course, is that you have ultimate access to any kernel structures. After some time I gave up and tried the third option.
Ptrace method - it looks complex at first sight but is quite logical, although you need to read the lengthy man page to understand it. Again, things get complicated for forked processes, but I'm close to a workable solution now, and options like PTRACE_O_TRACEFORK help a lot.
My conclusion - creating a universal solution for this is not easy in any case (for instance, look at the strace source code...). For me, ptrace is the best option, though you need to invest some time to understand it, especially where forking processes are concerned.
Conclusion no. 2 - trying to solve this issue was a great adventure for me; diving into the depths of the Linux kernel and how syscalls work was an amazing learning experience :)
Thank you melpomene for inspiration! :)
Undo global STDERR - STDOUT redirection
You can't "undo" a redirection, but you can redirect somewhere else.
A redirection applied to the shell itself with exec (e.g., exec 2>/dev/null on a line by itself) will redirect stdout or stderr for the shell, and so for all future commands the shell runs. (A bare 2>/dev/null without exec only applies to the empty command on that line and does not persist.)
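While you can't undo a redirection, you can save the original destination on a spare file descriptor first and point stderr back at it later. A minimal sketch (fd 3 is an arbitrary choice of unused descriptor):

```shell
exec 3>&2           # save the original stderr on fd 3
exec 2>/dev/null    # discard stderr from here on
echo hidden >&2     # silently dropped
exec 2>&3 3>&-      # restore stderr from fd 3, then close fd 3
echo visible >&2    # reaches the original stderr again
```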
Shell: redirect stdout to /dev/null and stderr to stdout
You want
./script 2>&1 1>/dev/null | ./other-script
The order here is important. Let's assume stdin (fd 0), stdout (fd 1) and stderr (fd 2) are all connected to a tty initially, so
0: /dev/tty, 1: /dev/tty, 2: /dev/tty
The first thing that gets set up is the pipe. other-script's stdin gets connected to the pipe, and script's stdout gets connected to the pipe, so script's file descriptors so far look like:
0: /dev/tty, 1: pipe, 2: /dev/tty
Next, the redirections occur, from left to right. 2>&1 makes fd 2 go wherever fd 1 is currently going, which is the pipe.
0: /dev/tty, 1: pipe, 2: pipe
Lastly, 1>/dev/null redirects fd 1 to /dev/null:
0: /dev/tty, 1: /dev/null, 2: pipe
End result, script's stdout is silenced, and its stderr is sent through the pipe, which ends up in other-script's stdin.
Also see http://bash-hackers.org/wiki/doku.php/howto/redirection_tutorial
Also note that 1>/dev/null is synonymous with, but more explicit than, >/dev/null.
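The end result can be checked with a pair of echoes standing in for ./script:

```shell
# stdout is discarded; only stderr travels through the pipe,
# so cat (standing in for ./other-script) sees just "err":
sh -c 'echo out; echo err >&2' 2>&1 1>/dev/null | cat
```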
How can a Unix program display output on screen even when stdout and stderr are redirected?
It is not written by valgrind but rather by glibc, and your ./myprogram is using glibc:
#define _PATH_TTY "/dev/tty"

  /* Open a descriptor for /dev/tty unless the user explicitly
     requests errors on standard error.  */
  const char *on_2 = __libc_secure_getenv ("LIBC_FATAL_STDERR_");
  if (on_2 == NULL || *on_2 == '\0')
    fd = open_not_cancel_2 (_PATH_TTY, O_RDWR | O_NOCTTY | O_NDELAY);

  if (fd == -1)
    fd = STDERR_FILENO;
  ...
  written = WRITEV_FOR_FATAL (fd, iov, nlist, total);
Below are some relevant parts of glibc:
void
__attribute__ ((noreturn))
__stack_chk_fail (void)
{
  __fortify_fail ("stack smashing detected");
}

void
__attribute__ ((noreturn))
__fortify_fail (msg)
     const char *msg;
{
  /* The loop is added only to keep gcc happy.  */
  while (1)
    __libc_message (2, "*** %s ***: %s terminated\n",
                    msg, __libc_argv[0] ?: "<unknown>");
}

/* Abort with an error message.  */
void
__libc_message (int do_abort, const char *fmt, ...)
{
  va_list ap;
  int fd = -1;

  va_start (ap, fmt);

#ifdef FATAL_PREPARE
  FATAL_PREPARE;
#endif

  /* Open a descriptor for /dev/tty unless the user explicitly
     requests errors on standard error.  */
  const char *on_2 = __libc_secure_getenv ("LIBC_FATAL_STDERR_");
  if (on_2 == NULL || *on_2 == '\0')
    fd = open_not_cancel_2 (_PATH_TTY, O_RDWR | O_NOCTTY | O_NDELAY);

  if (fd == -1)
    fd = STDERR_FILENO;
  ...
  written = WRITEV_FOR_FATAL (fd, iov, nlist, total);
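As the glibc code above shows, /dev/tty is only opened when LIBC_FATAL_STDERR_ is unset or empty, so you can force these fatal messages onto stderr, where an ordinary redirection catches them. A sketch (fatal-errors.log is an arbitrary file name):

```shell
# With LIBC_FATAL_STDERR_ set to a non-empty value, glibc prints fatal
# errors (e.g. "stack smashing detected") to stderr instead of /dev/tty:
LIBC_FATAL_STDERR_=1 ./myprogram 2>fatal-errors.log
```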
Redirecting command output to variable as well as console in bash not working
You have an unnecessary redirect on that tee command. Use:
VAR1=$(ps -u "${USER}" | awk 'NR>1 {print $NF}' | tee /proc/$$/fd/1)
The way tee works is that it copies its input to its output, and also to any files whose names you give as arguments. The redirection just messes up its pass-through behavior.
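The /proc/$$/fd/1 trick can be sanity-checked on Linux with a plain echo: the command substitution captures tee's stdout, while the copy written to /proc/$$/fd/1 (the parent shell's stdout) still reaches the console.

```shell
# tee passes "hello" through to the command substitution and also
# writes a copy to the parent shell's stdout:
v=$(echo hello | tee /proc/$$/fd/1)
echo "captured: $v"
```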
Something else you could do - since we're not talking about some long-running command here - is first set the variable, then print its value:
VAR1=$(ps -u "${USER}" | awk 'NR>1 {print $NF}' )
echo "$VAR1"
... much simpler :-)
How to redirect output away from /dev/null
Since you can modify the command you run, you can use a simple shell script as a wrapper to redirect the output to a file.
#!/bin/bash
"$@" >> logfile
If you save this somewhere on your PATH as capture_output.sh, you can then prepend capture_output.sh to your command to append your program's output to logfile.
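A quick demonstration of the wrapper in use (the echo stands in for your real command; note the wrapper appends to a file literally named logfile in the current directory):

```shell
# Recreate the two-line wrapper and run a command through it:
printf '%s\n' '#!/bin/bash' '"$@" >> logfile' > capture_output.sh
chmod +x capture_output.sh
./capture_output.sh echo "some output"
cat logfile
```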
How to redirect and append both standard output and standard error to a file with Bash
cmd >>file.txt 2>&1
Bash executes the redirects from left to right as follows:
>>file.txt: Open file.txt in append mode and redirect stdout there.
2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
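Both steps can be seen in action with a command that writes to both streams (the sh -c line stands in for cmd):

```shell
# Both streams end up appended to the same file, in order:
sh -c 'echo out; echo err >&2' >>file.txt 2>&1
cat file.txt
```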