What Does "/dev/null" Mean at the End of Shell Commands

What does /dev/null mean at the end of shell commands?

2> means "redirect standard error to the given file".

/dev/null is the null file. Anything written to it is discarded.

Together they mean "throw away any error messages".
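For example (ls and a nonexistent path are used here purely as an illustration):

ls /nonexistent 2> /dev/null   # the "No such file or directory" error is discarded, so nothing is printed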

What is >> /dev/null 2>&1?

>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.

(The >> seems sort of superfluous, since >> means append while > means truncate and write, and either appending to or writing to /dev/null has the same net effect. I usually just use > for that reason.)

2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
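Putting the pieces together (any command that writes to both streams would do; ls with one bad path is just an illustration):

ls /etc /nonexistent >> /dev/null 2>&1   # both the listing and the error message are discarded
ls /etc /nonexistent > /dev/null 2>&1    # same net effect; > truncates while >> appends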

Why do we do < /dev/null in certain shell commands?

The purpose of < /dev/null is to send EOF to the program as soon as it tries to read from its standard input. It also means that standard input is cut off from every other source of input, such as the terminal.

/dev/null is a special device in Unix that consumes everything written to it and signals EOF whenever a command tries to read from it, without ever providing any actual data.

The redirections >/dev/null 2>&1 mean: redirect all standard output produced by the command to the null device, which consumes it, and redirect the error stream (2) to the same place, so it is discarded as well.
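A quick way to see the read side of this behaviour:

cat < /dev/null   # cat hits end-of-file immediately and exits without printing anything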

Answering another question from the OP:

So what happens if I don't send EOF to nc? I tried it and the script still worked.

Nothing in particular is expected to happen. By adding </dev/null we simply tell nc not to expect anything on its standard input.


Another classic example of why you would need this is ssh used incorrectly inside a while loop. Suppose you have a file servers.txt containing a list of server names and you want to run an operation over ssh on all of those hosts. You might write something like

while read -r server; do
    ssh "$server" "uname -a"
done < servers.txt

You might think this is perfectly straightforward, with nothing that could go wrong. But something does!

ssh reads from standard input. The loop's standard input has already been redirected from the file for the read command, and ssh inherits that same standard input. The logic falls apart because, after the first read, ssh swallows the rest of the file, which is not what you want here.

So what do we do? We close ssh's standard input by adding </dev/null:

ssh "$server" "uname -a" < /dev/null

What does 2>&1 mean?

File descriptor 1 is the standard output (stdout).

File descriptor 2 is the standard error (stderr).

At first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as "redirect stderr to a file named 1".

The & indicates that what follows is a file descriptor and not a filename. Thus, we use 2>&1. You can think of >& as a redirect-merger operator.
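A quick demonstration of the difference (ls and a nonexistent path are only placeholders):

ls /nonexistent 2>1    # creates a regular file named "1" containing the error message
ls /nonexistent 2>&1   # sends the error message to wherever stdout currently points (here, the terminal)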

What does this cronjob do?

2 represents the stderr stream, and it's saying to redirect it to the same place that stream 1 (stdout) was directed, which is /dev/null (sometimes referred to as the "bit bucket").

If you didn't want the output to go to /dev/null, you could put, for example, a filename there, and the output of stderr and stdout would go there.

Ex:

0 */2 * * * python /path/to/file.py >> your_filename 2>&1

Finally, the >> (as opposed to >) means append, so in the case of a filename, the output would be appended instead of overwriting the file. With /dev/null, it doesn't matter though since you are throwing away the output anyway.

How to redirect output away from /dev/null

Since you can modify the command you run, you can use a simple shell script as a wrapper that redirects the output to a file.

#!/bin/bash
# run whatever command was passed as arguments, appending its stdout to logfile
"$@" >> logfile

If you save this somewhere on your path as capture_output.sh, then you can put capture_output.sh at the start of your command to append the output of your program to logfile.
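Usage would then look something like this (the command being wrapped is arbitrary):

chmod +x capture_output.sh
capture_output.sh ls -l /tmp   # stdout of "ls -l /tmp" is appended to logfile; stderr still goes to the terminal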

How to execute a Linux command in the background?

nohup sleep 10 2>/dev/null &

The nohup command prints a message to stderr, and 2>/dev/null sends stderr to /dev/null.

How do I use the nohup command without getting nohup.out?

The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.

 nohup command >/dev/null 2>&1   # doesn't create nohup.out

Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.

If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:

 nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out

On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:

nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal 

Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
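Putting that together, a fully detached and disowned invocation might look like this (command is a placeholder, as above):

nohup command </dev/null >/dev/null 2>&1 &
disown   # drop the job from the shell's job table so no signals are forwarded to it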

Explanation:

In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.

But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.

The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
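You can watch standard error slip past a pipe with something like this (the paths are only illustrative):

ls /etc /nonexistent | wc -l   # wc counts only the stdout lines; the error message still appears on the terminal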

The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).

Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
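For example, to push standard error through a pipe along with standard output:

ls /etc /nonexistent 2>&1 | wc -l   # now the error line is counted as well, instead of appearing separately on the terminal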

So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".

When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.

The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
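A small illustration of the input side, using the shell's read built-in:

read -r line < /dev/null
echo $?   # prints 1: read hit end-of-file immediately, so it returns a non-zero status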

How does cat << EOF work in bash?

This is called the heredoc format; it provides a string on stdin. See https://en.wikipedia.org/wiki/Here_document#Unix_shells for more details.
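A minimal example (the text and the expansion are just placeholders):

cat << EOF
Hello, $USER
Everything up to the EOF line is fed to cat on its standard input.
EOF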


From man bash:

Here Documents


This type of redirection instructs the shell to read input from the current source until a line containing only word (with no trailing blanks) is seen.

All of the lines read up to that point are then used as the standard input for a command.

The format of here-documents is:

          <<[-]word
                  here-document
          delimiter

No parameter expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word. If any characters in word are quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion. In the latter case, the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `.

If the redirection operator is <<-, then all leading tab characters are stripped from input lines and the line containing delimiter. This allows here-documents within shell scripts to be indented in a natural fashion.


