When Did HUP Stop Getting Sent and What to Do About It

When did HUP stop getting sent and what can I do about it?

I believe you're looking for the huponexit shell option. You can set this easily with

$ shopt -s huponexit

Some details from the bash man page:

The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h.

If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits.
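For example, to keep one particular background job from being sent SIGHUP when the shell exits, you can mark it with disown (a quick sketch; %1 is whatever job spec jobs reports for it):

 long_running_command &
 disown -h %1   # keep the job in the job table, but don't forward SIGHUP to it
 disown %1      # or: remove the job from the job table entirely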

How do I use the nohup command without getting nohup.out?

The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.

 nohup command >/dev/null 2>&1   # doesn't create nohup.out

Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.

If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:

 nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out

On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:

nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal 

Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
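Putting it all together, a fully detached invocation might look like this (a sketch; some_command is just a placeholder):

 nohup some_command </dev/null >/dev/null 2>&1 &
 disown   # with no arguments, removes the most recent background job from the job table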

Explanation:

In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.

But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.

The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.

The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
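For instance (the file names here are only placeholders):

 sort <unsorted.txt        # read standard input from unsorted.txt (same as 0<unsorted.txt)
 make 2>>build-errors.log  # append anything written to standard error to build-errors.log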

Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
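For example, to run a command's error messages through a pipeline along with its regular output (the command and pattern are placeholders):

 some_command 2>&1 | grep -i error   # standard error is merged into standard output, so both go down the pipe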

So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".

When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.

The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will fail with a "bad file descriptor" error.
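You can see the end-of-file behavior with any command that reads standard input, e.g.:

 cat </dev/null    # prints nothing and exits immediately, because the very first read hits end-of-file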

Differences between the supervisord stopsignal options

I believe those options refer to Linux signals. You can read more in the man pages - https://man7.org/linux/man-pages/man7/signal.7.html - or check out this more descriptive article, from which the table below was taken - https://www.computerhope.com/unix/signals.htm

As the man pages detail, SIGINT (INT) would be the right choice to signal an interrupt from the keyboard.
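For example, a minimal supervisord program section that stops the process with SIGINT instead of the default SIGTERM might look like this (the program name and command path are placeholders):

 [program:myapp]
 command=/usr/local/bin/myapp
 stopsignal=INT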

  • SIGTERM: The TERM signal is sent to a process to request its termination. Unlike the KILL signal, it can be caught and interpreted or ignored by the process. This allows the process to terminate cleanly, releasing resources and saving state if appropriate. Note that SIGINT is nearly identical to SIGTERM.
  • SIGHUP: The HUP signal is sent to a process when its controlling terminal is closed. It was originally designed to signal a serial line drop (HUP stands for "hang up"). In modern systems, it usually means that the controlling pseudo or virtual terminal has been closed.
  • SIGINT: The INT signal is sent to a process by its controlling terminal when a user wants to interrupt the process. It is typically initiated by pressing Ctrl+C, but on some systems the "delete" character or "break" key can be used.
  • SIGQUIT: The QUIT signal is sent to a process by its controlling terminal when the user requests that the process perform a core dump.
  • SIGKILL: Forcefully terminates a process. Along with STOP, this is one of the two signals that cannot be caught, ignored, or handled by the process itself.
  • SIGUSR1: User-defined signal 1. One of two signals designated for custom user signal handling.
  • SIGUSR2: User-defined signal 2. One of two signals designated for custom user signal handling.

Does Linux kill background processes if we close the terminal from which they were started?

Who should kill jobs?

Normally, foreground and background jobs are killed by SIGHUP sent by the kernel or by the shell, in different circumstances.


When does kernel send SIGHUP?

The kernel sends SIGHUP to the controlling process:

  • for a real (hardware) terminal: when a disconnect is detected in the terminal driver, e.g. on hang-up of a modem line;
  • for a pseudoterminal (pty): when the last descriptor referencing the master side of the pty is closed, e.g. when you close the terminal window.

The kernel sends SIGHUP to other process groups:

  • to the foreground process group, when the controlling process terminates;
  • to an orphaned process group, when it becomes orphaned and has stopped members.

The controlling process is the session leader that established the connection to the controlling terminal.

Typically, the controlling process is your shell. So, to sum up:

  • the kernel sends SIGHUP to the shell when the real terminal or pseudoterminal is disconnected/closed;
  • the kernel sends SIGHUP to the foreground process group when the shell terminates;
  • the kernel sends SIGHUP to an orphaned process group if it contains stopped processes.

Note that the kernel does not send SIGHUP to a background process group if it contains no stopped processes.
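If you want to see which process group and session your background jobs actually belong to, ps can print the relevant IDs (a quick check; this assumes a Linux procps-style ps):

 ps -o pid,ppid,pgid,sid,tty,comm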


When does bash send SIGHUP?

Bash sends SIGHUP to all jobs (foreground and background):

  • when it receives SIGHUP, and it is an interactive shell (and job control support is enabled at compile-time);
  • when it exits, if it is an interactive login shell and the huponexit option is set (and job control support is enabled at compile-time).


Notes:

  • bash does not send SIGHUP to jobs removed from job list using disown;
  • processes started using nohup ignore SIGHUP.



What about other shells?

Usually, shells propagate SIGHUP. Generating SIGHUP at normal exit is less common.


Telnet or SSH

Under telnet or SSH, the following should happen when the connection is closed (e.g. when you close the telnet window on your PC):

  1. the client is killed;
  2. the server detects that the client connection is closed;
  3. the server closes the master side of the pty;
  4. the kernel detects that the master pty is closed and sends SIGHUP to bash;
  5. bash receives SIGHUP, sends SIGHUP to all jobs, and terminates;
  6. each job receives SIGHUP and terminates.

Problem

I can reproduce your issue using bash and telnetd from busybox or the dropbear SSH server: sometimes a background job doesn't receive SIGHUP (and doesn't terminate) when the client connection is closed.

It seems that a race condition occurs when the server (telnetd or dropbear) closes the master side of the pty:

  1. normally, bash receives SIGHUP, immediately kills the background jobs (as expected), and terminates;
  2. but sometimes bash detects EOF on the slave side of the pty before handling the SIGHUP.

When bash detects EOF, by default it terminates immediately without sending SIGHUP, and the background job keeps running!


Solution

It is possible to configure bash to send SIGHUP on normal exit (including EOF) too:

  • Ensure that bash is started as a login shell. The huponexit option works only for login shells, AFAIK.

    A login shell is enabled by the -l option or a leading hyphen in argv[0]. You can configure telnetd to run /bin/bash -l or, better, /bin/login, which invokes /bin/sh in login shell mode.

    E.g.:


    telnetd -l /bin/login
  • Enable huponexit option.

    E.g.:


    shopt -s huponexit

    Type this in the bash session every time, or add it to .bashrc or /etc/profile.


Why does the race occur?

bash unblocks signals only when it's safe, and blocks them when some code section can't be safely interrupted by a signal handler.

Such critical sections invoke interruption points from time to time, and if a signal is received while a critical section is executing, its handler is delayed until the next interruption point is reached or the critical section is exited.

You can start digging from quit.h in the source code.

Thus, it seems that in our case bash sometimes receives SIGHUP while it is inside a critical section. The SIGHUP handler's execution is delayed, and bash reads EOF and terminates before it exits the critical section or reaches the next interruption point.


Reference

  • "Job Control" section in official Glibc manual.
  • Chapter 34 "Process Groups, Sessions, and Job Control" of "The Linux Programming Interface" book.

How do I make a CGI::Fast based application kill -HUP aware?

You could use the block form of the eval function and the alarm function to add a timeout to CGI::Fast. In the code below I have set a five-second timeout. If it times out, control goes back to the start of the loop, which checks whether $continue has been set to zero yet. If it hasn't, we start a new CGI::Fast object. This means that at most five seconds will pass between your sending a HUP and the code starting to shut down, if it was waiting for a new CGI::Fast object (the timeout has no effect on the rest of the loop).

use CGI::Fast;

my $continue = 1;
$SIG{HUP} = sub { $continue = 0 };
while ($continue) {
    my $cgi;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm 5;               # set an alarm for five seconds
        $cgi = CGI::Fast->new;
        alarm 0;               # turn the alarm off
    };
    if ($@) {
        next if $@ eq "timeout\n"; # we will see the HUP signal's change now
        die $@;                    # died for a reason other than a timeout, so re-raise the error
    }
    last unless defined $cgi;      # CGI::Fast has asked us to exit
    do_stuff($cgi);
}

#clean up

How to get the process ID to kill a nohup process?

When you use nohup and put the task in the background, the shell prints the PID of the background process at the command prompt. If you plan to manage the process manually, you can save that PID and use it later to kill the process if needed, via kill PID or kill -9 PID (if you need to force-kill it). Alternatively, you can find the PID later with ps -ef | grep "command name" and locate it there. Note that nohup itself does not appear in the ps output for the command in question.
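For example, to find the PID after the fact (my_command is just a placeholder):

 ps -ef | grep my_command   # the PID is the second column of the matching line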

If you use a script, you could do something like this in the script:

nohup my_command > my.log 2>&1 &
echo $! > save_pid.txt

This will run my_command, saving all output into my.log (in a script, $! holds the PID of the most recently started background process). The 2 is the file descriptor for standard error (stderr), and 2>&1 tells the shell to route standard error to standard output (file descriptor 1). The &1 is required so that the shell knows it's a file descriptor in that context rather than a file named 1. The 2>&1 is needed so that any error messages, which would normally be written to standard error, are also captured in my.log alongside the standard output. See I/O Redirection for more details on handling I/O redirection with the shell.

If the command sends output on a regular basis, you can check the output occasionally with tail my.log, or if you want to follow it "live" you can use tail -f my.log. Finally, if you need to kill the process, you can do it via:

kill -9 `cat save_pid.txt`
rm save_pid.txt

Can we stop Event Hub capture for the time being & re-enable it?

Sure, you can disable capture temporarily. However, make sure your retention period is long enough (up to 90 days on the dedicated tier) so that no data is purged without being captured.

See the capture description and the 'enabled' bool here > https://learn.microsoft.com/en-us/azure/templates/microsoft.eventhub/namespaces/eventhubs?tabs=bicep#capturedescription

See the overview of retention policy feature here > https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-features#event-retention
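If you manage the hub through the Azure CLI, capture can be toggled with the eventhub update command; this is just a sketch, and the resource group, namespace, and hub names are placeholders:

 az eventhubs eventhub update --resource-group my-rg --namespace-name my-ns --name my-hub --enable-capture false
 # ...later, to re-enable capture:
 az eventhubs eventhub update --resource-group my-rg --namespace-name my-ns --name my-hub --enable-capture true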


