SIGTERM does not reach node script when docker runs it with `/bin/sh -c`

There are tools designed to solve this problem:

  • https://github.com/yelp/dumb-init
  • https://github.com/krallin/tini

I think if you only have a single process, all you need to do is explicitly handle the signal with a signal handler, which bash doesn't do for you.

Using the exec-form `["node", "."]` CMD syntax, you could use Node's signal events (https://nodejs.org/api/process.html#process_signal_events) and just have it exit on SIGTERM. I believe that would be enough.

Or, using a bash script, you can use `trap "exit 0" TERM`.
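One subtlety with the trap approach: a shell only runs traps between commands, so the app should be backgrounded and `wait`ed on. A runnable sketch, with `sleep 300` standing in for the real node command and a hypothetical `/tmp/run_app.sh` path:

```shell
# Write a wrapper script that traps TERM and exits cleanly.
cat > /tmp/run_app.sh <<'EOF'
#!/bin/sh
trap 'exit 0' TERM
sleep 300 > /dev/null 2>&1 &   # stand-in for the real app; backgrounded
wait $!                        # interrupted as soon as the trapped TERM arrives
EOF
chmod +x /tmp/run_app.sh

# Simulate `docker stop`: start the wrapper, then send it SIGTERM.
/tmp/run_app.sh &
script_pid=$!
sleep 1
kill -TERM "$script_pid"
wait "$script_pid"
echo "wrapper exit status: $?"   # 0, because the trap ran `exit 0`
```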

You could also use a process supervisor like http://skarnet.org/software/s6/


Dockerized Node JS project getting: command failed signal SIGTERM error

I assume you're either incorrectly specifying your script in the package.json or your script is not server.js.

A minimal repro of your question works:

Using the Node.js Getting Started guide's example with one minor tweak:
https://nodejs.org/en/docs/guides/getting-started-guide/

NOTE Change const hostname = '127.0.0.1'; to const hostname = '0.0.0.0'; This is necessary to access the containerized app from the host.

Adding a package.json because you have one and to show npm start:

package.json:

{
  "name": "66281738",
  "version": "0.0.1",
  "scripts": {
    "start": "node app.js"
  }
}

NOTE I believe npm start defaults to "start": "node server.js"

Using your Dockerfile and:

QUESTION="66281738"
docker build --tag=${QUESTION} --file=./Dockerfile .
docker run --interactive --tty --publish=7777:3000 ${QUESTION}

Yields:

> 66281738@0.0.1 start
> node app.js

Server running at http://0.0.0.0:3000/

NOTE docker run binds the container's :3000 port to the host's :7777 just to show these need not be the same.

Then:

curl --request GET http://localhost:7777/

Yields:

Hello World

How to catch SIGTERM properly in Docker?

You need to make sure the main container process is your actual application, and not a shell wrapper.

As you have the CMD currently, a shell invokes it. The argument list $@ will always be empty. The shell parses /repo/run_api.sh and sees that it's followed by a semicolon, so it might need to do something else afterwards. So even though your script correctly ends with exec gunicorn ... to hand off control directly to the other process, it's still running underneath a shell, and when you docker stop the container, the SIGTERM goes to the shell wrapper rather than to your application.

The easiest way to avoid this shell is to use an exec form CMD:

CMD ["/repo/run_api.sh"]

This will cause your script to run directly, without a /bin/sh -c wrapper invoking it, and when the script eventually execs another process, that process becomes the main container process and will receive the docker stop signal.
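For contrast, a hypothetical Dockerfile fragment showing both forms side by side (using the `/repo/run_api.sh` path from the question):

```dockerfile
# Shell form: Docker actually runs `/bin/sh -c '/repo/run_api.sh'`,
# so PID 1 is the shell and `docker stop`'s SIGTERM stops there.
# CMD /repo/run_api.sh

# Exec form: the script itself is PID 1, and its final `exec gunicorn ...`
# makes gunicorn PID 1, so it receives SIGTERM directly.
CMD ["/repo/run_api.sh"]
```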

How to run a cron job inside a docker container?

You can copy your crontab into an image, in order for the container launched from said image to run the job.


Important: as noted in docker-cron issue 3: use LF, not CRLF for your cron file.


See "Run a cron job with Docker" from Julien Boulay in his Ekito/docker-cron:

Let’s create a new file called "hello-cron" to describe our job.

# must be ended with a new line "LF" (Unix) and not "CRLF" (Windows)
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.

If you are wondering what `2>&1` means, Ayman Hourieh explains.

The following Dockerfile describes all the steps to build your image

FROM ubuntu:latest
MAINTAINER docker@ekito.fr

RUN apt-get update && apt-get -y install cron

# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron

# Apply cron job
RUN crontab /etc/cron.d/hello-cron

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

(see Gaafar's comment and How do I make apt-get install less noisy?:

apt-get -y install -qq --force-yes cron can work too)

As noted by Nathan Lloyd in the comments:

Quick note about a gotcha:

If you're adding a script file and telling cron to run it, remember to

RUN chmod 0744 /the_script

Cron fails silently if you forget.


OR, make sure your job itself redirects directly to stdout/stderr instead of a log file, as described in hugoShaka's answer:

 * * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2

Replace the last Dockerfile line with

CMD ["cron", "-f"]

See also "docker ubuntu cron -f is not working" (about `cron -f`, which is to say cron in the foreground).


Build and run it:

sudo docker build --rm -t ekito/cron-example .
sudo docker run -t -i ekito/cron-example

Be patient, wait for 2 minutes and your command-line should display:

Hello world
Hello world

Eric adds in the comments:

Do note that tail may not display the correct file if it is created during image build.

If that is the case, you need to create or touch the file during container runtime in order for tail to pick up the correct file.

See "Output of tail -f at the end of a docker CMD is not showing".


See more in "Running Cron in Docker" (Apr. 2021) from Jason Kulatunga, as he commented below

See Jason's image AnalogJ/docker-cron based on:

  • Dockerfile installing cronie/crond, depending on distribution.

  • an entrypoint initializing /etc/environment and then calling

    cron -f -l 2

How to execute a script when I terminate a docker container

The docker stop command attempts to stop a running container first by sending a SIGTERM signal to the root process (PID 1) in the container. If the process hasn't exited within the timeout period a SIGKILL signal will be sent.

In practice, that means that you have to define an ENTRYPOINT script, which will intercept (trap) the SIGTERM signal and execute any shutdown logic as appropriate.

A sample entrypoint script can look something like this:

#!/bin/bash

# Define cleanup procedure
cleanup() {
    echo "Container stopped, performing cleanup..."
}

# Trap SIGTERM
trap 'cleanup' SIGTERM

# Execute a command
"${@}" &

# Wait
wait $!

(shell signal handling, with respect to wait, is explained in a bit more detail here)

Note that with the entrypoint above, the cleanup logic will only be executed if the container is stopped explicitly. If you wish it to also run when the underlying process/command stops by itself (or fails), you can restructure it as follows.

...

# Trap SIGTERM
trap 'true' SIGTERM

# Execute command
"${@}" &

# Wait
wait $!

# Cleanup
cleanup
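A runnable sketch of this restructured entrypoint (written to a hypothetical /tmp path, with `sleep` standing in for the real command) shows cleanup firing on both paths — natural exit and a simulated `docker stop`:

```shell
# Hypothetical entrypoint following the pattern above.
cat > /tmp/entry.sh <<'EOF'
#!/bin/bash
cleanup() { echo "cleanup ran"; }
trap 'true' SIGTERM
"${@}" &
wait $!
cleanup
EOF
chmod +x /tmp/entry.sh

# Path 1: the wrapped command finishes by itself.
/tmp/entry.sh sleep 1

# Path 2: simulate `docker stop` — TERM interrupts `wait`, then cleanup runs.
/tmp/entry.sh sleep 300 > /tmp/entry_out 2>&1 &
pid=$!
sleep 1
kill -TERM "$pid"
wait "$pid"
cat /tmp/entry_out
```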

docker-compose down does not cascade down SIGTERM through two layers of bash scripts

I found the solution.

in the sig_handler function of the entrypoint script, the following wait instruction solves it:

sig_handler() {
    echo "[LAYER1] killing children with pid $pid"
    [[ $pid ]] && kill $pid
    wait $pid # this is crucial
    exit 1
}

So before actually quitting the container, the `wait $pid` forces it to actually wait out the exit of the subsequent script. I tested this with 5 layers of scripts; it all works.
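A runnable sketch of the two-layer setup (hypothetical /tmp paths, with `sleep` standing in for the real workload) demonstrates that the `wait` lets layer 2's own trap finish before layer 1 exits:

```shell
# Layer 2: has its own TERM handler that must be allowed to complete.
cat > /tmp/layer2.sh <<'EOF'
#!/bin/bash
trap 'echo "[LAYER2] cleanup done"; exit 0' TERM
sleep 300 > /dev/null 2>&1 &
wait $!
EOF

# Layer 1: the entrypoint, forwarding TERM and waiting for layer 2.
cat > /tmp/layer1.sh <<'EOF'
#!/bin/bash
sig_handler() {
    echo "[LAYER1] killing children with pid $pid"
    [[ $pid ]] && kill $pid
    wait $pid # this is crucial: layer 2 gets to run its own trap first
    exit 1
}
trap sig_handler TERM
/tmp/layer2.sh &
pid=$!
wait $pid
EOF
chmod +x /tmp/layer1.sh /tmp/layer2.sh

# Simulate docker-compose down sending TERM to the entrypoint.
/tmp/layer1.sh &
top=$!
sleep 1
kill -TERM "$top"
if wait "$top"; then status=0; else status=$?; fi
echo "layer1 exit status: $status"   # 1, from the handler's `exit 1`
```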


