Bash script command to wait until docker-compose process has finished before moving on
As already stated in the other answers, you'll have to do an application-specific readiness check for your container. Personally I prefer to provide these checks/scripts with the container image, e.g. by adding the `wait-for-it.sh` (see ErikMD's answer) or similar scripts to the image and executing them within the running container, e.g. with `docker exec` (as proposed in Ahmed Arafa's answer).
This has some advantages over running the check on the host:
- you can provide all required scripts and dependencies with the container image
- you don't need to make any assumptions about the host (e.g. when testing via an API endpoint: is `wget`/`curl` available on the host, or even a `bash`/shell? Is the `docker`/`docker-compose` command executed on the same host as the docker daemon, i.e. could you reach the container via `localhost`?)
- you don't have to expose any ports/endpoints to the outside world only for checking container status
- you can provide different check scripts with different versions of the image without having to modify the start script
So, to apply this method to your example, simply add a script - e.g. `is_ready.sh` - to the image, execute it within the container with `docker-compose exec` and act upon its exit status:
# Execute applications
cd /opt/program
docker-compose up -d
echo "waiting for message queue..."
while ! docker-compose exec rabbitmq /is_ready.sh; do sleep 1; done
echo "starting ingest manager"
cd /opt/program/scripts
chmod +x start-manager.sh
./start-manager.sh &
where `is_ready.sh` may look like this:
#!/bin/bash
rabbitmqctl status
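One caveat with such polling: the `while ! docker-compose exec … ; do sleep 1; done` loop in the start script spins forever if the container never becomes ready. A small retry helper with a timeout guards against that (a sketch - the function name and the one-second poll interval are my own choices, not from the original answer):

```shell
#!/bin/sh
# wait_with_timeout MAX_SECONDS CMD [ARGS...]
# Retries CMD once per second until it succeeds or MAX_SECONDS have passed.
wait_with_timeout() {
  max="$1"; shift
  elapsed=0
  until "$@"; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$max" ]; then
      echo "timed out after ${max}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}
```

The start script could then use e.g. `wait_with_timeout 60 docker-compose exec rabbitmq /is_ready.sh || exit 1` instead of the bare loop.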
Going even further down this road, you may leverage the native healthcheck feature of docker and docker-compose. With these, docker will automatically execute the defined healthcheck script/command and indicate the current health in the container status.
Incorporated into your script this could look like:
# Execute applications
cd /opt/program
docker-compose up -d
echo "waiting for message queue..."
is_healthy() {
  service="$1"
  container_id="$(docker-compose ps -q "$service")"
  health_status="$(docker inspect -f "{{.State.Health.Status}}" "$container_id")"
  if [ "$health_status" = "healthy" ]; then
    return 0
  else
    return 1
  fi
}
while ! is_healthy rabbitmq; do sleep 1; done
echo "starting ingest manager"
cd /opt/program/scripts
chmod +x start-manager.sh
./start-manager.sh &
with the healthcheck defined in the `docker-compose.yml`:
...
services:
  rabbitmq:
    ...
    healthcheck:
      test: rabbitmqctl status
For more complex healthchecks you can also add a longer script to the image and execute that instead.
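For instance, the compose file could point the healthcheck at such a script baked into the image (a sketch - the `/healthcheck.sh` name and the timing values are placeholders I chose, not from the original answer):

```yaml
services:
  rabbitmq:
    healthcheck:
      test: ["CMD", "/healthcheck.sh"]   # script shipped with the image
      interval: 10s   # how often docker runs the check
      timeout: 5s     # the attempt fails if the script runs longer
      retries: 5      # failures before the container is marked unhealthy
```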
How to wait for docker exec to complete before continuing in shell script
The `docker exec` command will wait until it completes by default. The possible reasons for `docker exec` to return before the command it runs has completed, that I can think of, are:
- You explicitly told `docker exec` to run in the background with the detach flag, aka `-d`.
- The command you are exec'ing inside the container returns before a process it runs has completed, e.g. launching a background daemon. In that scenario, you need to adjust the command you are running.
Here are some examples:
$ # launch a container to test:
$ docker run -d --rm --name test-exec busybox tail -f /dev/null
a218f90f941698960ee5a9750b552dad10359d91ea137868b50b4f762c293bc3
$ # test a sleep command, works as expected
$ time docker exec -it test-exec sleep 10
real 0m10.356s
user 0m0.044s
sys 0m0.040s
$ # test running without -it, still works
$ time docker exec test-exec sleep 10
real 0m10.292s
user 0m0.040s
sys 0m0.040s
$ # test running that command with -d, runs in the background as requested
$ time docker exec -itd test-exec sleep 10
real 0m0.196s
user 0m0.056s
sys 0m0.024s
$ # run a command inside the container in the background using a shell and &
$ time docker exec -it test-exec /bin/sh -c 'sleep 10 &'
real 0m0.289s
user 0m0.048s
sys 0m0.044s
How can I wait for a docker container to be up and running?
As commented in a similar issue for docker 1.12:

`HEALTHCHECK` support is merged upstream as per docker/docker#23218 - this can be considered to determine when a container is healthy prior to starting the next in the order. This is available since docker 1.12rc3 (2016-07-14).

`docker-compose` is in the process of supporting a functionality to wait for specific conditions. It uses `libcompose` (so I don't have to rebuild the docker interaction) and adds a bunch of config commands for this. Check it out here: https://github.com/dansteen/controlled-compose
You can use it in a Dockerfile like this:
HEALTHCHECK --interval=5m --timeout=3s \
CMD curl -f http://localhost/ || exit 1
Official docs: https://docs.docker.com/engine/reference/builder/#/healthcheck
Docker Compose wait for container X before starting Y
Finally found a solution with a docker-compose method. Since docker-compose file format 2.1 you can define healthchecks.
I did it in an example project; you need to install at least docker 1.12.0+. I also needed to extend the rabbitmq-management Dockerfile, because curl isn't installed in the official image.
Now I test if the management page of the rabbitmq container is available. If curl finishes with exit code 0, the container app (python pika) will be started and publishes a message to the hello queue. It's now working (output).
docker-compose (version 2.1):
version: '2.1'
services:
  app:
    build: app/.
    depends_on:
      rabbit:
        condition: service_healthy
    links:
      - rabbit
  rabbit:
    build: rabbitmq/.
    ports:
      - "15672:15672"
      - "5672:5672"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 30s
      timeout: 10s
      retries: 5
output:
rabbit_1 | =INFO REPORT==== 25-Jan-2017::14:44:21 ===
rabbit_1 | closing AMQP connection <0.718.0> (172.18.0.3:36590 -> 172.18.0.2:5672)
app_1 | [x] Sent 'Hello World!'
healthcheckcompose_app_1 exited with code 0
Dockerfile (rabbitmq + curl):
FROM rabbitmq:3-management
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 4369 5671 5672 25672 15671 15672
Version 3 no longer supports the condition form of depends_on. So I moved from depends_on to restart: on-failure. Now my app container will restart 2-3 times until it is working, but it is still a docker-compose feature without overwriting the entrypoint.
docker-compose (version 3):
version: "3"
services:
  rabbitmq: # login guest:guest
    image: rabbitmq:management
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 30s
      timeout: 10s
      retries: 5
  app:
    build: ./app/
    environment:
      - HOSTNAMERABBIT=rabbitmq
    restart: on-failure
    depends_on:
      - rabbitmq
    links:
      - rabbitmq
Finish background process when next process is completed
When you launch a command in the background, the special parameter `$!` contains its process ID. You can save this in a variable and later kill(1) it.
In plain shell-script syntax, without Make-related escaping:
./check_container.sh &
CHECK_CONTAINER_PID=$!
docker-compose run --rm tests
RESULT=$?
kill "$CHECK_CONTAINER_PID"
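One caveat with the snippet above: if `docker-compose run` aborts the script (e.g. under `set -e`), the background checker is never killed. A trap on EXIT makes the cleanup unconditional (a sketch - `sleep 60` and `true` are stand-ins for the real `./check_container.sh` and `docker-compose run --rm tests` commands so the example is self-contained):

```shell
#!/bin/sh
sleep 60 &                       # stand-in for: ./check_container.sh &
CHECK_CONTAINER_PID=$!

# Kill the background checker on ANY exit path, not only the happy one.
trap 'kill "$CHECK_CONTAINER_PID" 2> /dev/null' EXIT

true                             # stand-in for: docker-compose run --rm tests
RESULT=$?
```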
Why do sleep & wait in bash?
It makes sense to sleep in the background and then wait when one wants to handle signals in a timely manner. When bash is executing an external command in the foreground, it does not handle any signals received until the foreground process terminates (detailed explanation here).
While the second example implements a signal handler, for the first one it makes no difference whether the sleep is executed in the foreground or not. There is no trap and the signal is not propagated to the `nginx` process. To make it respond to the `SIGTERM` signal, the entrypoint should be something like this:
/bin/sh -c 'nginx -g \"daemon off;\" & trap exit TERM; while :; do sleep 6h & wait $${!}; nginx -s reload; done'
To test it:
docker run --name test --rm --entrypoint="/bin/sh" nginx -c 'nginx -g "daemon off;" & trap exit TERM; while :; do sleep 20 & wait ${!}; echo running; done'
Stop the container
docker stop test
or send the `TERM` signal (`docker stop` sends a `TERM` followed by a `KILL` if the main process does not exit):
docker kill --signal=SIGTERM test
By doing this, the script exits immediately. Now if we remove the `wait ${!}`, the trap is executed when `sleep` ends. All that works well for the second example too.
Note: in both cases the intention is to check certificate renewal every 12h and reload the configuration every 6h, as mentioned in the guide. The two commands do that just fine. IMHO the additional wait in the first example is just an oversight of the developers.
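The foreground-vs-background difference is easy to demonstrate locally without docker (a sketch - the exit code 7 is an arbitrary marker I chose to show that the trap fired):

```shell
#!/bin/sh
# The child shell sleeps in the BACKGROUND and waits, so its TERM trap
# can fire immediately instead of being deferred until the sleep ends.
sh -c 'trap "exit 7" TERM; sleep 5 & wait $!' &
child=$!
sleep 1                          # give the child time to install its trap
kill -TERM "$child"
status=0
wait "$child" || status=$?
echo "child exited with $status"
```

With a foreground `sleep 5` in the child, the same `kill` would only take effect after the full five seconds.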
EDITED:
It seems the rationalization above, which was meant to give possible reasons behind the background sleep, might create some confusion (there is a related post: Why use nginx with "daemon off" in background with docker?).
While the command suggested in the answer above is an improvement over the one in the question, it is still flawed because, as mentioned in the linked post, the `nginx` server should be the main process and not a child. That can easily be achieved using the `exec` system call. The script becomes:
'while :; do sleep 6h; nginx -s reload; done & exec nginx -g "daemon off;"'
(More info in the section Configure app as PID 1 in Docker best practices.)
This, IMHO, is far better because not only is `nginx` monitored, but it also handles signals. A configuration reload (`nginx -s reload`), for example, can also be done manually by simply sending the `HUP` signal to the docker container (see Controlling nginx).
How to start docker container after successful wait-for-it script
Let's clear up a couple of things first. When you use both `ENTRYPOINT` and `CMD` in a Dockerfile, the `CMD` value is passed to the `ENTRYPOINT` as a parameter. So what you currently have in the file translates to
/entrypoint.sh npm start
This is executed when you start the container. Without knowing what's happening in `entrypoint.sh`, it's hard to tell what impact this has.
Docker
You could make the following changes - please give this a try:
- Remove the `ENTRYPOINT` from the `Dockerfile`.
- Change `CMD` to the following: `CMD /wait-for-it.sh localhost && /entrypoint.sh npm start`

When doing that, please adjust the following:
- The path for `wait-for-it.sh` - adjust it to wherever you're copying the script file in the Dockerfile. I suggest you copy it to the same folder as `entrypoint.sh`.
- The `localhost` argument for the `wait-for-it.sh` script file - replace it with your database host.
What the above does is run the `wait-for-it.sh` script and then, once the database is up, run the previous command that you had in `ENTRYPOINT` and `CMD`. It should be comparable to what you currently have.
As an alternative, you could also call the `wait-for-it.sh` script from your `entrypoint.sh` script and only run the additional steps (`npm start`) once the wait script has succeeded. Up to you...
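For the alternative route, such an `entrypoint.sh` might look like this (a sketch - the `WAIT_FOR_IT` path, the `db:5432` address and the `main` helper are illustrative assumptions, not taken from the question):

```shell
#!/bin/sh
# Hypothetical entrypoint: block until the database answers, then hand
# control over to whatever command was passed in (e.g. "npm start").
WAIT_FOR_IT="${WAIT_FOR_IT:-/wait-for-it.sh}"   # assumed script location

main() {
  "$WAIT_FOR_IT" db:5432 --timeout=30 || exit 1
  exec "$@"   # replace the shell so the app becomes the main process
}
```

In the image the script would end with `main "$@"`, with `ENTRYPOINT ["/entrypoint.sh"]` and `CMD ["npm", "start"]` in the Dockerfile.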
Docker-Compose
If you are using Docker-Compose for starting up your containers, you can overwrite the command that is executed when the container starts using the `command` attribute in your `docker-compose.yaml` file, e.g.
command: >
  bash -c "
    /wait-for-it.sh localhost
    && /entrypoint.sh npm start
  "
Please note:
- the use of `bash -c` for starting multiple commands using your shell of choice (Bash in this case)
- the quotes - you'll need them for having multiple lines.

Using this method, you can basically chain multiple commands to run after each other, combining them using the `&&` operator.
Wait-for-it Script
BTW: I use this wait-for-it script for a similar purpose with good results, in the same manner as described above. It's slightly more robust than your version of the wait script, and supports pretty much any host/port combination. I use it to wait for MySQL - your question is not clear on whether it's about MySQL or PostgreSQL.