Run a Shell Script from Docker-Compose Command, Inside the Container

Run a shell script from docker-compose command, inside the container

First, your COPY . . already copies entrypoint.sh into $APP (the directory you set via ENV), but you never reference it from there. Second, you need to set execute permission on entrypoint.sh. It is better to add these three lines to the Dockerfile, so you will not need a command: entry in the docker-compose file.

FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
    python3 python3-dev gcc \
    gfortran musl-dev \
    libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
# define TZ before using it below (pick your own zone)
ENV TZ UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt

COPY . .
# These three lines set up /entrypoint.sh
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
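The Dockerfile above expects an entrypoint.sh next to it, whose contents were not shown in the question. A minimal sketch (the setup step and variable names here are placeholders, not from the original):

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: run one-time setup, then hand off to the
# container's CMD so the real process becomes PID 1 and receives signals.
set -e
setup_done="yes"   # placeholder for real setup work (migrations, config, ...)
echo "setup complete, starting app"
exec "$@"          # with no arguments this is a no-op; in a container it runs the CMD
```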

The docker-compose service for the api will then be:

  api:
    build: .
    container_name: app
    expose:
      - "5000"

Or you can keep your own command:, which will also work fine:

version: "2"

services:
  api:
    build: .
    container_name: app
    command: /bin/sh -c "/entrypoint.sh"
    expose:
      - "5000"

Now you can check it with a plain docker run command too:

docker run -it --rm myapp

How can I run an sh script inside a docker-compose file in the background?

You can run a command in a background process using a single ampersand & after the command. See this comment discussing how to run multiple commands in the background using subshells.

You can use something like this to run the first two commands in the background and continue onto the final command.

command: >
  sh -c '(./root/clone_repo.sh &) &&
  (appium -p 4723 &) &&
  ./root/start_emu.sh'
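To see why this pattern reaches the final command immediately, here is a standalone sketch with sleep standing in for clone_repo.sh and appium (the real scripts come from the question; sleep just simulates a long-running task):

```shell
#!/bin/sh
# (cmd &) runs cmd in a detached background subshell, so the script
# moves straight on to the final foreground command.
start=$(date +%s)
(sleep 3 &)   # stands in for ./root/clone_repo.sh
(sleep 3 &)   # stands in for appium -p 4723
end=$(date +%s)
echo "reached the final command after $((end - start))s"
```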

How to execute script by host after starting docker container

You can't use the Docker tools to execute commands on the host system. A general design point around Docker is that containers shouldn't be able to affect the host.

Nothing stops you from writing your own shell script that runs on the host and does the steps you need:

#!/bin/sh
docker-compose up -d
./wait-for.sh localhost 1433
./script.sh

(The wait-for.sh script is the same as described in the answers to Docker Compose wait for container X before starting Y that don't depend on Docker health checks.)
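If you prefer not to vendor that script, its core is just a loop probing the port. A minimal sketch, assuming nc is available (the real wait-for.sh linked above handles timeouts and argument parsing more carefully):

```shell
#!/bin/sh
# wait_for HOST PORT [TRIES]: poll once per second until the TCP port
# accepts connections; return 1 if it never comes up.
wait_for() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    nc -z "$host" "$port" 2>/dev/null && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

Used as, for example, `wait_for localhost 1433 && ./script.sh`.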

For your use case it may be possible to run the data importer in a separate container. A typical setup could look like this; note that the importer will run every time you run docker-compose up. You may want to actually build this into a separate image.

version: '3.8'
services:
  db: { same: as you have currently }
  importer:
    image: mcr.microsoft.com/mssql/server:2017-latest
    entrypoint: ./wait-for.sh db 1433 -- ./script.sh
    working_dir: /import
    volumes: ['.:/import']

The open-source database containers also generally support putting scripts in /docker-entrypoint-initdb.d that get executed the first time the container is launched, but the SQL Server image doesn't seem to support this; questions like How can I restore an SQL Server database when starting the Docker container? have a complicated setup to replicate this behavior.

How do I run a shell script inside docker-compose

Your script isn't working because Alpine base images don't have GNU bash. Your script already limits itself almost entirely to the POSIX Shell Command Language; if you stick to that, you can change the "shebang" line to #!/bin/sh.

#!/bin/sh
# ^^^ not bash
pytest # individual lines don't need to end with ;
err=$?
# use [ ... ] (test), not ((...))
if [ "$err" -ne 5 ] && [ "$err" -ne 0 ]; then
  exit "$err"
fi
flake8
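One detail worth knowing: pytest exits with status 5 when it collects no tests, which is why the script tolerates that code alongside 0. The POSIX test syntax can be checked in isolation:

```shell
#!/bin/sh
# [ ... ] is the POSIX test command; ((...)) is a bash/ksh arithmetic
# construct that plain sh (e.g. on Alpine) does not understand.
err=5   # simulate pytest's "no tests collected" exit status
if [ "$err" -ne 5 ] && [ "$err" -ne 0 ]; then
  verdict="fail"
else
  verdict="tolerated"
fi
echo "exit status $err is $verdict"
```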

In the context of a CI system, it is important to remove the volumes: line that mounts a local directory over your container's /app directory: having that line means you are not testing what's in your image at all, but instead a possibly-related code tree that's on the host system.

In practice I'd suggest running both of these tools in a non-Docker environment: it will be simpler to run them and collect their results. A style checker like flake8 in particular has very few dependencies on system packages or other running containers, and ideally your unit tests can also run without hard-to-set-up context like a database container. I'd suggest a sequence like:

  1. Check out the source code.
  2. Create a virtual environment and install its dependencies.
  3. Run pytest, flake8, and similar test tools.
  4. Then build a Docker image, without test-only tools.
  5. Run the image with its assorted dependencies.
  6. Run further tests based on network calls into the container.

Run a bash script in docker-compose

A Docker container's command should run for as long as you expect the container to run.

When your script starts, nothing else is running in the container: no apache2, just your script. The script runs service apache2 restart as its last step and exits right after. Docker does not care about any background processes you just started; it only cares that the foreground process, your bash script, has already finished.

As you can see in your image (the latest version at the time of writing): https://hub.docker.com/layers/owncloud/library/owncloud/latest/images/sha256-57e690e039c947e4de6bdae767b57b402d3ed9b9ed9f12ba5d31d3cf92def4b8?context=explore, it uses CMD ["apache2-foreground"] to run. That is how you should end your bash script too, so that it also runs apache2 in the foreground:

#!/bin/bash
a2enmod ssl
a2ensite default-ssl
openssl req -x509 -nodes -days 99999 -newkey rsa:2048 -subj "/C=US/ST=Ohio/L=Cleveland/O=Data/CN=fake.domain.com" -keyout /etc/ssl/private/ssl-cert-snakeoil.key -out /etc/ssl/certs/ssl-cert-snakeoil.pem
apache2-foreground

How do I execute a shell script against my localstack Docker container after it loads?

You can use a mounted volume instead of command: to execute your script when the container starts:

volumes:
  - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh

Also, as specified in the documentation, localstack must be precisely configured to work with docker-compose:

Please note that there’s a few pitfalls when configuring your stack manually via docker-compose (e.g., required container name, Docker network, volume mounts, environment variables, etc.)

In your case I guess you are missing some volumes, the container name, and some environment variables.

Here is an example of a docker-compose.yml found here, which I have more or less adapted to your case

version: '3.8'

services:
  localstack:
    image: localstack/localstack
    container_name: localstack-example
    hostname: localstack
    ports:
      - "4566-4583:4566-4583"
    environment:
      # Declare which aws services will be used in localstack
      - SERVICES=s3
      - DEBUG=1
      # These variables are needed for localstack
      - AWS_DEFAULT_REGION=<region>
      - AWS_ACCESS_KEY_ID=<id>
      - AWS_SECRET_ACCESS_KEY=<access_key>
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - /var/run/docker.sock:/var/run/docker.sock
      - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
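For reference, the mounted create_bucket.sh could be as small as the sketch below. The bucket name is a placeholder, and the script only runs inside the localstack container, where awslocal (the bundled aws CLI wrapper) is available:

```shell
#!/bin/sh
# Executed by localstack on startup from /docker-entrypoint-initaws.d/
awslocal s3 mb s3://example-bucket
```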

Other sources:

  • Running shell script against Localstack in docker container
  • https://docs.localstack.cloud/localstack/configuration/

