How to run a venv in Docker?
The example you show doesn't need any OS-level dependencies to build its Python dependencies. That simplifies things significantly: you can do everything in a single Docker build stage, without a virtual environment, and there would be no particular benefit from splitting it up.
FROM python:3.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["./bot.py"]
The place where a multi-stage build with a virtual environment helps is when you need a full C toolchain to build Python libraries. In that case, the first stage installs the C toolchain and sets up the virtual environment, and the second stage uses COPY --from=... to copy the entire virtual environment into the final image.
# Builder stage:
FROM python:3.7 AS builder
# Install OS-level dependencies
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
build-essential
# libmysqlclient-dev, for example
# Create the virtual environment
RUN python3 -m venv /venv
ENV PATH=/venv/bin:$PATH
# Install Python dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
# If your setup.py/setup.cfg has a console script entry point,
# install the application too
# COPY . .
# RUN pip3 install .
# Final stage: must use _exactly_ the same base image as the builder
# (Dockerfiles don't allow trailing comments on instruction lines)
FROM python:3.7
# Install OS-level dependencies if needed (libmysqlclient, not ...-dev)
# RUN apt-get update && apt-get install ...
# Copy the virtual environment; must be _exactly_ the same path
COPY --from=builder /venv /venv
ENV PATH=/venv/bin:$PATH
# Copy in the application (if it wasn't `pip install`ed into the venv)
WORKDIR /app
COPY . .
# Say how to run it
EXPOSE 8000
CMD ["./bot.py"]
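The ENV PATH=/venv/bin:$PATH lines above do what activate would do: they put the venv's bin/ first on the search path. A minimal sketch of the same idea outside Docker (the temp path stands in for /venv):

```python
import os
import subprocess
import tempfile
import venv

# Create a throwaway venv (stand-in for /venv in the Dockerfile)
tmp = tempfile.mkdtemp()
venv.create(tmp, symlinks=True)

# Prepend the venv's bin/ to PATH -- exactly what ENV PATH=/venv/bin:$PATH does
env = dict(os.environ)
env["PATH"] = os.path.join(tmp, "bin") + os.pathsep + env["PATH"]

# 'python3' now resolves to the venv's interpreter
out = subprocess.run(
    ["python3", "-c", "import sys; print(sys.prefix)"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # the venv directory, not the system prefix
```

No activate script needed; PATH ordering is the whole trick, which is why the same ENV line must be repeated in the final stage.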
Dockerfile - activate Python virtualenv - ubuntu
To explain your error message: by default, Docker runs these commands with /bin/sh, and source is a Bash command. You have two options:
- Specify Bash as the shell used to run these commands by adding
SHELL ["/bin/bash", "-c"]
before the RUN commands
- Use the POSIX dot operator instead:
RUN . gnsave/bin/activate && pip install -r requirements.txt
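To illustrate why the dot works everywhere (the file name below is a hypothetical stand-in for an activate script): `.` is the POSIX spelling of source, so it is understood by /bin/sh as well as Bash:

```shell
#!/bin/sh
# Write a tiny stand-in for a venv activate script (name hypothetical)
printf 'VENV_DEMO=active\n' > /tmp/fake_activate

# '.' reads the file into the current shell -- the POSIX equivalent of
# Bash's 'source', so it also works when Docker uses /bin/sh for RUN
. /tmp/fake_activate
echo "$VENV_DEMO"
```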
Docker can't find Python venv executable
A virtualenv is not a standalone environment that can be copied between OSes (or containers):
$ python -m venv venv
$ ls -l venv/bin/
total 36
-rw-r--r-- 1 user user 1990 Jun 2 08:35 activate
-rw-r--r-- 1 user user 916 Jun 2 08:35 activate.csh
-rw-r--r-- 1 user user 2058 Jun 2 08:35 activate.fish
-rw-r--r-- 1 user user 9033 Jun 2 08:35 Activate.ps1
-rwxr-xr-x 1 user user 239 Jun 2 08:35 pip
-rwxr-xr-x 1 user user 239 Jun 2 08:35 pip3
-rwxr-xr-x 1 user user 239 Jun 2 08:35 pip3.10
lrwxrwxrwx 1 user user 46 Jun 2 08:35 python -> /home/user/.pyenv/versions/3.10.2/bin/python
lrwxrwxrwx 1 user user 6 Jun 2 08:35 python3 -> python
lrwxrwxrwx 1 user user 6 Jun 2 08:35 python3.10 -> python
As you can see, the python executables are just symlinks to the original Python executable. A venv is something like a snapshot of your original Python installation that can be applied or reverted, but a snapshot is useless if you don't have the original base. So you have to create the venv in the same environment it will be used in.
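That symlink structure is easy to verify from Python itself; a small sketch using the stdlib venv module (temp path illustrative):

```python
import os
import tempfile
import venv

tmp = tempfile.mkdtemp()
# symlinks=True matches the POSIX default of `python -m venv`
venv.create(tmp, symlinks=True)

python_link = os.path.join(tmp, "bin", "python")
# bin/python is only a symlink back to the interpreter that created the
# venv, which is why the venv breaks when that base interpreter is absent
print(os.path.islink(python_link))
print(os.path.realpath(python_link))
```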
However, in the case of containers you don't need a venv at all. A container is already an isolated environment, and you don't need one more isolation level on top of it. (At least, my question about why we need a venv inside a container still doesn't have an answer.)
In short: remove all venv-related lines:
# syntax=docker/dockerfile:1
FROM gcr.io/distroless/python3
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT [ "python" , "main.py"]
However, if you need some extra libraries or compile tools (like gcc) to build Python libraries during pip install, then a venv can be used to move only the resulting library binaries, without storing the compile tools inside the container. In this case you have to use the same (or a compatible) Python base in the build image and in the resulting image (the venv "snapshot" should be applied to a compatible base).
Let's see this example:
FROM debian:11-slim AS build
...
FROM gcr.io/distroless/python3-debian11
...
Both images are at least Debian-based.
Or another example:
FROM python:3.9-slim as compiler
...
FROM python:3.9-slim as runner
...
And again, the base of builder and runner is the same.
It looks like python:3.9.5-slim-buster and gcr.io/distroless/python3 are both Debian-based and should be compatible, but apparently they are not fully compatible.
To debug, change the entrypoint to ENTRYPOINT [ "sleep", "600" ]. That will keep the container running for 10 minutes. Then attach to the running container with docker exec -it container_name bash and check whether the python executable exists: ls -l /app/venv/bin/. Or simply skip the venv entirely, as I said before.
Error activating virtualenv with docker ENTRYPOINT script
The problem is your Compose config: it's overriding /app_server with the directory from the host. Just delete the volumes from your Docker Compose setup.
(As an aside, I recommend against using Alpine, it'll often result in slow Docker builds: https://pythonspeed.com/articles/alpine-docker-python/)
Is there a way to automatically activate a virtualenv as a docker entrypoint?
As an alternative to sourcing the script inline with the command, you could make a script that acts as an ENTRYPOINT. An example entrypoint.sh would look something like:
#!/bin/bash
source venv/bin/activate
exec "$@"
Then in your Dockerfile you would copy this file and set it as the ENTRYPOINT:
FROM myimage
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Now you can run it like docker run mynewimage flask <sub command> or docker run mynewimage gunicorn.
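The exec "$@" line is what makes this work: the entrypoint script replaces itself with whatever command was passed, so the real process runs with the activated environment. A minimal illustration outside Docker (file path hypothetical):

```shell
#!/bin/sh
# A stand-in for entrypoint.sh: prepare the environment, then hand
# control over to the requested command via exec
cat > /tmp/demo_entrypoint.sh <<'EOF'
#!/bin/sh
APP_ENV=prepared
export APP_ENV
exec "$@"
EOF
chmod +x /tmp/demo_entrypoint.sh

# The wrapped command sees the environment set up by the entrypoint
RESULT=$(/tmp/demo_entrypoint.sh sh -c 'echo "$APP_ENV"')
echo "$RESULT"
```

Because of exec, the wrapped command also inherits PID 1 in a container, so signals like docker stop reach it directly.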
Install and using pip and virtualenv in Docker
On Debian-based platforms, including Ubuntu, the command installed by python3-pip is called pip3 so that it can peacefully coexist with any system-installed Python 2 and its pip.
Somewhat similarly, the virtualenv command is not installed by the package python3-virtualenv; to get that, you need apt-get install -y virtualenv.
Note that venv is included in the Python 3 standard library, so you don't really need to install anything at all.
python3 -m venv newenv
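So, assuming a working python3, the stdlib module is enough on its own. A quick check (the --without-pip flag and the demo path are just to keep this sketch fast and self-contained; drop the flag in real use):

```shell
#!/bin/sh
# venv ships with the Python 3 standard library; no virtualenv package
# is needed. --without-pip just keeps this demo quick.
python3 -m venv --without-pip /tmp/newenv_demo
ls -l /tmp/newenv_demo/bin/python
```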
Why would you want a virtualenv inside Docker anyway, though? (There are situations where it makes sense but in the vast majority of cases, you want the Docker container to be as simple as possible, which means, install everything as root, and rebuild the whole container if something needs to be updated.)
As an aside, you generally want to minimize the number of RUN statements. Making many layers while debugging is perhaps defensible, but layers which do nothing are definitely just wasteful. Perhaps also discover that apt-get can install more than one package at a time.
RUN apt-get update -y && \
apt-get install -y python3 python3-pip && \
...
The && causes the entire RUN sequence to fail as soon as one of the commands fails.
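A quick way to see the short-circuit behaviour:

```shell
#!/bin/sh
# Each command after '&&' runs only if the previous one succeeded, so a
# failed install step aborts the chain instead of being silently ignored
OUT=$(sh -c 'echo step1 && false && echo step2; echo after')
echo "$OUT"
```

Here step2 never prints because false failed; in a Dockerfile that failure also fails the whole RUN layer, which is exactly what you want.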