Daemonizing Celery

Daemonizing Celery in production

OK, now that we know what the error was (from your comment), the following steps will most likely solve your problem:

  1. Create a virtual environment for your Celery: python3 -m venv /home/classgram/venv

  2. Your environment file can export PYTHONPATH=/home/classgram/www/hamclassy-backend, which is the simplest solution and is fine for testing and development. For production, however, I recommend building your project as a Python package (wheel) and installing it into your virtual environment.

  3. Modify your systemd service file (and possibly the environment file too) so that you execute Celery directly from the virtual environment created above. It should run Celery this way (or similar): /home/classgram/venv/bin/celery multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}. You can make this shorter by setting CELERY_BIN to /home/classgram/venv/bin/celery.
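The steps above can be sketched as a short provisioning script (a sketch, not verbatim: the paths come from the question, and the wheel build assumes your project ships a setup.py or pyproject.toml):

```shell
# 1. Create a virtual environment for Celery
python3 -m venv /home/classgram/venv

# 2a. Quick option for testing/development: put the project on PYTHONPATH
export PYTHONPATH=/home/classgram/www/hamclassy-backend

# 2b. Production option: build a wheel and install it into the venv
cd /home/classgram/www/hamclassy-backend
/home/classgram/venv/bin/pip install build
/home/classgram/venv/bin/python -m build --wheel
/home/classgram/venv/bin/pip install dist/*.whl
```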

Daemonizing celery

I replicated your steps and it resulted in the same issue. The problem was the celery user not having a shell.

sudo useradd -N -M --system -s /bin/false celery

Changing -s /bin/false to -s /bin/bash fixed the issue. The reason is that the celeryd init script uses the celery user's shell to execute Celery commands. Without a valid shell, the su command below exits silently.

_chuid () {
    su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}
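The effect is easy to demonstrate (a hypothetical session; requires root and an existing celery user):

```shell
# With /bin/false as the login shell, su runs `/bin/false -c '...'`,
# which ignores its arguments and exits 1 - nothing is executed or printed
sudo usermod -s /bin/false celery
sudo su celery -c 'echo running as $(whoami)'

# With a real shell the command actually runs and prints "running as celery"
sudo usermod -s /bin/bash celery
sudo su celery -c 'echo running as $(whoami)'
```

Alternatively, you can keep the restricted login shell and have su supply one explicitly with -s /bin/sh.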

How to setup celery as daemon

Yes, create a new user (celery is a good name). No special attributes are needed; a regular user is fine. You define the necessary environment variables inside the /etc/conf.d/celery file.

Let's say you created the celery user with the home directory /home/celery... Log in as that user and create a Python 3 virtual environment: python3 -m venv ~/venv. After that, your /etc/conf.d/celery should contain something like:

CELERY_BIN=/home/celery/venv/bin/celery   
CELERY_APP=myproject.myapp # change this to however you named it
CELERY_OPTS=-Ofair -c12 # any other options here

You need to define here all the vars you used in your systemd service file.

Also, there is no need for /bin/sh -c in Exec{Start/Stop/Reload}: ${CELERY_BIN} multi ... will work, since ${CELERY_BIN} points to the Celery script in your virtual environment, which is executable.

Flask+Celery as a Daemon

You could also use supervisord to run your Celery worker. As a bonus, supervisord will also monitor and restart your worker if something goes wrong. Below is an example from my working image, adapted to your situation.

File supervisord.conf

[supervisord]
nodaemon=true

[program:celery]
command=celery -A proj worker --loglevel=INFO
directory=/path/to/project
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
stopasgroup=true
priority=1000

File start.sh

#!/bin/bash
set -e
exec /usr/bin/supervisord -c /etc/supervisor/supervisord.conf

File Dockerfile

# Your other Dockerfile content here

ENTRYPOINT ["/entrypoint.sh"]
CMD ["/start.sh"]

Systemd - daemonizing celery - Failed at step CHDIR spawning /bin/sh: No such file or directory

I was able to solve it!

I found this link to a tutorial, which said that WorkingDirectory and CELERYD_CHDIR are the same thing.

I also read something on SO that suggested using a virtual environment... so I did that, too :).

The updated files:

/etc/systemd/system/celery.service

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=apache
Group=apache
#Environment=PATH=/opt/python39/lib:/home/ec2-user/DjangoProjects/myproj
#Environment=PATH=/home/ec2-user/DjangoProjects/myproj
EnvironmentFile=/etc/conf.d/celery
#WorkingDirectory=/opt/python39
WorkingDirectory=/home/ec2-user/DjangoProjects/myproj
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always

[Install]
WantedBy=multi-user.target
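Once both files are in place, the unit can be loaded and checked with the usual systemd commands (a sketch; the service name matches the unit file above):

```shell
sudo systemctl daemon-reload        # pick up the new/changed unit file
sudo systemctl enable --now celery  # start now and enable at boot
systemctl status celery             # verify the multi nodes forked correctly
journalctl -u celery -e             # read the journal if startup fails
```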

/etc/conf.d/celery

# Name of nodes to start
# here we have a single node
#CELERYD_NODES="w1"
# or we could have three nodes:
CELERYD_NODES="w1 w2 w3"

# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/home/ec2-user/.local/bin/celery"
#CELERY_BIN="/opt/python39/bin/celery"
CELERY_BIN="/home/ec2-user/.virtualenvs/myproj_prod/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"

CELERYD_CHDIR="/home/ec2-user/DjangoProjects/myproj"

# App instance to use
# comment out this line if you don't use an app
#CELERY_APP="myproj"
#CELERY_APP="myproj.celery_tasks"
CELERY_APP="myproj.celery_tasks"
#CELERY_APP="myproj.celery_tasks:myapp"
# ^^ ??? confusion ??? ^^
# or fully qualified:
#CELERY_APP="proj.tasks:app"

# Subcommand used to start/stop the workers
CELERYD_MULTI="multi"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"

# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"

DJANGO_SETTINGS_MODULE="myproj.settings"

After I created the /var/run/celery/ and /var/log/celery/ directories, I ran chmod to give the user and group running the service - apache - access to those folders.
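That directory setup might look like this (a sketch, assuming the apache user/group from the unit file):

```shell
sudo mkdir -p /var/run/celery /var/log/celery
sudo chown apache:apache /var/run/celery /var/log/celery
```

Note that /var/run is usually a tmpfs cleared on reboot; adding RuntimeDirectory=celery to the [Service] section lets systemd recreate /run/celery with the right ownership automatically.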


