Python Daemon and Systemd Service

Adding a Python daemon to systemd

Enabling the daemon with a full path name worked around the issue, but there is a better solution.

The issue was that the service file lived in a user directory but was being started as a system service. However, /usr/lib was not the right place to add new service files anyway; that directory is for files shipped as part of operating system packages. The correct directory for a new system service is /etc/systemd/system. See the related docs about systemd unit file paths.

You still want to enable the service to make sure it gets loaded at boot time.
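For example (the unit name, interpreter, and script path below are placeholders, not taken from the original setup), you would place a unit file at /etc/systemd/system/mydaemon.service:

[Unit]
Description=Example Python daemon

[Service]
ExecStart=/usr/bin/python3 /opt/mydaemon/main.py

[Install]
WantedBy=multi-user.target

and then reload systemd and enable the unit so it is loaded at boot:

sudo systemctl daemon-reload
sudo systemctl enable --now mydaemon.service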

How to make a Python script run like a service or daemon in Linux

You have two options here.

  1. Make a proper cron job that calls your script. Cron is a common name for a GNU/Linux daemon that periodically launches scripts according to a schedule you set. You add your script to a crontab or place a symlink to it in a special directory, and the daemon handles the job of launching it in the background. You can read more at Wikipedia. There are a variety of different cron daemons, but your GNU/Linux system should already have one installed.

  2. Use some kind of Python approach (a library, for example) so your script can daemonize itself. Yes, it will require a simple event loop (where your events are timer ticks, possibly provided by a sleep function).

I wouldn't recommend choosing option 2, because you would, in fact, be re-implementing cron's functionality. The Linux system paradigm is to let multiple simple tools interact to solve your problems. Unless there are additional reasons why you need a daemon (beyond triggering periodically), choose the other approach.

Also, if you use the daemonize approach with a loop and a crash happens, nothing will notice or restart it afterwards (as pointed out by Ivan Nevostruev in the comments to this answer), whereas if the script is added as a cron job, it will simply be triggered again on the next scheduled run.
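As an illustration of option 1 (the schedule, interpreter, and paths are hypothetical), a crontab entry that runs the script every five minutes and appends its output to a log file could look like this:

*/5 * * * * /usr/bin/python3 /path/to/script.py >> /var/log/myscript.log 2>&1

You install it with crontab -e as the user the script should run under.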

process management with python: execute service or systemd or init.d script

I found a way using systemd dbus interface. Here is the code:

import dbus
import sys
import time

SYSTEMD_BUSNAME = 'org.freedesktop.systemd1'
SYSTEMD_PATH = '/org/freedesktop/systemd1'
SYSTEMD_MANAGER_INTERFACE = 'org.freedesktop.systemd1.Manager'
SYSTEMD_UNIT_INTERFACE = 'org.freedesktop.systemd1.Unit'

bus = dbus.SystemBus()

# Ask PolicyKit whether this process is allowed to manage systemd units.
proxy = bus.get_object('org.freedesktop.PolicyKit1', '/org/freedesktop/PolicyKit1/Authority')
authority = dbus.Interface(proxy, dbus_interface='org.freedesktop.PolicyKit1.Authority')
system_bus_name = bus.get_unique_name()

subject = ('system-bus-name', {'name': system_bus_name})
action_id = 'org.freedesktop.systemd1.manage-units'
details = {}
flags = 1             # AllowUserInteraction flag
cancellation_id = ''  # No cancellation id

result = authority.CheckAuthorization(subject, action_id, details, flags, cancellation_id)

if result[1] != 0:
    sys.exit("Need administrative privilege")

# Get a proxy for the systemd manager and start the unit.
systemd_object = bus.get_object(SYSTEMD_BUSNAME, SYSTEMD_PATH)
systemd_manager = dbus.Interface(systemd_object, SYSTEMD_MANAGER_INTERFACE)

unit = systemd_manager.GetUnit('cups.service')
unit_object = bus.get_object(SYSTEMD_BUSNAME, unit)
#unit_interface = dbus.Interface(unit_object, SYSTEMD_UNIT_INTERFACE)

#unit_interface.Stop('replace')
systemd_manager.StartUnit('cups.service', 'replace')

# Wait until systemd has no pending jobs left.
while list(systemd_manager.ListJobs()):
    time.sleep(2)
    print('there are pending jobs, let\'s wait for them to finish.')

# Read the unit's state via the standard D-Bus Properties interface.
prop_unit = dbus.Interface(unit_object, 'org.freedesktop.DBus.Properties')

active_state = prop_unit.Get('org.freedesktop.systemd1.Unit', 'ActiveState')

sub_state = prop_unit.Get('org.freedesktop.systemd1.Unit', 'SubState')

print(active_state, sub_state)

How can I keep my python-daemon process running or restart it on fail?

If you really want to run a script 24/7 in the background, the cleanest and easiest way to do it is surely to create a systemd service.

There are already many descriptions of how to do that, for example here.

One of the advantages of systemd, beyond launching a service at startup, is that it can restart the service after a failure:

Restart=on-failure
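In a unit file, that corresponds to adding the directive to the [Service] section, roughly like this (the interpreter and script path are placeholders, and RestartSec= is an optional extra assumption to space out restarts):

[Service]
ExecStart=/usr/bin/python3 /path/to/script.py
Restart=on-failure
RestartSec=5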

If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.

You can use an until loop, which executes a set of commands for as long as the given condition (here, the script itself) evaluates to false.

#!/bin/bash

until python /path/to/script.py; do
    echo "The program crashed at $(date +%H:%M:%S). Restarting the script..."
done

If the script exits with a non-zero status, the loop runs again and restarts it; once it exits cleanly, the loop ends.
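If you go this route and want the wrapper to survive logging out, you can launch it detached from the terminal, for example (the file name is a placeholder):

nohup /path/to/wrapper.sh >/dev/null 2>&1 &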

systemd execstart python daemon dynamically using virtualenv from environment variable

After some more reading, I stumbled on the answer here. The problem is that the first argument of ExecStart= must be a literal path:

ExecStart= Commands with their arguments that are executed when this service is started. For each of the specified commands, the first argument must be an absolute and literal path to an executable.

Reading further on ExecStart=, it says:

Variables whose value is not known at expansion time are treated as empty strings. Note that the first argument (i.e. the program to execute) may not be a variable.

I also ended up stumbling across this answer, which looks like the same problem. In the end, this is what worked: wrapping the entire command in a shell invocation:

[Unit]
Description=pipeline remove tickets worker instances as a service, instance %i
Requires=pipeline-remove.service
Before=pipeline-remove.service
BindsTo=pipeline-remove.service

[Service]
PermissionsStartOnly=true
Type=idle
User=root
EnvironmentFile=/etc/profile.d/pipeline_envvars.sh
ExecStart=/bin/sh -c '${PIPELINE_VIRTUALENV}/bin/python /pipeline/python/daemons/remove_tickets.py'
Restart=always
TimeoutStartSec=10
RestartSec=10

[Install]
WantedBy=pipeline-remove.service

So ExecStart=/bin/sh -c '<your command>' saves the day: I can now take the Python interpreter path from my environment variable. I'll leave this answer up; hopefully it saves others some time.
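One detail worth noting: EnvironmentFile= is read by systemd itself, not by a shell, so for systemd's purposes the file needs plain VAR=value lines rather than shell code. Assuming a virtualenv location (the path below is a placeholder), /etc/profile.d/pipeline_envvars.sh would contain a line such as:

PIPELINE_VIRTUALENV=/opt/pipeline/venv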


