Tornado Python as Daemon

Taken from the official documentation here.

Overview

Most Tornado apps are run as single processes. For production, this usually means a fairly straightforward combination of external process management and proxying. Here are some gathered best practices and resources.

Development

When debug mode is enabled, templates are not cached and the app will automatically restart during development. This will fail if a Python syntax error occurs, however. (This can be worked around w/ some additional code or by using Supervisor in development)

You might want to run your app from a terminal multiplexer like screen or tmux for more flexibility in leaving things running and tracing fatal errors.

Instrumentation

Production

Typically in production, multiple Tornado app processes are run (at least one per core) behind a frontend proxy. Tornado developer bdarnell has a tornado-production-skeleton illustrating this using Supervisor (process management) and nginx (proxying).

Process Management

Traditionally, Tornado apps are single-process and require an external process manager; however, HTTPServer can also be run with multiple processes. There are also a couple of helpers for managing multiple processes.

Supervisor

  • Managing multiple Pylons apps with Supervisor - an excellent tutorial for getting started with Supervisor
  • Deploy tornado application - short tutorial w/ Supervisor and nginx setup walkthrough
  • supervisord-example.conf - this is an example cfg for running 4 tornado instances
  • Nginx Supervisord Module - this module allows nginx to start/stop backends on demand
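A sketch of a supervisord config in the spirit of that 4-instance example (the program name, paths, and the --port flag are hypothetical; %(process_num)02d gives each instance its own port):

```ini
; Run 4 Tornado instances on ports 8000-8003 under supervisord.
[program:tornado]
command=python /srv/myapp/app.py --port=80%(process_num)02d
process_name=tornado-%(process_num)02d
numprocs=4
autostart=true
autorestart=true
stopsignal=TERM
```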

Daemonizing

  • start-stop-daemon example - if you are running a standard Linux system this is an easy way to daemonize your Tornado app
  • Upstart example - Upstart is built into Ubuntu and can respawn crashed instances.
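A minimal Upstart job of the kind that bullet describes might look like this (the file would live at a path like /etc/init/myapp.conf; runlevels, paths, and the command are hypothetical). The respawn stanza is what restarts crashed instances:

```conf
# Hypothetical Upstart job for a Tornado app; respawns on crash.
description "tornado app"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/bin/python /srv/myapp/app.py --port=8000
```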

Tornado Multi-Process

As mentioned above, Tornado's HTTPServer can be configured to run multiple processes, sharing a single socket or listening on multiple sockets.

Proxying

The official docs include an example for running nginx as a load-balancing proxy and for serving static files.
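In the spirit of that official example, a sketch of such an nginx config (the upstream ports, server root, and paths here are hypothetical): nginx round-robins requests across the local Tornado processes and serves static files itself.

```nginx
# Round-robin across four local Tornado processes.
upstream tornado_backends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;

    # nginx serves static files directly, bypassing Tornado.
    location /static/ {
        root /srv/myapp;
    }

    location / {
        proxy_pass http://tornado_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```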

Deployment

  • Deploying Python web (Tornado) applications to multiple servers - short discussion on using a Load Balancer for live migration w/o downtime
  • Rolling Deployment w/ Fabric
  • Python Deployment Anti-Patterns
  • buedafab - a nice collection of Fabric deployment scripts w/ EC2

What's the best way of communication between tornado and Python based daemon?

ZeroMQ can be used for this purpose. It has various socket types for different purposes and it's fast enough that it will never be your bottleneck. For asynchronous communication you can use DEALER/ROUTER sockets, and for strict synchronous mode you can use REQ/REP sockets.

You can use the Python binding for this: http://www.zeromq.org/bindings:python.

For the async mode you can try something like this from the zguide, chapter 3, "Router-to-dealer async routing":

In your case, the "client" in the diagram will be your web server and your daemon will be the "worker".

(diagram: the zguide's router-to-dealer async routing pattern)


For synchronous use you can try a simple request-reply broker or some variant to suit your needs.

(diagram: the zguide's simple request-reply broker)

The diagram above shows a strictly synchronous cycle of send/recv at the REQ/REP sockets. Read through the zguide link to understand how it works. They also have a Python code snippet on the page.
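As a minimal sketch of the synchronous REQ/REP cycle (assuming pyzmq is installed; the inproc endpoint name and the messages are made up, and both ends run in one script via threads so the example is self-contained, whereas your web server and daemon would connect over TCP):

```python
import threading

import zmq

context = zmq.Context()

# Daemon side: a REP socket, bound first so the client can connect.
rep = context.socket(zmq.REP)
rep.bind("inproc://daemon")

def web_server_side():
    # Web-server side: a REQ socket with a strict send/recv cycle.
    req = context.socket(zmq.REQ)
    req.connect("inproc://daemon")
    req.send_string("hello")
    print(req.recv_string())  # -> echo: hello
    req.close()

t = threading.Thread(target=web_server_side)
t.start()

request = rep.recv_string()          # blocks until a request arrives
rep.send_string("echo: " + request)  # REP must reply before the next recv

t.join()
rep.close()
context.term()
```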

Django application not logging tornado daemon uwsgi

When you use Django, you don't need to configure the logger, because Django does it itself on startup.

When you use Tornado, there is no automatic logger configuration, so configure it manually:

import logging.config

logging.config.dictConfig(settings.LOGGING)

Also, for further reading: logging.config — Logging configuration, in the Python standard library.
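A minimal sketch of such a manual configuration (the LOGGING dict here is a hypothetical stand-in for Django's settings.LOGGING, using the standard dictConfig schema):

```python
import logging
import logging.config

# Hypothetical stand-in for settings.LOGGING from a Django settings module.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "simple"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING)

# Messages now reach the console handler in the Tornado process too.
logging.getLogger("myapp").info("logger configured")
```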

Celery Flower as Daemon

You could keep it as a command-line program and run it under the supervisord daemon. This is a common solution in the Python world (although supervisord works with any command, not just Python), and I use it all the time.

Supervisord makes the program think it is still running in a terminal. There are many examples of how to use supervisord; one that I use for a Python proxy server can be found here, scroll down to "Installing the proxy server as a service".
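For Flower specifically, a supervisord entry along these lines (the directory, port, and exact command flags are placeholders and depend on your Celery version) would keep it running and restart it on failure:

```ini
; Hypothetical supervisord entry running Flower as a service.
[program:flower]
command=celery flower --port=5555
directory=/srv/myapp
autostart=true
autorestart=true
```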

Giving my Python application a web interface to monitor it, using Tornado

Is it possible to use the threading package and run Tornado inside its own thread?

Edit:

The threading module documentation at http://docs.python.org/library/threading.html has more details, but I am imagining something like this:

import threading
import tornado.ioloop

t = threading.Thread(target=tornado.ioloop.IOLoop.instance().start)
t.start()

Let me know if that works!
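On modern Tornado (5.0+) the IOLoop is a thin wrapper over asyncio, so the same idea can be sketched with the standard library alone (the names below are illustrative, not Tornado API). Note that each thread needs its own event loop, and that stopping the loop from another thread must go through call_soon_threadsafe:

```python
import asyncio
import threading

loop = asyncio.new_event_loop()

def run_loop():
    # Each thread needs its own event loop; set it, then run forever.
    asyncio.set_event_loop(loop)
    loop.run_forever()

t = threading.Thread(target=run_loop, daemon=True)
t.start()

# Schedule work onto the background loop from the main thread.
future = asyncio.run_coroutine_threadsafe(
    asyncio.sleep(0.01, result="done"), loop)
print(future.result())  # -> done

# Stopping must also be done thread-safely.
loop.call_soon_threadsafe(loop.stop)
t.join()
loop.close()
```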

How to wrap python daemon around my code

Let's keep it simple. Project tree:

$ tree
.
├── daemon.py
├── main.py
├── server.py
└── __init__.py

daemon.py is a Daemon class from http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/, server.py is a threaded version of the code from Python BaseHTTPServer and Tornado, and __init__.py is an empty file which allows us to import code from other files in the directory. main.py is:

#!/usr/bin/env python

import sys, time, threading
from daemon import Daemon
from server import run_tornado, run_base_http_server

class ServerDaemon(Daemon):
    def run(self):
        threads = [
            threading.Thread(target=run_tornado),
            threading.Thread(target=run_base_http_server),
        ]

        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()

if __name__ == "__main__":
    daemon = ServerDaemon('/tmp/server-daemon.pid')

    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            daemon.start()
        elif 'stop' == sys.argv[1]:
            daemon.stop()
        elif 'restart' == sys.argv[1]:
            daemon.restart()
        else:
            print "Unknown command"
            sys.exit(2)
        sys.exit(0)
    else:
        print "usage: %s start|stop|restart" % sys.argv[0]
        sys.exit(2)

Run it with:

$ python main.py start

For the first version of the code from Python BaseHTTPServer and Tornado, change the

if __name__ == '__main__':

block into a function such as def myfun(): and call it from the run() method of the Daemon subclass.
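As a minimal sketch of that refactoring (myfun and the startup details are placeholders, not code from the linked answers):

```python
# Before: startup logic lives directly under the __main__ guard.
#
#   if __name__ == '__main__':
#       start_servers()
#
# After: the same logic is wrapped in a function the daemon can call.

def myfun():
    # Placeholder for the server-startup code from server.py.
    return "servers started"

class ServerDaemon(object):
    def run(self):
        # Daemon.run() simply delegates to the wrapped startup function.
        return myfun()

print(ServerDaemon().run())  # -> servers started
```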

How to stop (and restart!) a Tornado server?

Don't use threads like this unless you really need to - they complicate things quite a bit. For tests, use tornado.testing.AsyncTestCase or AsyncHTTPTestCase.

To free the port, you need to stop the HTTPServer, not just the IOLoop. In fact, you might not even need to stop the IOLoop at all. (but normally I'd restart everything by just letting the process exit and restarting it from scratch).

A non-threaded version of your example would be something like:

#! /usr/bin/env python

import datetime

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world!\n")

def start_app(*args, **kwargs):
    application = tornado.web.Application([
        (r"/", MainHandler),
    ])
    server = application.listen(8888)
    print "Starting app"
    return server

def stop_tornado():
    ioloop = tornado.ioloop.IOLoop.current()
    ioloop.add_callback(ioloop.stop)
    print "Asked Tornado to exit"

def main():
    server = start_app()
    tornado.ioloop.IOLoop.current().add_timeout(
        datetime.timedelta(seconds=1),
        stop_tornado)
    tornado.ioloop.IOLoop.current().start()
    print "Tornado finished"
    server.stop()

    # Starting over
    start_app()
    tornado.ioloop.IOLoop.current().start()

if __name__ == '__main__':
    main()

Tornado difference between run in executor and defining async methods

run_on_executor is for interfacing with blocking non-async code.

You are correct that async code is only executed in a single thread.
Maybe an example would illustrate the point.

Let's say your Tornado web service interfaces with a library that makes use of requests to fetch country info for a given IP address. Since requests is a non-async library, calling this function would block the Tornado event loop.

So, you have two options: try to find a replacement library that is async-compatible, or run the blocking code in a different thread or process and have your event loop await its result as for normal async code, without blocking the event loop. The latter option is run_on_executor, which allows you to run the task in a different thread or process while the event loop awaits its completion.
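Tornado's run_on_executor decorator needs Tornado itself, but the underlying mechanism can be sketched with the standard library: hand the blocking call to a thread pool and await the resulting future (lookup_country here is a made-up stand-in for the blocking requests-based call):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def lookup_country(ip):
    # Made-up stand-in for a blocking call (e.g. one using requests).
    time.sleep(0.01)
    return "GB" if ip.startswith("81.") else "unknown"

async def handler():
    loop = asyncio.get_running_loop()
    # The event loop keeps serving other coroutines while the
    # blocking call runs in the thread pool.
    country = await loop.run_in_executor(
        executor, lookup_country, "81.2.69.160")
    return country

print(asyncio.run(handler()))  # -> GB
```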


