Configuring a single Nginx + uWSGI server to serve multiple Flask apps

Someone on the Flask mailing list asked how to serve multiple Flask apps via uWSGI + Nginx. With uWSGI there are usually several ways to accomplish any given task, but here’s how I do it for RockClimbing.com. I spent several days reading the uWSGI docs and various blog posts around the net, so this should be reasonably correct.

This example shows how to serve multiple Flask apps where each app has its own domain name. If you want to mount multiple Flask apps under a single domain name, see this example in the Flask docs (the pull request hasn’t been merged yet as of this writing).

In general, use the uWSGI Emperor, even if you’re only running a single Flask app. The Emperor is a master process that watches over the app(s) to make sure everything is running correctly. Each child app is called a ‘vassal’. If a vassal crashes for some reason, the Emperor will reload it. A number of older blog posts recommend managing your uWSGI apps with supervisord; use the Emperor instead, since it offers extra benefits like automatically reloading a vassal when you change that vassal’s uWSGI config file.

Create a very basic uWSGI Emperor config called /etc/uwsgi/emperor.ini:
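A minimal sketch of what that file might contain (the vassals directory path here is an assumption; adjust it to your own layout):

```ini
; /etc/uwsgi/emperor.ini
[uwsgi]
; directory the Emperor watches for vassal config files
emperor = /etc/uwsgi/vassals
```

The Emperor will spawn one vassal per .ini file it finds in that directory, and reload a vassal whenever its file changes.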

If you want to pass a particular option to all vassals, you can specify the option in the emperor.ini file using the vassal-set parameter.
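For example, to apply a request timeout to every vassal (the timeout value here is an illustration, not a production recommendation):

```ini
; in emperor.ini, alongside the emperor = line
vassal-set = harakiri=30
```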

Create a simple vassal config file at /etc/uwsgi/vassals/app_name.ini. By default, the config file name is used for the vassal process name. I manage my vassals with Ansible, so this config has several Jinja2 variables that look like {{ variable }}. Just replace those manually with your own values.

You can bind the app to either a TCP socket or a Unix socket. Just make sure Nginx is passing requests to the same socket that the vassal is listening on.

For security, specify a non-root Linux user/group for the vassal to run under. Typically you’ll run each vassal as a separate user/group, and then run the Emperor as root so it can start each app process and then drop privileges before actually serving requests.

Since we’re walking through how to run multiple Flask apps, you’ll want to run each under a separate virtualenv to avoid package conflicts. For example, if one Flask app requires sqlalchemy 0.8 and another requires sqlalchemy 0.9, they’ll need to be in separate virtualenvs. uWSGI makes it easy to specify which virtualenv to run the vassal under by passing the virtualenv parameter.

For Flask, the callable is typically app, and the module is the name of the file where app is defined. You’ll also need to tell uWSGI to cd to the path of the Flask app before it tries to import module.
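Pulling the pieces above together, a vassal config might look like this (every path, user name, and module name below is a placeholder; substitute your own):

```ini
; /etc/uwsgi/vassals/app_name.ini
[uwsgi]
; bind to a Unix socket -- Nginx must pass requests to this same path
socket = /run/uwsgi/app_name.sock
chmod-socket = 660

; run as a dedicated non-root user/group
uid = app_user
gid = app_group

; keep this app's packages isolated from other vassals
virtualenv = /srv/app_name/venv

; cd here before importing, then load the `app` callable from app.py
chdir = /srv/app_name
module = app
callable = app

master = true
processes = 4
```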

Lastly, if you do any googling about how to scale Flask + uWSGI, you’ll hit a blog post by David Cramer where he found it better to run multiple uWSGI instances and have Nginx handle the load balancing. The thundering herd problem that David experienced is better solved by setting thunder-lock = true in your vassal config (or setting it globally for all vassals in your emperor.ini config). It’s better to let uWSGI handle the load balancing rather than Nginx because Nginx doesn’t know which uWSGI processes are busy and just round-robins through them when it sends requests. If instead you let uWSGI handle the load balancing, it will intelligently pass requests to the processes that are free.
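Applied globally via the vassal-set mechanism mentioned earlier, that would look something like this in emperor.ini (a sketch):

```ini
; in emperor.ini
vassal-set = thunder-lock=true
```

Or set it per-app by adding thunder-lock = true to an individual vassal’s .ini file.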

There are also a number of options commented out; those are simply reminders to myself that those options exist, but I don’t currently use them.

At this point, test that the Emperor starts and correctly loads the vassals by running uwsgi --emperor /etc/uwsgi/emperor.ini. In production, I use systemd to manage the Emperor–the uWSGI docs have an excellent example systemd config file.
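For reference, a unit file along the lines of the uWSGI docs’ example might look like this (the binary path and config path are assumptions; adjust to where uWSGI is installed on your system):

```ini
; /etc/systemd/system/uwsgi-emperor.service
[Unit]
Description=uWSGI Emperor
After=network.target

[Service]
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target
```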

Next you need to configure Nginx to pass requests to the proper socket. Here’s an extremely simple Nginx config showing the server and location blocks. I run something a bit more complex in production, but this is easier to understand:
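Something along these lines (the server name and socket path are placeholders; the socket must match the vassal’s socket setting):

```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        # hand the request off to the uWSGI vassal over its Unix socket
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/app_name.sock;
    }
}
```

Repeat the server block once per domain, pointing each at its own vassal’s socket.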

Feel free to ask questions or suggest improvements in the comments.

Comments

  1. Hi Jeff, thank you for the post! I’m using uWSGI’s Emperor mode to serve a large number of apps. I would like to scale it down by keeping a limited number of processes while still serving all the instances, like what mod_wsgi does with WSGIDaemonProcess and the processes parameter. Do you have any tips?
