Django Application Containerized Deployment Practice (2)

In the previous article, some readers felt it was hard to follow without more detail, so let me briefly explain.

In development, the flow looks like this:
    browser request → python manage.py runserver (e.g. on port 8000) → application code (Django, Flask, etc.)

Deployed online, it becomes:
    domain name request → DNS resolution → server IP → Nginx (port 80) → proxy forwarding to 127.0.0.1:8000 (the upstream IP is not necessarily 127.0.0.1) → the project's application code.
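The development-side flow can be sketched with just the Python standard library, using a tiny WSGI server as a stand-in for `python manage.py runserver` (the `app` function here is a placeholder for real Django/Flask application code, not part of any framework):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Application code: the same WSGI contract Django and Flask speak.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the app\n"]

# Port 0 asks the OS for any free port, so the sketch runs anywhere.
server = make_server("127.0.0.1", 0, app)
port = server.server_address[1]

# Serve exactly one request in the background, then fetch it like a browser would.
threading.Thread(target=server.handle_request, daemon=True).start()
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    body = resp.read()
print(body)  # b'hello from the app\n'
```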

During deployment we add a Docker layer for isolation. This not only resolves inconsistencies across development (dev), test (test), and production (prod) environments, but also achieves the goal of packaging once and running anywhere. We no longer need virtualenv for per-project Python package isolation, which is very convenient when several people work on the same project.

In fact, I already covered Docker containerized deployment in the Docker getting-started chapter. If you are a newcomer and find it hard to get going at first, you can drop the Docker layer and run the services directly on a Linux machine.

With that recap out of the way, let's move on to today's topic:
Django + Nginx + Gunicorn deployment

Gunicorn

Gunicorn ("Green Unicorn") is a Python WSGI HTTP server for UNIX, originally ported from the Ruby community's Unicorn project. It is broadly compatible with various web frameworks, and it is simple and lightweight.

The reason we use uWSGI or Gunicorn at all is that the built-in WSGI servers of Flask and Django do not perform well enough. They are generally used in development and test environments, while a higher-performance WSGI server is used in production.

As an introduction, I quote an official example:

$ pip install gunicorn
$ cat myapp.py
    def app(environ, start_response):
        data = b"Hello, World!\n"
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(data)))
        ])
        return iter([data])
$ gunicorn -w 4 myapp:app
[2014-09-10 10:22:28 +0000] [30869] [INFO] Listening at: http://127.0.0.1:8000 (30869)
[2014-09-10 10:22:28 +0000] [30869] [INFO] Using worker: sync
[2014-09-10 10:22:28 +0000] [30874] [INFO] Booting worker with pid: 30874
[2014-09-10 10:22:28 +0000] [30875] [INFO] Booting worker with pid: 30875
[2014-09-10 10:22:28 +0000] [30876] [INFO] Booting worker with pid: 30876
[2014-09-10 10:22:28 +0000] [30877] [INFO] Booting worker with pid: 30877
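To make the WSGI contract above concrete, here is a small self-contained sketch that drives the same `app` callable by hand, roughly the way a server like Gunicorn would. The minimal `environ` dict is an assumption for illustration; a real server supplies many more CGI-style keys:

```python
def app(environ, start_response):
    # Identical to the official example above.
    data = b"Hello, World!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(data)))
    ])
    return iter([data])

# A bare-bones environ; real servers populate dozens of keys.
environ = {"REQUEST_METHOD": "GET", "PATH_INFO": "/"}
captured = {}

def start_response(status, headers):
    # The server records the status and headers the app hands back.
    captured["status"] = status
    captured["headers"] = headers

# The server joins the iterable the app returns into the response body.
body = b"".join(app(environ, start_response))
print(captured["status"], body)  # 200 OK b'Hello, World!\n'
```

This is all a WSGI server fundamentally does: build `environ` from the raw HTTP request, call the app, and write the captured status, headers, and body back to the socket.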

After gunicorn is installed, we can run `gunicorn -h` to see all available options. In practice, for convenience, we usually put the settings in a configuration file and load it with `-c`, e.g. `gunicorn -c gunicorn.conf.py myproject.wsgi:application` (the module path here is just an example).

One thing worth mentioning: gunicorn has a `--statsd-host` option, which provides another way to track requests. I covered statsd in an earlier monitoring article; you can refer to my previous blog post "Use Statsd+Graphite+Grafana to build a web monitoring system".
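For intuition about what `--statsd-host` sends, statsd is a very simple UDP text protocol: a counter increment is just a datagram shaped like `name:value|c`. Here is a minimal sketch of that protocol; the metric name `gunicorn.request` is illustrative, not necessarily one of Gunicorn's real metric names:

```python
import socket

def statsd_increment(sock, addr, metric, value=1):
    # statsd counters are plain UDP datagrams: "name:value|c"
    payload = f"{metric}:{value}|c".encode()
    sock.sendto(payload, addr)
    return payload

# A local UDP socket stands in for the statsd daemon.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))      # port 0 = any free port
recv.settimeout(5)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
statsd_increment(send, addr, "gunicorn.request")

data, _ = recv.recvfrom(1024)
print(data)  # b'gunicorn.request:1|c'
```

Because the protocol is fire-and-forget UDP, emitting metrics adds almost no latency to request handling, which is why it suits per-request instrumentation.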

As with the uWSGI setup, here is a simple gunicorn configuration example:

# gunicorn.conf.py
import multiprocessing
import socket
bind = '0.0.0.0:9527'
workers = multiprocessing.cpu_count() * 2 + 1 
worker_class = 'gevent'  # run with gevent workers
daemon = False
proc_name = 'yourproject'
pidfile = '/data/run/gunicorn.pid'
loglevel = 'error'
accesslog = '/data/yourproject/supervisor/gunicorn.access.log'
errorlog = '/data/yourproject/supervisor/gunicorn.error.log'
max_requests = 200000
# StatsD integration
# StatsD host is omitted here, please append `--statsd-host` to gunicorn
# statsd_host = 'localhost:8125'
statsd_prefix = socket.gethostname()

As for why the number of workers is CPU cores * 2 + 1: there is no rigorous science behind it. The intuition is that at any moment roughly half the workers are blocked on reads and writes while the other half are processing requests, plus one spare. Tune it to your own workload; for more specialized options, consult the documentation yourself.
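The rule of thumb used in the config above can be written as a one-liner:

```python
import multiprocessing

def suggested_workers(cpu_count=None):
    # The common heuristic from the Gunicorn docs: (2 x cores) + 1.
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return cpu_count * 2 + 1

print(suggested_workers(4))  # 9
```

On a 4-core box this yields 9 workers; treat it as a starting point and adjust based on whether your workload is CPU-bound or I/O-bound.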

supervisor & nginx & docker-compose

The supervisor setup is the same as in the previous article, Django Application Containerized Deployment Practice (1). The only change is that the command switches from uWSGI to gunicorn, so I will not list the complete supervisor configuration here.

[program:gunicorn]
command=/path/to/gunicorn main:application -c /path/to/gunicorn.conf.py
directory=/path/to/project
user=nobody
autostart=true
autorestart=true
redirect_stderr=true

Nginx is the same as the previous article. Here is a simple example:

  server {
    listen 80;
    server_name example.org;
    access_log  /var/log/nginx/example.log;
    location / {
        proxy_pass http://127.0.0.1:9527;  # must match the bind address in gunicorn.conf.py above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }

The docker-compose configuration is also the same as in the previous article, and it is fairly long, so it is not repeated here. You can refer to Django Application Containerized Deployment Practice (1) for the details.

At the end

Today we walked through a second way to deploy Django. In fact, the process is almost the same without Docker: you can strip Docker out and the overall flow barely changes.

Likewise, the deployment process shown here is deliberately simple, and real-world situations will differ somewhat. I hope this makes sense. Feel free to leave me a message and we can discuss it together.


Docker containerized deployment leads directly into our next article, where we will talk about the big gun: Kubernetes.



Origin blog.51cto.com/15009257/2552385