Flask run Performance Tuning

Outline

The discovery platform we currently run performs relatively poorly, so we need to look for ways to tune its performance.

Use of tools

Siege is an HTTP load testing and benchmarking tool. It is designed to let web developers measure their code under duress, to see how it stands up to load on the Internet. Siege supports basic authentication, cookies, and the HTTP, HTTPS and FTP protocols. It lets the user hit a server with a configurable number of simulated clients, putting the server "under siege".

Put plainly, Siege is a multithreaded HTTP stress testing tool. The official website lists 3.1.4 as the most recent version and describes how to install it, but the site does not seem to have been updated in a long time — the siege I installed on macOS is already at version 4.0.4. On a Mac, just install it with brew:

brew install siege

siege
SIEGE 4.0.4
Usage: siege [options]
       siege [options] URL
       siege -g URL
Options:
  -V, --version             VERSION, prints the version number.
  -h, --help                HELP, prints this section.
  -C, --config              CONFIGURATION, show the current config.
  -v, --verbose             VERBOSE, prints notification to screen.
  -q, --quiet               QUIET turns verbose off and suppresses output.
  -g, --get                 GET, pull down HTTP headers and display the
                            transaction. Great for application debugging.
  -p, --print               PRINT, like GET only it prints the entire page.
  -c, --concurrent=NUM      CONCURRENT users, default is 10
  -r, --reps=NUM            REPS, number of times to run the test.
  -t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
  -d, --delay=NUM           Time DELAY, random delay before each request
                            between .001 and NUM. (NOT COUNTED IN STATS)
  -b, --benchmark           BENCHMARK: no delays between requests.
  -i, --internet            INTERNET user simulation, hits URLs randomly.
  -f, --file=FILE           FILE, select a specific URLS FILE.
  -R, --rc=FILE             RC, specify an siegerc file
  -l, --log[=FILE]          LOG to FILE. If FILE is not specified, the
                            default is used: PREFIX/var/siege.log
  -m, --mark="text"         MARK, mark the log file with a string.
  -H, --header="text"       Add a header to request (can be many)
  -A, --user-agent="text"   Sets User-Agent in request
  -T, --content-type="text" Sets Content-Type in request
      --no-parser           NO PARSER, turn off the HTML page parser
      --no-follow           NO FOLLOW, do not follow HTTP redirects

Copyright (C) 2017 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.

A few commonly used commands are given directly below; the meaning of each command-line parameter is explained in reference 1.

# GET request
siege -c 1000 -r 100 -b url
# POST request
siege -c 1000 -r 100 -b "url POST {\"accountId\":\"123\",\"platform\":\"ios\"}"
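If siege is not at hand, the core idea — many concurrent clients hammering one URL and reporting siege-style stats — can be sketched with nothing but the Python standard library. Everything below (the throwaway server, the thread counts, the stat names) is illustrative, not siege itself:

```python
# A rough stdlib-only sketch of siege: spin up a throwaway HTTP server,
# hit it from concurrent threads, and print siege-style statistics.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, World!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    try:
        with urllib.request.urlopen(url, timeout=5) as r:
            return r.status == 200
    except OSError:
        return False

start = time.time()
with ThreadPoolExecutor(max_workers=20) as pool:  # like "-c 20"
    results = list(pool.map(hit, range(200)))     # 200 total requests
elapsed = time.time() - start
server.shutdown()

transactions = sum(results)
print(f"Transactions:      {transactions} hits")
print(f"Availability:      {100.0 * transactions / len(results):.2f} %")
print(f"Transaction rate:  {transactions / elapsed:.2f} trans/sec")
```

This is only a thread-per-request toy; real siege also handles cookies, delays, URL files and so on.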

Test

Test code

First look at the file tree structure with tree:

➜  flask tree
.
├── hello1.py
├── hello1.pyc
├── hello2.py
├── hello2.pyc
├── hello3.py
└── templates
    └── hello.html

Here is Flask code that does not use a template and only returns a string.

# file hello1.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

Here is Flask code that uses a template file.

# file hello2.py
from flask import Flask,render_template

app = Flask(__name__)

@app.route('/hello/')
@app.route('/hello/<name>')
def hello(name=None):
    return render_template('hello.html', name=name)

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

hello.html file

<!doctype html>
<title>Hello from Flask</title>
{% if name %}
  <h1>Hello {{ name }}!</h1>
{% else %}
  <h1>Hello, World!</h1>
{% endif %}
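What render_template produces for this template can be mimicked, for illustration only, with plain Python string formatting — this is not Jinja2, just a sketch of the template's two branches:

```python
# A plain-Python sketch of what hello.html renders to (not Jinja2).
def render_hello(name=None):
    greeting = f"Hello {name}!" if name else "Hello, World!"
    return ("<!doctype html>\n"
            "<title>Hello from Flask</title>\n"
            f"<h1>{greeting}</h1>")

print(render_hello("libai"))
print(render_hello())
```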

Running Flask directly

First, the test results for hello1.py.

# 100 concurrent users
siege -c 100 -r 10 -b http://127.0.0.1:5000

Transactions:		        1000 hits
Availability:		      100.00 %
Elapsed time:		        1.17 secs
Data transferred:	        0.01 MB
Response time:		        0.11 secs
Transaction rate:	      854.70 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		       92.12
Successful transactions:        1000
Failed transactions:	           0
Longest transaction:	        0.14
Shortest transaction:	        0.01

# 200 concurrent users
siege -c 200 -r 10 -b http://127.0.0.1:5000

Transactions:		        1789 hits
Availability:		       89.45 %
Elapsed time:		        2.26 secs
Data transferred:	        0.02 MB
Response time:		        0.17 secs
Transaction rate:	      791.59 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		      134.37
Successful transactions:        1789
Failed transactions:	         211
Longest transaction:	        2.09
Shortest transaction:	        0.00

# 1000 concurrent users
siege -c 1000 -r 10 -b http://127.0.0.1:5000

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       16.29 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      613.87 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        2.13
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.08
Shortest transaction:	        0.00

I do not know why availability drops at 200 concurrent users, but the trend is clear: the transaction rate keeps falling, and at 1000 concurrent users it is down to about 613/s.
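Two of siege's derived numbers are worth decoding: Transaction rate is just transactions divided by elapsed time, and Concurrency is roughly transaction rate times mean response time (Little's law) — which also explains the strange-looking 2.13 at 1000 concurrent users, where the response time rounds to 0.00 so the product collapses. A quick check against the 100-user run:

```python
# Sanity-check siege's derived stats for the 100-user run above.
transactions, elapsed, resp = 1000, 1.17, 0.11

rate = transactions / elapsed
print(f"{rate:.2f} trans/sec")  # 854.70, matching the report
print(f"{rate * resp:.2f}")     # ~94, near the reported Concurrency of 92.12
```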

Now take a look at the second program, hello2.py.


# 100 concurrent users
siege -c 100 -r 10 -b http://127.0.0.1:5000/hello/libai

Transactions:		        1000 hits
Availability:		      100.00 %
Elapsed time:		        1.26 secs
Data transferred:	        0.07 MB
Response time:		        0.12 secs
Transaction rate:	      793.65 trans/sec
Throughput:		        0.06 MB/sec
Concurrency:		       93.97
Successful transactions:        1000
Failed transactions:	           0
Longest transaction:	        0.14
Shortest transaction:	        0.04

# 200 concurrent users
siege -c 200 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:		        1837 hits
Availability:		       91.85 %
Elapsed time:		        2.52 secs
Data transferred:	        0.13 MB
Response time:		        0.18 secs
Transaction rate:	      728.97 trans/sec
Throughput:		        0.05 MB/sec
Concurrency:		      134.77
Successful transactions:        1837
Failed transactions:	         163
Longest transaction:	        2.18
Shortest transaction:	        0.00

# 1000 concurrent users
siege -c 1000 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       17.22 secs
Data transferred:	        0.70 MB
Response time:		        0.01 secs
Transaction rate:	      580.72 trans/sec
Throughput:		        0.04 MB/sec
Concurrency:		        7.51
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.09
Shortest transaction:	        0.00

Other methods

Next, test the deployment methods recommended by the official Flask documentation.

> Although Flask's built-in server is lightweight and easy to use, it is not suitable for production and does not scale well. This article illustrates the proper ways to run Flask in a production environment.
If you want to deploy your Flask application to a WSGI server not listed here, consult its documentation on how to use WSGI applications; just remember: a Flask application object is essentially a WSGI application.

Here are performance tests of several of the officially recommended options.

Gunicorn

Gunicorn 'Green Unicorn' is a WSGI HTTP server for UNIX, a pre-fork worker model ported from Ruby's Unicorn project. It supports both eventlet and greenlet. Running a Flask application on Gunicorn is very simple:

gunicorn myproject:app

Of course, to use gunicorn we first have to install it with pip install gunicorn. To start hello1.py with gunicorn, this line inside the code

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

must be deleted. Then execute the command:

# -w starts n worker processes, -b binds the IP and port
gunicorn hello1:app -w 4 -b 127.0.0.1:4000

By default gunicorn uses a synchronous blocking worker model (-k sync), which may not be good enough for highly concurrent access. It also supports better models, such as gevent or meinheld, so we can replace the blocking model with gevent.

# -w starts n worker processes, -b binds the IP and port, -k replaces the blocking model with gevent
gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent
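The same options can also be kept in a gunicorn configuration file (a plain Python module) so the command line stays short. A minimal sketch mirroring the flags above — the file name is your choice:

```python
# gunicorn.conf.py — start with: gunicorn hello1:app -c gunicorn.conf.py
workers = 4                 # same as -w 4
bind = "127.0.0.1:4000"     # same as -b 127.0.0.1:4000
worker_class = "gevent"     # same as -k gevent
```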

Now I test four cases, each with 10 rounds of 1000 concurrent requests: one worker and four workers, with and without the gevent model. Here are the results.

Before testing, be sure to raise the ulimit, otherwise siege will report a Too many open files error; I set it to 65535.

ulimit -n 65535
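You can also check, from inside Python, the open-file limit a process actually sees, using the standard resource module (Unix only):

```python
# Inspect the per-process open-file limit (Unix only).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raising the soft limit from inside the process, if the hard limit
# allows, would be:
# resource.setrlimit(resource.RLIMIT_NOFILE, (65535, hard))
```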

gunicorn hello1:app -w 1 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000
Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.21 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      657.46 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        0.85
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

As you can see, a single gunicorn worker is slightly better than starting Flask directly.

gunicorn hello1:app -w 4 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.19 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      658.33 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        0.75
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

# using gevent; remember to pip install gevent
gunicorn hello1:app -w 1 -b 127.0.0.1:4000  -k gevent
Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.20 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      657.89 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        1.33
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.02
Shortest transaction:	        0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.51 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      644.75 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        1.06
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.28
Shortest transaction:	        0.00

At 1000 concurrent users the difference between plain gunicorn and gevent is not obvious, but when we lower the concurrency to 100 or 200 and test again:

gunicorn hello1:app -w 1 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:		        1991 hits
Availability:		       99.55 %
Elapsed time:		        1.62 secs
Data transferred:	        0.02 MB
Response time:		        0.14 secs
Transaction rate:	     1229.01 trans/sec
Throughput:		        0.02 MB/sec
Concurrency:		      167.71
Successful transactions:        1991
Failed transactions:	           9
Longest transaction:	        0.34
Shortest transaction:	        0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:		        2000 hits
Availability:		      100.00 %
Elapsed time:		        0.71 secs
Data transferred:	        0.02 MB
Response time:		        0.04 secs
Transaction rate:	     2816.90 trans/sec
Throughput:		        0.03 MB/sec
Concurrency:		      122.51
Successful transactions:        2000
Failed transactions:	           0
Longest transaction:	        0.17
Shortest transaction:	        0.00

As you can see, with 4 workers and gevent the rate reaches 2816/s.

Now re-test hello2.py at 200 concurrent users.

gunicorn hello2:app -w 1 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:		        1998 hits
Availability:		       99.90 %
Elapsed time:		        1.72 secs
Data transferred:	        0.13 MB
Response time:		        0.14 secs
Transaction rate:	     1161.63 trans/sec
Throughput:		        0.08 MB/sec
Concurrency:		      168.12
Successful transactions:        1998
Failed transactions:	           2
Longest transaction:	        0.35
Shortest transaction:	        0.00

gunicorn hello2:app -w 4 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:		        2000 hits
Availability:		      100.00 %
Elapsed time:		        0.71 secs
Data transferred:	        0.13 MB
Response time:		        0.05 secs
Transaction rate:	     2816.90 trans/sec
Throughput:		        0.19 MB/sec
Concurrency:		      128.59
Successful transactions:        2000
Failed transactions:	           0
Longest transaction:	        0.14
Shortest transaction:	        0.00

As you can see, the efficiency is essentially no different from hello1.py — it also reaches about 2800/s, roughly a fourfold performance improvement.

uWSGI

The official uWSGI website is linked here; open the link for installation instructions. On a Mac it can be installed directly with brew install uwsgi. After installation, run the following in the site directory:

uwsgi --http 127.0.0.1:4000 --module hello1:app

I ran out of time, so this is as far as I got with uWSGI on its own.

uWSGI and nginx

To install uwsgi, use pip install uwsgi.

Write the configuration file uwsgi.ini, the uwsgi profile.

[uwsgi]
# enable the master process
master = true
# directory of the virtual python environment, i.e. the directory created by virtualenv
home = venv
# wsgi entry file
wsgi-file = manage.py
# the app object created in the entry file
callable = app
# bind address and port
socket = 0.0.0.0:5000
# number of worker processes
processes = 4
# threads per process
threads = 2
# allowed buffer size
buffer-size = 32768
# accepted protocol. Note!!! When starting with uwsgi directly, this line is required; without it the service starts but the browser cannot reach it. When only proxying through nginx, this line must be deleted, otherwise nginx cannot proxy to the uwsgi service.
protocol=http

The uwsgi entry file is manage.py, where hello1 is the hello1.py shown above with app.run(debug=False, threaded=True, host="127.0.0.1", port=5000) commented out.

# file manage.py
from flask_script import Manager  # Manager comes from flask_script, not flask
from hello1 import app

manager = Manager(app)

if __name__ == '__main__':
    manager.run()

Then start the program with the command uwsgi uwsgi.ini and visit 127.0.0.1:5000 locally to see the hello world page. Next it needs to be used together with nginx. After installing nginx, find its configuration file; if you installed nginx with apt or yum, it is /etc/nginx/nginx.conf. In order not to affect anything else, I modify /etc/nginx/sites-available/default here, a file which is included by /etc/nginx/nginx.conf, so the configuration takes effect. Configuration file contents:

# nginx per-IP request rate limit; for details see references 6 and 7
limit_req_zone $binary_remote_addr zone=allips:100m rate=50r/s;  

server {
	listen 80 default_server;
	listen [::]:80 default_server;
	# nginx per-IP request rate limit; for details see references 6 and 7
	limit_req   zone=allips  burst=20  nodelay; 
	root /var/www/html;
	# Add index.php to the list if you are using PHP
	index index.html index.htm index.nginx-debian.html;
	server_name _;
	# static file proxying; nginx serves static files much faster than other containers
	location /themes/  {
		alias       /home/dc/CTFd_M/CTFd/themes/;
	}
	# uwsgi configuration
	location / {
		include uwsgi_params;
		uwsgi_pass 127.0.0.1:5000; 
		# python virtualenv path
		uwsgi_param UWSGI_PYHOME /home/dc/CTFd_M/venv; 
		# current project path
		uwsgi_param UWSGI_CHDIR /home/dc/CTFd_M; 
		# entry script
		uwsgi_param UWSGI_SCRIPT manage:app; 
		# timeout
		uwsgi_read_timeout 100;
	}
}

Then start the nginx server; visiting 127.0.0.1 works normally. Due to a problem with my machine's configuration I could not successfully test this setup on it, so the results below come from a fresh virtual machine (Ubuntu Server 16.04, 2 cores, 2 GB of memory). Also, the page being accessed is not the hello1.py test program from earlier but a complete application platform; as the Throughput line shows, processing speed reaches 20+ MB/s.

# both tests below access the VM environment (Ubuntu Server 16.04) from the physical host
# started with uwsgi alone
siege -c 200 -r 10 -b http://192.168.2.151:5000/index.html
Transactions:		       56681 hits
Availability:		       99.90 %
Elapsed time:		      163.48 secs
Data transferred:	     3385.71 MB
Response time:		        0.52 secs
Transaction rate:	      346.72 trans/sec
Throughput:		       20.71 MB/sec
Concurrency:		      180.97
Successful transactions:       56681
Failed transactions:	          59
Longest transaction:	       32.23
Shortest transaction:	        0.05

# with uwsgi behind nginx as a static proxy
siege -c 200 -r 10 -b http://192.168.2.151/index.html

Transactions:		       53708 hits
Availability:		       99.73 %
Elapsed time:		      122.13 secs
Data transferred:	     3195.15 MB
Response time:		        0.29 secs
Transaction rate:	      439.76 trans/sec
Throughput:		       26.16 MB/sec
Concurrency:		      127.83
Successful transactions:       53708
Failed transactions:	         148
Longest transaction:	      103.07
Shortest transaction:	        0.00

As you can see, using uwsgi together with nginx improves efficiency somewhat, raising the rate from 346/s to 439/s.
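A quick arithmetic check on those two runs (all numbers copied from the siege output above): throughput is just data transferred divided by elapsed time, and the nginx-fronted run is about 27% faster by transaction rate.

```python
# Verify siege's Throughput line and the uwsgi -> nginx improvement.
data_mb, elapsed = 3195.15, 122.13        # nginx run above
print(f"{data_mb / elapsed:.2f} MB/sec")  # 26.16, matching the report

uwsgi_rate, nginx_rate = 346.72, 439.76
print(f"{nginx_rate / uwsgi_rate:.2f}x")  # ~1.27x
```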

reference

  1. siege stress test tool installation and presentation
  2. Flask official documents
  3. Start with gunicorn + gevent Flask project
  4. A brief overview of CGI, FastCGI, WSGI, uWSGI and uwsgi
  5. Flask + uwsgi + Nginx deploy applications
  6. ip nginx limit number of requests and the number of concurrent
  7. Nginx limit the number of visits and the number of concurrent


Origin blog.csdn.net/m0_46232048/article/details/104483728