A simple benchmark comparing three Python web frameworks: Flask, Tornado, and Japronto.

Test environment: an Ubuntu 16.04 virtual machine with a 4-core CPU and 8 GB of RAM.
Checking the server configuration:
root@localhost:/home/frog/test# uname -a
Linux localhost 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
root@localhost:/home/frog/test# cat /proc/cpuinfo | grep model\ name
model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
root@localhost:/home/frog/test#
root@localhost:/home/frog/test# cat /proc/meminfo | grep MemTotal
MemTotal: 8175012 kB
The load test uses wrk (project address: https://github.com/wg/wrk ), a multi-threaded HTTP benchmarking tool that can generate very high load on a multi-core CPU machine.
Each framework is tested with 10 threads and 10,000 connections for 30 seconds.
1. Flask
Server script: flask_test.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8881)
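Note that this script uses Flask's built-in development server, which by default handles requests one at a time in a single process. A minimal variant (not the one benchmarked here) passes `threaded=True`, which Flask forwards to Werkzeug so each request is served in its own thread:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

def main():
    # threaded=True asks Werkzeug to serve each request in its own thread;
    # the default development server processes requests serially.
    app.run(host='0.0.0.0', port=8881, threaded=True)
```

Even threaded, the development server is not intended for production throughput; for a fairer fight one would typically put Flask behind a multi-worker WSGI server.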
Test Results
root@localhost:/home/frog# wrk -t10 -c10000 -d30s --latency "http://192.168.3.81:8881"
Running 30s test @ http://192.168.3.81:8881
10 threads and 10000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 353.26ms 285.36ms 1.91s 85.11%
Req/Sec 60.07 72.79 565.00 90.04%
Latency Distribution
50% 224.33ms
75% 597.30ms
90% 666.42ms
99% 1.69s
7360 requests in 30.10s, 1.17MB read
Socket errors: connect 0, read 113, write 0, timeout 451
Requests/sec: 244.55
Transfer/sec: 39.64KB
root@localhost:/home/frog#
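As a sanity check, wrk's Requests/sec figure is simply the completed-request count divided by the elapsed time. Here is that arithmetic for the Flask run above:

```python
# Totals reported by wrk for the Flask run.
requests_total = 7360
duration_s = 30.10

rps = requests_total / duration_s
print(round(rps, 2))  # → 244.52, consistent with wrk's reported 244.55
```

The small difference comes from wrk using a more precise internal duration than the rounded 30.10 s it prints.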
2. Tornado
Server script: tornado_test.py
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8882)
    tornado.ioloop.IOLoop.current().start()
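The script above runs Tornado in a single process, so it can use only one of the machine's four cores. Tornado's documented pre-fork pattern (bind first, then `start(0)` to fork one worker per CPU) would likely lift its numbers; a sketch, not the configuration benchmarked here:

```python
import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

def main():
    server = tornado.httpserver.HTTPServer(make_app())
    server.bind(8882)
    server.start(0)  # fork one worker process per CPU core
    tornado.ioloop.IOLoop.current().start()
```

Forking must happen before the IOLoop starts, which is why `bind`/`start` is used instead of the simpler `app.listen()`.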
Test Results
root@localhost:/home/frog# wrk -t10 -c10000 -d30s --latency "http://192.168.3.81:8882"
Running 30s test @ http://192.168.3.81:8882
10 threads and 10000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.16s 436.92ms 2.00s 69.59%
Req/Sec 131.43 174.27 1.11k 86.81%
Latency Distribution
50% 1.04s
75% 1.53s
90% 1.86s
99% 1.99s
24406 requests in 30.09s, 4.82MB read
Socket errors: connect 0, read 0, write 0, timeout 7901
Requests/sec: 811.07
Transfer/sec: 163.96KB
root@localhost:/home/frog#
3. Japronto
Server script: japronto_test.py
from japronto import Application

def hello(request):
    return request.Response(text='Hello world')

app = Application()
app.router.add_route('/', hello)
app.run(debug=True, port=8808)
Test Results
root@localhost:/home/frog# wrk -t10 -c10000 -d30s --latency "http://192.168.3.81:8808"
Running 30s test @ http://192.168.3.81:8808
10 threads and 10000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 458.47ms 189.77ms 2.00s 64.91%
Req/Sec 1.62k 1.46k 8.39k 72.49%
Latency Distribution
50% 438.24ms
75% 589.58ms
90% 708.52ms
99% 889.29ms
362614 requests in 30.10s, 31.47MB read
Socket errors: connect 0, read 0, write 0, timeout 1572
Requests/sec: 12047.43
Transfer/sec: 1.05MB
root@localhost:/home/frog#
Summary
From the results:

Framework | Requests/sec
---|---
Flask | 244.55
Tornado | 811.07
Japronto | 12047.43
Clearly, Japronto's requests-per-second throughput is far higher than that of the other two frameworks.
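To put the gap in perspective, the ratios work out as follows (simple arithmetic on the table above):

```python
# Requests/sec figures from the wrk runs above.
results = {"flask": 244.55, "tornado": 811.07, "japronto": 12047.43}
japronto_rps = results["japronto"]

for name in ("flask", "tornado"):
    speedup = japronto_rps / results[name]
    print(f"japronto vs {name}: {speedup:.1f}x faster")
# japronto vs flask: 49.3x faster
# japronto vs tornado: 14.9x faster
```

Keep in mind that Flask was benchmarked on its single-threaded development server and Tornado in a single process, so these ratios reflect the default configurations tested here rather than each framework's tuned best.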