Locust, a Python-based performance testing tool (a simple comparison with LR)

Background

Recently I built a small API on my own. After finishing the functional tests, I suddenly wanted to test its performance as well. I had always used HP's LoadRunner (LR) for performance testing, and I had been looking at locust a while ago, so I decided to give it a try.
Since I am familiar with LR, a side-by-side comparison makes it easier to understand the new tool.

Basics

locust's official website: http://locust.io/

You can also refer to other community members' introductions on the forum: https://testerhome.com/topics/2888

At the time of writing, locust only supports Python 2.

Test requirements

Against the same server, run performance tests with LR and with locust using the same number of concurrent users, and compare performance indicators such as average response time and TPS.
For convenience, the tests use the HTTP protocol, with one GET request and one POST request, at a 1:1 transaction ratio.

Server side

To keep things simple and easy to understand, I wrote a server side with the Python bottle framework: 2 transactions, one GET request and one POST request, each with a different sleep added. The code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = 'among,[email protected]'

from bottle import Bottle, request, run
from time import sleep

app = Bottle()


# Transaction 1: GET request with ~0.2 s of simulated processing time
@app.route('/transaction_1', method='GET')
def tr1():
    sleep(0.2)
    resp = dict()
    resp['status'] = 0
    resp['value'] = 'xxx'
    return resp


# Transaction 2: POST request with ~0.5 s of simulated processing time
@app.route('/transaction_2', method='POST')
def tr2():
    parm1 = request.forms.get('parm1')  # form parameters are read but not used
    parm2 = request.forms.get('parm2')
    sleep(0.5)
    resp = dict()
    resp['status'] = 0
    resp['value'] = 'yyy'
    return resp


run(app=app, server='cherrypy', host='0.0.0.0', port=7070, reloader=False, debug=False)

The server is deployed on a separate Windows machine running Python 3; after startup it listens on port 7070.
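Given the two sleeps, a back-of-envelope TPS estimate is possible before running anything, assuming a 1:1 transaction mix, zero think time, and negligible network and framework overhead (all taken from the test design above):

```python
# Rough TPS estimate: each virtual user runs one transaction at a time.
# With a 1:1 mix, the average service time per transaction is the mean
# of the two simulated delays.
avg_service_time = (0.2 + 0.5) / 2   # seconds, from the two sleep() calls
users = 10                           # concurrent users used in the tests below

tps_estimate = users / avg_service_time
print(round(tps_estimate, 1))  # about 28.6 transactions per second
```

If the measured TPS is far from this estimate, something (pacing, think time, or server saturation) is likely off.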

Test scripts in LR

On another Windows machine, LR 11 is used with an http/html protocol script. The main code is as follows.
2 actions are used to split the transaction ratio.
action1:

Action1()
{
    lr_start_transaction("get");
    web_reg_find("Text=xxx",
        LAST);
    web_custom_request("Head",
        "URL=http://10.0.244.108:7070/transaction_1", 
        "Method=GET",
        "Resource=0",
        "Referer=",
        LAST);
    lr_end_transaction("get", LR_AUTO);
    return 0;
}

action2:

Action2()
{
    lr_start_transaction("post");
    web_reg_find("Text=yyy",
        LAST);  
    web_custom_request("Head",
        "URL=http://10.0.244.108:7070/transaction_2", 
        "Method=POST",
        "Resource=0",
        "Referer=",
        "Body=parm1=123&parm2=abc",
        LAST);
    lr_end_transaction("post", LR_AUTO);
    return 0;
}

Set the execution ratio of the 2 transactions to 1:1 in the run-time settings:

(screenshot)
To execute in LR, the script can simply be placed into a scenario and run.

Test script in locust

On a Mac, locust is used to run the test, entirely through code. The code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = 'among,[email protected]'

from locust import *

class mytest(TaskSet):
    @task(weight=1)
    def transaction_1(self):
        with self.client.get(name='get', url='/transaction_1', catch_response=True) as response:
            if 'xxx' in response.content:
                response.success()
            else:
                response.failure('error')

    @task(weight=1)
    def transaction_2(self):
        dt = {
            'parm1': '123',
            'parm2': 'abc'
        }

        with self.client.post(name='post', url='/transaction_2', data=dt, catch_response=True) as response:
            if 'yyy' in response.content:
                response.success()
            else:
                response.failure('error')


class myrun(HttpLocust):
    task_set = mytest
    host = 'http://10.0.244.108:7070'
    min_wait = 0
    max_wait = 0

For specific parameters, see the official documentation.

Notes:

1. The main class inherits from HttpLocust, which is used for testing HTTP-based systems.

2. min_wait and max_wait set the wait time between tasks, equivalent to the Pacing setting in LR; both are set to 0 here.

3. The task decorator is similar to a transaction in LR, and tasks can be nested.

4. weight is the task's relative weight; for a 1:1 mix of 2 transactions, just keep the weights equal.

5. Two transactions are defined here, get and post. The check on the response is done in plain Python, similar to a checkpoint in LR.
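The effect of weight can be illustrated in plain Python: locust picks each user's next task at random, in proportion to the task weights, so two weight=1 tasks converge to a 1:1 mix over many iterations. A simplified model of this, not locust's actual scheduler:

```python
import random
from collections import Counter

# Simplified model of weighted task picking: each task appears in the
# candidate list once per unit of weight, and the next task is drawn
# uniformly from that list.
tasks = ['get'] * 1 + ['post'] * 1   # both tasks have weight=1

random.seed(42)  # fixed seed so the run is reproducible
counts = Counter(random.choice(tasks) for _ in range(100000))

ratio = counts['get'] / counts['post']
print(counts, round(ratio, 2))  # ratio comes out close to 1.0
```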

To execute, start locust from the command line, as shown below:

(screenshot)

Test process and results in LR

Test process:
Just set the number of concurrent users and the load strategy: 10 concurrent users, all loaded simultaneously.

(screenshot)

Test results:

Average response time:

(screenshot)

TPS:

(screenshot)

Transactions:

(screenshot)

Test process and results in locust

Test process:
Open http://127.0.0.1:8089 in a browser.

(screenshot)

Set the desired number of concurrent users and the user load strategy.
Here the same 10 concurrent users are used. Hatch Rate means the number of users started per second; setting it to 10 starts all 10 at once. Note that, unlike in LR, there is no convenient way here to set how long the test should run. (You can also skip the web UI and specify the number of concurrent users and the number of transactions to execute directly as startup parameters; run with -h to see the help.)

(screenshot)

After starting the run:

(screenshot)


In the statistics table, the Average column shows the average response time among other metrics, and the reqs/sec in the last column is equivalent to TPS in LR (locust calls it RPS). The other indicators are fairly self-explanatory.

(screenshot)

Final results:
The raw test result data can be downloaded from the web page.
After stopping the python command, some statistics are also printed in the terminal; the last rows are the percentile response times, i.e. what percentage of transactions completed within a given response time.
This is a bit richer than LR's output, covering the percentiles from 50% up to 100%.
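The percentile rows locust prints can be reproduced from the raw response times with a simple nearest-rank calculation. This is only a sketch; locust's own implementation may differ in rounding details, and the sample times are made up:

```python
import math

def percentile(sorted_times, p):
    """Nearest-rank percentile: smallest sample such that at least
    p percent of all samples are <= it."""
    rank = max(1, math.ceil(p / 100.0 * len(sorted_times)))
    return sorted_times[rank - 1]

# Hypothetical response times in milliseconds, sorted ascending
times = sorted([210, 220, 230, 250, 260, 500, 520, 540, 560, 800])

# The percentiles locust reports in its distribution table
for p in (50, 66, 75, 80, 90, 95, 98, 99, 100):
    print('%3d%%  %d ms' % (p, percentile(times, p)))
```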

 

 

Comparison of results

With the same server-side environment, the results from the two tools are similar; there is no major difference.
When setting the transaction mix, the actual ratio of get to post transactions drifts slightly from 1:1 in both tools. This is unavoidable (unless you split the transactions in the script yourself), so there are some small differences in TPS, but overall the gap is very small.

Summary

Performance testing focuses on indicators such as the number of concurrent users, response time, and TPS.

I have always used LR. Its concepts are easier to understand, and with an LR background it is easy to pick up other tools.

locust also supports distributed execution (multiple load generators), which makes it fairly convenient for simple testing of HTTP interfaces like this one.
Moreover, locust is entirely based on Python scripts, so it is quite extensible; it claims to be able to test any protocol or system.

Finally, as I always say: consider the task at hand and pick whichever tool is most efficient and easiest to use for it; use the right tool for the right job.

 

Notes:

During debugging, requests were routed through a proxy, which is convenient.
One approach is to debug the requests in a standalone .py file first, then move them into locust.
Another is to start locust from the command line, e.g.:
locust -f loc1.py --no-web -c 1 -n 4
Here -c is the number of concurrent users and -n is the number of requests to execute. Run only a few, which makes debugging easy; combined with a proxy, debugging is not complicated.

The command line also allows automated execution, and the results can be analyzed through logs or the output.
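Such automated runs can themselves be scripted, for example by building the command line in Python and launching it with subprocess. The script name loc1.py and the flags are taken from the example above (--no-web, -c and -n are the flags of the old locust versions this post uses):

```python
import subprocess

def build_locust_cmd(script, clients, num_requests):
    """Build a headless locust command line (old-style flags: --no-web,
    -c for concurrent users, -n for number of requests)."""
    return ['locust', '-f', script, '--no-web',
            '-c', str(clients), '-n', str(num_requests)]

cmd = build_locust_cmd('loc1.py', 1, 4)
print(' '.join(cmd))

# To actually run it (requires locust to be installed):
# subprocess.call(cmd)
```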
