Test Results on Connection Pools and High Concurrency

High concurrency with Redis combined with the service interface

For the Xadserver interface to handle high concurrency together with redis, the following three conditions must be met:

  1. When gevent simulates high-concurrency requests, the service interface can sustain the simulated QPS
  2. The maximum number of connections redis accepts covers the concurrency level
  3. This test uses a connection pool to cut the overhead of the client repeatedly opening and closing redis connections (the pool size must be greater than or equal to the concurrency level, for reasons explained below)

Three numbers to keep in mind before starting: the xadserver concurrency level, the maximum number of connections redis accepts (maxclients), and the connection pool's maximum size. (If you later need to change redis's limit on the fly, without a restart that would affect the live service, you can do so with config set maxclients 65535.)

First, let's check the maximum number of connections our redis currently accepts:

127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2000"
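For reference, the same two limits can also be read from Python; a standalone sketch (the pool size of 10 is just an example, matching experiment 1 below):

import redis

# Check redis's server-side cap and the client-side pool cap from Python.
Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, db=2, max_connections=10)
pr = redis.Redis(connection_pool=Pool)
print pr.config_get('maxclients')   # e.g. {'maxclients': '2000'}
print Pool.max_connections          # 10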

Experiment 1

To illustrate the relationship between the pool's maximum size and the concurrency level, let's first run the following experiment:

# coding=utf-8
from gevent import monkey
monkey.patch_all()  # patch the standard library before anything else uses it

import time

import gevent
import redis

# Connection pool capped at 10 connections
Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=10, db=2)
pr = redis.Redis(connection_pool=Pool, decode_responses=True)
# print pr.get('__h5_campaign_info__122671')


def getFunc(key):
    """Read one fixed key from redis; each greenlet runs this once (the key argument is unused)."""
    v = pr.get('__h5_campaign_info__122671')
    print v


def call_gevent(count):
    """Use gevent to simulate `count` concurrent requests."""
    begin_time = time.time()
    run_gevent_list = []
    num = 1
    for i in range(count):
        print('--------------%d--Test-------------' % i)
        mykey = 'test' + str(num)
        run_gevent_list.append(gevent.spawn(getFunc, mykey))
        num = num + 1
    gevent.joinall(run_gevent_list)
    end = time.time()
    print('Concurrency level: ' + str(count))
    print('Average time per request (s):', (end - begin_time) / count)
    print('Total time (s):', end - begin_time)


if __name__ == '__main__':
    # Number of concurrent requests; change it to see the effect.
    # For larger runs (7000, 10000, 20000) raise redis's maxclients to 30000 and restart redis first.
    test_count = 20
    while 1:
        call_gevent(count=test_count)

As the code shows, the pool's max_connections is set to 10 while the concurrency level is 20. Running under these settings produces the following exception:

Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 716, in gevent._greenlet.Greenlet.run
  File "/Users/liquid/PycharmProjects/aliyun_sls/redis高并发.py", line 19, in getFunc
    v = pr.get('__h5_campaign_info__122671')
  File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/client.py", line 1207, in get
    return self.execute_command('GET', name)
  File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/client.py", line 752, in execute_command
    connection = pool.get_connection(command_name, **options)
  File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/connection.py", line 970, in get_connection
    connection = self.make_connection()
  File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/connection.py", line 986, in make_connection
    raise ConnectionError("Too many connections")
ConnectionError: Too many connections

2019-04-16T06:55:17Z <Greenlet "Greenlet-2" at 0x102051050: getFunc('test13')> failed with ConnectionError

Checking redis, the current connection count is 11 (subtracting the pre-existing connection, the pool accounts for 10 connections; the remaining concurrent requests were never allocated one):

127.0.0.1:6379> info clients
# Clients
connected_clients:11
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
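The same counter can be read from Python rather than redis-cli, which is handy for watching it during a test run (a small helper of my own, reusing the pr client from the script above; it is not part of the test script):

def connected_clients(client):
    """Return redis's connected_clients counter (it includes this probe's own connection)."""
    return client.info('clients')['connected_clients']

print connected_clients(pr)   # e.g. 11 while the failing run above is in progress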

Experiment 2

Next, raise the pool's max_connections to 20 and run the same script again.
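Only the pool construction line changes; for this run it becomes:

Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=20, db=2)

The output this time is: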

Concurrency level: 20
('Average time per request (s):', 7.859468460083007e-05)
('Total time (s):', 0.0015718936920166016)

All of the concurrent requests are handled normally and no error is raised.

Checking redis again, the connection count is now 21 (subtracting the pre-existing connection, that is 20 connections: every concurrent request was allocated a connection from the pool):

127.0.0.1:6379> info clients
# Clients
connected_clients:21
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0

Experiment 3

Now raise the pool's max_connections to 30 and run the code again; it still completes without errors:

Concurrency level: 20
('Average time per request (s):', 7.630586624145508e-05)
('Total time (s):', 0.0015261173248291016)

Conclusion:

When accessing redis through a connection pool, the pool's maximum size must be greater than or equal to the concurrency level (and both must stay below redis's own connection limit); otherwise the excess concurrent requests fail with an error because no connection can be allocated to them (see https://redis.io/topics/clients).
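The tracebacks above already hint at why. For every command, redis-py checks a connection out of the pool and returns it afterwards; once max_connections connections exist and none of them are free, get_connection raises immediately rather than waiting. In outline (a simplified sketch built from the pool methods visible in the tracebacks, not the library's actual code):

# Roughly what happens inside execute_command for each GET:
conn = Pool.get_connection('GET')   # reuse an idle connection, or create a new one;
                                    # once max_connections is reached this raises
                                    # ConnectionError("Too many connections")
try:
    conn.send_command('GET', '__h5_campaign_info__122671')
    value = conn.read_response()
finally:
    Pool.release(conn)              # hand the connection back to the pool for reuse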

The three experiments above did not involve our service interface; next we repeat the test through the interface.

Experiment 4

# coding=utf-8
from gevent import monkey
monkey.patch_all()  # patch the standard library before requests is used

import time

import gevent
import requests


def getFunc(key):
    """Hit the test endpoint once (the key argument is unused); the endpoint reads redis internally."""
    v = requests.get('http://127.0.0.1:91/sss')
    print v


def call_gevent(count):
    """Use gevent to simulate `count` concurrent requests."""
    begin_time = time.time()
    run_gevent_list = []
    num = 1
    for i in range(count):
        print('--------------%d--Test-------------' % i)
        mykey = 'test' + str(num)
        run_gevent_list.append(gevent.spawn(getFunc, mykey))
        num = num + 1
    gevent.joinall(run_gevent_list)
    end = time.time()
    print('Concurrency level: ' + str(count))
    print('Average time per request (s):', (end - begin_time) / count)
    print('Total time (s):', end - begin_time)


if __name__ == '__main__':
    # Number of concurrent requests; change it to see the effect.
    # For larger runs (7000, 10000, 20000) raise redis's maxclients to 30000 and restart redis first.
    test_count = 100
    while 1:
        call_gevent(count=test_count)

The endpoint code (an excerpt from run.py) is as follows:

# coding=utf-8
import json
import traceback

import redis
from flask import Flask   # Flask-style routing inferred from the decorator below; the app is served by uwsgi

app = Flask(__name__)

Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=50, db=2)
# Bind the client to the pool; connections are checked out per command and returned afterwards
pr = redis.Redis(connection_pool=Pool, decode_responses=True)


@app.route("/sss", methods=["GET", "POST"])
def test_concurrent():
    try:
        pr.get('__h5_campaign_info__111127')
        return json.dumps({'code': 1})
    except:
        traceback.print_exc()
        return json.dumps({'code': 0})

On the local machine this endpoint can handle at most about 2600 concurrent requests (uwsgi running two worker processes), so the simulated load below is kept modest: 100 concurrent requests.

The concurrency level is now 100 while the redis pool is capped at 50. The expectation was that the 100 requests would share the 50 pooled connections, with the excess requests waiting until one of the 50 is released (the pool creates connections as they are needed and takes them back after each command, which is how it avoids the cost of repeatedly opening and closing redis connections). Let's see how the endpoint actually responds.

About half of the requests respond as expected:

127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -

The other half respond with an error, which was not expected:

Traceback (most recent call last):
  File "run.py", line 533, in test_concurrent
    pr.get('__h5_campaign_info__111127')
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 880, in get
    return self.execute_command('GET', name)
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 570, in execute_command
    connection = pool.get_connection(command_name, **options)
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 897, in get_connection
    connection = self.make_connection()
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 904, in make_connection
    raise ConnectionError("Too many connections")
ConnectionError: Too many connections

At this point redis reports 51 client connections (51 minus the pre-existing one), which matches the number of connections the pool was allowed to create:

127.0.0.1:6379> info clients
# Clients
connected_clients:51
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0

Experiment 5

Now raise the pool's max_connections to 100 and keep the concurrency level at 100.

All requests respond successfully:

127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -

Querying redis's client connection count (minus the pre-existing connection), the number exactly matches the connections created by the pool:

127.0.0.1:6379> info clients
# Clients
connected_clients:101
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0

Experiment 6

Set the pool's max_connections to 200 and keep the concurrency level at 100.

All requests respond successfully:

127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -

Querying redis's client connection count (minus the pre-existing connection), the number exactly matches the concurrency level rather than the pool cap:

127.0.0.1:6379> info clients
# Clients
connected_clients:101
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0

All six experiments so far assume that redis's maximum number of connections is larger than both the pool's max_connections and the concurrency level.

Experiment 7

Now lower redis's maximum number of connections to 20 and test again (commands below):

127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2000"
127.0.0.1:6379> config set maxclients 20
OK
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "20"

Then set the pool's max_connections to 30, keep the concurrency level at 30, and run the test again.

20 of the responses come back normally:

127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -

The remaining 10 requests return the following:

Traceback (most recent call last):
  File "run.py", line 533, in test_concurrent
    pr.get('__h5_campaign_info__111127')
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 880, in get
    return self.execute_command('GET', name)
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 578, in execute_command
    connection.send_command(*args)
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 563, in send_command
    self.send_packed_command(self.pack_command(*args))
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 538, in send_packed_command
    self.connect()
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 446, in connect
    self.on_connect()
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 520, in on_connect
    if nativestr(self.read_response()) != 'OK':
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 577, in read_response
    response = self._parser.read_response()
  File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 255, in read_response
    raise error
ConnectionError: max number of clients reached

The 20 requests that fit within redis's connection limit return normally, while the other 10 are rejected by redis with the "max number of clients reached" error.
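Both failure modes seen in these experiments ("Too many connections" from the client-side pool and "max number of clients reached" from the server) surface as a redis ConnectionError, so the endpoint's bare except could be narrowed. A sketch of that variant (illustrative only, not the project's actual handler):

import redis

@app.route("/sss2", methods=["GET"])   # hypothetical route name for illustration
def test_concurrent_v2():
    try:
        pr.get('__h5_campaign_info__111127')
        return json.dumps({'code': 1})
    except redis.ConnectionError:
        # Covers both pool exhaustion on the client and the maxclients
        # rejection from the server observed above.
        traceback.print_exc()
        return json.dumps({'code': 0})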

Putting the seven experiments together, we draw the following conclusions:

  1. redis's maximum number of connections (maxclients) must be greater than or equal to both the pool's max_connections and the concurrency level
  2. the pool must be able to supply at least as many connections as there are concurrent requests; otherwise, with the default pool, the excess requests fail because no connection can be allocated to them (see the sketch after this list for a variant that waits instead)
  3. since constantly creating and destroying redis connections would hurt the quality of our endpoints, the xadserver project uses a redis connection pool sized so that every concurrent request can be served from it
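A note on conclusion 2: failing fast with "Too many connections" is how redis-py's default ConnectionPool behaves. If you would rather have the excess requests wait for a free connection (the behaviour we originally expected in experiment 4), the library's BlockingConnectionPool does that. A minimal sketch, with the same connection parameters as the endpoint code and an illustrative timeout:

import redis

# BlockingConnectionPool makes callers wait up to `timeout` seconds for a free
# connection instead of raising ConnectionError("Too many connections") once
# max_connections has been reached.
Pool = redis.BlockingConnectionPool(host='127.0.0.1', port=6379, db=2,
                                    max_connections=50, timeout=5)
pr = redis.Redis(connection_pool=Pool)

Whether blocking is preferable to failing fast depends on the endpoint's latency budget; either way, redis's maxclients still has to cover the connections the pool may create.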

Points that still need to be confirmed:

1. If we raise the machine's file-descriptor limit and redis's configured maximum connections above our concurrency level, does redis accept and handle the requests as expected?

2. How much does fetching data from redis inside the endpoint affect the endpoint's response time? (This needs to be tested in a fully simulated production environment.)


Reposted from www.cnblogs.com/575dsj/p/10776085.html