Storage stress testing tool - Cosbench tutorial

Cosbench (Cloud Object Storage Bench) is a cloud storage benchmarking tool developed by Intel and written in Java. This article focuses on testing storage services that speak the AWS S3 protocol, such as SeaweedFS and Huawei Cloud object storage.

1 Installation

GitHub address: https://github.com/intel-cloud/cosbench
This article uses version 0.4.2.c4; download the release that matches your needs from GitHub.

  • Download address for the version used in this article: https://github.com/intel-cloud/cosbench/releases/download/v0.4.2.c4/0.4.2.c4.zip

Note:

Cosbench is written in Java, so a local Java environment is required. curl and nc are also needed:

# Install curl
yum install -y curl
# Install nc
yum install -y nmap-ncat
# Install OpenJDK 8
yum install -y java-1.8.0-openjdk

After downloading, unzip the package.
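A minimal download-and-unpack sequence might look like the following (assuming the zip extracts into a 0.4.2.c4 directory; install unzip first if it is missing):

# Download the 0.4.2.c4 release and unpack it
yum install -y unzip
curl -LO https://github.com/intel-cloud/cosbench/releases/download/v0.4.2.c4/0.4.2.c4.zip
unzip 0.4.2.c4.zip
cd 0.4.2.c4
# Make the helper scripts executable
chmod +x *.sh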

Script function description:

  • start-all.sh / stop-all.sh: start/stop both the controller and the driver on the current node
  • start-controller.sh / stop-controller.sh: start/stop the controller on the current node
  • start-driver.sh / stop-driver.sh: start/stop the driver on the current node
  • cli.sh: command-line client


2 Run

2.1 Single node operation

Running start-all.sh in the installation directory starts both the driver and the controller on the current node.
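A minimal sketch, assuming you are in the unpacked 0.4.2.c4 directory:

cd 0.4.2.c4        # the unpacked installation directory
sh start-all.sh    # starts the controller (web port 19088) and one driver (web port 18088)

Then check whether both are listening with the following commands: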

$ netstat -an |grep LISTEN| grep 19088
tcp 0 0 :::19088 :::* LISTEN
$ netstat -an |grep LISTEN| grep 18088
tcp 0 0 :::18088 :::* LISTEN

After it starts successfully, open the following URL in a browser to reach the controller console:

http://localhost:19088/controller/index.html
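On a machine without a browser, a quick reachability check with curl (installed earlier) also works; this is just a convenience check, not part of cosbench itself:

# Expect an HTTP 200 response from the controller console
curl -I http://localhost:19088/controller/index.html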

2.2 Multi-driver operation

The controller's configuration file is conf/controller.conf; its main contents are as follows:

[controller]
drivers = 1
concurrency = 4
log_level = INFO
log_file = log/system.log
archive_dir = archive
 
[driver1]
name = driver1
url = http://driver_url:driver_port/driver

drivers under [controller] specifies the number of drivers, and concurrency specifies the number of tasks that can run at the same time.
[driver<n>] sections hold each driver's settings, where n is an increasing integer. Driver sections must follow this format, otherwise they will not take effect.
After configuring the driver information on the controller, start a driver on each corresponding machine. The driver startup command is:
start-driver.sh <number of drivers> <ip> <start port>, for example: start-driver.sh 4 10.252.1.111 18088
When one machine runs several drivers, each additional driver's listening port increases in steps of 100 (18088, 18188, ...). After the drivers are started, start the controller and the whole system is up.
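As a concrete sketch, assume two driver machines at 10.252.1.111 and 10.252.1.112 (example addresses). The controller configuration and start commands could look like this:

# On the controller node: describe both drivers in conf/controller.conf
cat > conf/controller.conf <<'EOF'
[controller]
drivers = 2
concurrency = 4
log_level = INFO
log_file = log/system.log
archive_dir = archive

[driver1]
name = driver1
url = http://10.252.1.111:18088/driver

[driver2]
name = driver2
url = http://10.252.1.112:18088/driver
EOF

# On each driver machine: start one driver listening on 18088
sh start-driver.sh 1 10.252.1.111 18088    # run on 10.252.1.111
sh start-driver.sh 1 10.252.1.112 18088    # run on 10.252.1.112

# Back on the controller node: start the controller last
sh start-controller.sh

If you instead start several drivers on one machine, for example sh start-driver.sh 2 10.252.1.111 18088, the second driver listens on 18188 and the controller needs a matching [driver2] entry pointing at that port.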

3 XML file writing

3.1 Standard format

<?xml version="1.0" encoding="UTF-8" ?>
<workload name="s3-sample" description="sample benchmark for s3">
    <!-- S3 service address and authentication settings: access key, secret key, endpoint, etc. -->
    <storage type="s3" config="accesskey=<accesskey>;secretkey=<secretkey>;proxyhost=<proxyhost>;proxyport=<proxyport>;endpoint=<endpoint>" />
    <workflow>
        <workstage name="init">
        <!-- init stage: create two buckets with the name prefix s3testqwer -->
        <work type="init" workers="1" config="cprefix=s3testqwer;containers=r(1,2)" />
        </workstage>

        <workstage name="prepare">
            <!-- prepare stage: create 10 objects of 64 KB each in the buckets created by init -->
            <work type="prepare" workers="1" config="cprefix=s3testqwer;containers=r(1,2);objects=r(1,10);sizes=c(64)KB" />
        </workstage>

        <workstage name="main">
        <!-- main stage: runs the main work; runtime: how long (in seconds) the work runs; workers: number of workers executing operations concurrently -->
            <work name="main" workers="8" runtime="30">
                <!-- operation: the specific operation to perform; ratio: this operation's share of all operations -->
                <operation type="read" ratio="80" config="cprefix=s3testqwer;containers=u(1,2);objects=u(1,10)" />
                <operation type="write" ratio="20" config="cprefix=s3testqwer;containers=u(1,2);objects=u(11,20);sizes=c(64)KB" />
            </work>
        </workstage>

        <workstage name="cleanup">
        <!-- cleanup stage: delete the objects created while the task ran -->
            <work type="cleanup" workers="1" config="cprefix=s3testqwer;containers=r(1,2);objects=r(1,20)" />
        </workstage>

        <workstage name="dispose">
        <!-- dispose stage: remove the buckets created for this task -->
            <work type="dispose" workers="1" config="cprefix=s3testqwer;containers=r(1,2)" />
        </workstage>
    </workflow>
</workload>

3.2 Create data

If we only want to use cosbench to create data, we can omit the subsequent main, cleanup, and dispose stages.

For example, create two buckets with the bucket name prefix s3testqwer in the 30.16.13.137:8060 environment:

  • objects=r(1,10);sizes=c(64)KB: put 10 objects of 64 KB each in the specified buckets.

<?xml version="1.0" encoding="UTF-8" ?>
<workload name="s3-sample" description="sample benchmark for s3">
    <storage type="s3" config="accesskey=9fadssca999fadsfvzx;secretkey=99kfasodu0321r203safdsfa9;endpoint=http://30.16.13.137:8060" />
    <workflow>
        <workstage name="init">
        <work type="init" workers="1" config="cprefix=s3testqwer;containers=r(1,2)" />
        </workstage>

        <workstage name="prepare">
            <work type="prepare" workers="1" config="cprefix=s3testqwer;containers=r(1,2);objects=r(1,10);sizes=c(64)KB" />
        </workstage>
    </workflow>
</workload>

4 Submit task

Execute sh cli.sh submit conf/s3-config-sample.xml, or upload the task configuration manually on the controller page.
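A minimal command-line session might look like this (s3-config-sample.xml is the workload file from section 3; cancel and info are, to the best of my knowledge, the other subcommands offered by this release's CLI, so verify with the CLI help if in doubt):

# Submit a workload; the controller replies with a workload ID such as w1
sh cli.sh submit conf/s3-config-sample.xml

# Cancel a running workload by its ID (example ID)
sh cli.sh cancel w1

# Show current controller status
sh cli.sh info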

You can view all tasks and their status at http://127.0.0.1:18088/driver/index.html.

Origin: blog.csdn.net/weixin_45565886/article/details/132891118