Huawei Cloud Yaoyun Server L Instance Evaluation | Lightweight Application Server Showdown: In-depth evaluation of the disk performance of Huawei Cloud Yaoyun Server L instance based on fio


This article is part of the column "Cloud Computing Introduction and Practice - Huawei Cloud". The series is still being updated.


1. Evaluation background

This article continues the previous two articles, "Huawei Cloud Yaoyun Server L Instance Evaluation | Lightweight Application Server Showdown: In-depth Evaluation of the Huawei Cloud Yaoyun Server L Instance Based on Geekbench" and "Huawei Cloud Yaoyun Server L Instance Evaluation | Lightweight Application Server Showdown: In-depth Evaluation of the Huawei Cloud Yaoyun Server L Instance Based on STREAM". We keep exploring Huawei Cloud's new product, the Huawei Cloud Yaoyun Server L instance, to give readers comprehensive evaluation information. The focus of this article is its disk I/O performance. As before, the comparison machine is a competing vendor's lightweight application server, Lighthouse. After detailed testing with fio, we analyze the disk I/O performance of Huawei Cloud's latest Yaoyun Server L instance, so that you can form a clearer picture of it and make a better-informed choice among the many servers on the market.

2. Evaluation Statement

Whenever I evaluate cloud computing or other products, I stand by the following statement:

Although this article was written as part of the Huawei Cloud Yaoyun Server L instance evaluation event, I conducted the evaluation from a neutral perspective. I will not exaggerate just because this is an event article; doing so would go against the purpose of the event and my own original intention.

3. Parameters and preparation of the server under evaluation

3.1 Basic parameters of the evaluated server

Huawei Cloud Yaoyun Server L instances currently come in three CPU/memory specifications: 2 cores with 2 GB, 2 cores with 4 GB, and 2 cores with 8 GB. The comparison machine selected here is the lightweight application server Lighthouse. It is positioned very similarly to the Huawei Cloud Yaoyun Server L instance: both offer support for different application scenarios, images, and specifications, so they can be considered direct benchmarks for each other.

Both the Huawei Cloud Yaoyun Server L instance and the competitor's lightweight application server Lighthouse used in this test are located in the Guangzhou region, and both are configured with 2 cores and 2 GB of memory. The configuration parameters are listed below:

Specification | Huawei Cloud Yaoyun Server L instance | Competitor's lightweight application server Lighthouse
Number of cores | 2 cores | 2 cores
Memory | 2 GB | 2 GB
Operating system | CentOS 7.6 | CentOS 7.6
Region | Guangzhou | Guangzhou

3.2 Test machine procurement

3.2.1 Huawei Cloud Yaoyun Server L instance

Because this article focuses on the disk I/O evaluation of the Huawei Cloud Yaoyun Server L instance, and for reasons of space, the purchasing steps are skipped here. For details on how to purchase a Huawei Cloud Yaoyun Server L instance, please refer to Section 3 of my previous blog post: "Huawei Cloud Yaoyun Server L Instance Evaluation | Starting from Scratch: A Comprehensive Usage Analysis Guide for the Yaoyun Server L Instance".

The server specifications after purchase are as follows:

[Screenshot: specifications of the purchased Huawei Cloud Yaoyun Server L instance]

3.2.2 The competitor's lightweight application server Lighthouse

Since the comparison machine is not the protagonist today, I will skip its purchasing steps and directly show a screenshot taken after purchase below.

[Screenshot: the purchased competitor Lighthouse instance]

4. Use fio to test disk I/O performance

"fio" is a tool widely used to evaluate disk I/O performance. It supports up to 19 different I/O engines, including but not limited to: sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, etc. fio is constantly updated, and the latest version is v3.19. You can get relevant information on its official website "fio".

There are two main ways to perform disk stress testing with fio. One is to configure everything through command-line parameters; the other is to read the test parameters from a configuration file. There is little functional difference between the two, but the latter is easier to combine with tools such as shell scripts and screen to run tests over long periods of time (see the sketch below).
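For long runs, a minimal sketch like the following can keep fio going after you disconnect (fio_test is just an illustrative session name, and fio.conf is the configuration file introduced in section 4.2.2 below):

screen -dmS fio_test fio fio.conf   # start the job in a detached screen session
screen -r fio_test                  # reattach while the job is still running to check progress

Note that the screen session ends automatically once the fio job finishes.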

To demonstrate how to use fio, we use the Huawei Cloud Yaoyun Server L instance for the walkthrough.

4.1 fio installation

This article uses the Huawei Cloud Yaoyun Server L instance running CentOS 7.6 as an example. You can install the test tool fio directly with yum:

[root@hcss-ecs-d51e ~]# yum install -y libaio
[root@hcss-ecs-d51e ~]# yum install -y libaio-devel
[root@hcss-ecs-d51e ~]# yum install -y fio

When "Complete!" appears, the installation has finished, as shown below:

[Screenshot: fio installation completes with "Complete!"]
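You can also double-check that fio is available by printing its version (the exact version string depends on the CentOS 7 repositories):

[root@hcss-ecs-d51e ~]# fio --version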

4.2 Common test scenarios and test methods for fio

4.2.1 Test scenario

  • Latency test

    Set the queue depth to 1 and bs to 4k to simulate single-queue read/write latency (a sample command is given after this list).

  • Throughput (bandwidth)

    Set the queue depth to 32 and bs to 128k to push for maximum capacity and saturate the disk bandwidth.

  • IOPS

    Set the queue depth to 32 and bs to 4k, issuing as many disk I/Os as possible in the shortest time.
    Small files are usually tested with random reads and writes, while large files are usually tested with sequential reads and writes.
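As a sketch of the latency scenario above, a command along these lines could be used (assuming /dev/vdb is a spare data disk holding no important data; the job name Lat_Read_Test is purely illustrative):

fio -name=Lat_Read_Test -group_reporting -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=60 -filename=/dev/vdb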

Common test scenario references ( important ):

  1. Sequential read and write (throughput, commonly measured in MB/s): the file's data occupies contiguous locations on the disk.

    Applicable scenarios: copying large files (such as video and music). Even a very high sequential speed says little about database performance.

  2. 4K random read and write (IOPS, commonly measured in operations per second): data is read and written at random locations on the disk, 4 KB at a time.

    Applicable scenarios: operating system operation, general software, databases.

4.2.2 Test methods

fio can be driven in two ways: directly from the command line, or through a test configuration file. The main difference is that a configuration file makes it easier to run tests repeatedly or on a schedule.

Note (very important):

  • Do not run fio tests against the system disk, to avoid damaging important system files.
  • To avoid data loss caused by corrupted file-system metadata, do not run tests against disks that hold business data.
  • Make sure the /etc/fstab file contains no mount entry for the disk under test; otherwise the cloud server may fail to start.
  • When testing raw disk performance, it is recommended to test the raw data disk directly (such as /dev/vdb).
  • When testing file-system performance, it is recommended to test against a specific file (such as /data/file); a file-level example is given after this list.
  • It is recommended to run fio tests on an idle disk that holds no important data, and to re-create the file system after the test is complete.
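For reference, a file-level random read test that avoids touching any raw device might look like the sketch below (assuming /data is a mount point on a spare data disk; the path /data/fio_testfile and the job name FS_Rand_Read_Test are illustrative, and fio will create the test file if it does not exist):

fio -name=FS_Rand_Read_Test -group_reporting -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=60 -filename=/data/fio_testfile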
(1) Test using command line

In filename you need to specify the target block device (for example /dev/sda), as shown below:

Sequential read IOPS

fio -name=Seq_Read_IOPS_Test -group_reporting -direct=1 -iodepth=128 -rw=read -ioengine=libaio -refill_buffers -norandommap -randrepeat=0 -bs=4k -size=10G -numjobs=1 -runtime=600 -filename=/dev/sda

Sequential write IOPS

fio -name=Seq_Write_IOPS_Test -group_reporting  -direct=1 -iodepth=128 -rw=write -ioengine=libaio -refill_buffers -norandommap -randrepeat=0 -bs=4k -size=10G -numjobs=1 -runtime=600 -filename=/dev/sda

Random read IOPS

fio -name=Rand_Read_IOPS_Test -group_reporting -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -refill_buffers -norandommap -randrepeat=0 -bs=4k -size=10G -numjobs=1 -runtime=600 -filename=/dev/sda

Random write IOPS

fio -name=Rand_Write_IOPS_Test -group_reporting -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -refill_buffers -norandommap -randrepeat=0 -bs=4k -size=10G -numjobs=1 -runtime=600 -filename=/dev/sda

Mixed random read/write IOPS (70% read)

fio -name=Read_Write_IOPS_Test -group_reporting -direct=1 -iodepth=128 -rw=randrw -rwmixread=70 -refill_buffers -norandommap -randrepeat=0 -ioengine=libaio -bs=4k -size=10G -numjobs=1 -runtime=600 -ioscheduler=noop -filename=/dev/sda

Write throughput (bandwidth)

fio -name=Write_BandWidth_Test -group_reporting -direct=1 -iodepth=32 -rw=write -ioengine=libaio -refill_buffers -norandommap -randrepeat=0 -bs=1024k -size=10G -numjobs=1 -runtime=600 -filename=/dev/sda

Read throughput (bandwidth)

fio -name=Read_BandWidth_Test -group_reporting -direct=1 -iodepth=32 -rw=read -ioengine=libaio -refill_buffers -norandommap -randrepeat=0 -bs=1024k -size=10G -numjobs=1 -runtime=600 -filename=/dev/sda
(2) Use configuration files for testing

The following is an example configuration file. Create a file named fio.conf with the following content:

# fio.conf
[global]
ioengine=libaio
iodepth=128
direct=0
thread=1
numjobs=16
norandommap=1
randrepeat=0
runtime=60
ramp_time=6
size=1g
directory=/your/path

[read4k-rand]
stonewall
group_reporting
bs=4k
rw=randread

[read64k-seq]
stonewall
group_reporting
bs=64k
rw=read


[write4k-rand]
stonewall
group_reporting
bs=4k
rw=randwrite

[write64k-seq]
stonewall
group_reporting
bs=64k
rw=write

After saving the file, run it directly with the following command:

fio fio.conf
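If you want to keep the report for later comparison, fio's standard --output and --output-format options can write the results to a file (the file names below are just examples):

fio --output=fio_result.txt fio.conf
fio --output-format=json --output=fio_result.json fio.conf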

4.2.3 Description of each operating parameter

  • filename: Specifies the name of the file (device). You can specify multiple files at the same time by separating them with colons, such as filename=/dev/sda:/dev/sdb.
  • directory: sets the path prefix for the test files; when directory is used, fio places (or looks for) the test files under this path, which is how the test target is specified in the configuration-file example above.
  • name: Specify the name of the job, which means starting a new job on the command line.
  • direct: boolean, default 0; if set to 1, I/O bypasses the buffer cache (O_DIRECT).
  • ioengine: the I/O engine. fio supports many ioengines; the default is sync (synchronous blocking I/O), and libaio is Linux's native asynchronous I/O. For background on synchronous, asynchronous, blocking, and non-blocking models, see the article "Using Asynchronous I/O to Greatly Improve Application Performance".
  • iodepth: when the ioengine works in asynchronous mode, this parameter sets how many I/O units are kept in flight (the queue depth). See the article "In-depth understanding and misunderstandings of Fio stress testing tools and io queues".
  • rw: I/O mode, random reading and writing, sequential reading and writing, etc. Optional values: read, write, randread, randwrite, rw, randrw.
  • bs: I/O block size, default is 4k. It can be increased when testing sequential reading and writing.
  • size: Specifies the size of the files processed by the job.
  • numjobs: Specifies the number of clones (threads) of the job.
  • time_based: if the file has been fully read or written before runtime expires, the workload is repeated until runtime ends.
  • runtime: Specifies how many seconds to stop the process. If this parameter is not specified, fio will execute until the reading and writing of the specified file are completely completed.
  • group_reporting: When numjobs is also specified, the output results are displayed by group.

The configuration file uses the INI format: it is organized into blocks (a [global] section plus one section per job), and key-value pairs are set with "=" inside each block; options in a job section override those in [global].
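As a minimal illustration of this block concept, the sketch below assumes /tmp/fio_demo is an existing scratch directory on a spare disk; options under [global] apply to every job, and a job section can override them:

; minimal.fio - illustrative only
[global]
ioengine=libaio
direct=1
bs=4k
size=256m
runtime=30
time_based=1
directory=/tmp/fio_demo

[rand-read-4k]
rw=randread

[seq-read-64k]
; the job-level bs below overrides the global bs=4k for this job only
bs=64k
rw=read

Without stonewall the two jobs run concurrently; add stonewall (as in the fio.conf example above) to run them one after another.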

4.3 Running the fio test on the Huawei Cloud Yaoyun Server L instance

The following shows fio running on the Huawei Cloud Yaoyun Server L instance. Due to limited space, only the sequential read run is shown below (as a screenshot).

[Screenshot: fio sequential read results on the Huawei Cloud Yaoyun Server L instance]

  • bw : disk throughput; this is the main metric for sequential read/write tests.
  • iops : the number of disk read/write operations per second; this is the main metric for random read/write tests.

In the test above, the sequential read throughput (bw) is 104 MB/s.

4.4 Running the fio test on the competitor's lightweight application server Lighthouse

Now run the same fio test on the competitor's lightweight application server Lighthouse.
Section 4.3 above already covered how to use fio and the related details on the Huawei Cloud Yaoyun Server L instance, so the intermediate steps are omitted here to save space. The results of running fio on the competitor's Lighthouse are given directly below:

Taking the sequential read run as an example:

[Screenshot: fio sequential read results on the competitor's Lighthouse]

5. Final test comparison results

As with the earlier Geekbench and STREAM tests, let's cut to the chase and go straight to the final fio results:

fio disk read/write performance test | Sequential read (MB/s) | Sequential write (MB/s) | Random read (IOPS) | Random write (IOPS)
Huawei Cloud Yaoyun Server L instance | 104 | 90.9 | 5101 | 5122
Competitor's lightweight application server Lighthouse | 23.7 | 23.6 | 6050 | 6064

As the table shows, with the same 1 GB test file size, the sequential read and sequential write speeds of the Huawei Cloud Yaoyun Server L instance are far higher than those of the competing lightweight application server. In the random read and random write tests, the competitor's lightweight application server achieved slightly higher IOPS than the Huawei Cloud Yaoyun Server L instance, but the gap is nowhere near as large as in sequential read/write.

This also means that if you mainly read and write large files, the Huawei Cloud Yaoyun Server L instance will be much faster than the competing product, while databases or applications running on the Huawei Cloud Yaoyun Server L instance may be slightly slower than on the competing product.

6. Summary at the end of the article

In this article, we conducted a comprehensive evaluation and comparison of the disk I/O performance of the servers under test. Using the fio tool, we tested the disk read and write performance of the two servers, covering fio's installation, usage, and main operating parameters in detail, and ran the fio test on both servers to evaluate their disk I/O performance in depth.

The evaluation and comparison in this article should help readers better understand the disk I/O performance of these two servers and provide valuable information for making informed decisions. Whether in an enterprise environment or a personal application, understanding and optimizing disk I/O performance is a critical step in improving overall system performance.

[ Author ]   bluetata
[ Original link ]   https://bluetata.blog.csdn.net/article/details/132954789
[ Last updated ]   09/18/2023 1:45
[ Copyright notice ]   If you see this line on a site other than CSDN, a web crawler may have
scraped this article before I finished publishing it, so the content may be incomplete;
please read the original at the link above.
