Foreword
In real projects we often encounter operating systems deployed on third-party virtualization platforms, where problems with the server's hard disks or with the virtualization platform itself make disk access slow, which in turn slows the business system down or causes timeouts.
This article introduces how to measure hard disk read/write speed and I/O performance under Linux.
Simple hard disk read and write speed test: dd+hdparm
1. Test the hard disk read speed: hdparm
While the hard disk is being read and written, part of the data is cached in memory (in the buffer and in the cache) to improve read/write speed, and hdparm can measure the two cases separately.
Cache: the purpose of the cache is to speed up reading and writing; while the disk works, recently accessed data is kept in the cache, so repeated accesses are served quickly from memory instead of the disk.
Buffer: data read from the hard disk is first placed in a buffer, and the computer then fetches it directly from the buffer; only after the buffered data has been consumed is more read from the disk. This reduces the number of physical reads and writes, and since operating on memory is much faster than operating on the disk, buffering greatly improves overall speed.
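Both caches described above can be observed directly on a running system; as a quick Linux-only sketch, /proc/meminfo exposes the two counters (the exact values will of course differ per machine):

```shell
# Show how much memory the kernel is currently using for the buffer
# cache (Buffers) and the page cache (Cached).
grep -E '^(Buffers|Cached):' /proc/meminfo
```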
a. Test the reading speed of the hard disk under the buffer cache
[root@localhost ~]# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 1118 MB in 3.01 seconds = 371.97 MB/sec
b. Test the hard disk read speed from the cache
[root@localhost ~]# hdparm -T /dev/sda
/dev/sda:
Timing cached reads: 1118 MB in 3.01 seconds = 371.97 MB/sec
c. Test the read speed while bypassing the buffer cache (O_DIRECT), starting at an offset of 10 GiB into the disk (the unit of --offset is GiB)
[root@localhost ~]# hdparm -t --direct --offset 10 /dev/sda
/dev/sda:
Timing O_DIRECT disk reads (offset 10 GB): 1808 MB in 3.00 seconds = 602.32 MB/sec
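Single hdparm runs can vary noticeably, so it is common to average a few. The helper below is a sketch (the parse_rate and avg_read_speed names are mine, not part of hdparm); it assumes root privileges, an existing block device, and bc being installed:

```shell
# parse_rate: extract the MB/sec figure from one line of hdparm output,
# e.g. " Timing buffered disk reads: 1118 MB in 3.01 seconds = 371.97 MB/sec"
parse_rate() {
    awk -F'= ' '/MB\/sec/ {print $2}' | awk '{print $1}'
}

# avg_read_speed DEVICE [RUNS]: average `hdparm -t` over several runs (default 3).
avg_read_speed() {
    dev=$1; runs=${2:-3}; total=0; i=0
    while [ "$i" -lt "$runs" ]; do
        rate=$(hdparm -t "$dev" | parse_rate)
        total=$(echo "$total + $rate" | bc)
        i=$((i + 1))
    done
    echo "scale=2; $total / $runs" | bc
}

# Example (as root): avg_read_speed /dev/sda 3
```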
2. Test the read and write speed of the hard disk: dd
dd copies data, reading from if and writing to of; the results here measure sequential read/write speed.
Reading from if=/dev/zero generates no disk I/O, so it can be used to test pure write speed.
Likewise, writing to of=/dev/null generates no disk I/O, so it can be used to test pure read speed.
bs is the size of each read or write, i.e. the block size, and count is the number of blocks to read or write.
a. Test pure write speed: 125000 blocks of 8k
[root@localhost ~]# dd if=/dev/zero of=test bs=8k count=125000
125000+0 records in
125000+0 records out
1024000000 bytes (1.0 GB) copied, 5.10565 s, 201 MB/s
b. Test pure read speed: 125000 blocks of 8k
[root@localhost ~]# dd if=test of=/dev/null bs=8k count=125000
125000+0 records in
125000+0 records out
1024000000 bytes (1.0 GB) copied, 0.361756 s, 2.8 GB/s
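The 2.8 GB/s above is mostly the page cache rather than the disk: the file was just written, so its blocks are still in memory. A sketch for getting a more honest read number is to flush the caches first (dropping caches needs root and is skipped otherwise; the file name and the ~100 MB size are examples):

```shell
# Write a ~100 MB test file and make sure it is actually on disk.
dd if=/dev/zero of=test bs=8k count=12500 conv=fsync
# Drop the page cache so the next read has to hit the disk
# (only possible as root; skipped silently otherwise).
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches || true
fi
# Re-run the read test against cold caches.
dd if=test of=/dev/null bs=8k count=12500
```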
c. Note that when the dd command above finishes, the system has not necessarily written the data to the hard disk yet: dd first writes into the operating system's write cache and then reports completion. To make sure the data has really reached the disk, the write cache must be flushed, for example with sync or with dd's own fsync/dsync options.
1. conv=fsync makes dd call fsync() once at the end, so the data is first cached and then flushed to the hard disk before dd reports its timing
[root@localhost ~]# dd if=/dev/zero of=test bs=8k count=125000 conv=fsync
125000+0 records in
125000+0 records out
1024000000 bytes (1.0 GB) copied, 5.31894 s, 193 MB/s
2. oflag=dsync can be regarded as simulating database inserts: each block read from /dev/zero is written synchronously to the hard disk before the next one is processed, so the speed is very slow
[root@localhost ~]# dd if=/dev/zero of=test bs=8k count=1250 oflag=dsync
1250+0 records in
1250+0 records out
10240000 bytes (10 MB) copied, 1.18191 s, 8.7 MB/s
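The write variants can be put side by side to see the effect of each sync mode; this is just a sketch on a small scratch file (the file name and sizes are examples), and on a real disk the dsync run will be dramatically slower than the other two:

```shell
# No sync: the timing may only measure the OS write cache.
dd if=/dev/zero of=synctest bs=8k count=1250
# conv=fsync: one fsync at the end, so the flush is included in the timing.
dd if=/dev/zero of=synctest bs=8k count=1250 conv=fsync
# oflag=dsync: every 8k block is synced before the next one is written,
# like a naive database insert loop.
dd if=/dev/zero of=synctest bs=8k count=1250 oflag=dsync
```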
Professional disk stress-testing tool: fio
When testing a raw device with fio, unmount the disk first and re-format it afterwards (writing to the raw device destroys the filesystem), or add a new disk for the test. If the disk is still mounted, fio aborts with: /dev/sdb appears mounted, and 'allow_mounted_write' isn't set. Aborting.
umount /dev/sdb # Unmount the hard disk
mkfs.ext4 /dev/sdb # format the hard disk
a. Parameter description
filename=/dev/sdb The file or device to test; usually the data disk to be measured.
direct=1 Bypass the machine's own buffer during the test, making the results closer to the real disk.
rw=randwrite Test random write I/O
rw=randrw Test mixed random read and write I/O
bs=16k Block size of a single I/O is 16k; the default is 4k
size=1g The test file size is 1g
iodepth The I/O queue depth for the test
numjobs=1 The number of test threads
runtime=60 Run the test for 60 seconds
ioengine=psync Use the psync I/O engine
rwmixwrite=30 In mixed read/write mode, writes account for 30%
group_reporting When displaying results, summarize the per-thread information
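The same parameters can be kept in a job file instead of a long command line; the sketch below mirrors the options above (the section name is an example) and would be run simply as `fio randwrite.fio`:

```ini
[randwrite-test]
filename=/dev/sdb
direct=1
rw=randwrite
bs=16k
size=1g
iodepth=1
numjobs=1
runtime=60
ioengine=psync
group_reporting
```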
b. 4K random write, queue depth 1, 1g of data, 1 test thread, 60 s test time
fio -name=fiotest -filename=/dev/sdb -group_reporting -direct=1 -ioengine=libaio -iodepth=1 -size=1g -rw=randwrite -bs=4k -numjobs=1 -runtime=60
c. Test 1M sequential read, queue depth 1, 1g of data, 1 test thread, 60 s test time:
fio -name=fiotest -filename=/dev/vdb -group_reporting -direct=1 -ioengine=libaio -iodepth=1 -size=1g -rw=read -bs=1M -numjobs=1 -runtime=60
d. Interpreting the results
Disk throughput (bw) is the main figure for sequential read/write tests.
Disk reads/writes per second (iops) is the main figure for random read/write tests.
Per-job statistics:
io= total amount of I/O performed (in MB)
bw= average I/O bandwidth
iops= I/O operations per second
runt= thread run time
slat= submission latency
clat= completion latency
lat= total latency (response time)
bw= bandwidth statistics (min/max/avg)
cpu= CPU utilization
IO depths= distribution of the I/O queue depth
IO submit= number of I/Os submitted per submit call
IO latencies= distribution of I/O latencies
Group statistics:
io= total amount of I/O performed by the group
aggrb= aggregate group bandwidth
minb= minimum average bandwidth
maxb= maximum average bandwidth
mint= shortest run time of the threads in the group
maxt= longest run time of the threads in the group
Disk statistics:
ios= total number of I/Os performed on the disk
merge= total number of I/O merges that occurred
ticks= number of ticks the disk was kept busy
io_queue= total time spent in the queue
util= disk utilization
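For scripting, the fields above can be scraped out of a saved fio summary. The line below is an illustrative sample, not real output from the runs in this article, and the sed pattern is a sketch for the classic output format (newer fio versions print `IOPS=13.1k`-style fields instead):

```shell
# Sample fio summary line (classic format) and a sed one-liner that
# pulls the iops figure out of it.
line='  write: io=1024.0MB, bw=52341KB/s, iops=13085, runt= 20032msec'
iops=$(printf '%s\n' "$line" | sed -n 's/.*iops=\([0-9]*\).*/\1/p')
echo "$iops"
```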