2017-4-10 Disk Performance Test

Original link: http://www.cnblogs.com/yue-hong/p/6692876.html

 

I just bought an SSD and wanted to see what difference it makes. The tests below still run against virtual machines backed by Windows disk partitions, so the effect is limited, but the main goal is to make experiments easier to run: an OpenStack setup with several nodes needs more resources than this computer can handle.

   

First, test the mechanical hard disk.
1. Method one: the dd command
[root@agent ~]# cd /mnt/
[root@agent mnt]# ls
[root@agent mnt]# time dd if=/dev/zero bs=1G count=1 of=1GB.file  ## write data from /dev/zero to a file on the disk, to test write performance
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 274.62 s, 3.9 MB/s

real 4m35.165s
user 0m0.005s
sys 0m47.671s

[root@agent mnt]# time dd of=1GB.file bs=1G count=1 if=/dev/zero  ## intended to read the file back from disk to test read performance; note that as written it still copies /dev/zero into 1GB.file, i.e. another write pass (a corrected read test is sketched after the summary below)

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 227.994 s, 4.7 MB/s

real 3m48.952s
user 0m0.003s
sys 0m27.997s
Summary: reading appears faster than writing, but either way the mechanical hard disk is far too slow!
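Note: plain dd figures like these include the Linux page cache, and the second command above is really a second write rather than a read. A minimal sketch of a fairer pair of tests, assuming the same /mnt/1GB.file and root privileges (the block size and paths are only illustrative, not from the original run):

[root@agent mnt]# time dd if=/dev/zero of=1GB.file bs=1M count=1024 conv=fdatasync  ## write 1 GB and flush it to disk before dd reports, so the page cache does not inflate the number
[root@agent mnt]# sync && echo 3 > /proc/sys/vm/drop_caches  ## drop the page cache so the following read really hits the disk
[root@agent mnt]# time dd if=1GB.file of=/dev/null bs=1M  ## true read test: read the existing file back into /dev/null

With conv=fdatasync the reported write rate reflects data actually reaching the disk, and after dropping the caches the read rate reflects the device rather than memory.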

2. Method two: the iostat command
[root@agent mnt]# iostat -m  ## display the statistics in megabytes
Linux 3.10.0-327.el7.x86_64 (agent) 04/10/2017 _x86_64_ ( 1 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
5.96 0.00 13.65 18.21 0.00 62.18

Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 95.47 2.56 3.13 3953 4834
dm-0 47.44 1.22 1.72 1885 2659
dm-1 700.88 1.33 1.40 2062 2172

Why is iowait so high? On Linux, during heavy network communication, top can show a very large iowait, as high as 98%. A fast CPU can also produce a high iowait value, but that by itself does not mean the disk is the system's bottleneck. The only thing that really shows the disk is the bottleneck is a high read/write service time; as a rule of thumb, anything above 20ms points to unhealthy disk performance.
The higher the iowait figure, the more CPU time is spent waiting for I/O to be processed.
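To look at that per-request service time directly, iostat's extended mode reports latency columns in milliseconds. A minimal sketch (not part of the original run; on CentOS 7's sysstat the columns are r_await and w_await, while older versions show a single await column):

[root@agent mnt]# iostat -xm 1 3  ## extended stats in megabytes, 1-second interval, 3 samples; r_await/w_await persistently above ~20ms suggests the disk itself is the bottleneck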

3. Method three: the hdparm command. Strictly speaking, hdparm is not so much testing disk speed as testing the speed of cached reads from memory (-T) and buffered reads from the disk (-t).
[root@agent ~]# hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 4156 MB in 2.00 seconds = 2078.06 MB/sec
Timing buffered disk reads: 252 MB in 3.06 seconds = 82.45 MB/sec
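The buffered figure from -t still goes through the kernel's normal read path. hdparm also has a --direct option that uses O_DIRECT to bypass the page cache, which gives a number closer to the raw device; a possible follow-up run, not part of the original post:

[root@agent ~]# hdparm -t --direct /dev/sda  ## time disk reads through O_DIRECT, bypassing the page cache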


Second, test the solid-state drive (SSD).
1. Method one: the dd command
[root@ssd mnt]# time dd if=/dev/zero bs=1G count=1 of=1GB.file
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 45.908 s, 23.4 MB/s

real 0m46.179s
user 0m0.004s
sys 0m29.975s
[root@ssd mnt]# time dd of=1GB.file bs=1G count=1 if=/dev/zero
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 41.1962 s, 26.1 MB/s

real 0m41.583s
user 0m0.002s
sys 0m27.560s

Summary: the SSD finished in about 46 seconds versus roughly 4.5 minutes on the mechanical hard disk, roughly a 6x speedup!
2. Method two: the iostat command
[root@ssd mnt]# yum install pcp-import-iostat2pcp -y
[root@ssd mnt]# iostat -m
Linux 3.10.0-327.el7.x86_64 (ssd) 04/10/2017 _x86_64_ (1 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
10.26 0.00 19.57 0.34 0.00 69.83

Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 111.85 2.94 6.09 2758 5706
dm-0 28.90 0.75 2.40 704 2250
dm-1 1498.83 2.19 3.67 2048 3436
Summary: iowait is now almost zero, which is impressive! What surprises me is that the write rate is actually higher than the read rate; I really did not expect that.

3. Method three: the hdparm command. The cache is there to improve read performance, while the buffer is there to buffer writes.
[root@ssd mnt]# yum install hdparm -y
[root@ssd mnt]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads: 3632 MB in 2.00 seconds = 1819.26 MB/sec
Timing buffered disk reads: 850 MB in 3.00 seconds = 282.94 MB/sec
Summary: this shows that the SSD's big gain is in the buffered disk reads, which are more than 3 times faster than on the mechanical drive, while the cached read rate is about the same.

 
