Testing raw disk write performance
Write directly to the device, with no data synchronization and without bypassing the write cache, and observe the performance. The measured speed drops noticeably as more data is written: the first runs are largely absorbed by the page cache, while longer runs are increasingly bounded by the actual disk. This shows that the operation does not correctly reflect the disk's write performance.
Here /dev/zero is a pseudo device that produces an endless stream of null bytes and generates no I/O of its own, so all the I/O falls on the of= target; the command therefore effectively tests the disk's write ability. (Note that writing to /dev/vda directly overwrites whatever is on the device, so only do this on a scratch disk.)
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 1.21162 s, 338 MB/s
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 30.8322 s, 133 MB/s
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=5000000
5000000+0 records in
5000000+0 records out
20480000000 bytes (20 GB) copied, 270.688 s, 75.7 MB/s
[root@orcadt6 opt]#
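The role of the write cache can be seen on an ordinary file as well. A small sketch (using a throwaway file /tmp/dd_test rather than the device above): the first run returns as soon as the data reaches the page cache, while conv=fdatasync forces it to disk before dd exits.

```shell
# Cached write: dd returns once the data is in the page cache.
dd if=/dev/zero of=/tmp/dd_test bs=4k count=10000 2>&1 | tail -n1
# Synced write: conv=fdatasync flushes to disk before dd finishes,
# so the reported speed is much closer to the real disk speed.
dd if=/dev/zero of=/tmp/dd_test bs=4k count=10000 conv=fdatasync 2>&1 | tail -n1
rm -f /tmp/dd_test
```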
Testing raw disk read performance
Read directly from the device and observe the performance.
/dev/null is a pseudo device, essentially a black hole: writing to it generates no I/O, so all the I/O of this command occurs on the input device /dev/vda, which effectively tests the disk's read ability.
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=5000000
5000000+0 records in
5000000+0 records out
20480000000 bytes (20 GB) copied, 219.49 s, 93.3 MB/s
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 24.9427 s, 164 MB/s
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 0.353777 s, 1.2 GB/s
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k
^C6425487+0 records in
6425486+0 records out
26318790656 bytes (26 GB) copied, 688.21 s, 38.2 MB/s
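The 1.2 GB/s figure for the smallest read above is the page cache, not the disk. A sketch with a throwaway file shows the effect; dropping the cache needs root and is skipped silently otherwise.

```shell
# Create 40 MB of test data, then read it back twice.
dd if=/dev/zero of=/tmp/dd_cache bs=4k count=10000 2>/dev/null
dd if=/tmp/dd_cache of=/dev/null bs=4k 2>&1 | tail -n1   # likely served from the page cache
sync
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true   # drop caches (root only)
dd if=/tmp/dd_cache of=/dev/null bs=4k 2>&1 | tail -n1   # hits the disk if the cache was dropped
rm -f /tmp/dd_cache
```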
Looking at the help documentation (dd --help):

Usage: dd [OPERAND]...
  or:  dd OPTION
Copy a file, converting and formatting according to the operands.

  bs=BYTES        read and write up to BYTES bytes at a time
  cbs=BYTES       convert BYTES bytes at a time
  conv=CONVS      convert the file as per the comma separated symbol list
  count=N         copy only N input blocks
  ibs=BYTES       read up to BYTES bytes at a time (default: 512)
  if=FILE         read from FILE instead of stdin
  iflag=FLAGS     read as per the comma separated symbol list
  obs=BYTES       write BYTES bytes at a time (default: 512)
  of=FILE         write to FILE instead of stdout
  oflag=FLAGS     write as per the comma separated symbol list
  seek=N          skip N obs-sized blocks at start of output
  skip=N          skip N ibs-sized blocks at start of input
  status=LEVEL    the LEVEL of information to print to stderr;
                  'none' suppresses everything but error messages,
                  'noxfer' suppresses the final transfer statistics,
                  'progress' shows periodic transfer statistics

N and BYTES may be followed by the following multiplicative suffixes:
c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M,
GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y.

Each CONV symbol may be:

  ascii      from EBCDIC to ASCII
  ebcdic     from ASCII to EBCDIC
  ibm        from ASCII to alternate EBCDIC
  block      pad newline-terminated records with spaces to cbs-size
  unblock    replace trailing spaces in cbs-size records with newline
  lcase      change upper case to lower case
  ucase      change lower case to upper case
  sparse     try to seek rather than write the output for NUL input blocks
  swab       swap every pair of input bytes
  sync       pad every input block with NULs to ibs-size; when used
             with block or unblock, pad with spaces rather than NULs
  excl       fail if the output file already exists
  nocreat    do not create the output file
  notrunc    do not truncate the output file
  noerror    continue after read errors
  fdatasync  physically write output file data before finishing
  fsync      likewise, but also write metadata

Each FLAG symbol may be:

  append       append mode (makes sense only for output; conv=notrunc suggested)
  direct       use direct I/O for data
  directory    fail unless a file is a directory
  dsync        use synchronized I/O for data
  sync         likewise, but also for metadata
  fullblock    accumulate full blocks of input (iflag only)
  nonblock     use non-blocking I/O
  noatime      do not update access time
  nocache      discard cached data
  noctty       do not assign controlling terminal from file
  nofollow     do not follow symlinks
  count_bytes  treat 'count=N' as a byte count (iflag only)
  skip_bytes   treat 'skip=N' as a byte count (iflag only)
  seek_bytes   treat 'seek=N' as a byte count (oflag only)

Sending a USR1 signal to a running 'dd' process makes it
print I/O statistics to standard error and then resume copying.

  $ dd if=/dev/zero of=/dev/null& pid=$!
  $ kill -USR1 $pid; sleep 1; kill $pid
  18335302+0 records in
  18335302+0 records out
  9387674624 bytes (9.4 GB) copied, 34.6279 seconds, 271 MB/s
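On coreutils 8.24 and newer, the status=progress operand from the help text gives the same periodic statistics without the signal trick:

```shell
# Prints a running transfer-statistics line to stderr while copying.
dd if=/dev/zero of=/dev/null bs=1M count=1024 status=progress
```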
How to accurately test disk write performance
With the examples and documentation above in mind, here is how to actually measure writes to the disk.
Looking at the parameters: conv=fdatasync makes dd physically sync the data to disk before finishing, so it returns only after the data has reached the disk.
oflag=direct,dsync bypasses the write cache (direct I/O) and uses synchronized I/O, writing straight to the disk.
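The two options differ in when they sync: conv=fdatasync flushes once when dd finishes, while oflag=dsync syncs after every output block. A sketch on a throwaway file (not the device above) makes the gap visible at a small block size:

```shell
# One flush at the end: close to the disk's streaming write speed.
dd if=/dev/zero of=/tmp/dd_sync bs=4k count=1000 conv=fdatasync 2>&1 | tail -n1
# One sync per 4k block: dominated by per-write latency, much slower.
dd if=/dev/zero of=/tmp/dd_sync bs=4k count=1000 oflag=dsync 2>&1 | tail -n1
rm -f /tmp/dd_sync
```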
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=10 conv=fdatasync oflag=direct,dsync
10+0 records in
10+0 records out
40960 bytes (41 kB) copied, 0.0886036 s, 462 kB/s
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=100 conv=fdatasync oflag=direct,dsync
100+0 records in
100+0 records out
409600 bytes (410 kB) copied, 0.36866 s, 1.1 MB/s
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=1000 conv=fdatasync oflag=direct,dsync
1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB) copied, 4.07199 s, 1.0 MB/s
[root@orcadt6 opt]# dd if=/dev/zero of=/dev/vda bs=4k count=10000 conv=fdatasync oflag=direct,dsync
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 42.1317 s, 972 kB/s
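The roughly 1 MB/s figures above are specific to bs=4k; synced throughput rises sharply with block size. A rough sweep on a throwaway file, writing the same 4 MB total each time, illustrates this:

```shell
# Same total data (4 MB), three block sizes; compare the reported speeds.
dd if=/dev/zero of=/tmp/dd_bs bs=4k  count=1024 conv=fdatasync 2>&1 | tail -n1
dd if=/dev/zero of=/tmp/dd_bs bs=64k count=64   conv=fdatasync 2>&1 | tail -n1
dd if=/dev/zero of=/tmp/dd_bs bs=1M  count=4    conv=fdatasync 2>&1 | tail -n1
rm -f /tmp/dd_bs
```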
Read performance evaluation
In the same way we can measure read performance. With dd, the larger the count, the closer the result is to the real situation (sustained I/O).
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=10000 iflag=direct,dsync
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 12.4751 s, 3.3 MB/s
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=1000 iflag=direct,dsync
1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB) copied, 1.10571 s, 3.7 MB/s
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=10 iflag=direct,dsync
10+0 records in
10+0 records out
40960 bytes (41 kB) copied, 0.00106058 s, 38.6 MB/s
[root@orcadt6 opt]# dd if=/dev/vda of=/dev/null bs=4k count=20000 iflag=direct,dsync
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 27.4144 s, 3.0 MB/s