iostat stats

Use: Reports Central Processing Unit (CPU) statistics and input/output statistics for the entire system, adapters, tty devices, disks, and CD-ROMs.

Syntax: iostat [-c | -d] [-k | -m] [-t] [-V] [-x [device]] [interval [count]]

Description: The iostat command monitors the input/output devices supported by the system by observing the time the physical disks are active relative to their average transfer rates. The reports generated by iostat can be used to change the system configuration to better balance the input/output load between physical disks and adapters.

Parameters: -c reports CPU usage only; -d reports disk usage only; -k displays statistics in kilobytes per second; -m displays statistics in megabytes per second; -t prints the time of each report; -V prints version information and usage; -x device reports extended statistics for the named device (all devices by default); interval is the number of seconds between reports; count is the number of reports produced at that interval.
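For example, the following small sketch runs iostat from Python and captures a few extended reports; it assumes the sysstat package that provides the iostat command is installed.

# A minimal sketch: run iostat and capture three extended disk reports, two seconds apart.
# Assumes the sysstat package (which provides the iostat command) is installed.
import subprocess

result = subprocess.run(
    ["iostat", "-d", "-x", "-k", "2", "3"],   # disk stats, extended columns, kB/s units
    capture_output=True, text=True, check=True,
)
print(result.stdout)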

Interpreting iostat results

 

rrqm/s: number of read requests merged per second, i.e. delta(rmerge)/s

wrqm/s: number of write requests merged per second, i.e. delta(wmerge)/s

r/s: number of read I/O operations completed per second, i.e. delta(rio)/s

w/s: number of write I/O operations completed per second, i.e. delta(wio)/s

rsec/s: number of sectors read per second, i.e. delta(rsect)/s

wsec/s: number of sectors written per second, i.e. delta(wsect)/s

rkB/s: number of kilobytes read per second. This is half of rsec/s, because each sector is 512 bytes.

wkB/s: number of kilobytes written per second. This is half of wsec/s.

avgrq-sz: average size (in sectors) of each I/O operation on the device, i.e. delta(rsect + wsect)/delta(rio + wio)

avgqu-sz: average I/O queue length, i.e. delta(aveq)/s/1000 (because aveq is in milliseconds)

await: average time (in milliseconds) each device I/O operation takes from issue to completion, including queueing, i.e. delta(ruse + wuse)/delta(rio + wio)

svctm: average service time (in milliseconds) of each device I/O operation, i.e. delta(use)/delta(rio + wio)

%util: the percentage of each second spent performing I/O, in other words the fraction of each second during which the I/O queue is non-empty, i.e. delta(use)/s/1000 (because use is in milliseconds)

If %util is close to 100%, too many I/O requests are being generated, the I/O system is already running at full capacity, and the disk may be the bottleneck.
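The formulas above translate directly into code. The sketch below assumes you already have the deltas of the raw per-device counters over one sampling window (names such as rmerge and aveq are the kernel counter names quoted above; how you sample them, e.g. from /proc/diskstats, is up to you).

# A minimal sketch of the iostat formulas listed above, given the deltas of the
# raw per-device counters over a sampling window of `seconds` seconds.
def iostat_fields(d, seconds):
    ios = d["rio"] + d["wio"]                      # completed read + write operations
    return {
        "rrqm/s":   d["rmerge"] / seconds,
        "wrqm/s":   d["wmerge"] / seconds,
        "r/s":      d["rio"] / seconds,
        "w/s":      d["wio"] / seconds,
        "rkB/s":    d["rsect"] / seconds / 2,      # 512-byte sectors, so kB = sectors / 2
        "wkB/s":    d["wsect"] / seconds / 2,
        "avgrq-sz": (d["rsect"] + d["wsect"]) / ios if ios else 0.0,
        "avgqu-sz": d["aveq"] / seconds / 1000,    # aveq is in milliseconds
        "await":    (d["ruse"] + d["wuse"]) / ios if ios else 0.0,
        "svctm":    d["use"] / ios if ios else 0.0,
        "%util":    d["use"] / seconds / 10,       # use is in ms; /1000, then *100 for percent
    }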

The most important parameters

%util: the percentage of each second spent performing I/O, i.e. the fraction of each second during which the I/O queue is non-empty

svctm: average service time of each device I/O operation

await: average waiting time of each device I/O operation

avgqu-sz: average I/O queue length

If %util is close to 100%, too many I/O requests are being issued, the I/O system is at full capacity, and the disk may be the bottleneck. In general, %util above 70% already indicates heavy I/O pressure, with noticeable waiting for reads. You can combine this with vmstat, looking at the b column (the number of processes waiting for resources) and the wa column (the percentage of CPU time spent waiting for I/O; above 30% indicates high I/O pressure).
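As a rough illustration, the sketch below applies these rules of thumb; the 70% and 30% figures are the guidelines quoted above, not hard limits, and the await-versus-svctm factor is an arbitrary placeholder.

# A rough sketch of the rules of thumb above (thresholds are guidelines, not hard limits).
def assess_io_pressure(util_pct, await_ms, svctm_ms, vmstat_wa_pct=None):
    notes = []
    if util_pct >= 70:
        notes.append("%util >= 70%: heavy I/O pressure, reads are likely waiting")
    if util_pct >= 95:
        notes.append("%util near 100%: the disk is probably the bottleneck")
    if svctm_ms > 0 and await_ms > 3 * svctm_ms:   # factor of 3 is an arbitrary placeholder
        notes.append("await is much larger than svctm: the I/O queue is long")
    if vmstat_wa_pct is not None and vmstat_wa_pct > 30:
        notes.append("vmstat wa > 30%: the CPU spends a lot of time waiting for I/O")
    return notes or ["I/O load looks normal"]

print(assess_io_pressure(util_pct=14.29, await_ms=78.21, svctm_ms=5.00))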

To understand these performance metrics, let us look at the figure below.

(Figure: the I/O execution path, showing the stage at which each iostat metric is measured)

Each item on the left of the figure is a metric reported by iostat, and above each metric is a horizontal line indicating the span of the I/O path over which that metric is measured. For example, the line for w/s runs from the Linux I/O scheduler to the hard disk controller (CCISS/3ware), which means that w/s counts the write I/Os per second that pass from the Linux I/O scheduler to the disk controller.

Using a read operation as an example, the figure can be read as follows. The number of read I/O operations passed from the OS Buffer Cache into the OS Kernel (the Linux I/O scheduler) is actually rrqm/s + r/s. Once the read requests reach the kernel layer, rrqm/s of them are merged every second, so the number of read I/Os per second finally handed to the disk controller is r/s. After a request enters the operating system's device layer (/dev/sda), a counter starts timing the I/O operation, and the final result is reported as await; this value is the I/O response time we care about. svctm is the time from when the I/O operation enters the disk controller until the controller returns the result, i.e. the time a real I/O operation actually takes, so when await and svctm differ greatly we should pay attention to the disk's I/O performance. avgrq-sz is the size of a single I/O request passed down from the OS Kernel, and avgqu-sz is the average length of the I/O request queue inside the OS Kernel.
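To make these relationships concrete, here is a tiny sketch using the /dev/cciss/c0d0 numbers from the example at the end of this article; treating await minus svctm as the queueing portion is an approximation.

# Relationships described above, illustrated with the example numbers used later in the article.
rrqm_s, r_s = 0.00, 1.02          # read requests merged per second, reads reaching the controller per second
await_ms, svctm_ms = 78.21, 5.00  # total response time vs. time spent inside the disk controller

reads_entering_kernel = rrqm_s + r_s   # read I/Os handed from the buffer cache to the kernel per second
queue_wait_ms = await_ms - svctm_ms    # approximate time spent queueing before the controller services the I/O

print(f"reads entering the kernel per second: {reads_entering_kernel:.2f}")
print(f"approximate queue wait per I/O: {queue_wait_ms:.2f} ms")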

Now we can connect the iostat output fields discussed above with this picture.

Device I/O operations: total IO/s = r/s (reads) + w/s (writes) = 25.28 + 1.46 = 26.74

On average each device I/O operation needs only 0.36 ms of service time, yet it now takes 10.57 ms to complete, because too many requests are being issued (26.74 per second). If we assume those requests were issued at the same time, the average waiting time can be calculated as:

average waiting time = single I/O service time * (1 + 2 + ... + (total requests - 1)) / total requests

Many I/O requests are issued per second, but the average queue length is only about 4, which indicates that the requests arrive fairly evenly and most of them are handled quite promptly.
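A small sketch of the waiting-time formula quoted above (the figures 25.28, 1.46, 0.36 ms and 10.57 ms come from an iostat screenshot that is not reproduced here, so the demonstration below uses the numbers from the worked example at the end of the article instead).

# The "all requests issued at once" average-waiting-time formula quoted above.
def avg_wait_if_issued_together(svctm_ms, total_requests):
    # svctm * (1 + 2 + ... + (n - 1)) / n, which simplifies to svctm * (n - 1) / 2
    n = total_requests
    return svctm_ms * sum(range(1, n)) / n

# 29 requests with a 5 ms service time wait about 70 ms on average,
# matching the arithmetic worked through in the example below.
print(avg_wait_if_issued_together(5.0, 29))   # -> 70.0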

svctm is generally smaller than await (because the waiting time of requests queued behind one another is counted repeatedly in await). The size of svctm is generally related to the characteristics of the disk; CPU and memory load also affect it, and too many requests will indirectly increase svctm. await generally depends on the service time (svctm), the length of the I/O queue, and the pattern in which the I/O requests are issued. If svctm is close to await, the I/O has almost no waiting time; if await is much larger than svctm, the I/O queue is too long and the application's response time becomes slower. If the response time exceeds what users can tolerate, you can consider replacing the disk with a faster one, adjusting the kernel's elevator (I/O scheduling) algorithm, optimizing the application, or upgrading the CPU.

The queue length (avgqu-sz) can also be used as a metric of system I/O load, but because avgqu-sz is an average over the reporting interval, it cannot reflect instantaneous bursts of I/O.

The I/O system vs. a supermarket queue

As an example, when we queue at a supermarket checkout, how do we decide which till to join? The first thing we look at is how many people are waiting: a line of 5 people is surely faster than one of 20, right? Besides the head count, we often also look at how much the people in front are buying; if there is an aunt ahead who has bought a week's worth of groceries, you might consider switching to another line. There is also the cashier's speed: if you run into a newbie who can't even count the money properly, you are in for a wait. Timing also matters: a checkout counter that was packed five minutes ago may be empty now, and paying at this moment is of course great, provided that what you did in those five minutes was more meaningful than queuing (though I have yet to find anything more boring than queuing).

An I/O system and a supermarket queue have many similarities:

r/s + w/s is similar to the total number of people who come to pay

Average queue length (avgqu-sz) is similar to the average number of people queuing at any given moment

Average service time (svctm) is similar to the cashier's speed at ringing up purchases

Average waiting time (await) is similar to the average time each person waits

Average I/O size (avgrq-sz) is similar to how much the average person buys

I/O utilization (%util) is similar to the proportion of time that someone is queuing at the counter.

From these data we can analyze the I/O request pattern, and the speed and response time of the I/O.

 

An example

# iostat -x 1

avg-cpu: %user %nice %sys %idle

16.24 0.00 4.31 79.44

Device:            rrqm/s wrqm/s  r/s   w/s  rsec/s wsec/s  rkB/s  wkB/s avgrq-sz avgqu-sz await svctm %util

/dev/cciss/c0d0      0.00  44.90 1.02 27.55    8.16 579.59   4.08 289.80    20.57    22.35 78.21  5.00 14.29

/dev/cciss/c0d0p1    0.00  44.90 1.02 27.55    8.16 579.59   4.08 289.80    20.57    22.35 78.21  5.00 14.29

/dev/cciss/c0d0p2    0.00   0.00 0.00  0.00    0.00   0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00

The iostat output above shows that in one second the device performed 28.57 I/O operations: delta(io)/s = r/s + w/s = 1.02 + 27.55 = 28.57 (operations per second), of which writes dominate (w:r = 27:1).

On average each device I/O operation needs only 5 ms to complete, yet each I/O request waits 78 ms. Why? Because the I/O requests are issued too frequently (about 29 per second). If we assume those requests were issued at the same time, the average waiting time can be calculated as:

average waiting time = single I/O service time * (1 + 2 + ... + (total requests - 1)) / total requests

Applied to the example above: average waiting time = 5 ms * (1 + 2 + ... + 28) / 29 = 70 ms, which is very close to the 78 ms given by iostat. This in turn suggests that the I/O requests really are issued almost simultaneously. Many I/O requests are issued per second (about 29), yet the average queue length is short (only about 2), which indicates that the arrival of those 29 requests is not uniform and that the I/O is idle most of the time. For 14.29% of each second there are requests in the I/O queue; in other words, for 85.71% of the time the I/O system has nothing to do, and all 29 I/O requests are processed within 142 milliseconds.

delta(ruse + wuse)/delta(io) = await = 78.21 => delta(ruse + wuse)/s = 78.21 * delta(io)/s = 78.21 * 28.57 = 2232.8, which shows that the I/O requests issued in one second need to wait a total of 2232.8 ms. The average queue length should therefore be 2232.8 ms / 1000 ms = 2.23, yet the average queue length (avgqu-sz) reported by iostat is 22.35. Why?! Because that version of iostat has a bug: the avgqu-sz value should be 2.23, not 22.35.
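The arithmetic in this example can be checked with a few lines (a sketch; the small difference from the article's 2232.8 comes from the rounding of the printed iostat values).

# Check the arithmetic of the example above using the printed iostat values.
r_s, w_s = 1.02, 27.55
svctm_ms, await_ms, util_pct = 5.00, 78.21, 14.29

iops = r_s + w_s                                    # 28.57 I/O operations per second
wait_if_together = svctm_ms * sum(range(29)) / 29   # 5 * (1 + 2 + ... + 28) / 29 = 70 ms
total_wait_ms = await_ms * iops                     # ~2234 ms of waiting accumulated per second
true_avgqu_sz = total_wait_ms / 1000                # ~2.23, not the 22.35 printed by the buggy iostat

print(iops, wait_if_together, round(total_wait_ms, 1), round(true_avgqu_sz, 2))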


Origin www.cnblogs.com/fanweisheng/p/11109027.html