iostat Parameter Description

I had rarely used this tool before. Recently I studied iostat carefully, because it happened that a critical server was under heavy load, so I am posting the analysis here. The output below comes from a server whose I/O pressure is too high.
  # iostat -x 1 10
  Linux 2.6.18-92.el5xen    02/03/2009
  avg-cpu:  %user   %nice %system %iowait  %steal   %idle
             1.10    0.00    4.82   39.54    0.07   54.46
  Device:  rrqm/s  wrqm/s    r/s    w/s    rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
  sda        0.00    3.50   0.40   2.50      5.60    48.00    18.48     0.00    0.97   0.97   0.28
  sdb        0.00    0.00   0.00   0.00      0.00     0.00     0.00     0.00    0.00   0.00   0.00
  sdc        0.00    0.00   0.00   0.00      0.00     0.00     0.00     0.00    0.00   0.00   0.00
  sdd        0.00    0.00   0.00   0.00      0.00     0.00     0.00     0.00    0.00   0.00   0.00
  sde        0.00    0.10   0.30   0.20      2.40     2.40     9.60     0.00    1.60   1.60   0.08
  sdf       17.40    0.50 102.00   0.20  12095.20     5.60   118.40     0.70    6.81   2.09  21.36
  sdg      232.40    1.90 379.70   0.50  76451.20    19.20   201.13     4.94   13.78   2.45  93.16
  rrqm/s: number of merged read requests per second, i.e. delta(rmerge)/s
  wrqm/s: number of merged write requests per second, i.e. delta(wmerge)/s
  r/s: number of read I/O operations completed per second, i.e. delta(rio)/s
  w/s: number of write I/O operations completed per second, i.e. delta(wio)/s
  rsec/s: number of sectors read per second, i.e. delta(rsect)/s
  wsec/s: number of sectors written per second, i.e. delta(wsect)/s
  rkB/s: kilobytes read per second; this is half of rsec/s, since each sector is 512 bytes. (Derived value)
  wkB/s: kilobytes written per second; this is half of wsec/s. (Derived value)
  avgrq-sz: average size of each device I/O operation, in sectors, i.e. delta(rsect+wsect)/delta(rio+wio)
  avgqu-sz: average I/O queue length, i.e. delta(aveq)/s/1000 (because aveq is reported in milliseconds)
  await: average wait time of each device I/O operation, in milliseconds, i.e. delta(ruse+wuse)/delta(rio+wio)
  svctm: average service time of each device I/O operation, in milliseconds, i.e. delta(use)/delta(rio+wio)
  %util: percentage of the second during which the device was busy with I/O, i.e. the fraction of time the I/O queue was non-empty: delta(use)/s/1000 (because use is reported in milliseconds)
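  To make the formulas above concrete, here is a minimal sketch (not from the original article) that derives these metrics from two samples of the per-device counters in /proc/diskstats, whose fields correspond to the rio/rmerge/rsect/ruse/wio/wmerge/wsect/wuse/use/aveq counters used above. The device name and the one-second interval are illustrative:

  import time

  def read_counters(dev):
      # /proc/diskstats: major minor name, then 11 counters:
      # rio rmerge rsect ruse wio wmerge wsect wuse inflight use aveq
      with open("/proc/diskstats") as f:
          for line in f:
              parts = line.split()
              if parts[2] == dev:
                  return [int(x) for x in parts[3:14]]
      raise ValueError("device not found: " + dev)

  def iostat_once(dev="sda", interval=1.0):
      a = read_counters(dev)
      time.sleep(interval)
      b = read_counters(dev)
      (rio, rmerge, rsect, ruse, wio, wmerge,
       wsect, wuse, _, use, aveq) = [y - x for x, y in zip(a, b)]
      ios = rio + wio or 1                       # avoid dividing by zero on an idle disk
      print("rrqm/s   =", rmerge / interval)
      print("wrqm/s   =", wmerge / interval)
      print("r/s      =", rio / interval)
      print("w/s      =", wio / interval)
      print("rkB/s    =", rsect / 2 / interval)  # a sector is 512 bytes
      print("wkB/s    =", wsect / 2 / interval)
      print("avgrq-sz =", (rsect + wsect) / ios) # sectors per I/O
      print("avgqu-sz =", aveq / interval / 1000)  # aveq is in milliseconds
      print("await    =", (ruse + wuse) / ios)   # ms per I/O, queueing + service
      print("svctm    =", use / ios)             # ms per I/O, service only
      print("%util    =", use / interval / 10)   # busy ms per second -> percent

  iostat_once("sda")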
  If %util is close to 100%, the device is receiving more I/O requests than it can handle; the I/O system is running at full capacity, and the disk may be the bottleneck.
  If idle is below 70%, I/O pressure is already significant; reads will generally spend a fair amount of time waiting.
  You can cross-check this with vmstat: the b column shows the number of processes waiting for resources, and the wa column shows the percentage of CPU time spent waiting on I/O (above 30% indicates heavy I/O pressure).
  It is also worth looking at the relationship between svctm and await. svctm is generally smaller than await (because the wait time of requests queued at the same time is counted repeatedly in await). svctm mostly depends on the performance of the disk, although CPU/memory load also affects it, and too many requests will indirectly push svctm up. await depends on the service time (svctm), the I/O queue length, and the pattern in which requests are issued. If svctm is close to await, the I/O experiences almost no waiting; if await is much larger than svctm, the I/O queue is too long and applications will respond more slowly. If response times exceed what users can tolerate, consider replacing the disk with a faster one, tuning the kernel elevator (I/O scheduler) algorithm, optimizing the application, or upgrading the CPU.
  Queue length (avgqu-sz) can also serve as a metric of system I/O load, but since avgqu-sz is an average over the sampling interval, it cannot reflect instantaneous I/O floods.
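  The rules of thumb above can be summed up in a small sketch. The thresholds for %util and wa come from this article; the function name and the factor of 3 used for "await much larger than svctm" are my own illustrative choices, not fixed standards:

  def diagnose(util_pct, iowait_pct, await_ms, svctm_ms):
      findings = []
      if util_pct >= 95.0:
          findings.append("%util near 100%: disk saturated, likely the bottleneck")
      if iowait_pct > 30.0:
          findings.append("wa above 30%: heavy I/O pressure")
      if svctm_ms > 0 and await_ms > 3 * svctm_ms:
          findings.append("await >> svctm: queue too long, applications slow down")
      return findings or ["I/O looks healthy"]

  # Applied to sdg from the listing above (%iowait taken from the avg-cpu line):
  print(diagnose(util_pct=93.16, iowait_pct=39.54, await_ms=13.78, svctm_ms=2.45))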
  Someone else gave a good analogy (the I/O system vs. a supermarket queue):
   For example, when we queue at a supermarket checkout, how do we decide which lane to join? The first thing we look at is the number of people in line: a queue of five is surely faster than one of twenty, right? Besides the head count, we often also look at how much the people in front are buying; if an auntie ahead of us has bought a week's worth of groceries, it may be worth switching to another line. Then there is the cashier's speed: if you run into a novice who cannot even count the money straight, you will be waiting a while. Timing also matters: a checkout that was packed five minutes ago may be empty now, and paying at that moment is great, provided of course that what you did in those five minutes was more meaningful than queuing (though I have yet to find anything more boring than queuing).
  An I/O queue system has many similarities with a supermarket queue:
  r/s + w/s is like the total number of customers who have paid
  average queue length (avgqu-sz) is like the average number of people in line per unit time
  average service time (svctm) is like the cashier's collection speed
  average wait time (await) is like the average wait per customer
  average I/O size (avgrq-sz) is like the average amount each customer buys
  I/O utilization (%util) is like the proportion of time someone is standing at the checkout
  From these figures we can infer the I/O request pattern and the speed and response time of the I/O subsystem.
  The following is someone else's analysis of an iostat output:
  # iostat -x 1
  avg-cpu:  %user   %nice    %sys   %idle
            16.24    0.00    4.31   79.44
  Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
  /dev/cciss/c0d0
             0.00   44.90   1.02  27.55    8.16  579.59    4.08  289.80    20.57    22.35   78.21   5.00  14.29
  /dev/cciss/c0d0p1
             0.00   44.90   1.02  27.55    8.16  579.59    4.08  289.80    20.57    22.35   78.21   5.00  14.29
  /dev/cciss/c0d0p2
             0.00    0.00   0.00   0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
  The iostat output above shows that the device performs 28.57 I/O operations per second: total IO/s = r/s (reads) + w/s (writes) = 1.02 + 27.55 = 28.57 (operations/second), the overwhelming majority of which are writes (w:r = 27:1).
  Each device I/O operation takes only 5 ms on average to complete, yet each I/O request waits 78 ms. Why? Because the I/O requests are issued in large batches (about 29 per second). Assuming those requests are issued at the same instant, the average wait time can be calculated as:
  average wait time = single I/O service time * (1 + 2 + ... + (total requests - 1)) / total requests
  Applied to this example: average wait time = 5 ms * (1 + 2 + ... + 28) / 29 = 70 ms, which is very close to the 78 ms average wait reported by iostat. This in turn suggests that the I/O requests are indeed issued essentially simultaneously.
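  A quick check of that arithmetic (the batch-arrival assumption is the article's; the variable names are mine):

  svctm = 5.0                      # ms per I/O, from the iostat output
  n = round(1.02 + 27.55)          # total requests per second: r/s + w/s ≈ 29
  # If all n requests arrive at once and are served one by one, request k
  # waits k * svctm ms, so the mean wait is svctm * (0 + 1 + ... + (n-1)) / n.
  mean_wait = svctm * sum(range(n)) / n
  print(mean_wait)                 # 70.0 ms, close to the reported await of 78.21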
  Although many I/O requests are issued each second (about 29), the average queue length is small (only around 2), which indicates that those 29 requests do not arrive uniformly: the I/O system is idle most of the time.
  There is I/O in the request queue for only 14.29% of each second; in other words, the I/O system has nothing to do 85.71% of the time, and all 29 I/O requests are processed within about 142 milliseconds.
   delta(ruse+wuse)/delta(io) = await = 78.21 => delta(ruse+wuse)/s = 78.21 * delta(io)/s = 78.21 * 28.57 = 2232.8, meaning the I/O requests issued each second accumulate a total of 2232.8 ms of waiting. The average queue length should therefore be 2232.8 ms / 1000 ms = 2.23, yet the avgqu-sz reported by iostat is 22.35. Why?! Because this version of iostat has a bug: the avgqu-sz value should be 2.23, not 22.35.
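  The same check in code (variable names are mine; the reasoning is the article's):

  await_ms = 78.21                   # average wait per I/O, from iostat
  iops = 1.02 + 27.55                # r/s + w/s = 28.57
  total_wait_ms = await_ms * iops    # ≈ 2233 ms of waiting accumulated per second
  print(total_wait_ms / 1000)        # ≈ 2.23 = expected avgqu-sz, not 22.35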


Source: www.cnblogs.com/dongzhiquan/p/iostat.html