Load Testing Services

1. Check the number of physical CPUs

cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l

2. Check the number of cores in each physical CPU

cat /proc/cpuinfo | grep "cpu cores" | uniq

3. Check the number of logical CPUs (= number of physical CPUs × cores per CPU)

 

cat /proc/cpuinfo | grep "processor" | wc -l
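
For illustration only (the numbers below are hypothetical, assuming a server with 2 physical CPUs of 4 cores each and no hyper-threading), the three commands might return:

cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
2
cat /proc/cpuinfo | grep "cpu cores" | uniq
cpu cores : 4
cat /proc/cpuinfo | grep "processor" | wc -l
8

That is, 2 physical CPUs × 4 cores = 8 logical CPUs; with hyper-threading enabled the last number would be 16 instead.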

4. Review the current system load

Use the command: w

The first line shows, from left to right: the current time, how long the system has been up, the number of logged-in users, and the load average.

The second line and those below it tell us which users are currently logged in, where they logged in from, and so on.

load average: the last three values on the first line:

The first number is the average system load over the last 1 minute;

The second number is the average system load over the last 5 minutes;

The third number is the average system load over the last 15 minutes.

Roughly speaking, this value is the number of active processes per unit of CPU time over that period, so the larger the value, the greater the pressure on the server. Under normal circumstances it does not matter as long as the value does not exceed the number of CPUs in the server: if the server has 8 CPUs and the value is below 8, the server is under no pressure; otherwise we need to investigate.
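
As an illustration, the output of w on a hypothetical server (all values below are made up) could look like this:

 10:23:01 up 21 days,  3:14,  2 users,  load average: 0.15, 0.14, 0.11
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    192.168.1.10     09:50    1:02   0.05s  0.05s -bash
tom      pts/1    192.168.1.11     10:01    3.00s  0.02s  0.01s w

Here the 1-, 5- and 15-minute load averages are all well below the number of CPUs, so the server is under no pressure.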

5. Monitor the system status

Use the command: vmstat

The w command above shows the load of the system as a whole; from that value you can tell whether the current system is under pressure, but you cannot tell where exactly the pressure is (CPU, memory, disk, etc.). With vmstat you can see specifically where the pressure is. The output of the vmstat command is divided into six parts: procs, memory, swap, io, system, cpu:
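
For reference, the column header of a typical vmstat output looks roughly like this (the exact columns can vary slightly between vmstat versions):

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st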

1) procs: process information

r: the number of processes running or waiting for a CPU time slice; if this stays greater than the number of CPUs in the server for a long time, the CPUs are insufficient;

b: the number of processes waiting for resources such as I/O or memory; if this value stays greater than 1 for a long time, you need to investigate;

2) memory: memory-related information

swpd: the amount of memory that has been swapped out to the swap area;

free: the amount of memory currently free;

buff: the size of the buffers (data about to be written to disk);

cache: the size of the cache (data read from disk);

3) swap: memory swapping

si: the amount of data read into memory from the swap area;

so: the amount of data written from memory to the swap area;

4) io: disk usage

bi: the amount of data read from block devices (disk reads);

bo: the amount of data written to block devices (disk writes);

5) system: interrupts and context switches during the collection interval

in: the number of device interrupts per second observed in the interval;

cs: the number of context switches per second;

6) cpu: CPU usage

us: the percentage of CPU time spent in user mode;

sy: the percentage of CPU time spent in system (kernel) mode;

id: the percentage of CPU time spent idle;

wa: the percentage of CPU time spent waiting for I/O;

st: the percentage of CPU time stolen by the hypervisor (usually 0; not a concern);

note: 

When we use vmstat to check the system status, we usually run it in one of these two forms:

vmstat 1 5
vmstat 1

The first form prints the state once every second, five times in total; the second prints the state once every second and keeps printing until we press Ctrl+C.
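
As a rough illustration, vmstat 1 5 on a lightly loaded machine might print something like this (all numbers here are invented):

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 812340 102400 523400    0    0     5    12  120  240  3  1 95  1  0
 0  0      0 812300 102400 523420    0    0     0     8  110  230  2  1 96  1  0

Here r and b are low, si and so are 0, and id is high, so neither the CPU, the memory nor the disks are under pressure.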

 

 

 

Viewing the server load with the top command
Before analyzing Linux server performance, you need to understand what the load average means on a Linux system. The load average can be seen with the uptime or top commands, and it is displayed like this: load average: 0.15, 0.14, 0.11
Most people understand the load average this way: the three numbers represent the average load of the system over different periods of time (one minute, five minutes and fifteen minutes), and of course the smaller they are, the better. The higher the numbers, the heavier the load on the server, which may also be a signal that the server has some kind of problem.
A single-core processor can be pictured as a single-lane bridge. If there are no vehicles waiting ahead, the toll keeper can wave the drivers behind straight through. If there are many vehicles, they may need to be told to wait a while.
So specific values describe the current traffic situation, for example:
  0.00 means there is no traffic on the bridge at all. In fact, anything between 0.00 and 1.00 is the same: traffic is completely smooth, and passing vehicles do not have to wait.
  1.00 means the bridge is exactly at its capacity. The situation is not too bad yet, but some traffic will be held up, and it may get slower and slower.
  Above 1.00 means the bridge is overloaded and traffic is seriously congested. How bad is it? For example, 2.00 means the traffic is twice what the bridge can carry, so an extra bridge's worth of vehicles is anxiously waiting. 3.00 is even worse: the bridge is basically at its limit, and twice as many vehicles as the bridge can carry are still waiting to cross.
The situation above is very similar to processor load. The time a car takes to cross the bridge corresponds to the real time a thread spends on the processor. What the Unix system counts as the run length is the processes being handled by all the processor cores plus the threads waiting in the queue.
As the administrator collecting the toll, you certainly do not want your cars (operations) to be stuck anxiously waiting. So, ideally, a load average below 1.00 is what you want. Occasional peaks above 1.00 cannot be ruled out, but if the system stays in that state over the long run, it means there is a problem, and you should be very worried.
"So you are saying the ideal load is 1.00?"
Well, not entirely. 1.00 means the system has no spare resources left. In practice, experienced system administrators draw the line at 0.70:
      "We need to be investigated rule": if long-term load your system down at 0.70, then you need to get worse before in things, take the time to understand why.
      "Now we must repair rule": 1.00. If your server system load long hovered at 1.00, then we should solve this problem immediately. Otherwise, you will receive your boss's phone the night, this not an enjoyable thing.
      "Half past three exercise rule": 5.00. If your server load exceeds the 5.00 figure, then you will lose your sleep, had to explain why this happened in the meeting, in short, do not let it happen.
What about multiple processors? "My load is 3.00, but the system is running fine!" Oh, you have a four-processor host? Then a load average of 3.00 is perfectly normal. On a multiprocessor system, the load is judged relative to the number of available processor cores. At 100% utilization, 1.00 means a single processor is fully loaded, 2.00 means two processors are, and 4.00 means a four-processor host is fully loaded.
Returning to the bridge and traffic analogy above: 1.00 meant "a single-lane road that is full." On a single-lane bridge, 1.00 means the bridge is already full of cars. On a dual-processor system, however, the same 1.00 means there is still 50% of the system's resources left, because there is another lane that can carry traffic.

So while a single processor is already fully loaded at 1.00, a dual-processor system is only filled to capacity at 2.00, because it has twice the resources available.
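
To apply the rules above on a multi-core machine, you can divide the 1-minute load average by the number of logical CPUs. A minimal shell sketch (the 0.70 threshold is the "need to investigate" line mentioned earlier):

cores=$(nproc)                            # number of logical CPUs
load1=$(awk '{print $1}' /proc/loadavg)   # 1-minute load average
echo "$load1 $cores" | awk '{ r = $1/$2; printf "per-core load: %.2f\n", r;
  if (r > 0.70) print "per-core load above 0.70 for a long time is worth investigating" }'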

 


Origin www.cnblogs.com/shen-qiang/p/11647149.html