Analyzing high CPU usage on Linux

First, a thread may be occupying CPU resources. The steps below locate it (a consolidated command sketch follows the list).

1. First check the process status with ps and find the PID of the process (e.g. 8209).

2. jstack -l 8209 > /usr/local/work/tomcat/8209.stack exports the thread information of that PID to a file.

3. Download the exported stack file and analyze it locally (it can be opened as plain text).

4. Use top -H -p 8209 to check which thread of the process has excessively high CPU usage (e.g. 8308).

5.printf "% x \ n" 8308 convert decimal to hexadecimal here: 2074.

6. Search the exported stack file for the thread whose ID equals nid=0x2074, i.e. the entry in the thread list that points to the problematic class.
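Putting the steps together, a minimal shell sketch could look like this (PID 8209, thread 8308 and the output path are the example values used above; the grep context size is an arbitrary choice):

    # 1. Find the PID of the target process (a Tomcat instance in this example)
    ps -ef | grep tomcat

    # 2. Dump all thread stacks of PID 8209 to a file
    jstack -l 8209 > /usr/local/work/tomcat/8209.stack

    # 4. List per-thread CPU usage inside the process; note the hottest thread ID (e.g. 8308)
    top -H -p 8209

    # 5. Convert the thread ID to hexadecimal (8308 -> 2074)
    printf "%x\n" 8308

    # 6. Locate that thread in the stack dump, with some surrounding context
    grep -A 20 "nid=0x2074" /usr/local/work/tomcat/8209.stack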

Once the thread is located and the problematic class is identified, the possible causes include the following (a quick sampling check to help distinguish them is sketched after the list):

    1. The program is compute-intensive (e.g. large matrix calculations)

    2. The program has entered an infinite loop (e.g. a runaway while loop, or the classic HashMap infinite loop under concurrent modification)

    3. The program has logic or structural problems (database connections not released, connection pool deadlock, a spin lock held continuously while occupying memory)
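One rough way to tell these cases apart is to sample the suspect thread's stack a few times: if it stays RUNNABLE in the same frames, that suggests a busy loop or heavy computation, while repeatedly being blocked on a lock or connection points to a pool or locking problem. The loop count, sleep interval and nid below are illustrative values only:

    # Sample the stack of thread nid=0x2074 three times, two seconds apart
    for i in 1 2 3; do
        jstack 8209 | grep -A 15 "nid=0x2074"
        sleep 2
    done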

Second, check whether disk usage is outside the normal range (df).
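For example (the -h flag only makes the sizes human-readable):

    # Disk usage per filesystem; watch for partitions approaching 100% in the Use% column
    df -h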

Third, check whether memory usage exceeds the normal range (free).
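For example:

    # Total, used and free memory plus swap, in megabytes; heavy swap usage is a warning sign
    free -m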

Fourth, the JVM memory state may be abnormal; the cause may be frequent GC. The heap proportions can be adjusted appropriately.
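To confirm frequent GC, and to adjust the heap proportions if needed, something along these lines could be used; the flag values and the app.jar name are illustrative assumptions, not recommendations from the original article:

    # Sample GC statistics of PID 8209 every second, 10 times;
    # rapidly growing YGC/FGC counts indicate frequent GC
    jstat -gcutil 8209 1000 10

    # Illustrative startup flags sizing the heap and the young/old generation ratio
    java -Xms2g -Xmx2g -XX:NewRatio=2 -XX:SurvivorRatio=8 -jar app.jar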

CPU utilization: 1 - (CPU idle time / total running time)

CPU load (load average): the number of processes occupying the CPU plus the number of processes waiting for the CPU over a period of time (processes in a runnable state, not in a sleep/wait state); it corresponds to the length of the CPU run queue.

If CPU utilization is low but the load is high, there may be many IO-intensive tasks.
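Both figures can be checked on a live system, for example (the sampling interval and count are arbitrary):

    # 1-, 5- and 15-minute load averages
    uptime

    # "id" is the idle percentage (utilization is roughly 100 - id);
    # a high "wa" (IO wait) with high load but low "us"/"sy" points to IO-intensive tasks
    vmstat 1 5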

Original address: https://blog.csdn.net/qiuchaoxi/article/details/81296713
