A Spring Boot failure postmortem: unresponsive interface → CPU 100% → logs not written → disk full

In the morning, a colleague reported that the system could not be accessed.

The following records the troubleshooting process.

Testing one of the interfaces confirmed the report: the interface was not responding.
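
A quick way to confirm an unresponsive endpoint is a curl with a timeout; the host, port, and path here are hypothetical placeholders for the actual service:

    # Hypothetical endpoint; substitute your service's host, port, and path.
    # -m 5 aborts after 5 seconds instead of hanging forever.
    curl -m 5 -s -o /dev/null -w "HTTP %{http_code}, total %{time_total}s\n" \
      http://localhost:8080/api/health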

 

First, look at the application log.

tail -f shows that nothing new is being appended to the log. htop shows both CPU cores of the system at 100%, and the Java application is the process occupying them.
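
For reference, the checks here were along these lines; the log path is a hypothetical placeholder:

    # Watch the log for new lines (none appeared).
    tail -f /var/log/myapp/app.log    # hypothetical path
    # Inspect per-core and per-process CPU usage.
    htop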

The next step is to analyze what the Java process is doing to occupy the CPU.

First, a summary of the general approach:

When server CPU usage stays very high, even hitting 100%, there are several ways to locate the cause.

Method one (reprinted from http://www.linuxhot.com/java-cpu-used-high.html):

1. jps — get the PID of the Java process.
2. jstack PID >> java.txt — export the thread stacks of the process.
3. top -H -p PID — see which thread of that process is using too much CPU.
4. echo "obase=16; TID" | bc — convert the offending thread ID to hexadecimal, and change the uppercase output to lowercase.
5. Search the java.txt exported in step 2 for that hexadecimal thread ID and find the corresponding thread stack.
6. Analyze what business operation the high-load thread is performing; optimize the code and fix the problem.

Method two:

1. Use top to locate the PID of the process with high CPU usage, and confirm it with ps aux | grep PID.
2. Get the thread information and find the thread with high CPU usage: ps -mp PID -o THREAD,tid,time | sort -rn
3. Convert that thread ID to hexadecimal: printf "%x\n" TID
4. Print the stack of that thread: jstack PID | grep <hex TID> -A 30

Method three:

1. Confirm the over-occupied process: ps -ef | grep mem-* and note its PID.
2. View the threads of that process: top -Hp PID
3. Dump the thread stacks: jstack -l PID (e.g. jstack -l 21113)
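
Put together as a shell session, method one looks roughly like this; the TID value is a hypothetical placeholder for whatever thread ID top actually shows:

    PID=$(jps | grep -v Jps | awk '{print $1}')   # PID of the Java app (assumes one JVM on the box)
    jstack "$PID" > java.txt                      # dump all thread stacks
    top -H -p "$PID"                              # note the TID of the hottest thread
    TID=12345                                     # hypothetical: the thread ID read from top
    NID=$(printf '%x' "$TID")                     # jstack records thread IDs as lowercase hex "nid"
    grep -A 30 "nid=0x$NID" java.txt              # locate that thread's stack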

The specific operations on this server:

jps — get the PID of the Java process:

    8783

jstack pid >> java.txt — export the thread stacks. Using jstack on the Java PID found above prints the thread stack information.
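
Concretely, with the PID from jps:

    jstack 8783 >> java.txt    # append the full thread dump to java.txt
    less java.txt              # inspect the stacks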

No abnormalities???

Go back to the log and look at the last 100 lines.
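
That is (the log path is again a hypothetical placeholder):

    tail -n 100 /var/log/myapp/app.log    # hypothetical path: show the last 100 lines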

Wow: a log write error was reported right before logging stopped. The log could not be written in, which is probably a disk problem.

Sure enough, the disk is full.
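
The check is simply:

    df -h    # human-readable usage for every mounted filesystem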

Now see what is occupying the space, and look inside.

After deleting the space-hogging application logs, the /a** partition had capacity again, but the root directory was still at 99%. The output of du -sh /* showed that /tmp was taking up a lot, so check whether /tmp's mount point is on the same partition as /: df -h /tmp. The result is as follows.
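
The two commands used here:

    du -sh /*     # summarize the size of each top-level directory
    df -h /tmp    # show which filesystem /tmp is mounted on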

After clearing the temporary files in the /tmp directory:
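
One cautious way to clear it, assuming the old files are safe to delete (check first, since running services may still hold files in /tmp):

    # Print, then delete, regular files in /tmp untouched for more than a day.
    find /tmp -type f -mtime +1 -print -delete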

OK, restart the application, and everything is back to normal.
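
How the restart looks depends on the deployment; under systemd it would be something like this, with a hypothetical unit name:

    sudo systemctl restart myapp.service   # hypothetical unit name
    sudo systemctl status myapp.service    # confirm the service came back up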

My God, after going around this big circle, it turns out to be just a disk problem. And the clue was there from the start: looking at the application log, you could already see that nothing was being appended...


Origin: www.cnblogs.com/timseng/p/12718999.html