Tracking down a front-end POST request that returns 500 (from the interface, to nginx, to the server)

1. Problem background

The project involved is an old one that had been running for years. The front end suddenly reported that a certain POST interface returned 500 when saving specific content, with no response body at all. This post records the process of tracking the problem down.

2. The process of locating the problem

1. The request payload was about 15,000+ characters long, so the first suspect was the length limit of the database field.

Checking the table definition showed that the long field is of TEXT type, which can hold up to 65,535 (2^16 − 1) bytes, so this guess was ruled out.
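For reference, the column definition can also be verified programmatically through JDBC metadata. This is only a sketch: the connection URL, credentials, table name, and column name are placeholders, and it assumes a MySQL driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ColumnCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the real schema belongs to the project.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app_db", "user", "password");
             ResultSet rs = conn.getMetaData()
                     .getColumns(null, null, "content_table", "content")) {
            while (rs.next()) {
                // TYPE_NAME should report TEXT, COLUMN_SIZE its maximum length.
                System.out.println(rs.getString("TYPE_NAME")
                        + ", max length " + rs.getInt("COLUMN_SIZE"));
            }
        }
    }
}
```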

2. Next suspect: the backend interface logic itself. (It had run without problems for several years, so this seemed unlikely.)

I wrote a unit test that simulated the user context and called the relevant service method with exactly the same payload. It ran without any problem, which ruled out the interface logic (a minimal sketch of such a test is shown below).
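A rough sketch of that kind of isolation test. The service and DTO names here are hypothetical stand-ins for the real pigx-based classes, which the original post does not show:

```java
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class SaveContentServiceTest {

    @Autowired
    private ContentService contentService; // hypothetical service behind the failing POST

    @Test
    void saveLargePayloadDirectly() {
        // Same order of magnitude as the real payload (about 15,000 characters).
        ContentDTO dto = new ContentDTO();
        dto.setBody("a".repeat(15_000));

        // Call the service method directly, bypassing nginx and the HTTP layer entirely.
        assertDoesNotThrow(() -> contentService.save(dto));
    }
}
```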

3. Next suspect: the framework limiting the amount of data the POST interface will accept. (Under normal business use this interface only ever receives payloads below 10 KB.)

The project is built on the pigx framework. I went through every service on the request chain and found no maxPostSize or max-http-post-size configuration anywhere, which ruled out a framework-imposed limit.
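For context, if one of the Spring Boot services had carried such a limit, it would typically look like the sketch below, or like a server.tomcat.max-http-post-size entry in application.yml. This is illustrative only and not configuration taken from the project:

```java
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PostSizeConfig {

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> postSizeCustomizer() {
        // Tomcat's built-in default for maxPostSize is 2 MB; setting it lower here
        // would cap the request bodies the embedded container accepts.
        return factory -> factory.addConnectorCustomizers(
                connector -> connector.setMaxPostSize(2 * 1024 * 1024));
    }
}
```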

4. Back to the question of payload size.

I used Postman to simulate the request, filling the suspect field with a run of "a" characters, first 8,000 and then 12,000 characters long. The 8,000-character payload went through normally while the 12,000-character one failed, which pinned the problem on the size of the POST body.
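The same bisection can also be scripted instead of clicked through in Postman. The sketch below uses the JDK's built-in HttpClient; the URL and the JSON field name are placeholders rather than the real interface:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostSizeProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (int length : new int[] {8_000, 12_000}) {
            // Fill the suspect field with a fixed-length run of "a" characters.
            String body = "{\"content\":\"" + "a".repeat(length) + "\"}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/save")) // placeholder URL
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(length + " chars -> HTTP " + response.statusCode());
        }
    }
}
```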

5. With the backend ruled out, the next suspect was nginx restricting POST requests.

By default Tomcat rejects POST bodies larger than 2 MB and nginx rejects bodies larger than 1 MB, yet the failure here kicked in at roughly 10 KB, far below either default. I checked client_max_body_size and client_body_buffer_size in nginx.conf and found the limiting value set to 2000M (not a very sensible setting, but clearly not the cause of this problem).


6. At this point it was basically certain that nginx was the problem, but where exactly was still unclear.

I went through nginx's access.log and error.log looking for entries related to the error.


What I did find was that the nginx logs had grown to 22 GB. At that moment the cause suddenly seemed obvious, and I immediately checked the server's disk usage.

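On the server this check was nothing fancier than running du -sh on the nginx log directory and df -h for the filesystem; the snippet below shows the same idea in Java, with the log path as an assumption, purely to keep all the examples in one language:

```java
import java.io.File;

public class DiskCheck {
    public static void main(String[] args) {
        // Assumed log location; equivalent to `du -sh /var/log/nginx/*` on the box.
        File logDir = new File("/var/log/nginx");
        File[] logs = logDir.listFiles();
        if (logs != null) {
            for (File log : logs) {
                System.out.printf("%-20s %,d bytes%n", log.getName(), log.length());
            }
        }
        // Equivalent to `df -h /`: how much space is left on the root filesystem.
        File root = new File("/");
        System.out.printf("free on / : %,d of %,d bytes%n",
                root.getUsableSpace(), root.getTotalSpace());
    }
}
```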

7. The root cause: the disk was full!

To forward a request body larger than about 10 KB, nginx first buffers it to a temporary file on disk (the client_body_buffer_size mechanism), and the disk had happened to fill up that very morning. That is also why the error log offered no clues: it simply stopped being written once the disk was full, ending at this morning. Of course, once the cause is found, the fix is easy:

Delete nginx's access.log and error.log, then restart nginx so the space is actually released (a deleted log file that a running nginx still holds open keeps occupying disk space until the process lets go of it), and retest the interface. Everything was back to normal. Done!

3. Summary

Looking back after finding the real cause, a lot of the intermediate work feels wasted, but odd failures like this do have to be narrowed down step by step. Fixing the problem takes hardly any time; locating it is what eats the time.

Origin blog.csdn.net/xinleiweikai/article/details/131709598