JMeter pressure test: "java.net.SocketException: Socket closed" solution

 

Background:

Today, while running a JMeter pressure test against an interface, I observed the following:

With 100 threads and a loop count of 20, the first 40 or so requests succeeded, but most of the subsequent requests failed, with only the occasional success scattered among the errors. Meanwhile, the server CPU only rose to 6-7%.

The failures fell into two kinds of errors:

Error 1:

java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:850)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:561)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:67)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1282)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1271)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:627)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:551)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:490)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:257)
at java.lang.Thread.run(Unknown Source)

Searching on Baidu revealed that a java.net.SocketException: Socket closed error is usually caused by not setting a connection timeout.

Solution:

The problem can be mitigated as follows.

If Use KeepAlive is checked on the Basic tab of the HTTP Request sampler, then under the Advanced tab it is recommended to:

1. Set Implementation to HttpClient4.

2. Set Connect under Timeouts to a value of 10 to 60 seconds. This is the connection timeout, and it helps avoid the connection being dropped when the load generator receives no response to the Keep-Alive header.

Note that this field is in milliseconds, so for example 15 seconds = 15 * 1000 = 15000 ms.
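For reference, the same timeouts can be inspected in the saved .jmx test plan. The property names below are what (to my recollection) JMeter writes for the Advanced-tab timeout fields; the values are just examples, so verify against a plan saved by your own JMeter version:

```xml
<!-- Fragment of an HTTP Request sampler in a saved .jmx file.
     Both timeout values are in milliseconds. -->
<stringProp name="HTTPSampler.connect_timeout">15000</stringProp>
<stringProp name="HTTPSampler.response_timeout">60000</stringProp>
```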

 

After applying the settings above and running the pressure test again, the error still occurred.

Searching Baidu again led to the JMeter wiki page on this error:

https://cwiki.apache.org/confluence/display/jmeter/JMeterSocketClosed
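As far as I recall, that wiki page suggests letting JMeter retry a request once and check for stale pooled connections, configured via JMeter properties. A sketch, assuming JMeter 3.x-era property names (verify them against the linked page and your version's jmeter.properties):

```properties
# user.properties (in JMeter's bin directory)
# Retry a request once if the pooled connection was closed by the server:
httpclient4.retrycount=1
# Load extra HttpClient parameters from the hc.parameters file:
hc.parameters.file=hc.parameters
```

and in hc.parameters:

```properties
# Check whether a pooled connection is stale before reusing it:
http.connection.stalecheck$Boolean=true
```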


Error 2:

<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.10.3</center>
</body>
</html>



We have not verified the following.

The first solution found on Baidu:

A 502 is generally caused by slow responses from nginx's fastcgi upstream processes, but there are other possibilities. Some solutions are summarized below for reference.

 

  1. Case 1: nginx's default fastcgi response buffer is too small.

           In this case the fastcgi process hangs. If the fastcgi service does not handle the hang well, a "504 Gateway Time-out" error may be returned.

  2. Solution for case 1:

           The default fastcgi response buffer is 8K. We can enlarge it by adding the following to nginx.conf: fastcgi_buffers 8 128k;

           This sets the fastcgi buffer to 8 blocks of 128k each.

  3. Solution for case 1 (improvement):

           If the problem persists after the change above, also increase nginx's send timeout, for example to 60 seconds:

           send_timeout 60;

           After adjusting these two parameters, the "504 Gateway Time-out" error no longer appeared, so the problem was basically solved.

  4. Case 2: PHP environment configuration problem.

           Here both the php-fpm and nginx configurations need to be modified; this case can also produce a "504 Gateway Time-out" error.

  5. Solution for case 2 (php-fpm changes):

          Increase max_children from 10 to 30, to ensure enough php-cgi processes are available.

          Increase request_terminate_timeout from 0 to 60 seconds, giving each php-cgi process up to 60 seconds to run a script; this prevents processes from hanging and improves utilization.

  6. Solution for case 2 (nginx changes):

          To reduce the number of fastcgi requests while keeping the total buffer space roughly unchanged, change the following nginx settings:

          fastcgi_buffers from 4 64k to 2 256k;

          fastcgi_buffer_size from 64k to 128k;

          fastcgi_busy_buffers_size from 128k to 256k;

          fastcgi_temp_file_write_size from 128k to 256k.

  7. After making the changes for case 2, reload the php-fpm and nginx configurations and test again. No further "504 Gateway Time-out" errors were seen.
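The nginx and php-fpm changes from the steps above can be sketched as config fragments. File locations and the exact key spellings vary by distribution and php-fpm version, so treat these as illustrations rather than drop-in files:

```nginx
# nginx.conf, inside the http {} (or server {}) block
fastcgi_buffers              2 256k;  # was 4 64k
fastcgi_buffer_size          128k;    # was 64k
fastcgi_busy_buffers_size    256k;    # was 128k
fastcgi_temp_file_write_size 256k;    # was 128k
send_timeout                 60;      # seconds
```

```ini
; php-fpm pool config (e.g. www.conf); older php-fpm spells it max_children
pm.max_children           = 30
request_terminate_timeout = 60s
```

Reload both services afterwards (e.g. nginx -s reload, plus a php-fpm reload or restart) for the changes to take effect.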

 

https://jingyan.baidu.com/article/6fb756ecbf4774241858fb9a.html

 

Origin www.cnblogs.com/yiyaxuan/p/12673496.html