Stress Testing a Server with JMeter

Hello everyone. I am the 32nd student of the Beijing branch of the IT Cultivation Institute, and an honest and kind programmer. Today I will share how to use JMeter to stress test your server effectively and guide its optimization:

1. JMeter

JMeter is a testing tool written entirely in Java. It is mainly used to measure a server's concurrency, response time, and throughput. The interface looks like this:


We can write HTTP requests by hand, or use an auxiliary tool to record a script. Here I recommend the recording tool Badboy: export the .jmx file and open it in JMeter, and the step1 and step2 shown in the figure above will appear and can be run directly.

Since we are going to run a stress test, we must understand what the parameters actually mean.

(1) JMeter test parameters:


Number of threads: the number of connections the test machine establishes with the target site. Because of the ramp-up period, the threads do not all start at once.

Waiting time (ramp-up period): my own understanding is that the last thread starts X seconds after the first one. Let's draw a picture to understand:


Think of these as our threads. Without loops, a thread stops as soon as it has finished sending its request and receiving the response. To sustain concurrency, we need to set a reasonable loop count and ramp-up time, so that for a period in the middle of the run the server is handling all threads concurrently, which gives a meaningful test result.
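The interaction of thread count, ramp-up time, and loop count described above can be sketched numerically. This is a rough illustration, not JMeter's actual scheduler; the thread, ramp-up, and loop values below are made up, not this test's settings:

```java
// Rough sketch of thread-group timing: with N threads and a ramp-up of R
// seconds, JMeter starts one new thread every R/N seconds, and every thread
// runs the full loop count. Numbers here are illustrative only.
public class RampUpSketch {
    // Start time (seconds) of thread i (0-based) under the given plan.
    static double startTime(int i, double rampUpSeconds, int threads) {
        return i * rampUpSeconds / threads;
    }

    // Total samples a plan produces: every thread runs every loop.
    static int totalSamples(int threads, int loops) {
        return threads * loops;
    }

    public static void main(String[] args) {
        int threads = 15, loops = 20;   // like the 15*20 runs below
        double rampUp = 5.0;            // seconds, illustrative
        for (int i = 0; i < threads; i++) {
            System.out.printf("thread %2d starts at %.2f s%n",
                    i + 1, startTime(i, rampUp, threads));
        }
        System.out.println("total samples = " + totalSamples(threads, loops)); // 300
    }
}
```

This also shows why 15 threads with 20 loops yields 300 samples per run in the aggregate report.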

Number of loops: the number of iterations each thread runs. There is a pitfall here: if you recorded the script with Badboy, the step element has its own loop count:


We need to adjust that loop count; the loop count in the thread-group dialog shown earlier has no effect on the step.

Scheduler: convenient for automating test runs. The start time and duration mean exactly what they say. I did not use the scheduler, so I won't comment further.

(2) Aggregate report fields:


samples: the number of requests sent.

average, median: the mean and median response times.

90% line, 95% line, 99% line: first clarify what xx% line means: it is a percentile. If 100 requests are sorted by response time in ascending order and mine ranks 90th, then my response time is the 90% line, meaning 90% of requests took less time than mine. This metric reflects the experience of most users under high concurrency.

min, max: the shortest and longest response times, i.e. the 1st and 100th positions in that ordering. All the times above are in milliseconds.

error: requests whose status code is not 200 (failed requests). There are many possible causes, which can be inspected in detail.

throughput: the number of requests processed per second.

received KB/s: the amount of data received per second.
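All of the fields above can be reproduced from raw sample times. A minimal sketch, using made-up latencies rather than data from this test (the percentile uses the simple nearest-rank method, which may differ slightly from JMeter's internal calculation):

```java
import java.util.Arrays;

// Compute aggregate-report style metrics from raw response times (ms).
public class AggregateSketch {
    // xx% line: the value below or at which xx% of samples fall (nearest rank).
    static long percentile(long[] sorted, double p) {
        int rank = (int) Math.ceil(p * sorted.length); // 1-based rank
        return sorted[rank - 1];
    }

    public static void main(String[] args) {
        long[] times = {120, 80, 200, 150, 90, 300, 110, 95, 130, 170}; // ms, illustrative
        Arrays.sort(times);

        long min = times[0];
        long max = times[times.length - 1];
        double average = Arrays.stream(times).average().orElse(0);
        long p90 = percentile(times, 0.90); // 200 with these numbers

        // throughput = samples / wall-clock duration of the whole run
        double durationSeconds = 2.0;       // illustrative
        double throughput = times.length / durationSeconds;

        System.out.println("min=" + min + " max=" + max + " avg=" + average
                + " 90%line=" + p90 + " throughput=" + throughput + "/s");
    }
}
```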

The most important fields are the 90% line, error, and throughput. When stress testing, we must tune the parameters according to the actual situation of the server and the page, judging not only by the numbers but also by the server's behavior. Note that higher TPS is not always better, and a faster 90% line is not always better either, because one run may not represent your server's best state; the parameters usually need several adjustments before the aggregate report is trustworthy.

(3) JMeter plugins

The graphs that ship with JMeter may not meet our needs. Here we can use JMeter's plugins: JMeter Plugins provides additional listeners that are convenient for testing, as shown below:


In this test I mainly use three listeners: the aggregate report, jp@gc - Transactions per Second, and the result tree.

2. The Actual Test

Here are the three pages I want to test:


Homepage: contains a large number of static files and very little dynamic data interacting with the database; it displays only 4 randomly selected students, so most of the request time goes to static resources.


Student list page: this page has no static resources and pulls a large list of 100 records directly from the database.


Single-student query page: this page involves only one record; memcached caches it as key → String, redis as key → map.

(1) Homepage stress test results:

No cache, no dynamic and static separation, no load balancing:

Memcached cache, with dynamic and static separation, no load balancing (15 threads, 20 loops, 4 samples):



Server bandwidth:


Memcached cache, with dynamic and static separation, with load balancing (15 threads, 20 loops, 4 samples):



Server bandwidth:


Redis cache, with dynamic and static separation, with load balancing (15 threads, 20 loops, 4 samples):



The top command checks the server memory and CPU usage:


Server bandwidth:


Redis cache, with dynamic and static separation, no load balancing (15 threads, 20 loops, 4 samples):



Server (1M) bandwidth:


Redis cache, with dynamic and static separation, with load balancing (15 threads, 20 loops, 4 samples):

(2) Student list (100 records) stress test results:

No cache, no load balancing (15 threads, 20 loops, 4 samples):


Memcached cache, with load balancing (15 threads, 20 loops, 4 samples):


Server (1M) bandwidth:


Memcached cache, no load balancing (15 threads, 20 loops, 4 samples):


Server (1M) bandwidth:


Redis cache, no load balancing (15 threads, 20 loops, 4 samples):


Redis cache, no load balancing (20 threads, 20 loops, 1 sample):

Because one group of threads hung and never came back, I fell back to the 15×20 parameters for now.

Server (1M) bandwidth:


Redis cache, with load balancing (15 threads, 20 loops, 4 samples):


Server (1M) bandwidth:


(3) Single-student query stress test results:

No cache, no load balancing (20 threads, 50 loops, 6 samples):


Memcached cache, no load balancing (20 threads, 50 loops, 6 samples):


Server (1M) bandwidth:

Memcached cache, with load balancing (20 threads, 50 loops, 6 samples):


Server (1M) bandwidth:


Redis cache, no load balancing (20 threads, 50 loops, 6 samples):


Server (1M) bandwidth:


Redis cache, with load balancing (20 threads, 50 loops, 6 samples):


Server (1M) bandwidth:



Test summary:

With a suitable threads × loops combination, server bandwidth consistently ran at 0.9-1.1 Mbit/s. This is the biggest bottleneck, and network conditions affected the results more than anything else. Let's analyze the reports:

(1) Homepage

Because of the homepage's particular makeup, caching makes little difference and may even slow things down; the focus should be on load balancing and dynamic/static separation:

I did not test the case without dynamic/static separation, because before separation a 15×20 run sometimes could not even finish.

As measured above, without load balancing the throughput was only 1.8, data received was 33.26 KB/s, and the 90% line was 2138 ms.

With load balancing, throughput reached over 5 and data received 90-100 KB/s. The 90% line became slower, but the 90%-100% range sped up greatly, roughly doubling in speed.

Conclusion: for a page composed of many static resources plus a little dynamic data, configuring just dynamic/static separation plus load balancing greatly increases server throughput. The 90% line gets slightly slower, while the 90%-100% range speeds up substantially; caching has very little effect on this kind of page.

(2) A list of 100 objects

No load balancing, no cache: 90% line 3244, TPS 2.9

No load balancing, memcached cache: 90% line 2273, TPS 5.2

No load balancing, redis cache: 90% line 2195, TPS 5.0

The effect of caching is clear: the 90% line drops and TPS rises markedly. One caveat: although redis's 90% line is lower than memcached's, the 90%-100% range takes slightly longer than with memcached, and network factors cannot be ruled out as the cause.

The effect of load balancing here:

memcached: 90% line up, 95% up, 99% down, throughput 5.2 → 5.8, a slight increase

redis: 90% line unchanged, 95% unchanged, 99% down, throughput 5.0 → 4.7, a slight decrease

Conclusion: when querying large amounts of data, a cache should be used, but the serialization and deserialization cost must be considered, and redis's map (hash) storage should be used to advantage. Load balancing made little difference here, and network jitter strongly affects the results, so no firm conclusion about load balancing can be drawn. Judging by server load, with two tomcat instances deployed the memory usage was 30% and 20% respectively, so there should still be plenty of headroom, but we are limited by the 1M bandwidth.

(3) Querying a single student

No load balancing, no cache: 90% line 218, TPS 104.3

memcached cache: 90% line 219, TPS 118

redis cache: 90% line 224, TPS 62

Conclusion: here redis cached the record as a map, so converting between the POJO and the map (a reflection process) consumed a lot of time and TPS dropped markedly, while memcached stored a plain String, so the 90% line was unchanged (the values are too small to show a difference) and TPS rose slightly. We should be cautious about caching small pieces of data; when we do, plain string storage in redis or memcached is recommended rather than map storage.
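The POJO-map conversion cost described above can be sketched in plain Java. This is a simplified stand-in for what a redis hash (map) cache forces on every write; the Student class and its fields are made up for illustration:

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Sketch of the two cache shapes compared above:
// redis hash -> POJO <-> Map conversion via reflection (the slow path here)
// memcached  -> POJO <-> String (no per-field reflection)
public class CacheShapeSketch {
    static class Student {              // hypothetical POJO
        public String name;
        public int score;
        Student() {}
        Student(String name, int score) { this.name = name; this.score = score; }
    }

    // POJO -> Map, as storing into a redis hash would require:
    // one reflective field access per field, per object, per write.
    static Map<String, String> toMap(Object pojo) {
        Map<String, String> map = new HashMap<>();
        try {
            for (Field f : pojo.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                map.put(f.getName(), String.valueOf(f.get(pojo)));
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return map;
    }

    // Map -> POJO on the read path.
    static Student fromMap(Map<String, String> map) {
        Student s = new Student();
        s.name = map.get("name");
        s.score = Integer.parseInt(map.get("score"));
        return s;
    }

    public static void main(String[] args) {
        Student s = new Student("alice", 90);
        Map<String, String> asHash = toMap(s);  // what a hash cache would store
        Student back = fromMap(asHash);
        System.out.println(back.name + " " + back.score);
    }
}
```

For a single small object, this round trip is pure overhead compared to caching one String, which matches the TPS drop observed for redis here.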

The effect of load balancing here:

memcached: 90% line unchanged, TPS down

redis: 90% line unchanged, TPS up from 62.2 to 84.7

Conclusion: load balancing spreads out the POJO→map and map→POJO conversion work and speeds things up, but throughput still cannot match plain string storage. I have not fully understood why memcached's TPS dropped.

(4) Cache penetration: the simulation failed. With empty values cached, querying a nonexistent key gave a 90% line of about 40 and TPS over 200, but I have no data for the case without this guard, due to various environmental constraints.
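The "setting empty values" guard mentioned above means caching a sentinel for keys missing from the database, so repeated misses do not all fall through to the DB. A minimal in-memory sketch, with a HashMap standing in for redis or memcached and a made-up lookup function:

```java
import java.util.HashMap;
import java.util.Map;

// Cache-penetration guard: cache a sentinel for keys absent from the DB.
public class PenetrationSketch {
    static final String EMPTY = "<empty>";               // sentinel: "not in DB"
    static final Map<String, String> cache = new HashMap<>(); // stands in for redis
    static int dbHits = 0;

    static String dbLookup(String key) {                 // fake database
        dbHits++;
        return key.equals("alice") ? "score=90" : null;
    }

    static String get(String key) {
        String v = cache.get(key);
        if (v != null) return v.equals(EMPTY) ? null : v; // cached hit or cached miss
        String fromDb = dbLookup(key);
        cache.put(key, fromDb == null ? EMPTY : fromDb);  // cache even the miss
        return fromDb;
    }

    public static void main(String[] args) {
        get("ghost"); get("ghost"); get("ghost");         // nonexistent key
        System.out.println("db hits = " + dbHits);        // 1: only the first miss hit the DB
    }
}
```

In production the sentinel should carry a short expiry so a key that later appears in the DB is not masked forever.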

(5) Querying while the data is being updated: because no mutex lock was used, dirty data was read.
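The dirty reads in (5) come from updating the store and the cache without mutual exclusion. A minimal sketch of guarding both with one lock; in-memory maps stand in for the real DB and cache, and real code would use a finer-grained per-key lock:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Guard cache + store with one lock so a reader never observes a value
// that is mid-update (coarse single lock, for illustration only).
public class MutexSketch {
    static final Map<String, String> db = new HashMap<>();    // stands in for the DB
    static final Map<String, String> cache = new HashMap<>(); // stands in for redis
    static final ReentrantLock lock = new ReentrantLock();

    static void update(String key, String value) {
        lock.lock();
        try {
            db.put(key, value);
            cache.remove(key);          // invalidate; next read repopulates
        } finally {
            lock.unlock();
        }
    }

    static String read(String key) {
        lock.lock();
        try {
            String v = cache.get(key);
            if (v == null) {
                v = db.get(key);
                if (v != null) cache.put(key, v);
            }
            return v;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        update("alice", "score=90");
        System.out.println(read("alice"));
        update("alice", "score=95");
        System.out.println(read("alice")); // never the stale cached value
    }
}
```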



