Guli Mall Notes & Pitfalls (11) - Performance stress testing and tuning: JMeter stress testing + jvisualvm performance monitoring + dynamic/static separation of resources + adjusting heap memory

Table of contents

1. JMeter stress test

1.1 Performance indicators of stress test

1.2 JMeter installation

1.3 JMeter Chinese configuration

1.4 JMeter stress test example

1.4.1 Add thread group

1.4.2 Add HTTP request

1.4.3 Add listener

1.4.4 Start the stress test

1.4.5 View and analyze results

1.5 Fixing the JMeter "Address already in use" error (Windows port allocation mechanism)

2. Performance monitoring

2.1 Review the jvm memory model

2.2 Review Heap

2.2.0 Concepts

2.2.1 Garbage collection process

2.3 Use jconsole to monitor local and remote applications

2.4 jvisualvm (more powerful than jconsole)

2.4.0 start jvisualvm

2.4.1 What jvisualvm can do

2.4.2 Install Visual GC plug-in

2.5 Monitoring indicators

2.5.1 Middleware monitoring indicators

2.5.2 Database monitoring indicators

3. Stress testing and optimization

3.1 Stress tests

3.1.1 Stress testing Nginx

3.1.2 Stress testing the gateway

3.1.3 Stress testing a service with no business logic

3.1.4 Stress testing home page first-level menu rendering (thymeleaf cache disabled)

3.1.5 Stress testing home page first-level menu rendering (thymeleaf cache enabled)

3.1.6 Stress testing home page first-level menu rendering (cache enabled, database index added, logging turned off)

3.1.7 Stress testing three-level category data retrieval

3.1.8 Stress testing three-level category data retrieval (index added)

3.1.9 Stress testing three-level category data retrieval (business logic optimized)

3.1.10 Three-level category data retrieval (Redis as cache)

3.1.11 Full home page data retrieval (including static resources)

3.1.12 Nginx+Gateway

3.1.13 Gateway + simple service

3.1.14 Conclusion: the impact of middleware on performance

3.2 Optimization: dynamic/static separation

3.2.1 Why separate static and dynamic resources

3.2.2 Store static resources in Nginx

3.2.3 Prefix static resource paths in templates with /static

3.2.4 Configure the static resource path mapping in Nginx

3.2.5 Testing

3.3 Optimization: simulating a memory crash

3.4 Optimizing three-level category data retrieval

3.4.1 Current issues

3.4.2 Turn multiple database queries into one


1. JMeter stress test

Stress testing determines the maximum load a system can withstand under the current hardware and software environment and helps locate its bottlenecks. It also verifies that the system's online processing capacity and stability stay within an acceptable range, so that you know what to expect in production.

Stress testing can expose classes of bugs that are hard to find with other testing methods, two typical ones being memory leaks and concurrency/synchronization issues.

Memory leak: heap memory that the program has dynamically allocated is not released, or cannot be released, for some reason; the wasted memory slows the program down and can even crash the system.

Concurrency and synchronization: race conditions, deadlocks and similar problems around shared state that only surface when many requests arrive at the same time, which is exactly the situation a stress test creates.

An effective stress test applies the following key conditions: repetition, concurrency, magnitude, and random variation.

1.1 Performance indicators of stress test

  • Response Time (RT)
    Response time is the time from the moment the client sends a request until it receives the response from the server. The shorter the response time, the better.

  • HPS (Hits Per Second): the number of hits the server receives per second. [not particularly important]

  • TPS (Transactions Per Second): the number of transactions the system processes per second.

  • QPS (Queries Per Second): the number of queries the system processes per second.
    For Internet businesses, if a piece of business involves exactly one request/connection, then TPS = QPS = HPS. In general, TPS measures a whole business transaction, QPS measures interface queries, and HPS represents the number of hit (click) requests made to the server.

  • Whether it is TPS, QPS or HPS, this indicator measures the system's processing capacity, and the larger the better. Rough empirical ranges:
    Finance industry: 1000 TPS ~ 50000 TPS, excluding Internet-style promotional activities
    Insurance industry: 100 TPS ~ 100000 TPS, excluding Internet-style promotional activities
    Manufacturing industry: 10 TPS ~ 5000 TPS
    Internet e-commerce: 10000 TPS ~ 1000000 TPS
    Internet medium-sized website: 1000 TPS ~ 50000 TPS
    Internet small website: 500 TPS ~ 10000 TPS

  • Maximum response time (Max Response Time): the maximum time from the moment the user sends a request or instruction until the system responds.

  • Minimum response time (Minimum Response Time): the minimum time from the moment the user sends a request or instruction until the system responds.

  • 90% response time (90% Response Time): sort all response times; the value at the 90th percentile is the 90% response time.

  • From the outside, a performance test mainly focuses on three indicators:
    Throughput: the number of requests and tasks the system can handle per second.
    Response time: the time the service takes to process one request or one task.
    Error rate: the percentage of requests in a batch that fail.

1.2 JMeter installation

JMeter download address

Run the batch file jmeter.bat

image-20211101101801200

1.3 JMeter Chinese configuration

image-20211101102029824

1.4 JMeter stress test example

1.4.1 Add thread group

Right-click "test plan" to add:

image-20211101104037076

image-20211101104400679

Detailed explanation of thread group parameters:

  • Number of threads: the number of virtual users. Each virtual user occupies one thread, so the number of virtual users set here equals the number of threads.
  • Ramp-Up period: how many seconds it takes to start all of the configured threads. If the number of threads is 10 and the ramp-up period is 2, the 10 threads are started over 2 seconds, i.e. 5 threads per second.
  • Loop count: the number of requests each thread sends. If the thread count is 10 and the loop count is 100, each thread sends 100 requests, for a total of 10 * 100 = 1000. If "Forever" is checked, all threads keep sending requests until the test is stopped manually.
  • Delay Thread creation until needed: delay creating each thread until it is actually needed.
  • Scheduler: sets the start and end time of the thread group (when using the scheduler, the loop count must be set to Forever).
  • Duration (seconds): test duration; overrides the end time.
  • Startup delay (seconds): delay before the test starts; overrides the start time.
  • Start time: when the test starts; the startup delay overrides it, and if the start time has already passed, the current time is used instead.
  • End time: when the test ends; the duration overrides it.

1.4.2 Add HTTP request

image-20211101104507326

image-20211101104933592

1.4.3 Add listener

image-20211101105027448

View the result tree:

image-20211101105131273

The result tree shows whether each request succeeded or failed, along with its response content.

  • Summary report

image-20211101105202063

Samples: the number of requests sent. Samples = threads * loops.

Average: average response time.

Median: the median of all sample response times.

  • Aggregate report

image-20211101105229519

1.4.4 Start the stress test

Start:

JMeter asks whether to save the test plan:

1.4.5 View and analyze results

Summary report: 

Summary graph:

Clear all reports:

Result analysis:

  • Confirm with the development team whether errors are acceptable and, if so, within what range the error rate is allowed;

  • Throughput: if the number of requests handled per second is greater than the number of concurrent users, the concurrency can be increased gradually; if the throughput stays below the concurrency even though the load-generator machine itself is performing well, the concurrency cannot be raised any further, so decrease it gradually to find the optimal concurrency;

  • After the stress test ends, log in to the corresponding web servers to check the CPU and other performance indicators and analyze the data;

  • Maximum TPS: keep increasing the concurrency; when TPS reaches a certain value and then starts to fall, that value is the maximum TPS.

  • Maximum concurrency: maximum concurrency and maximum TPS are different concepts. Keep increasing the concurrency; once server requests start to time out, that value can be taken as the maximum concurrency.

  • If a performance bottleneck appears during the stress test while the CPU, memory and network shown in the load generator's task manager all look normal (none above roughly 90%), the bottleneck is on the server side rather than on the load generator.

  • Factors that affect performance include: the database, the application itself, middleware (Tomcat, Nginx), the network and the operating system, etc.

  • First determine whether your application is CPU-intensive (trading space for time) or IO-intensive (trading time for space).

1.5 Fixing the JMeter "Address already in use" error (Windows port allocation mechanism)

image-20211101221805884

Cause

This is caused by the port allocation mechanism of Windows itself.
By default Windows only provides ports 1024-5000 for TCP/IP client connections and takes four minutes to recycle a closed port, so firing a large number of requests in a short time exhausts the available ports.

Solutions

Expand the port range available for TCP/IP connections (MaxUserPort)

Reduce the recycle time of closed ports (TCPTimedWaitDelay)

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

  • Right click on parameters, add a new DWORD named MaxUserPort

image-20211101222707498

  • Then double-click MaxUserPort, set the value data to 65534 and choose Decimal as the base (for distributed testing, both the controller machine and the load machines need this change; the same changes can also be made from the command line, see the sketch at the end of these steps)

image-20211101222843778

  • Right click on parameters, add a new DWORD named TCPTimedWaitDelay

image-20211101223112662

  • Reboot the system and test
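For reference, the same two registry values can also be set from an elevated command prompt (a sketch, not part of the original steps; the TCPTimedWaitDelay value of 30 seconds is the commonly recommended setting, since the original notes do not state a value):

rem expand the dynamic port range used for outbound TCP/IP connections
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65534 /f
rem shorten the TIME_WAIT recycle delay (30 seconds is the value commonly suggested, not from the notes)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TCPTimedWaitDelay /t REG_DWORD /d 30 /f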

2. Performance monitoring

  • Factors that affect performance include: the database, the application itself, middleware (Tomcat, Nginx), the network and the operating system, etc.

  • First determine whether your application is CPU-intensive (trading space for time) or IO-intensive (trading time for space)

2.1 Review the jvm memory model

Java source files are compiled into .class bytecode, which the JVM's class loader loads into the JVM; at runtime all of the data lives in the "runtime data area". Performance optimization mainly means optimizing the "heap" inside the runtime data area.

Once the data is in the runtime data area, the JVM's execution engine runs the code: method calls push stack frames onto the virtual machine stack and pop them off when methods return, the native method stack serves calls into native libraries, and the program counter records which instruction the program is currently executing.

image-20211031200619881

  • Program counter (Program Counter Register):
    • Records the address of the virtual machine bytecode instruction currently being executed.
    • It is the only area for which the Java Virtual Machine Specification does not define any OutOfMemoryError condition.
  • Virtual machine stack (VM Stack):
    • Describes the memory model of Java method execution: each method invocation creates a stack frame that stores the local variable table, operand stack, dynamic link, method return address and so on.
    • The local variable table holds the primitive values and object references known at compile time.
    • If the stack depth a thread requests exceeds what its stack allows, a StackOverflowError is thrown (see the small example after this list).
    • If the stack cannot be dynamically expanded to the required capacity, an OutOfMemoryError is thrown.
    • The virtual machine stack is thread-isolated: every thread has its own independent stack.
  • Native method stack (Native Method Stack):
    • Similar to the virtual machine stack, except that it serves native methods.
  • Heap:
    • Almost all object instances are allocated on the heap.
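As a small illustration of the stack behaviour above (a throwaway sketch, not part of the project): each recursive call pushes one more stack frame onto the virtual machine stack, and once the thread's stack is exhausted the JVM throws StackOverflowError.

public class StackDepthDemo {
    private static int depth = 0;

    private static void recurse() {
        depth++;       // one more stack frame on the virtual machine stack
        recurse();     // never returns, so frames pile up until the stack is full
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // thrown once the requested stack depth exceeds what the thread's stack allows
            System.out.println("StackOverflowError at depth " + depth);
        }
    }
}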

Detailed model:

image-20211031200905101

2.2 Review Heap

2.2.0 Concepts

All object instances and arrays are allocated on the heap . The heap is the main area managed by the garbage collector , also known as the "GC heap"; it is also the place we consider most in optimization.


The heap can be subdivided into:

  • Young generation

    • Eden space

    • From Survivor space

    • To Survivor space

  • Old generation

  • Permanent generation / Metaspace

    • Before Java 8 the permanent generation was managed by the JVM as part of the heap; since Java 8 the metaspace uses native (physical) memory directly, so by default its size is limited only by the available local memory.

2.2.1 Garbage collection process

image-20211031201113805

The process of creating an object and placing it in heap memory:
1. The newly created object is placed in the Eden area first; if it fits, it stays there, and if it does not fit, one Minor GC is performed.
2. After the Minor GC, check again whether the object fits (objects that survive the GC are moved to the To survivor space). If it still does not fit, it is treated as a large object: check whether the old generation can hold it and place it there if so; if even the old generation cannot hold it, execute a Full GC. [A Full GC triggers a Minor GC first.]
3. If the old generation still cannot hold the object after the Full GC, an OutOfMemoryError (heap memory overflow) is reported.

Minor GC: cleans the young generation (Eden plus the Survivor spaces), releasing all inactive objects in Eden; if Eden still cannot hold the new object after that, some of the live objects in Eden are moved into the Survivor space.

Survivor space: acts as an intermediate exchange area between Eden and the old generation. When the old generation has enough space, objects in the Survivor space are moved to the old generation; otherwise they stay in the Survivor space.

Major GC: cleans the old generation space. When the old generation runs out of space, the JVM performs a Major GC on it.

Full GC: cleans the entire heap, both the young generation and the old generation. A Full GC is roughly ten times slower than a Minor GC and should be avoided as much as possible.

Surviving (aging) objects:
1. They are placed in the To survivor space if it has room [From and To then swap roles] [once an object's age exceeds the threshold of 15 it is promoted to the old generation].
2. If the survivor space cannot hold them, check whether the old generation can; if it cannot, execute a Full GC.
3. If the old generation still cannot hold them, an OOM exception is thrown.

Detailed process

image-20211031201216029
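To see the Minor GC behaviour described above for yourself, a tiny allocation loop run with a deliberately small heap and GC logging is enough (a sketch for experimentation, not from the original notes; the flags shown are standard HotSpot options on JDK 8):

// compile with javac, then run with:
//   java -Xms32m -Xmx32m -Xmn16m -XX:+PrintGCDetails GcDemo
public class GcDemo {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1_000; i++) {
            byte[] garbage = new byte[1024 * 1024]; // 1 MB allocated in Eden, immediately unreachable
            Thread.sleep(10);                       // slow down so the GC log stays readable
        }
        // the log shows frequent Minor GCs of the young generation; with the default
        // parallel collector on JDK 8 they appear as "PSYoungGen" entries
    }
}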

2.3 Use jconsole to monitor local and remote applications

The JDK ships with two small tools, jconsole and jvisualvm (an upgraded jconsole), that can be launched from the command line to monitor local and remote applications. Remote applications need extra startup configuration (see the sketch below).
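A minimal sketch of the JMX options that have to be added to the remote service's java startup command before jconsole or jvisualvm can connect (the port 9999 and the host IP are placeholders; authentication and SSL are disabled here only for a test environment):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=192.168.56.10

The monitoring tool then connects to host:port, e.g. 192.168.56.10:9999.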

Type jconsole directly in cmd.

On the jconsole connection screen, select the gulimall product (commodity) module:

Overview:

image-20211031203018071

Memory:

image-20211031203446300

2.4 jvisualvm (more powerful than jconsole)

2.4.0 start jvisualvm

Connection process:

2.4.1 What jvisualvm can do

It can monitor memory leaks, track garbage collection, and analyze runtime memory, CPU usage and threads...

image-20211101095646712

Thread states:

  • Running: currently executing
  • Sleeping: sleeping (for example in Thread.sleep)
  • Wait: waiting (for example in Object.wait)
  • Resident (Park): idle threads parked in a thread pool
  • Monitor: blocked threads waiting to acquire a lock

2.4.2 Install Visual GC plug-in

  1. Start jvisualvm from cmd, open "Tools" - "Plugins", and click "Check Latest Version" to see whether an error is reported
  2. If no error is reported, install Visual GC
  3. Restart jvisualvm after the installation

image-20211101093437530

If you get a 503 error and the plug-in fails to install:

Reasons:

  1. The configured update-center link may be for the wrong version
  2. If you are using a proxy, turn it off

image-20211101093711929

  • Check your JDK version

My version is 281 (JDK 8u281):

image-20211101093830346

  • Open the URL https://visualvm.github.io/pluginscenters.html

Find the plug-in center link matching your version and copy it:

image-20211101094043497

  • Edit the plug-in center configuration and paste the copied link:

image-20211101094207050

After installing the plug-in, restart jvisualvm and the effect is as follows:

You can see the entire GC process in real time.

2.5 Monitoring indicators

2.5.1 Middleware monitoring indicators

image-20211102223020701

  • The number of currently running threads must not exceed the configured maximum. In general, when system performance is good, setting the thread pool to a minimum of 50 and a maximum of 200 is appropriate.
  • The number of currently open JDBC connections must not exceed the configured maximum. In general, when system performance is good, setting the JDBC pool to a minimum of 50 and a maximum of 200 is appropriate.
  • GC should not run frequently, especially Full GC. In general, when system performance is good, setting both the JVM minimum and maximum heap size to 1024 MB is appropriate. (A configuration sketch follows.)
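Purely as an illustration of those numbers (a sketch; the property names assume Spring Boot 2.x with the embedded Tomcat and HikariCP, and none of this comes from the original notes), the thread pool, connection pool and heap could be pinned like this:

server:
  tomcat:
    max-threads: 200         # maximum Tomcat worker threads
    min-spare-threads: 50    # threads kept ready when idle
spring:
  datasource:
    hikari:
      maximum-pool-size: 200 # maximum JDBC connections
      minimum-idle: 50       # minimum idle JDBC connections

# JVM options: fix the heap at 1024 MB so minimum equals maximum and the heap never resizes
# -Xms1024m -Xmx1024m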

2.5.2 Database monitoring indicators

image-20211102223143419

  • The smaller the SQL execution time, the better; generally it should be at the microsecond level.
  • The higher the cache hit rate, the better; generally it should not fall below 95%.
  • The fewer lock waits, the better, and the shorter the lock wait time, the better.

3. Stress testing and optimization

3.1 Stress tests

3.1.1 Stress testing Nginx

Dynamically view the status of each Docker container:

# dynamically view the status of each Docker container
docker stats

Nginx status when not under load:

image-20211102221033326

MEM USAGE is the memory usage.

NET I/O is the network traffic.

JMeter stress test:

Add a sampler that requests the home page:

image-20211102220552840

After stress testing with 50 threads, it is clear that Nginx mainly consumes CPU:

image-20211102221308812

Conclusion: Nginx is CPU-intensive.

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944

3.1.2 Stress testing the gateway

After adding the request in JMeter and running it:

image-20211102223818641

jvisualvm monitoring the gateway:

image-20211102224738695

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31

Monitoring CPU and memory:

The gateway turns out to be CPU-intensive as well.

Monitoring GC:

The gateway keeps performing minor (young-generation) GCs and occasionally performs a full GC.

Although the minor GCs far outnumber the full GCs, the total time spent on them is not large.

Conclusion:

The sizes of the memory areas can be adjusted appropriately to avoid the performance loss caused by overly frequent GC.

image-20211102224812541

3.1.3 Stress testing a service with no business logic

Add an endpoint with no business logic:

gulimall-product/src/main/java/site/zhourui/gulimall/product/web/IndexController.java

// Simple endpoint for stress testing (no business logic)
@ResponseBody
@GetMapping("/hello")
public String hello() {
    return "hello";
}

image-20211102222249599

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17

3.1.4 Stress testing home page first-level menu rendering (thymeleaf cache disabled)

image-20211105145210580

Conclusion:

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365

3.1.5 Stress testing home page first-level menu rendering (thymeleaf cache enabled)

Enabling the Thymeleaf cache gives a modest throughput increase.

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365

3.1.6 Stress testing home page first-level menu rendering (cache enabled, database index added, logging turned off)

  • Enable the Thymeleaf cache

      thymeleaf:
        cache: true

  • Add an index on parent_cid to the pms_category table

image-20211106164127889

  • Turn logging down to the error level
logging:
  level:
    site.zhourui.gulimall: error

Database optimization (adding the index) improves performance considerably.

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183

3.1.7 Stress testing three-level category data retrieval

Throughput drops sharply, mainly because of the database; the queries are too slow.

localhost:10001/index/catalog.json

image-20211106154643409

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -

3.1.8 Stress testing three-level category data retrieval (index added)

Add an index on parent_cid to the pms_category table:

image-20211106164127889

Throughput improves somewhat.

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -
Three-level category data retrieval (index added) | 50 | 8 | - | -

3.1.9 Stress testing three-level category data retrieval (business logic optimized)

1) Optimize the business logic:
1. Query all of the data in one go.
2. Extract the per-level database query into a method so that it no longer actually hits the database: baseMapper.selectList(new QueryWrapper().eq("parent_cid", level1.getCatId())) is extracted into a method.

Store the data returned by the first query and wrap a method that looks categories up from it, so the database is not queried repeatedly.

Extracting a method in IDEA:

Select the code, right-click: Refactor => Extract => Method

After optimizing the business logic the throughput improves dramatically, which shows that business logic also has a large impact on performance.

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -
Three-level category data retrieval (index added) | 50 | 8 | - | -
Three-level category data retrieval (business optimized) | 50 | 111 | 571 | 896

3.1.10 Three-level category data retrieval (Redis as cache)

Throughput also improves noticeably (a sketch of the caching logic follows the table).

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -
Three-level category data retrieval (index added) | 50 | 8 | - | -
Three-level category data retrieval (business optimized) | 50 | 111 | 571 | 896
Three-level category data retrieval (Redis cache) | 50 | 411 | 153 | 217
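A rough sketch of the cache-aside logic behind this test (class, key and method names are illustrative; it assumes spring-boot-starter-data-redis plus a JSON library such as fastjson, and the real service code is not shown in this section):

@Autowired
private StringRedisTemplate stringRedisTemplate;

public Map<String, List<Catelog2Vo>> getCatalogJson() {
    // 1. try the cache first
    String cached = stringRedisTemplate.opsForValue().get("catalogJson");
    if (cached == null || cached.isEmpty()) {
        // 2. cache miss: build the data from the database, then store it as a JSON string
        Map<String, List<Catelog2Vo>> fromDb = getCatalogJsonFromDb();
        stringRedisTemplate.opsForValue().set("catalogJson", JSON.toJSONString(fromDb));
        return fromDb;
    }
    // 3. cache hit: deserialize and return without touching the database
    return JSON.parseObject(cached, new TypeReference<Map<String, List<Catelog2Vo>>>() {});
}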

3.1.11 Full home page data retrieval (including static resources)

The earlier stress tests did not load the static resources.

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -
Three-level category data retrieval (index added) | 50 | 8 | - | -
Three-level category data retrieval (business optimized) | 50 | 111 | 571 | 896
Three-level category data retrieval (Redis cache) | 50 | 411 | 153 | 217
Full home page data (incl. static resources) | 50 | 7 (static resources) | - | -

3.1.12 Nginx+Gateway

...

3.1.13 Gateway + simple service

Temporarily add a /hello route to the gateway's yml to make testing easier:

A request to http://localhost:88/hello now skips Nginx and goes through the gateway straight to the product service.
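The temporary route looks roughly like this (a sketch; the route id and the registered service name lb://gulimall-product are assumptions based on the project layout, not copied from the gateway's actual yml):

spring:
  cloud:
    gateway:
      routes:
        - id: hello_route
          uri: lb://gulimall-product   # forward to the product service via the registry
          predicates:
            - Path=/hello              # only the temporary /hello test path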

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -
Three-level category data retrieval (index added) | 50 | 8 | - | -
Three-level category data retrieval (business optimized) | 50 | 111 | 571 | 896
Three-level category data retrieval (Redis cache) | 50 | 411 | 153 | 217
Full home page data (incl. static resources) | 50 | 7 (static resources) | - | -
Nginx + Gateway | 50 | - | - | -
Gateway + simple service | 50 | 3124 | 30 | 125

3.1.14 Conclusion: the impact of middleware on performance

Test target | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms)
Nginx | 50 | 2335 | 11 | 944
Gateway | 50 | 10367 | 8 | 31
Simple service | 50 | 11341 | 8 | 17
Home page first-level menu rendering | 50 | 270 (db, thymeleaf) | 267 | 365
Home page rendering (cache enabled) | 50 | 290 | 251 | 365
Home page rendering (cache, DB index, logging off) | 50 | 700 | 105 | 183
Three-level category data retrieval | 50 | 2 (db) | - | -
Three-level category data retrieval (index added) | 50 | 8 | - | -
Three-level category data retrieval (business optimized) | 50 | 111 | 571 | 896
Three-level category data retrieval (Redis cache) | 50 | 411 | 153 | 217
Full home page data (incl. static resources) | 50 | 7 (static resources) | - | -
Nginx + Gateway | 50 | - | - | -
Gateway + simple service | 50 | 3124 | 30 | 125
Full chain (Nginx + Gateway + simple service) | 50 | 800 | 88 | 310
  • The more middleware involved in the chain, the greater the performance loss; most of it is lost in network interaction.
  • Things to pay attention to in the business code:
    • Database (MySQL optimization)
    • Template rendering speed (cache off during development, but make sure caching is enabled in production)
    • Static resources

3.2 Optimization: dynamic/static separation

3.2.1 Why separate static and dynamic resources

image-20211106170516846

Why perform dynamic/static separation?

Without separation, static resources are kept on the back end, so both dynamic and static requests reach the backend services; this wastes a great deal of Tomcat capacity (most of it goes to serving static requests).

After separation, the back end handles only dynamic requests, while static resources are returned directly by Nginx.

3.2.2 Store static resources in Nginx

1. Create a static directory under Nginx's html directory to hold the static resources:

mkdir /mydata/nginx/html/static
cd /mydata/nginx/html/static

2. Copy the static resources of gulimall-product into this directory and delete the local copies.

image-20211106171453399

3.2.3 Prefix static resource paths in templates with /static

During development, keep the Thymeleaf cache turned off (in the product module's yml).

gulimall-product/src/main/resources/templates/index.html

Change the original index/xxx paths to /static/index/xxx.

image-20211106171931297

3.2.4 Configure the static resource path mapping in Nginx

vim /mydata/nginx/conf/conf.d/gulimall.conf 

Add the following configuration:

# handle requests for gulimall.com:80/static
location /static {
    root  /usr/share/nginx/html;       # resources are matched under this directory
}

Note: the static location must be configured above the "/" location so that it is not shadowed (see the sketch after the screenshot).

image-20211106173405714
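For context, a sketch of how the two location blocks sit together in gulimall.conf (the upstream name and the proxy settings come from the earlier reverse-proxy setup and are assumptions here, since only the static block is shown above):

location /static {
    root   /usr/share/nginx/html;      # static files are served directly by Nginx
}

location / {
    proxy_set_header Host $host;       # keep the original Host header for the gateway
    proxy_pass http://gulimall;        # dynamic requests go on to the gateway upstream
}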

3.2.5 Testing

After refreshing, the static resources load successfully.

image-20211106173554342

3.3 Optimization: simulating a memory crash

JMeter with 200 threads per second:

Fill up the old generation:

The online service is found to have crashed:

The heap overflows and the online application crashes.

image-20211106175056213

Cause:

The memory allocated to the service is too small, so both the young generation and the old generation fill up, and GC cannot free any space.

Solution:

Increase the heap size.

-Xmx1024m -Xms1024m -Xmn512m

  • -Xms: initial heap size
  • -Xmx: maximum heap size
  • -Xmn: initial and maximum size of the young generation inside the heap

A small sketch that reproduces this failure locally follows the screenshot below.

image-20211106175433167
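A throwaway sketch (not part of the project) that reproduces the same failure mode: run it with a deliberately small heap, e.g. java -Xmx64m OomDemo, and watch the old generation fill up until the error appears.

import java.util.ArrayList;
import java.util.List;

public class OomDemo {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        while (true) {
            // each block stays referenced, so GC cannot reclaim it;
            // survivors keep getting promoted until the old generation is full and
            // java.lang.OutOfMemoryError: Java heap space is thrown
            retained.add(new byte[1024 * 1024]);
        }
    }
}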

3.4 Optimizing three-level category data retrieval

3.4.1 Current issues

Each time the code iterates over an element of the first-level category list, it queries the database once more.

3.4.2 Turn multiple database queries into one

Extraction method:

CategoryServiceImpl

After extraction:

Modify the extracted method so that it no longer queries the database:

    private List<CategoryEntity> getParent_cid(List<CategoryEntity> selectList, Long parentCid) {
        // filter the category list that was loaded once, instead of querying the database again
        return selectList.stream()
                .filter(item -> item.getParentCid().equals(parentCid))
                .collect(Collectors.toList());
    }
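For clarity, a sketch of how the extracted helper is typically used (the surrounding method is abbreviated and the variable names are illustrative; only the names already shown above come from the project):

// one single query loads the whole pms_category table
List<CategoryEntity> selectList = baseMapper.selectList(null);

// level-1 categories: parent_cid = 0
List<CategoryEntity> level1 = getParent_cid(selectList, 0L);

for (CategoryEntity l1 : level1) {
    // children are filtered from the in-memory list instead of hitting the database again
    List<CategoryEntity> level2 = getParent_cid(selectList, l1.getCatId());
    for (CategoryEntity l2 : level2) {
        List<CategoryEntity> level3 = getParent_cid(selectList, l2.getCatId());
        // ... assemble the three-level structure from level1 / level2 / level3 ...
    }
}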

Test:

JMeter stress testing shows a significant performance improvement.

Origin blog.csdn.net/qq_40991313/article/details/129792958