A Summary of Common Front-End and Back-End Solutions for High Concurrency

High-concurrency scenarios mainly require solving the following problems:

① Long request response times: how to reduce response time and improve the user experience.

② Data safety: in highly concurrent multi-threaded scenarios, race conditions, instruction reordering, and other factors easily lead to unsafe data, which must be guarded against.

③ Server overload: under high concurrency, a server can easily become overloaded or turn into a single point of failure.

...

Many lessons have been learned about mitigating the problems caused by high concurrency. Although they have been summarized repeatedly over the years, those summaries tend to be dated and fragmented, so here is a more systematic recap.

Front end:
1. Compress source code and images
JavaScript source files can be minified and obfuscated, CSS files can be minified in the usual way, JPG images can be compressed to 50%-70% quality depending on requirements, and PNG images can be compressed with open-source tools, for example by converting 24-bit color to 8-bit color and stripping some PNG metadata.
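
For example, a JavaScript file can be minified and mangled with a tool such as terser (just one option among many); a minimal Node.js sketch:

// Minification sketch using the terser package
const { minify } = require("terser");

async function compressSource(code) {
  // compress + mangle roughly corresponds to "minify and obfuscate"
  const result = await minify(code, { compress: true, mangle: true });
  return result.code;
}

// compressSource("function add (a, b) { return a + b; }").then(console.log);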

2. Choose image formats sensibly
Use JPG for images with many colors, and PNG for images with only a few colors.

3. Combine static resources
Merge CSS, JavaScript, and small images to reduce the number of HTTP requests. A large share of users gain the most from this single optimization.

4. Turn on Gzip compression on the server
This is very effective for text resources; for image resources the gain is negligible ...
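
For example, if the back end happens to be Node.js, gzip can be enabled with the compression middleware for Express (a minimal sketch; Nginx's gzip directives, mentioned in item 9, achieve the same thing at the web-server level):

const express = require("express");
const compression = require("compression");

const app = express();
app.use(compression());               // gzip responses; most effective for text assets
app.use(express.static("public"));    // serve static resources from ./public
app.listen(3000);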

5. Use CDN to accelerate static resources
Use CDN acceleration, or reference public libraries (such as jQuery or normalize.css) through static URLs provided by third parties. This both increases the number of concurrent downloads and lets resources already cached from other sites be reused. A free open-source CDN worth recommending: https://www.bootcdn.cn/

6. Cache static resources for longer
Frequent visitors can then load the site much faster. Note, however, that the file name should change whenever a resource is updated, so that users always pull the latest content.
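
For example, with a bundler such as webpack, content-hashed file names let static resources be cached for a long time while still forcing browsers to fetch updated content (a minimal sketch; entry and output paths are only illustrative):

// webpack.config.js
const path = require("path");

module.exports = {
  entry: "./src/main.js",
  output: {
    path: path.resolve(__dirname, "dist"),
    // [contenthash] changes whenever the file content changes, so browsers
    // can cache aggressively yet still fetch updates after a release
    filename: "[name].[contenthash].js"
  }
};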

7. Put CSS at the top of the page and JavaScript at the bottom
This way scripts do not block page rendering and users are not left staring at a blank page for a long time.

8. Try to use a high-performance front-end framework, such as Vue or Angular.
Taking Vue as an example: Vue is powerful and offers some advanced features, and its page rendering can be very fast, which helps reduce latency in some high-concurrency scenarios.

① Vue projects are generally single-page applications, so if a component is very large (say, more than 1 MB), it should be handled asynchronously, that is, registered with an arrow-function import when the route is declared, to optimize loading:

// Synchronous component import:
import Home from '@/pages/home/Home'

{
      path: '/',
      name: 'Home',
      component: Home
}


// Asynchronous component import:
{
      path: '/city',
      name: 'City',
      component: () => import('@/pages/city/City')
}
This way the first page load does not take too long; the component is loaded asynchronously only when it is needed, which reduces server I/O pressure and makes the first visit faster.

② Use ES6 arrow functions together with a timer to throttle how often listeners fire. For example, a handler triggered on every scroll-position change will run at very high frequency and hurt performance if it is not limited.
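
A minimal throttle sketch (the 300 ms interval and the scroll handler are only illustrative):

// Throttle: run the handler at most once per `delay` milliseconds
function throttle(fn, delay) {
  let timer = null
  return (...args) => {
    if (timer) return            // a call is already scheduled, skip this one
    timer = setTimeout(() => {
      fn(...args)
      timer = null
    }, delay)
  }
}

// Example: limit a scroll handler to once every 300 ms
window.addEventListener('scroll', throttle(() => {
  console.log('scrollY:', window.scrollY)
}, 300))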

③ Wrap components that rarely change in <keep-alive></keep-alive> so they are cached and not re-rendered on every refresh or switch, which speeds up loading.

④ Vue also offers advanced features such as server-side rendering that can further improve page rendering speed.

9. Try to use a high-performance web server, such as Nginx.
① Nginx serves static resources very efficiently and can reportedly handle more than 20,000 concurrent requests per second.

② Configure Nginx sensibly and tune it for performance, for example by using the epoll event model, enabling gzip compression, and caching static resources.

10. Use image sprites
When a page has many small images and its layout is already stable, consider merging them into a sprite to reduce requests and improve performance.

That is about all I will summarize for the front end; after all, I work on the back end and know only a little about the front end. I hope readers can add more in the comments.

Back end:
1. Scaling

Scaling can be vertical or horizontal. Vertical scaling extends at the hardware level, for example by adding CPUs or memory; the difficulty and risk are relatively low, but the cost is higher, especially past a certain point, and there is a hard limit, so it cannot continue indefinitely. Horizontal scaling uses technical means to distribute requests across different servers and so reduce the pressure on any single one; for back-end projects Nginx is commonly used for load balancing, and some setups use DNS or even dedicated hardware, which small companies generally do not.

2. Cache

In most cases the pressure concentrates on the database, so reducing the number of database accesses reduces the load on the server. Caching is therefore crucial under high concurrency. Common options include object-level caches and caching databases such as Memcached, Redis, and Ehcache. Putting a cache in front of interfaces whose data changes infrequently but is queried often greatly reduces access to the back-end database and substantially improves server performance.
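
A minimal cache-aside sketch, shown here in Node.js with the ioredis client purely for illustration (queryUserFromDb stands in for a real database query):

const Redis = require("ioredis");
const redis = new Redis();                  // connects to 127.0.0.1:6379 by default

// Stand-in for a real database query (hypothetical)
async function queryUserFromDb(id) {
  return { id, name: "demo user" };
}

async function getUser(id) {
  const key = `user:${id}`;
  const cached = await redis.get(key);      // 1. try the cache first
  if (cached) return JSON.parse(cached);

  const user = await queryUserFromDb(id);   // 2. cache miss: fall back to the database
  await redis.set(key, JSON.stringify(user), "EX", 300); // 3. write back with a 5-minute TTL
  return user;
}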

3. Message Queuing

A message queue decouples applications and can be used for peak shaving; handling requests asynchronously also shortens response times and reduces the load on a single server. In some scenarios it can be combined with Quartz or Elastic-Job to improve server utilization, so that servers are not overly busy during the day and idle at night.
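
A minimal peak-shaving sketch with RabbitMQ through the amqplib client (the broker, client, and queue name are only illustrative; in practice the producer and consumer run in separate services):

const amqp = require("amqplib");

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("orders", { durable: true });

  // Producer: the web layer only enqueues the work and returns immediately
  ch.sendToQueue("orders", Buffer.from(JSON.stringify({ orderId: 1 })), { persistent: true });

  // Consumer: a worker drains the queue at its own pace (this is the peak shaving)
  ch.consume("orders", (msg) => {
    console.log("processing", msg.content.toString());
    ch.ack(msg);
  });
}

main();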

4. Application splitting

Once an application has grown to a certain size, it can be split by module or even by function. At this stage the split granularity must be controlled well and a suitable RPC or microservice framework chosen. A well-split system is easier to manage and maintain, and going distributed can also improve performance, but the corresponding risk, complexity, and demands on people and technology rise as well.

5. Rate limiting

A single server's processing capacity is always limited. The QPS it can withstand can be measured with tools such as Apache Bench or JMeter, and the expected peak concurrency can be roughly estimated; combining the two tells you roughly whether a single server can handle the peak. If it cannot, the service needs rate limiting to protect the server from being overloaded to the point of downtime. Rate limiting can be implemented with a high-performance store such as Redis, or with Spring Cloud Zuul, provided your project uses a microservice architecture.
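
A minimal fixed-window rate-limiter sketch backed by Redis (the ioredis client, key names, and limits are only illustrative):

const Redis = require("ioredis");
const redis = new Redis();

// Allow at most `limit` calls per `windowSeconds` for a given key (fixed window)
async function allowRequest(key, limit, windowSeconds) {
  const count = await redis.incr(key);       // atomic counter for the current window
  if (count === 1) {
    await redis.expire(key, windowSeconds);  // start the window on the first hit
  }
  return count <= limit;
}

// Usage: allow at most 100 requests per second for one API
// const ok = await allowRequest("limit:/api/orders", 100, 1);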

6. Service degradation / circuit breaking

Service degradation reduces the load on the server, and a circuit-breaker mechanism protects it: when a single server is overloaded, new requests are rejected so that it can gradually recover. Spring Cloud provides the Hystrix component, which makes degradation and circuit breaking easy to implement.
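
Hystrix itself is far more complete; purely to illustrate the idea, here is a hand-rolled circuit-breaker sketch that opens after several consecutive failures, serves a fallback during a cool-down period, and then lets traffic through again:

class CircuitBreaker {
  constructor({ failureThreshold = 5, coolDownMs = 10000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.coolDownMs = coolDownMs;
    this.failures = 0;
    this.openedAt = 0;
  }

  async call(fn, fallback) {
    // Open state: fail fast and degrade to the fallback until the cool-down passes
    if (this.failures >= this.failureThreshold &&
        Date.now() - this.openedAt < this.coolDownMs) {
      return fallback();
    }
    try {
      const result = await fn();
      this.failures = 0;            // a success closes the breaker again
      return result;
    } catch (err) {
      this.failures += 1;
      this.openedAt = Date.now();
      return fallback();            // degrade on failure
    }
  }
}

// Usage sketch: breaker.call(() => callRemoteService(), () => "default value")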

7. Database read/write splitting, sharding of databases and tables

When everything has been optimized but the database still cannot bear the concurrency, it is time for read/write splitting and even horizontal and vertical sharding, to avoid problems such as a single overloaded database or a single table with too much data dragging down database performance. In this scenario you can use Mycat to do read/write splitting with a one-master multi-slave or multi-master multi-slave architecture; when a single table holds too much data, further split the databases and tables. I have already covered Mycat in the MySQL chapter on sharding; interested readers can take a look.
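
Middleware such as Mycat does this routing transparently; purely to illustrate the idea, here is a tiny application-level sketch using the mysql2 client (hosts and credentials are hypothetical):

const mysql = require("mysql2/promise");

// Hypothetical master and replica connection pools
const master = mysql.createPool({ host: "db-master", user: "app", database: "shop" });
const replicas = [
  mysql.createPool({ host: "db-replica-1", user: "app", database: "shop" }),
  mysql.createPool({ host: "db-replica-2", user: "app", database: "shop" }),
];

// Route reads to a random replica and writes to the master
function pickPool(sql) {
  const isRead = /^\s*select/i.test(sql);
  return isRead ? replicas[Math.floor(Math.random() * replicas.length)] : master;
}

async function query(sql, params) {
  const [rows] = await pickPool(sql).query(sql, params);
  return rows;
}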

8. Use multithreading and concurrent programming.

JUC (java.util.concurrent) provides many concurrent containers and utilities that help us perform all kinds of operations efficiently and safely under high concurrency, make full use of multi-core CPUs, improve server performance, and reduce request latency.

9. Turn off unnecessary log output in production

In a production environment, especially under high concurrency, log printing slows down system response times and ties up server resources, so turn off unnecessary logs and raise the log level to ERROR. Actual load tests show that this can noticeably improve QPS.



Origin blog.51cto.com/14614647/2450500