Common solutions to Java high concurrency

1. What do we mean by high concurrency?

In the Internet age, high concurrency usually means that at a certain point in time, a large number of requests arrive at the system at the same time.

 

Under high concurrency, which system and business metrics do we usually care about?

  • QPS : Queries Per Second; in a broad sense it usually refers to the number of requests handled per second

  • Response time : the time from sending a request to receiving its response. For example, if the system takes 100ms to process an HTTP request, that 100ms is the system's response time

  • Bandwidth : to estimate the required bandwidth, focus on two figures: peak traffic and the average page size

  • PV : Page View, i.e. page views or clicks, usually focusing on the number of pages visited within 24 hours, i.e. "daily PV"

  • UV : Unique Visitor, i.e. the number of distinct users after deduplication, usually measured over 24 hours as "daily UV"
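These metrics feed directly into capacity planning. A common back-of-the-envelope rule (an assumption of this sketch, not stated in the article) is the 80/20 heuristic: 80% of a day's PV arrives within 20% of the day, which gives a rough peak QPS from a daily PV figure:

```java
// Rough peak-QPS estimate from daily PV, using the common 80/20
// rule of thumb: 80% of requests arrive in 20% of the day.
public class QpsEstimate {
    public static long peakQps(long dailyPv) {
        long peakSeconds = (long) (24 * 3600 * 0.2);  // 20% of a day = 17280s
        long peakRequests = (long) (dailyPv * 0.8);   // 80% of the traffic
        return peakRequests / peakSeconds;
    }
}
```

For example, 10 million daily PV works out to roughly 460 QPS at peak, which is the number you would size caches, connection pools, and server counts against.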

 

2. Three common optimization schemes for handling high concurrency

【Database cache】

Why use cache?

The purpose of caching data is to let clients rarely or never hit the database directly, which reduces disk I/O, increases concurrency, and improves the response speed of application data.
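A minimal sketch of this idea is the cache-aside pattern: check an in-memory cache first and query the database only on a miss. The `loadFromDatabase` method below is a hypothetical stand-in for a real query:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: serve reads from memory, touch the (simulated)
// database only when the key is missing, then keep the result for
// later readers.
public class UserCache {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private int dbHits = 0;  // how often the "database" was actually queried

    public String getUser(long id) {
        // computeIfAbsent invokes the loader only on a cache miss
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }

    // Hypothetical stand-in for a real database query
    private String loadFromDatabase(long id) {
        dbHits++;
        return "user-" + id;
    }

    public int getDbHits() { return dbHits; }
}
```

In production this role is usually played by a dedicated cache such as Redis or a library like Caffeine, which add the eviction and expiry policies this sketch omits.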

 

【CDN acceleration】

What is a CDN?

CDN stands for Content Delivery Network. A CDN system redirects a user's request, in real time, to the service node closest to that user, based on comprehensive information such as network traffic, each node's connections and load, and the distance to the user.

 

Advantages of using a CDN?

The essence of a CDN is an in-memory cache accessed from nearby nodes. It improves the access speed of enterprise sites (especially sites with many images and static pages), accelerates traffic across network operators, and ensures that users on different networks get good access quality.

 

At the same time, it reduces the bandwidth consumed by remote access, spreads network traffic, and lightens the load on the origin site's web servers.

 

【Server clustering and load balancing】

What is Layer 7 Load Balancing?

Layer 7 load balancing distributes requests based on application-layer information such as the HTTP protocol. The most commonly used implementation is Nginx, which can automatically remove unhealthy backend servers, handle file uploads asynchronously, support multiple distribution strategies, and assign per-server weights, making allocation very flexible.

 

Built-in policies: IP hash, weighted round robin

Extended policies: fair, generic hash, consistent hash
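A minimal sketch of what the built-in policies look like in an Nginx `upstream` block (server addresses and the upstream name are hypothetical):

```nginx
# Weights drive the built-in weighted round robin policy.
upstream app_servers {
    server 10.0.0.1:8080 weight=5;   # receives about 5/7 of requests
    server 10.0.0.2:8080 weight=1;
    server 10.0.0.3:8080 weight=1;
    # ip_hash;   # uncomment to switch to the built-in IP-hash policy
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```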

 

What is a weighted round robin strategy?

Requests are first dispatched to the machine with the highest weight; once that machine's current weight falls below the others', requests move on to the next highest-weight machine. The strategy thus reflects both weighting and polling.
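The description above matches the "smooth" weighted round robin algorithm that Nginx implements: each pick adds every server's weight to its running score, selects the highest score, then subtracts the total weight from the winner, so heavier servers are chosen more often but picks stay interleaved. A minimal Java sketch:

```java
// Smooth weighted round robin (the variant Nginx uses).
public class WeightedRoundRobin {
    private final String[] names;
    private final int[] weights;   // configured weight per server
    private final int[] current;   // running score per server
    private final int totalWeight;

    public WeightedRoundRobin(String[] names, int[] weights) {
        this.names = names;
        this.weights = weights;
        this.current = new int[weights.length];
        int sum = 0;
        for (int w : weights) sum += w;
        this.totalWeight = sum;
    }

    public String next() {
        int best = 0;
        for (int i = 0; i < weights.length; i++) {
            current[i] += weights[i];              // grow every score
            if (current[i] > current[best]) best = i;
        }
        current[best] -= totalWeight;              // penalize the winner
        return names[best];
    }
}
```

With weights 5, 1, 1 the first seven picks come out interleaved as a, a, b, a, c, a, a rather than five consecutive hits on the heavy server, which is exactly the smoothing the algorithm is named for.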
