Java high concurrency access

Common ways to improve access efficiency under high concurrency

 

First of all, we must understand where the bottlenecks of high concurrency are:

1. The server's network bandwidth may be insufficient.

2. The web container may not have enough connection-handling threads.

3. The database may not keep up with the volume of connections and queries.

 

The solution depends on where the bottleneck is:

1. Increase network bandwidth; use DNS resolution to distribute traffic across multiple servers.

2. Load balancing, with a front-end proxy server such as nginx or apache.

3. Database query optimization, read-write separation, table splitting, etc.

 

 

Things that often need attention under high concurrency:

1. Use caching wherever possible, including user caches, information caches, etc. Spending more memory on caching greatly reduces interaction with the database and improves performance.

2. Use profiling tools such as JProfiler to find performance bottlenecks and cut unnecessary overhead.

3. Optimize database queries; review the SQL generated by tools such as Hibernate (optimize only the time-consuming queries).

4. Optimize the database schema and add appropriate indexes to improve query efficiency.

5. Cache statistics wherever possible, or precompute statistics-related reports daily or at fixed intervals, so they do not have to be computed on demand.

6. Use static pages wherever possible to reduce parsing work in the container (render dynamic content to static HTML for display where feasible).

7. Once the above is done, use a server cluster to overcome the bottleneck of a single machine.
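Point 5 above (precomputing statistics on a schedule instead of on demand) can be sketched as below. The class name, the report format, and `loadReportFromDatabase()` are hypothetical stand-ins; only the scheduling pattern is the point.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

/** Sketch: recompute an expensive statistics report on a fixed schedule
 *  so that user requests read a cached value instead of hitting the DB. */
public class ReportCache {
    private final AtomicReference<String> cachedReport = new AtomicReference<>("(not ready)");
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Run once immediately, then once every 24 hours. */
    public void start() {
        scheduler.scheduleAtFixedRate(this::refreshNow, 0, 24, TimeUnit.HOURS);
    }

    /** Recompute the report; also invoked on the schedule above. */
    public void refreshNow() {
        cachedReport.set(loadReportFromDatabase());
    }

    /** Requests read the precomputed value; no database hit. */
    public String getReport() {
        return cachedReport.get();
    }

    // Stand-in for the expensive aggregation query.
    private String loadReportFromDatabase() {
        return "daily totals";
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

In a real system the refresh interval would match the business requirement (nightly, hourly, etc.), and the cached value would be a report object rather than a string.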

 

 

Analysis of common concurrent synchronization cases

    Case 1: A booking system where only one ticket is left for a certain flight. Suppose 10,000 people open your website to book it; how do you handle the concurrency? (This extends to the concurrent read/write problems of any high-concurrency website.)

    The first consideration is reads: before the ticket is sold, all 10,000 visitors must see that it is available; it cannot be that one person sees the ticket while others do not. Who actually gets it comes down to "luck" (network speed, timing, etc.).

    The second consideration is writes: 10,000 people click "buy" at the same time, but there is only one ticket, so who gets the deal?

First, several concurrency-related approaches come to mind:

Lock synchronization. Synchronization here refers mainly to the application level: multiple threads come in, but they can only enter the critical section one at a time. In Java this means the synchronized keyword. There are also two levels of locks: one is the object lock in Java, used for thread synchronization; the other is locks in the database. In a distributed system, clearly only database-side locks can do the job.
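Application-level lock synchronization for the one-ticket case can be sketched as below: the synchronized keyword guarantees that only one thread at a time can decrement the stock, so exactly one buyer succeeds. The class and method names are illustrative.

```java
/** Minimal sketch of application-level synchronization:
 *  only one thread can check and decrement the stock at a time. */
public class TicketCounter {
    private int ticketsLeft = 1;

    /** Returns true only for the single buyer who gets the last ticket. */
    public synchronized boolean tryBuy() {
        if (ticketsLeft > 0) {
            ticketsLeft--;
            return true;
        }
        return false;
    }

    public synchronized int remaining() {
        return ticketsLeft;
    }
}
```

With 10,000 threads calling `tryBuy()` concurrently, exactly one returns true. The cost is that all buyers are serialized through one lock, which is precisely the performance problem the text discusses next.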

Suppose we use a synchronization mechanism or a physical database lock. How do we still guarantee that 10,000 people can all see the ticket at the same time? Obviously we would sacrifice performance, which is unacceptable in a high-concurrency website. Hibernate introduces another pair of concepts: optimistic locking and pessimistic locking (i.e., the traditional physical lock).

Optimistic locking solves this problem. It uses business-level control rather than locking the table, preserving concurrent readability of the data while guaranteeing exclusivity of writes; it keeps performance up while avoiding the dirty data that concurrency would otherwise cause.
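An in-memory analogue of the optimistic idea is compare-and-set: read the current state, do your work, and commit only if the state is unchanged; a failed commit means another writer got there first and you must reload and retry. The class below is an illustrative sketch, not part of Hibernate.

```java
import java.util.concurrent.atomic.AtomicLong;

/** In-memory analogue of optimistic locking: commit succeeds only if
 *  the "version" we read at the start is still current (CAS). */
public class OptimisticCounter {
    private final AtomicLong version = new AtomicLong(0);

    /** Attempt one optimistic update; returns true on success. */
    public boolean tryCommit() {
        long seen = version.get();   // "read the row together with its version"
        // ... compute the new state based on what we read ...
        // Commit only if nobody else has committed in the meantime:
        return version.compareAndSet(seen, seen + 1);
    }

    public long currentVersion() {
        return version.get();
    }
}
```

A failed `tryCommit()` corresponds exactly to the version-number mismatch that makes an optimistic database update affect zero rows.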

How to implement optimistic locking in hibernate:

Prerequisite: add a redundant column to the existing table, a version number of type long.

Principle:

1) A commit is allowed only when the in-memory version number equals the version number in the database table.

2) After a successful commit, the version number is incremented (version++).

The implementation is simple: add the attribute optimistic-lock="version" to the O/R mapping. A sample fragment:

<hibernate-mapping>

<class name="com.insigma.stock.ABC" optimistic-lock="version" table="T_Stock" schema="STOCK">
    <!-- id mapping goes here -->
    <version name="version" column="VERSION" type="long"/>
    <!-- property mappings go here -->
</class>

</hibernate-mapping>
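Under the hood, a version-based optimistic save boils down to an UPDATE guarded by the version the writer last read, as sketched below in plain JDBC. The table name T_Stock follows the mapping fragment above; the `price` column, the id column, and the connection wiring are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Sketch of the UPDATE a version-based optimistic lock turns into:
 *  the row is changed only if it still carries the version we read,
 *  and on success the version is bumped (version++). */
public class OptimisticSave {
    static final String SQL =
        "UPDATE T_Stock SET price = ?, version = version + 1 " +
        "WHERE id = ? AND version = ?";

    /** Returns true if our update won; false means another transaction
     *  committed first and the caller should reload and retry. */
    public static boolean save(Connection conn, double price, long id, long seenVersion)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setDouble(1, price);
            ps.setLong(2, id);
            ps.setLong(3, seenVersion);
            // 0 rows updated == stale version == optimistic lock failure
            return ps.executeUpdate() == 1;
        }
    }
}
```

Hibernate generates and checks an equivalent statement automatically when optimistic-lock="version" is configured, throwing a stale-object exception when the row count is zero.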

Case 2: How do you design for a stock trading system, a banking system, or any system with very large data volumes?

Consider the quotation table of a stock trading system: a quotation record is generated every few seconds. Assuming one quotation every 3 seconds, that is 20 × 60 × 6 records per stock per day (20 per minute, 60 minutes, 6 trading hours), times the number of stocks. How many records does this table accumulate in a month? In Oracle, once a table exceeds roughly 1,000,000 rows, query performance degrades badly. How do you keep the system performant?

Another example: China Mobile has hundreds of millions of users. How do you design the tables? Can they all live in one table?

So for systems with large data volumes, table splitting must be considered (the split tables have different names but exactly the same structure). Several common approaches (depending on the situation):

1) Split by business: for example, for a table of mobile numbers, numbers starting with 130 go in one table, numbers starting with 131 in another, and so on.

2) Use Oracle's partitioning mechanism to split the table.

3) For a trading system, split along the time axis: one table for the current day's data and another for historical data, so reports and queries on historical data do not affect the day's trading.

Of course, after splitting, the application must be adapted accordingly: simple O/R mapping may have to change, and some operations may have to go through stored procedures, etc.
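Approach 1 above amounts to a routing function in the application layer that picks the table name from the key before building the query. A minimal sketch, with an illustrative table-naming convention:

```java
/** Sketch of business-based table splitting: route a mobile number to
 *  one of the split tables by its prefix, e.g. 130xxxxxxxx -> T_PHONE_130.
 *  The table-name pattern is illustrative; the DAO layer would use the
 *  returned name when building its SQL. */
public class PhoneTableRouter {
    public static String tableFor(String phoneNumber) {
        if (phoneNumber == null || phoneNumber.length() < 3) {
            throw new IllegalArgumentException("not a valid mobile number");
        }
        // All split tables share the same structure; only the name differs.
        return "T_PHONE_" + phoneNumber.substring(0, 3);
    }
}
```

The same idea works for the time-axis split in approach 3: the router would choose between the current-day table and the history table based on a date instead of a prefix.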

We also have to consider caching.

Caching here means more than Hibernate's built-in first-level and second-level caches: it means a cache that is independent of the application but still served from memory. If we can reduce frequent database access this way, it is a huge win for the system. For example, in the product search of an e-commerce system, if the products for a certain keyword are searched frequently, that product list can be kept in an in-memory cache so the database is not hit on every request, greatly improving performance.

A simple cache can be as basic as a HashMap you maintain yourself: the frequently accessed key maps to the value fetched from the database on the first lookup, and subsequent requests read from the map instead of the database. For professional use there are standalone caching frameworks such as memcached, which can be deployed as dedicated cache servers.
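The "do-it-yourself HashMap cache" described above can be made thread-safe with ConcurrentHashMap: computeIfAbsent runs the database lookup only on the first miss for a key, and later requests for the same keyword are served from memory. The loader function below is a stand-in for the real product-search query.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Sketch of a minimal in-memory cache: first access loads from the
 *  "database" (the loader function); later accesses read from the map. */
public class SimpleSearchCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader;

    public SimpleSearchCache(Function<String, String> loader) {
        this.loader = loader;
    }

    /** Returns the cached value, invoking the loader only on a miss. */
    public String get(String keyword) {
        return cache.computeIfAbsent(keyword, loader);
    }
}
```

A production cache would also need eviction and expiry (or simply use a standalone server such as memcached, as the text notes), since an unbounded map eventually exhausts memory.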

 

 
