A simple understanding of Java concurrent programming

Foreword

The evolution of a high-concurrency system should be gradual, driven by the goal of solving the actual problems in the system.

Therefore, not every concurrent system design needs to pursue high traffic.

Concepts

  • Concurrency:
    Two or more threads exist at the same time. On a single-core processor, the threads are swapped in and out of the CPU alternately: they all "exist" simultaneously and each is in some state of execution, but only one actually runs at any instant. On a multi-core processor, each thread can be assigned to its own core, so the threads can truly run at the same time.

  • High concurrency:
    High concurrency is one of the factors that must be considered in the architecture design of an Internet-scale distributed system. It usually means designing the system so that it can process many requests in parallel at the same time.

When we talk about concurrency, the focus is on multiple threads operating on the same resource: ensuring thread safety and using resources efficiently. When we talk about high concurrency, the focus is on executing a large number of operations in a short period of time (such as resource requests or database access) and improving program performance. If high concurrency is not handled well, it not only leads to a poor user experience but may also cause server downtime, OOM errors, and so on.
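To make the thread-safety point concrete, here is a minimal sketch (my own illustration, not from the referenced posts): two threads increment a shared counter, and only the AtomicInteger version is guaranteed to end up correct.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    // Unsafe: count++ is a read-modify-write sequence, not atomic, so updates can be lost.
    static int unsafeCount = 0;
    // Safe: AtomicInteger performs the increment atomically.
    static AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                unsafeCount++;
                safeCount.incrementAndGet();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // unsafeCount is often less than 20000; safeCount is always 20000.
        System.out.println("unsafe=" + unsafeCount + ", safe=" + safeCount.get());
    }
}
```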

1. About high concurrency and large traffic

There are three common approaches: scale-out, caching, and asynchronous processing.
Scale-out: divide and conquer is a common high-concurrency design method; with a distributed deployment, the traffic is split so that each server bears part of the concurrency and load.
Cache: equivalent to widening the road to improve the performance of the system.
Asynchronous: let the request return first, and notify the requester once the data is ready.

2. Scaling

1. Scale-up

Vertical scaling: buy better hardware to improve the concurrent performance of the system.

2. Scale-out

Horizontal scaling: combine multiple lower-performance machines into a distributed cluster that jointly absorbs the impact of high concurrent traffic.

If you only consider a single machine, you can scale up without thinking twice, but in most cases we choose to scale out; database master-slave replication (one master, multiple slaves), splitting databases and tables, and storage sharding are all practical applications of this idea, as the sketch below shows.
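As a toy illustration of the scale-out idea applied to storage sharding (the shard count and JDBC URLs below are hypothetical placeholders), requests can be routed to a shard by hashing a key such as the user id:

```java
// Route each user's data to one of N database shards by its user id,
// so the same user always lands on the same node.
public class ShardRouter {
    private static final int SHARD_COUNT = 4;
    private static final String[] SHARD_URLS = {
        "jdbc:mysql://db0/app", "jdbc:mysql://db1/app",
        "jdbc:mysql://db2/app", "jdbc:mysql://db3/app"
    };

    // floorMod keeps the index non-negative even for negative ids.
    public static String shardFor(long userId) {
        int index = (int) Math.floorMod(userId, (long) SHARD_COUNT);
        return SHARD_URLS[index];
    }

    public static void main(String[] args) {
        System.out.println(shardFor(42L)); // 42 % 4 = 2 -> jdbc:mysql://db2/app
    }
}
```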

Cache

Why can caching greatly improve system performance?

  • We know that data ultimately lives in persistent storage, and persistent storage generally uses disks as the medium. An ordinary mechanical disk consists of an actuator arm, read/write heads, a spindle, and platters; a platter is divided into tracks, cylinders, surfaces, and sectors.
  • Each platter is divided into multiple concentric circles on which information is stored; these concentric circles are the tracks. While the disk works, the platters spin at high speed and the actuator arm moves the heads radially to locate the required data on the track. The time the head takes to find the data is called the seek time.

"Cache" is actually a very broad term: it can be a Redis cache, a distributed cache, a local in-process cache, the CPU cache, and so on.
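For example, a minimal local in-process cache can be sketched with ConcurrentHashMap.computeIfAbsent; the loadFromDatabase method below is a hypothetical stand-in for a slow disk or database read, and a production cache would also need eviction and expiry (e.g. via Caffeine or Redis):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class LocalCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader;

    public LocalCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String get(String key) {
        // computeIfAbsent takes the slow path only on a cache miss.
        return cache.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        LocalCache c = new LocalCache(LocalCache::loadFromDatabase);
        System.out.println(c.get("user:1")); // slow: goes to the "database"
        System.out.println(c.get("user:1")); // fast: served from memory
    }

    // Hypothetical stand-in for an expensive disk or database read.
    private static String loadFromDatabase(String key) {
        return "value-for-" + key;
    }
}
```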

Asynchronous

So what is synchronous, and what is asynchronous? Take a method call as an example. In a synchronous call, the caller blocks and waits until the logic in the called method finishes executing. If the called method takes a long time to respond, the caller is blocked for a long time, and under high concurrency the overall system performance drops, possibly even avalanching. An asynchronous call is the opposite: the caller can return and execute other logic without waiting for the method to complete; after the called method finishes, the result is fed back to the caller through a callback, an event notification, or a similar mechanism.
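A minimal Java sketch of this difference, using CompletableFuture (the slowQuery method here is a hypothetical long-running call, not part of any real API discussed above):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        // Synchronous: the caller blocks until slowQuery returns.
        String sync = slowQuery();
        System.out.println("sync result: " + sync);

        // Asynchronous: the call returns immediately; the result is
        // delivered later through a callback (thenAccept).
        CompletableFuture<Void> future =
            CompletableFuture.supplyAsync(AsyncDemo::slowQuery)
                             .thenAccept(r -> System.out.println("async result: " + r));
        System.out.println("caller is free to do other work...");
        future.join(); // only so this demo JVM waits before exiting
    }

    // Hypothetical long-running call, e.g. a remote service or database query.
    private static String slowQuery() {
        try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "data";
    }
}
```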

Take 12306 (China's train-ticket booking site) as an example. With an asynchronous approach, the backend puts the booking request into a message queue, quickly responds to the user that the request is queued for processing, and releases resources to handle more requests. Once the booking request is actually processed, the user is notified whether the booking succeeded or failed.
After the processing logic is moved into an asynchronous handler, the pressure on the web service is reduced and fewer resources are tied up per request, so the system can accept more user booking requests, and its ability to withstand high concurrency improves.
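A toy sketch of this flow, using an in-process BlockingQueue as a stand-in for a real message queue such as Kafka or RocketMQ (the class and request names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BookingDemo {
    private static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);

    public static void main(String[] args) throws InterruptedException {
        // Worker thread: drains the queue and processes requests asynchronously.
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    String request = queue.take();
                    // ... check inventory, write the order, then notify the user
                    System.out.println("processed " + request);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.setDaemon(true);
        worker.start();

        // "Web layer": accept the request, enqueue it, and respond at once.
        queue.offer("booking#1");
        System.out.println("booking#1 accepted, you are in the queue");
        Thread.sleep(200); // give the worker time to run in this demo
    }
}
```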


Summary

(Figures: an overview of the relevant background knowledge.)

What we need to keep in mind is that when dealing with high concurrency and heavy traffic, the system can absorb the impact of the traffic by adding machines. As for which solution to adopt, the specific problem must be analyzed in detail: asynchronous thread processing, checking update timestamps, and so on.

References: https://juejin.cn/post/6844903752579678222?from=search-suggest and Geek Time.

Origin: https://blog.csdn.net/qq_41810415/article/details/132587980