Don't understand these concepts, yet you dare to write "familiar with Java high concurrency" on your resume?

High concurrency

High concurrency is one of the factors that must be considered when designing a distributed Internet system architecture. It usually refers to ensuring that the system can process a massive number of requests in parallel at the same time.

Synchronous and asynchronous

Synchronous: send a request, wait for the response, and only then send the next request. Submit the request → the server processes it → the response returns; the client (for example, a browser) can do nothing else during this period.

Asynchronous: send a request without waiting for the response; you can send the next request at any time. Submit the request → the server processes it (the browser can do other things in the meantime) → the processing finishes and the client is notified.

In other words, a synchronous call proceeds step by step along a single timeline. With an asynchronous call, the caller does not get the result immediately after issuing it; instead another thread executes that part of the work, and when the thread finishes, it delivers the result back to the caller via a status flag, a notification, or a callback.
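As a concrete sketch of the two styles (the slowQuery method and its one-second delay are invented for this example; CompletableFuture is the real JDK class):

```java
import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {
    // stand-in for a slow remote call (hypothetical, ~1 second)
    static String slowQuery() {
        try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "result";
    }

    public static void main(String[] args) throws Exception {
        // synchronous: the caller blocks until the result comes back
        String sync = slowQuery();
        System.out.println("sync: " + sync);

        // asynchronous: the call returns immediately; a callback handles the result
        CompletableFuture<String> future = CompletableFuture
                .supplyAsync(SyncVsAsync::slowQuery)
                .thenApply(r -> "async: " + r);
        System.out.println("caller keeps doing other work...");
        System.out.println(future.get()); // join with the result at the end
    }
}
```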

Concurrency and parallelism

On a single-core CPU (a single processor) there can only be concurrency, never parallelism. Parallelism exists only in multi-processor systems, while concurrency can exist in both single-processor and multi-processor systems. Concurrency can exist on a single processor because concurrency is an illusion of parallelism: parallelism requires that multiple operations really execute at the same time, whereas concurrency only requires the program to appear to perform multiple operations simultaneously (each small time slice executes one operation, and the operations switch between each other rapidly).

Critical section

A critical section represents a shared resource or piece of shared data that can be used by multiple threads, but only by one thread at a time. Once the critical resource is occupied, any other thread that wants to use it has to wait.

This is where we often need to add locks in programming, for example with the synchronized keyword or the Lock interface.
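A minimal sketch of guarding a critical section both ways, assuming a made-up Counter class with a shared count field:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private int count = 0;
    private final Lock lock = new ReentrantLock();

    // critical section guarded by the object's intrinsic lock
    public synchronized void incrementSynchronized() {
        count++;
    }

    // the same critical section guarded by an explicit Lock
    public void incrementWithLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock(); // always release, even if the body throws
        }
    }
}
```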

Blocking and non-blocking

Blocking and non-blocking are usually used to describe how multiple threads interact. For example, if one thread occupies a critical-section resource, all other threads that need that resource must wait, and waiting causes them to be suspended; this situation is blocking. If the thread occupying the resource never releases it, all the other threads blocked on this critical section can never make progress.

Non-blocking allows multiple threads to enter the critical section at the same time.

Deadlock, starvation, livelock

Deadlock: a situation in which two or more processes (or threads) compete for resources during execution and end up waiting for each other; without outside intervention, none of them can make progress. The system is then said to be in a deadlock state, and the processes that wait for each other forever are called deadlocked processes. Four conditions must all hold for a deadlock to occur:

Mutual exclusion: access to a resource is exclusive. If one thread occupies a resource, other threads must wait until the resource is released.
Hold and wait: thread T1 already holds at least one resource R1 but requests another resource R2. R2 is occupied by another thread T2, so T1 must wait, yet it does not release the resource R1 it already holds.
No preemption: resources a thread has acquired cannot be taken away by other threads before the thread is finished with them; they can only be released by the thread itself once it is done.
Circular wait: when a deadlock occurs, there must be a process-resource circular chain {p0, p1, p2, ..., pn}: process (or thread) p0 waits for a resource occupied by p1, p1 waits for a resource occupied by p2, ..., and pn waits for a resource occupied by p0. (The most intuitive case: p0 waits for a resource held by p1 while p1 waits for a resource held by p0, so the two wait for each other forever.)
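A minimal runnable sketch in which two threads satisfy all four conditions by acquiring two intrinsic locks in opposite orders (lockA, lockB and the 100 ms sleep are made up for the demonstration; the program simply hangs):

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // T1 acquires lockA, then wants lockB
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100); // give T2 time to grab lockB
                synchronized (lockB) {
                    System.out.println("T1 got both locks");
                }
            }
        }).start();

        // T2 acquires lockB, then wants lockA -> circular wait
        new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) {
                    System.out.println("T2 got both locks");
                }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

Acquiring the locks in one fixed global order breaks the circular-wait condition and removes the deadlock.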

Livelock: thread T1 can use the resource, but it is very polite and lets other threads use it first; thread T2 can also use the resource, but being a gentleman it likewise yields first. You yield to me, I yield to you, and in the end neither thread ever gets to use the resource.

It is like meeting someone on the street who happens to be walking in the opposite direction: meeting head-on, you both try to let the other pass. You step to the left, and she steps to the left too, so you still cannot pass; you step to the right, she steps to the right, and the cycle continues.
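A minimal sketch of this mutual-yielding pattern, assuming two threads that each grab one ReentrantLock and politely back off whenever the other lock is busy (the class and method names are made up for illustration; with symmetric timing the threads can keep retreating in lockstep indefinitely):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LivelockDemo {
    public static void main(String[] args) {
        ReentrantLock lockA = new ReentrantLock();
        ReentrantLock lockB = new ReentrantLock();
        new Thread(() -> work(lockA, lockB), "T1").start();
        new Thread(() -> work(lockB, lockA), "T2").start();
    }

    static void work(ReentrantLock first, ReentrantLock second) {
        while (true) {
            first.lock();
            try {
                // politely give up if the other lock is busy, then retry;
                // both threads may keep retreating in lockstep: a livelock
                if (second.tryLock()) {
                    try {
                        System.out.println(Thread.currentThread().getName() + " got both locks");
                        return;
                    } finally {
                        second.unlock();
                    }
                }
            } finally {
                first.unlock();
            }
        }
    }
}
```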

Starvation: suppose thread T1 holds a lock on resource R, and thread T2 requests R and is blocked, so T2 waits. T3 then also requests R. When T1 releases R, the system grants T3's request first, while T2 keeps waiting. T4 then requests R, and after T3 releases it the system grants T4's request... T2 may end up waiting forever.

Imagine two congested roads, A and B. Road A has been backed up the longest, while road B has been backed up for a shorter time. When the road ahead clears, the traffic police, following a "best allocation" principle, wave road B through first, and car after car on road B passes while road A, which has queued the longest, does not move; road A only gets to go once road B is empty. This illustrates the unfair lock mechanism that ReentrantLock provides (ReentrantLock also provides a fair lock mechanism, and the user decides which locking strategy to use according to the specific scenario). An unfair lock can improve throughput, but it inevitably risks starving some threads.
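The choice between the two strategies is just a constructor flag on ReentrantLock (this is the real API; the field names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

class LockChoice {
    // unfair (the default): an arriving thread may jump the queue -> higher throughput
    final ReentrantLock unfair = new ReentrantLock();

    // fair: the longest-waiting thread acquires the lock next -> no starvation
    final ReentrantLock fair = new ReentrantLock(true);
}
```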

Concurrency level

Concurrency levels are divided into blocking and non-blocking (non-blocking is further divided into obstruction-free, lock-free, and wait-free).

Blocking

When a thread enters the critical section, other threads must wait

Obstruction-free

Obstruction-free is one of the weakest non-blocking schedules. Threads may enter the critical section freely; when there is no contention, an operation completes within a bounded number of steps; when there is contention, the data is rolled back.

Compared with non-blocking scheduling, blocking scheduling is a pessimistic strategy: it assumes that concurrent modifications will probably corrupt the data. Non-blocking scheduling is an optimistic strategy: it assumes that concurrent modifications will probably not conflict, but it is "lenient on entry, strict on exit": when it detects that threads are contending on the data in the critical section and a conflict occurs, the obstruction-free schedule rolls the data back.

Under this obstruction-free schedule, each thread effectively works on a snapshot of the current system state and keeps retrying until its snapshot is consistent.

Lock-free

Lock-free is obstruction-free plus the guarantee that in each round of contention one thread wins. Obstruction-freedom alone does not guarantee that an operation completes when there is contention: if every attempt conflicts, the threads keep retrying forever, all of them stuck in the critical section, which badly hurts system performance.

Lock-free adds this new condition: each round of competition is guaranteed to have a winner. That fixes the problem with obstruction-freedom and at least guarantees that the system as a whole keeps making progress.

The following code is a typical lock-free loop in Java, built on compare-and-set (CAS):

AtomicInteger atomicVar = new AtomicInteger(0); // the shared variable (java.util.concurrent.atomic)
int localVar = atomicVar.get();                 // read the current value
// retry the CAS until this thread wins a round of the competition
while (!atomicVar.compareAndSet(localVar, localVar + 1)) {
    // another thread won: re-read the latest value and try again
    localVar = atomicVar.get();
}
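Conceptually, this retry loop is what AtomicInteger.incrementAndGet() already does for you internally, so in application code you would normally just call atomicVar.incrementAndGet().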

Wait-free

Wait-free is lock-free plus the requirement that every thread completes within a bounded number of steps, so no thread can starve.

Wait-freedom builds on lock-freedom. Lock-freedom only guarantees that the critical section is always being entered and exited by someone; if high-priority threads keep arriving, low-priority threads inside the critical section may starve and never get out. Wait-freedom solves this problem: it guarantees that all threads complete within a bounded number of steps, so starvation naturally cannot occur.

Wait-free is the highest concurrency level; it can bring the system to its optimal state. A typical wait-free case: if there are only reader threads and no writer threads, the accesses are necessarily wait-free. If there are both reader and writer threads, let each writer first make a copy of the data and modify the copy instead of the original; since only the copy is modified, there is no conflict, and the modification itself is wait-free. The only step that still needs synchronization is swapping the finished copy in over the original data. Because wait-freedom is demanding and hard to implement, lock-free techniques are used far more widely.
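The JDK applies this copy-then-swap idea in java.util.concurrent.CopyOnWriteArrayList, where reads go to the current backing array without locking while each write replaces that array with a modified copy. A minimal sketch:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a"); // each write copies the backing array, then swaps it in
        list.add("b");
        // readers iterate over an immutable snapshot: no locks,
        // and never a ConcurrentModificationException
        for (String s : list) {
            System.out.println(s);
        }
    }
}
```

Note that only the readers are wait-free here; the writers still synchronize with one another internally, which matches the point above that full wait-freedom is hard to achieve.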

Two important laws about parallelism

Both laws are related to speedup

Amdahl's Law

**Amdahl's Law: defines the formula and the theoretical upper bound for the speedup gained by parallelizing a serial system (speedup = system time before optimization / system time after optimization).**

A program (or an algorithm) can be divided into the following two parts according to whether it can be parallelized:

the part that can be parallelized
the part that cannot be parallelized

Suppose a program processes files on disk. A small part of the program scans the path and builds the file directory in memory; after that, each file is handed to a separate thread for processing. Scanning the path and building the directory cannot be parallelized, but processing the files can.
Writing F for the fraction of the program that must stay serial and n for the number of processors, Amdahl's Law gives speedup = 1 / (F + (1 − F)/n); as n grows without bound, the speedup approaches its upper limit 1/F.
Merely increasing the number of CPUs therefore does not necessarily help. Only by increasing the proportion of the system that can be parallelized, and increasing the number of processors reasonably, can you obtain the largest speedup for the smallest investment.
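For example, if the serial fraction is F = 0.5, then with n = 4 processors the speedup is 1 / (0.5 + 0.5/4) = 1.6, and even with infinitely many processors it can never exceed 1/F = 2.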

Gustafson's Law

Gustafson's Law: describes the relationship between the number of processors, the serial fraction, and the speedup. With serial fraction F and n processors, it gives speedup = n − F(n − 1).

As long as there is enough work to parallelize, the speedup grows in proportion to the number of CPUs.
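For example, with a serial fraction F = 0.1 and n = 100 processors, Gustafson's formula gives a speedup of 100 − 0.1 × 99 = 90.1: keep the parallel portion of the workload large enough and the speedup stays nearly linear in the number of CPUs.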



Origin blog.csdn.net/weixin_49794051/article/details/112216900