Synchronized principle and optimization process

Table of contents

Foreword

1. The synchronized feature

1.1 Atomicity

1.2 Visibility

1.3 Orderliness

1.4 Reentrancy

2. The usage of synchronized

2.1 Modifying methods

Modifying static methods

2.2 Modifying code blocks

3. The characteristics of synchronized

4. The working process of synchronized locking

4.1 Biased lock

4.2 Lightweight lock (spin lock)

4.3 Heavyweight locks

5. Other synchronized optimizations

5.1 Lock Elimination

5.2 Lock coarsening

Summary

Foreword

If a resource is shared by multiple threads, we need to synchronize the threads to avoid thread-safety problems caused by threads preempting the resource. synchronized ensures that no bugs arise from threads executing in an interleaved, preemptive way; it is an important feature of concurrent programming. Let's look at the underlying principle of synchronized.

1. The synchronized feature

1.1 Atomicity

Atomicity means that a block of code or a statement either executes in its entirety, without being interrupted partway through, or does not execute at all.

Guaranteeing atomicity is essential in concurrent programming. In Java, some simple assignment statements are themselves atomic operations, such as int a = 10; an operation like this maps to a single CPU instruction at the bottom layer, so even in a multi-threaded environment it causes no atomicity problems.

But operations such as i++; or i += 1; are not atomic.

At the operating-system level they break down into three steps: load (read the value), add (compute), and store (write the result back).

In a multi-threaded environment, this operation will cause serious problems.

How to solve it? synchronized appeared.

synchronized guarantees that the problem above causes no bugs. During the operation, synchronized first acquires the lock, and does not release it until execution completes. While the lock is held, no other thread can enter the synchronized code block or perform the locked operation, until synchronized releases the lock.
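As a minimal sketch of this fix (the class name, thread count, and loop bound below are my own choices, not from the original): two threads each increment a shared counter 100,000 times through a synchronized method, so no update is lost.

```java
public class SyncCounter {
    private static int count = 0;

    // synchronized makes the load-add-store of count++ one atomic step
    private static synchronized void increment() {
        count++;
    }

    static int run() throws InterruptedException {
        count = 0; // reset so the demo is repeatable
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        // With synchronized the result is always 200000; with a plain
        // count++ it would usually be smaller because of lost updates.
        System.out.println(run());
    }
}
```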

Interview question: the difference between synchronized and volatile

The biggest difference between synchronized and volatile is atomicity: synchronized guarantees atomicity, while volatile does not. volatile guarantees memory visibility; synchronized also guarantees memory visibility.

1.2 Visibility

Visibility means that when multiple threads access a resource, changes to the resource's state and value are visible to the other threads.

Both synchronized and volatile can guarantee memory visibility.

When synchronized acquires the lock for a resource, it locks that resource. Other threads that want to operate on the resource must wait for the lock to be released before they can acquire it.

While synchronized holds the lock, the lock's state is visible to other threads, and before the lock is released, the modified variables and values are written back to main memory, which guarantees visibility. Main memory is shared between threads, so when other threads operate afterwards, they read the data from main memory.
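A small sketch of this guarantee (class and method names are mine; Thread.onSpinWait() requires Java 9+): a waiting thread keeps reading a flag through a synchronized getter until it observes the write another thread made through the synchronized setter.

```java
public class VisibleFlag {
    private boolean ready = false;

    // synchronized write: flushed to main memory when the lock is released
    public synchronized void setReady() { ready = true; }

    // synchronized read: re-read from main memory after acquiring the lock
    public synchronized boolean isReady() { return ready; }

    static String demo() throws InterruptedException {
        VisibleFlag flag = new VisibleFlag();
        Thread waiter = new Thread(() -> {
            while (!flag.isReady()) {
                Thread.onSpinWait(); // busy-wait; fine for a demo
            }
        });
        waiter.start();
        Thread.sleep(50);
        flag.setReady(); // the waiter is guaranteed to see this write
        waiter.join();
        return "flag seen";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

Without the synchronized (or volatile) accessors, the waiter might loop forever on a stale cached value of ready.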

1.3 Orderliness

Orderliness means that the program executes in the order the code is written.

synchronized can also guarantee ordering. In Java, the compiler and processor are allowed to reorder instructions as long as the result of the single-threaded program is unchanged.

Instruction reordering will have no effect in a single-threaded environment, but it will cause problems in a multi-threaded environment.

Synchronized ensures that only one thread can operate at a certain moment, which also ensures order.

1.4 Reentrancy

Synchronized also has reentrant features.

When a thread tries to operate the lock object held by other threads, it will block and wait.

But if a thread locks the same object twice with synchronized, without unlocking in between, it will not deadlock.

This is a reentrant lock: in plain terms, the thread that already owns the lock object can acquire that same lock again.
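A minimal illustration (the names are mine): a synchronized method calls another synchronized method on the same object, so the same thread acquires the same lock twice without deadlocking.

```java
public class Reentrant {
    private final StringBuilder trace = new StringBuilder();

    public synchronized String outer() {
        trace.append("outer acquired;");
        inner(); // the same thread takes the same lock again - no deadlock
        return trace.toString();
    }

    public synchronized void inner() {
        trace.append("inner re-acquired");
    }

    public static void main(String[] args) {
        System.out.println(new Reentrant().outer());
    }
}
```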

If you want to know more about thread safety issues, you can refer to this article

https://blog.csdn.net/qq_63525426/article/details/129832560?spm=1001.2014.3001.5501

2. The usage of synchronized

synchronized can modify static methods and ordinary (instance) methods, and it can also wrap code blocks. Ultimately, though, it locks only two kinds of resources: an object, or a class.

2.1 Modifying methods

class test{
    int a = 100;
    
    public synchronized void add() {
        a++;
    }
}

When modifying an ordinary method, synchronized locks the current test object.

Modifying static methods

This locks the class object of the current class:

class test{
    int a = 100;
    static int b = 1;
    public synchronized void add() {
        a++;
    }
    public static synchronized void add1() {
        b++;
    }
}
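The two method forms above can be written equivalently as code blocks: synchronized (this) corresponds to a synchronized instance method, and synchronized (test.class) to a synchronized static method. A sketch (the class name test2 is mine, to avoid clashing with the earlier example):

```java
class test2 {
    int a = 100;
    static int b = 1;

    // Equivalent to the synchronized instance method: locks this object
    public void add() {
        synchronized (this) {
            a++;
        }
    }

    // Equivalent to the synchronized static method: locks the class object
    public static void add1() {
        synchronized (test2.class) {
            b++;
        }
    }
}
```

The instance lock and the class lock are independent: a thread inside add() does not block a thread inside add1().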

2.2 Modifying code blocks

class test{
    int a = 100;
    static int b = 1;
   
    Object object = new Object();
    public void add2() {
        synchronized (object) {
            a+=b;
        }
    }
}

The code above locks the object object. If another thread also tries to lock object, it will block and wait.

Blocking occurs only when multiple threads lock the same object.

3. The characteristics of synchronized

  • At first it is an optimistic lock; if lock conflicts are frequent, it is converted to a pessimistic lock

  • At first it is a lightweight lock; if the lock is held for a long time, it is converted to a heavyweight lock

  • The lightweight lock is most likely implemented with a spin-lock strategy

  • It is an unfair lock

  • It is a reentrant lock

  • It is not a read-write lock

4. The working process of synchronized locking

synchronized has four states: lock-free, biased lock, lightweight lock, and heavyweight lock, and it upgrades between them according to the actual contention.

4.1 Biased lock

While the program runs, the JVM optimizes locking as follows:

  • While the program runs, the biased lock lets a thread place a mark on the lock. This mark is not a real lock.
  • If no other thread competes with this thread for the lock during the program's entire run, there is no need to really lock at all.
  • If another thread does compete for the lock, the earlier mark takes effect: the biased lock is immediately upgraded to a lightweight lock, and the other threads have to wait.

In this way, both the overall efficiency of the program and thread safety are preserved.

4.2 Lightweight lock (spin lock)

By this point, more and more threads have joined the competition for the lock. Suppose there are 10 threads but only 1 lock object: only one thread can hold the lock at a time, and the other 9 threads spin and wait.

The spin lock is implemented based on CAS.

The remaining 9 threads spin-wait. Spinning is very fast but consumes a great deal of CPU, so in this situation the lock is upgraded to a heavyweight lock.
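As a sketch of the spin-lock strategy itself, here is a toy lock of my own built on AtomicBoolean.compareAndSet; this illustrates CAS-based spinning, not how HotSpot actually implements lightweight locks.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    private static final AtomicBoolean locked = new AtomicBoolean(false);
    private static int count = 0;

    static void lock() {
        // spin until the CAS from false -> true succeeds
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();
        }
    }

    static void unlock() {
        locked.set(false);
    }

    static int run() throws InterruptedException {
        count = 0; // reset so the demo is repeatable
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock();
                count++; // protected by the spin lock
                unlock();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The CAS acquire and the volatile release also establish happens-before, so the plain int count is safe here; with many threads, though, all this spinning burns CPU, which is exactly why the JVM escalates to a heavyweight lock.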

4.3 Heavyweight locks

After upgrading to a heavyweight lock, the thread will block and wait in the kernel.

It means that the thread will temporarily give up the CPU, and the kernel will perform subsequent scheduling.

5. Other synchronized optimizations

5.1 Lock Elimination

Lock elimination is an optimization done at compile time. When the code is compiled, the JVM checks whether the locked code can actually be reached by more than one thread and whether locking is really necessary. If a lock is taken where it is not needed, the lock is removed automatically at compile time.

If locking is not necessary, no lock is taken.
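A hedged example of code that can be treated this way (class and method names are mine): the lock object is a method-local that never escapes, so escape analysis can prove no other thread could ever lock it, and the synchronization may be removed. The result is the same either way; elimination only saves the locking cost.

```java
public class LockElision {
    // The lock object never escapes this method, so the JIT's escape
    // analysis can prove no other thread can ever contend on it and
    // may elide the synchronization entirely.
    static int sumLocal() {
        Object localLock = new Object();
        int sum = 0;
        for (int i = 1; i <= 100; i++) {
            synchronized (localLock) {
                sum += i;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumLocal()); // 5050 with or without elimination
    }
}
```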

Note: synchronized should not be abused; analyze each concrete scenario and choose an appropriate locking scheme.

There is only one rule for locking: lock competition occurs only when multiple threads lock the same object.

5.2 Lock coarsening

Lock coarsening is about lock granularity: how much code the synchronized code block contains.

The more code: the coarser the granularity.

Less code: finer granularity.

Generally, when we write code, we try to make the lock granularity as fine as possible (the less code runs serially, the more code can run concurrently).

Only in this way can the overall efficiency of the program be guaranteed to the greatest extent.

However, in scenarios where locking and unlocking happen frequently, the compiler may optimize these operations into a single coarser-grained lock.

Every lock and unlock has overhead; in particular, after a lock is released, acquiring it again means competing for it again.

Each lock competition may introduce a certain amount of waiting overhead, and the overall efficiency may be reduced at this time.
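A sketch of the transformation (names and loop bounds are mine; whether the JIT actually coarsens a given loop is its own decision): the fine-grained version locks on every iteration, while the coarsened version takes the lock once around the whole loop, with the same result.

```java
public class Coarsening {
    private static final Object lock = new Object();

    // Fine-grained: lock and unlock on every iteration
    static int sumFine() {
        int sum = 0;
        for (int i = 1; i <= 1000; i++) {
            synchronized (lock) {
                sum += i;
            }
        }
        return sum;
    }

    // Coarsened: one lock around the whole loop - roughly the form
    // the JIT may rewrite sumFine() into
    static int sumCoarse() {
        int sum = 0;
        synchronized (lock) {
            for (int i = 1; i <= 1000; i++) {
                sum += i;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumFine() + " " + sumCoarse()); // both 500500
    }
}
```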

Summary

Synchronized is a very important knowledge point in Java concurrent programming.

I hope the description above gives you an overall understanding of synchronized. If anything is wrong, please forgive me.


Origin blog.csdn.net/qq_63525426/article/details/130125946