[JavaEE] A deep dive into the synchronized keyword in Java

Table of contents

 1. The characteristics of synchronized

(1) Mutual exclusion

(2) Memory visibility

(3) Reentrancy

2. The uses of synchronized

(1) Modifying instance methods

(2) Modifying static methods

(3) Modifying code blocks

3. The synchronized lock mechanism

(1) Basic features

(2) How locking works

1. Biased lock

2. Lightweight lock

3. Heavyweight lock

(3) Optimizations

1. Lock elimination

2. Lock coarsening

4. The difference between synchronized and volatile


 

 1. The characteristics of synchronized

(1) Mutual exclusion

         synchronized achieves atomicity (one of the key properties required for thread safety) through mutual exclusion.

        synchronized has a mutual-exclusion effect: when one thread is executing inside a synchronized block guarded by some object, any other thread that reaches a synchronized block guarded by the same object blocks and waits. At any moment, only one thread can hold the lock and execute the code.
1. Entering a code block modified by synchronized is equivalent to locking.
2. Exiting a code block modified by synchronized is equivalent to unlocking.

synchronized void increase() { // entering the method locks the current object
    count++;
} // returning from the method unlocks the current object

        The lock used by synchronized is stored in the Java object header; at the bottom layer, the heavyweight path of synchronized is implemented with the operating system's mutex.
        Roughly speaking, every object in memory carries a small piece of state that records whether it is currently "locked". If it is "unlocked", a thread can take it, setting the state to "locked" while in use. If it is already "locked", other threads cannot take it and can only wait in line: one thread locks first, and the others must wait until that thread releases the lock.

        Important points:

        For each lock, the operating system maintains a waiting queue. When the lock is held by one thread, any other thread that tries to acquire it fails, blocks, and waits. When the holder unlocks, the operating system wakes up one of the waiting threads, which then acquires the lock.

        After the previous thread unlocks, the next thread does not get the lock immediately; it has to be "woken up" by the operating system, which is part of normal thread scheduling.
        Suppose there are three threads A, B, and C. A acquires the lock first, then B tries to acquire it, and then C; B and C both wait in the blocking queue. When A releases the lock, B is not guaranteed to get it just because it arrived before C: it has to compete with C again. There is no first-come, first-served rule.
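To see mutual exclusion in action, here is a minimal sketch (the class name SyncCounter and the iteration counts are illustrative, not from the original post). Two threads increment a shared counter through a synchronized method; without the lock, the final value would usually fall short of 100000 because count++ is not atomic.

```java
public class SyncCounter {
    private int count = 0;

    // synchronized makes the read-modify-write of count atomic
    public synchronized void increase() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 50_000; i++) {
                c.increase();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // 100000: no increments are lost
    }
}
```

Removing synchronized from increase() lets the two threads' read-modify-write steps interleave, which is exactly the lost-update problem mutual exclusion prevents.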

(2) Memory visibility

        By locking and unlocking, synchronized also guarantees memory visibility.

The working process of synchronized:
1. Acquire the mutex.
2. Copy the latest value of the shared variable from main memory into working memory.
3. Execute the code.
4. Flush the changed value of the shared variable back to main memory.
5. Release the mutex.
So synchronized guarantees memory visibility as well.
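The five steps above can be sketched with a flag shared between two threads (the class name SyncFlag is illustrative). Because both the reader and the writer go through synchronized methods, the waiting thread is guaranteed to observe the update flushed to main memory.

```java
public class SyncFlag {
    private boolean ready = false;

    // acquire lock -> read latest value from main memory -> release lock
    public synchronized boolean isReady() {
        return ready;
    }

    // acquire lock -> modify -> flush to main memory -> release lock
    public synchronized void setReady(boolean value) {
        ready = value;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncFlag flag = new SyncFlag();
        Thread waiter = new Thread(() -> {
            // the synchronized getter guarantees this loop eventually sees the update
            while (!flag.isReady()) {
                Thread.onSpinWait();
            }
            System.out.println("flag observed");
        });
        waiter.start();
        Thread.sleep(100);
        flag.setReady(true);
        waiter.join();
    }
}
```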

(3) Reentrancy

         synchronized only places certain constraints on instruction reordering; it cannot prohibit reordering completely. More importantly, a synchronized block is reentrant for the same thread, so a thread cannot deadlock by locking against itself.

// With a non-reentrant lock, the same thread doing this would deadlock:
lock(); // first lock: succeeds
lock(); // second lock: the lock is already held, so the thread blocks and waits

        "Locking against itself" means that a thread, without releasing the lock it already holds, tries to lock again.

        Under the naive locking scheme described earlier, the second lock() blocks until the first lock is released. But releasing the first lock is also this thread's job, and the thread is stuck blocking, so the unlock can never happen. The result is a deadlock. Such a lock is called a non-reentrant lock.

        synchronized in Java is a reentrant lock, so the problem above does not occur. A reentrant lock internally keeps two pieces of information: the "owning thread" and a "counter". If a thread finds the lock already held when it tries to lock, and the holder happens to be itself, it can still acquire the lock, incrementing the counter. On unlock the counter is decremented, and the lock is only truly released, becoming available to other threads, when the counter reaches 0.
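Reentrancy is easy to demonstrate: a synchronized method calls another synchronized method on the same object, re-acquiring a lock it already holds. (The class name ReentrantDemo is illustrative.) With a non-reentrant lock, the inner call would block forever.

```java
public class ReentrantDemo {
    private int count = 0;

    public synchronized void outer() {
        // this thread already holds the lock on "this"...
        inner(); // ...re-acquiring it here succeeds; the counter is just incremented
    }

    public synchronized void inner() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }

    public static void main(String[] args) {
        ReentrantDemo d = new ReentrantDemo();
        d.outer(); // would deadlock with a non-reentrant lock
        System.out.println(d.getCount()); // 1
    }
}
```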

2. The uses of synchronized

(1) Modifying instance methods

Locks the current SynchronizedDemo instance (this):

public class SynchronizedDemo {
    public synchronized void method() {
    }
}

(2) Modifying static methods

Locks the SynchronizedDemo class object (SynchronizedDemo.class):
 

public class SynchronizedDemo {
    public static synchronized void method() {
    }
}

(3) Modifying code blocks

Explicitly specifies the lock object.

Locking the current object:

public class SynchronizedDemo {
    public void method() {
        synchronized (this) {
        }
    }
}

Locking the class object:

public class SynchronizedDemo {
    public void method() {
        synchronized (SynchronizedDemo.class) {
        }
    }
}

        Note that blocking only arises when two threads compete for the same lock. Two threads acquiring two different locks do not compete at all.
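A minimal sketch of that last point (the class name TwoLocks and the field names are illustrative): two independent lock objects guard two independent counters, so a thread inside incA never blocks a thread inside incB.

```java
public class TwoLocks {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int a = 0;
    private int b = 0;

    // guarded by lockA only
    public void incA() {
        synchronized (lockA) {
            a++;
        }
    }

    // guarded by lockB only: a thread in here never waits for a thread in incA
    public void incB() {
        synchronized (lockB) {
            b++;
        }
    }

    public int getA() {
        synchronized (lockA) {
            return a;
        }
    }

    public int getB() {
        synchronized (lockB) {
            return b;
        }
    }
}
```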

3. The synchronized lock mechanism

(1) Basic features

        Considering JDK 1.8 only: the JVM divides synchronized locks into four states: lock-free, biased lock, lightweight lock, and heavyweight lock, and upgrades from one to the next as contention grows.
1. It starts out as an optimistic lock; under frequent lock conflicts it converts to a pessimistic lock.
2. It starts out with a lightweight-lock implementation; if the lock is held for a long time, it converts to a heavyweight lock.
3. The lightweight-lock implementation most likely uses a spin-lock strategy.
4. It is an unfair lock.
5. It is a reentrant lock.
6. It is not a read-write lock.

(2) How locking works

1. Biased lock

        The first thread that tries to acquire the lock enters the biased-lock state first.
        A biased lock does not really "lock" anything; it only sets a "bias mark" in the object header to record which thread the lock belongs to. If no other thread ever competes for the lock, no further synchronization is needed, and the overhead of locking and unlocking is avoided entirely. If another thread does compete later, the lock object already records which thread owns it, so it is easy to tell whether the thread now applying for the lock is the recorded one; the biased state is then revoked and the lock enters the ordinary lightweight-lock state. A biased lock is essentially "lazy locking": avoid locking whenever possible to skip unnecessary overhead, but still keep the mark that must be kept, or it would be impossible to tell when real locking becomes necessary.

        Interviews often ask what a biased lock is. In short: a biased lock does not really lock; it just records a mark in the lock object's header naming the owning thread. If no other thread competes for the lock, no actual locking is performed, which reduces program overhead. Once real competition from another thread appears, the biased state is revoked and the lock enters the lightweight state.

2. Lightweight lock

        As other threads enter the competition, the biased-lock state ends and the lock enters the lightweight-lock state (an adaptive spin lock).
The lightweight lock here is implemented with CAS.
        A CAS (compare-and-swap) checks and updates a piece of memory (for example, null => a reference to the owning thread). If the update succeeds, the lock is considered acquired. If it fails, the lock is considered occupied, and the thread keeps spin-waiting without giving up the CPU.
Spinning keeps the CPU busy the whole time, which wastes CPU resources, so the spinning does not continue forever: after a certain amount of time or number of retries, it stops. This is why it is called "adaptive".
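A toy spin lock can mirror the CAS step described above: swing a word from null to a reference to the owning thread, and spin while the swap fails. This is a user-level sketch of the idea, not the JVM's actual lightweight-lock implementation, and unlike synchronized it is not reentrant.

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // null means unlocked; a non-null value is the owning thread
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // CAS: null => current thread; busy-wait (spin) while another thread owns it
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning
        }
    }

    public void unlock() {
        // only the owner may release: CAS current thread => null
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

A real adaptive spin lock would additionally cap the number of spin iterations and fall back to blocking, which is exactly the inflation to a heavyweight lock described next.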

3. Heavyweight lock

        If competition becomes fiercer still and spinning cannot acquire the lock quickly, the lock inflates into a heavyweight lock, meaning the mutex provided by the kernel.
        To lock, the thread first enters kernel mode, where the kernel checks whether the lock is already held. If not, the lock is acquired and the thread switches back to user mode. If the lock is held, the acquisition fails; the thread is placed on the lock's waiting queue and suspended, waiting to be woken up by the operating system. Eventually, when another thread releases the lock, the operating system wakes the suspended thread, which then tries to acquire the lock again.

(3) Optimizations

1. Lock elimination

The compiler and the JVM judge whether a lock can be eliminated; if it can, they remove it.

Lock elimination: some code uses synchronized internally even though it never runs in a multithreaded context. StringBuffer is the classic example: every append call locks and unlocks. If such code only ever executes on a single thread, those lock and unlock operations are pure wasted overhead and can be eliminated.
StringBuffer sb = new StringBuffer();
sb.append("a");
sb.append("b");
sb.append("c");
sb.append("d");

2. Lock coarsening

If a piece of logic locks and unlocks the same lock repeatedly, the compiler and the JVM will automatically coarsen the lock.

Lock granularity can be coarse or fine.

        In practice, fine-grained locks are used in the hope that other threads can grab the lock between critical sections. But there may not actually be any other thread competing for it. In that case the JVM automatically coarsens the lock to avoid repeatedly acquiring and releasing it. An analogy, assigning tasks to a subordinate. Method 1: call, assign task 1, hang up; call, assign task 2, hang up; call, assign task 3, hang up. Method 2: call once, assign tasks 1, 2, and 3, then hang up. Method 2 is obviously the more efficient approach; that is lock coarsening.
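The phone-call analogy maps directly onto code. A rough sketch (the class name CoarsenDemo is illustrative): the fine-grained version locks and unlocks once per loop iteration, while the coarsened version takes the lock once around the whole loop, which is the transformation the JVM may perform automatically when no other thread contends.

```java
public class CoarsenDemo {
    private final Object lock = new Object();
    private int total = 0;

    // fine-grained: lock and unlock on every iteration ("one call per task")
    public void addAllFine(int[] values) {
        for (int v : values) {
            synchronized (lock) {
                total += v;
            }
        }
    }

    // coarsened: one lock/unlock around the whole loop ("one call, all tasks")
    public void addAllCoarse(int[] values) {
        synchronized (lock) {
            for (int v : values) {
                total += v;
            }
        }
    }

    public int getTotal() {
        synchronized (lock) {
            return total;
        }
    }
}
```

Both versions compute the same result; coarsening only changes how often the lock is taken.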

4. The difference between synchronized and volatile

        Both synchronized and volatile are Java keywords, and both are tools for thread safety, so they are often asked about together in interviews.

        In fact, the two solve different problems.

        synchronized:

        1. Bind a bunch of codes together by locking and unlocking to ensure atomicity.

        2. Through locking and unlocking, memory visibility is ensured.

        3. There are certain constraints on the reordering of instructions.

        volatile:

        1. Atomicity cannot be guaranteed.

        2. Ensure memory visibility.

        3. Prohibits instruction reordering (around accesses to the volatile variable).

       Although synchronized can guarantee thread safety in most cases, it should not be used under all circumstances, because it has a cost. synchronized works by locking and unlocking, so when another thread already holds the lock, the current thread blocks and gives up the CPU, and when it will be scheduled again is uncertain. Using synchronized therefore gives up some performance. volatile does not cause threads to block; it also has some performance impact, but far less than synchronized.

        Multithreading exists for efficiency, and using synchronized means giving up some of that efficiency; the two must be balanced.
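The classic use case where volatile suffices is a stop flag: only visibility is needed, no atomic read-modify-write. A minimal sketch (the class name VolatileFlag is illustrative); a volatile counter incremented by several threads, by contrast, would still lose updates, because volatile does not provide atomicity.

```java
public class VolatileFlag {
    // volatile guarantees the worker thread sees the write promptly;
    // without it, the JIT could hoist the read and spin forever
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    public boolean isRunning() {
        return running;
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag f = new VolatileFlag();
        Thread worker = new Thread(() -> {
            while (f.isRunning()) {
                Thread.onSpinWait(); // no lock, no blocking: just a visible read
            }
        });
        worker.start();
        Thread.sleep(50);
        f.stop();          // visible to the worker, which then exits its loop
        worker.join(1000);
        System.out.println(worker.isAlive() ? "stuck" : "stopped");
    }
}
```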



Origin blog.csdn.net/m0_63372226/article/details/128907998