AbstractQueuedSynchronizer Node documentation translation

I. Overview

The wait queue is a variant of a "CLH" (Craig, Landin, and Hagersten) lock queue. CLH locks are normally used for spin locks; here they are used instead for blocking synchronizers, but with the same basic tactic of holding some control information about a thread in the predecessor of its node. A "status" field in each node tracks whether its thread should block. A node is signalled when its predecessor releases. Otherwise, each node of the queue serves as a specific-notification-style monitor holding a single waiting thread. The status field does not, however, control whether a thread is granted the lock. The thread at the head of the queue may try to acquire it, but being first guarantees nothing; it only grants the right to contend. So a freshly signalled contender thread may still need to rewait, as the sketch below illustrates.
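To make that concrete, here is a minimal, self-contained sketch of the contend-and-rewait loop, modeled on the shape of acquireQueued in JDK 8 AQS. It is illustrative only: the class name is invented, cancellation and interrupt handling are omitted, and the real code first marks the predecessor with SIGNAL (see Section II) so that parking cannot miss a wakeup.

```java
import java.util.concurrent.locks.LockSupport;

// Sketch only: a simplified contend-and-rewait loop in the style of
// JDK 8 AQS.acquireQueued (no cancellation or interrupt handling).
abstract class ContendLoopSketch {
    static final class Node {
        volatile Node prev, next;
        volatile Thread thread = Thread.currentThread();
        Node predecessor() { return prev; }
    }

    volatile Node head;

    abstract boolean tryAcquire();   // defined by the concrete synchronizer

    void acquireQueued(Node node) {
        for (;;) {
            Node p = node.predecessor();
            // Being first only grants the right to contend: tryAcquire may
            // still fail (e.g. against a barging thread), and the thread
            // then parks and rewaits.
            if (p == head && tryAcquire()) {
                head = node;         // successful acquire: node becomes head
                node.thread = null;
                p.next = null;       // help GC
                return;
            }
            LockSupport.park(this);  // block until the predecessor signals
        }
    }
}
```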

Insertion into the CLH queue requires only a single atomic operation on "tail", so there is a simple atomic point of demarcation from unqueued to queued. Likewise, dequeuing involves only updating the "head". However, it takes more work for a node to determine who its successor is, in part to deal with possible cancellation due to timeouts and interrupts.
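Below is a self-contained sketch of this enqueue step, modeled on enq in JDK 8 AQS; the AtomicReference-based helpers stand in for the real CAS field updaters. The single CAS on "tail" is the demarcation point, and the dummy head described later in this overview is installed lazily here.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch only: lazy-initialized CLH enqueue in the style of JDK 8 AQS.enq.
class EnqueueSketch {
    static final class Node {
        volatile Node prev, next;
    }

    private final AtomicReference<Node> head = new AtomicReference<>();
    private final AtomicReference<Node> tail = new AtomicReference<>();

    Node enq(Node node) {
        for (;;) {
            Node t = tail.get();
            if (t == null) {                 // first contention:
                Node dummy = new Node();     // lazily install the dummy head
                if (head.compareAndSet(null, dummy))
                    tail.set(dummy);
            } else {
                node.prev = t;
                if (tail.compareAndSet(t, node)) {  // the atomic demarcation
                    t.next = node;   // set only after the CAS, so a null
                                     // next does not always mean "last node"
                    return t;        // return the predecessor, as enq does
                }
            }
        }
    }
}
```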

The "prev" link (not used in the original CLH lock) is mainly needed to handle cancellation. If a node is cancelled, its successor is (normally) relinked to a non-cancelled predecessor. For an explanation of similar mechanics in the case of spin locks, see the papers by Scott and Scherer at http://www.cs.rochester.edu/u/scott/synchronization/.
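As a sketch of that relinking, here is the skip-cancelled loop modeled on the one inside shouldParkAfterFailedAcquire in JDK 8 AQS (simplified: the real code also arranges the predecessor's SIGNAL status before the thread parks).

```java
// Sketch only: repair prev links past cancelled predecessors.
class PrevRepairSketch {
    static final class Node {
        volatile Node prev, next;
        volatile int waitStatus;   // > 0 means CANCELLED (see Section II)
    }

    // Relink node past cancelled predecessors so it stabilizes on a
    // non-cancelled one; this terminates because the head is never cancelled.
    static void skipCancelledPredecessors(Node node) {
        Node pred = node.prev;
        while (pred.waitStatus > 0)
            node.prev = pred = pred.prev;
        pred.next = node;
    }
}
```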

We also use the "next" link to implement the blocking mechanics. The thread id of each node is kept in its own node, so a predecessor signals the next node to wake up by traversing the next link to determine which thread it is. Determining the successor must avoid races with newly queued nodes that are still setting the "next" fields of their predecessors. This is solved, when necessary, by checking backwards from the atomically updated "tail" when a node's next field appears to be null. (Put differently, the next links are an optimization, so we usually don't need a backward scan.)
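A sketch of that backward scan, modeled on unparkSuccessor in JDK 8 AQS (the initial status-clearing CAS of the real method is omitted):

```java
import java.util.concurrent.locks.LockSupport;

// Sketch only: find and wake the successor, falling back to a backward
// scan from tail because next links are just an optimization.
class BackwardScanSketch {
    static final class Node {
        volatile Node prev, next;
        volatile int waitStatus;   // > 0 means cancelled
        volatile Thread thread;
    }

    volatile Node tail;

    void unparkSuccessor(Node node) {
        Node s = node.next;
        if (s == null || s.waitStatus > 0) {
            s = null;
            // A racing enq may not have set "next" yet; the prev chain
            // from the atomically updated tail is the authoritative view.
            for (Node t = tail; t != null && t != node; t = t.prev)
                if (t.waitStatus <= 0)
                    s = t;          // remember the closest unparkable node
        }
        if (s != null)
            LockSupport.unpark(s.thread);
    }
}
```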

Cancellation introduces some conservatism into the basic algorithm. Since we must poll other nodes for cancellation, we can miss noticing whether a cancelled node is ahead of or behind us. This is dealt with by always unparking successors upon cancellation, allowing them to stabilize on a new predecessor, unless we can identify an uncancelled predecessor that will carry this responsibility.
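The following is a loose, simplified sketch of that policy, based on cancelAcquire in JDK 8 AQS. The real method additionally splices the node out with CAS operations, handles the tail case, and falls back to the backward scan shown above when the successor is unknown.

```java
import java.util.concurrent.locks.LockSupport;

// Sketch only: conservative cancellation in the style of
// JDK 8 AQS.cancelAcquire.
class CancelSketch {
    static final int CANCELLED = 1, SIGNAL = -1;

    static final class Node {
        volatile Node prev, next;
        volatile int waitStatus;
        volatile Thread thread;
    }

    volatile Node head;

    void cancel(Node node) {
        node.thread = null;
        node.waitStatus = CANCELLED;
        Node pred = node.prev;
        Node s = node.next;
        // Either an uncancelled, signalling predecessor takes over the duty
        // of waking s, or we conservatively unpark s ourselves so it can
        // stabilize on a new predecessor.
        boolean predTakesOver =
            pred != head && pred.waitStatus == SIGNAL && pred.thread != null;
        if (predTakesOver && s != null && s.waitStatus <= 0)
            pred.next = s;
        else if (s != null)
            LockSupport.unpark(s.thread);
        node.next = node;   // point next at itself, aiding isOnSyncQueue
    }
}
```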

The CLH queue needs a dummy head node to get started. But we don't create it on construction, since that would be wasted effort if there is never any contention. Instead, the node is constructed and the head and tail pointers are set upon first contention (the lazy initialization visible in the enqueue sketch above).

Threads waiting on conditions use the same nodes, but with an additional link. Conditions only need to link nodes in a simple (non-concurrent) linked queue, because they are accessed only while the lock is exclusively held. Upon await, a node is inserted into the condition queue. Upon signal, the node is transferred to the main queue, as sketched below. A special value of the status field marks which queue a node is on.
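A sketch of that signal-time transfer, modeled on transferForSignal in JDK 8 AQS; enq is assumed to be the sync-queue append from the enqueue sketch above, and the field updater stands in for the real CAS helper.

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import java.util.concurrent.locks.LockSupport;

// Sketch only: move a node from the condition queue to the sync queue,
// in the style of JDK 8 AQS.transferForSignal.
abstract class ConditionTransferSketch {
    static final int SIGNAL = -1, CONDITION = -2;

    static final class Node {
        volatile Node prev, next;
        Node nextWaiter;           // plain link: the condition queue is
                                   // only touched while the lock is held
        volatile int waitStatus;
        volatile Thread thread;
    }

    private static final AtomicIntegerFieldUpdater<Node> STATUS =
        AtomicIntegerFieldUpdater.newUpdater(Node.class, "waitStatus");

    abstract Node enq(Node node);  // appends node, returns its predecessor

    boolean transferForSignal(Node node) {
        // "The status is set to 0 at that time": CONDITION -> 0 marks the
        // node as a sync queue node; failure means it was cancelled.
        if (!STATUS.compareAndSet(node, CONDITION, 0))
            return false;
        Node p = enq(node);
        int ws = p.waitStatus;
        // If the predecessor is cancelled or cannot be marked SIGNAL, wake
        // the thread directly so it can resynchronize on its own.
        if (ws > 0 || !STATUS.compareAndSet(p, ws, SIGNAL))
            LockSupport.unpark(node.thread);
        return true;
    }
}
```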

II. Attributes

int waitStatus: the status of the node. It takes one of the following values:

  • SIGNAL (-1): The successor of this node is (or will soon be) blocked via park, so the current node must unpark its successor when it releases or is cancelled. To avoid races, acquire methods must first indicate that they need a signal, then retry the atomic acquire, and then, on failure, block.
  • CONDITION (-2): The node is currently on a condition queue. It will not be used as a sync queue node until transferred, at which point its status is set to 0. (The use of this value here has nothing to do with the other uses of the field; it simply simplifies the mechanics.)
  • PROPAGATE (-3): A releaseShared should be propagated to other nodes. This is set (for the head node only) in doReleaseShared to ensure that propagation continues even if other operations have since intervened.
  • CANCELLED (1): The node has been cancelled due to timeout or interrupt. A node never leaves this state; in particular, a thread with a cancelled node never blocks again.
  • 0: None of the above; the node is in the sync queue, waiting to acquire the lock.
  • Node prev: Link to the predecessor node that the current node/thread relies on for checking waitStatus. It is assigned during enqueuing, and nulled out only upon dequeuing (for the sake of GC). Also, upon cancellation of a predecessor, we short-circuit while finding a non-cancelled one, which always exists because the head node is never cancelled: a node becomes head only as a result of a successful acquire. A cancelled thread never succeeds in acquiring, and a thread only cancels itself, not any other node.
  • Node next: Link to the successor node that the current node/thread unparks upon release. It is assigned during enqueuing, adjusted when bypassing cancelled predecessors, and nulled out when dequeuing (for the sake of GC). The enq operation does not assign the next field of a predecessor until after attachment, so seeing a null next field does not necessarily mean the node is at the tail of the queue. However, if a next field appears to be null, we can scan prev links from the tail to double-check. The next field of a cancelled node is set to point to the node itself instead of null, to make the work of isOnSyncQueue easier.
  • Thread thread: The thread that enqueued this node. It is initialized during construction and nulled out after use.
  • Node nextWaiter: Link to the next node waiting on a condition, or the special value SHARED. Because condition queues are accessed only when holding the lock in exclusive mode, we just need a simple linked queue to hold nodes while they wait on conditions; they are then transferred to the main queue to re-acquire. And because conditions can only be exclusive, we save a field by using a special value to indicate shared mode.
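Pulling these attributes together, here is a compact, illustrative model of the Node structure this document describes. Names follow AQS, but this is a sketch, not the JDK source.

```java
// Sketch only: the Node shape described in Section II, with the status
// constants from the table above.
final class NodeSketch {
    static final NodeSketch SHARED = new NodeSketch(); // marker: shared mode
    static final NodeSketch EXCLUSIVE = null;          // marker: exclusive

    static final int CANCELLED =  1;  // timed out or interrupted; terminal
    static final int SIGNAL    = -1;  // successor needs unpark on release
    static final int CONDITION = -2;  // node is on a condition queue
    static final int PROPAGATE = -3;  // releaseShared should propagate
                                      // 0 means none of the above

    volatile int waitStatus;
    volatile NodeSketch prev;   // predecessor, for status checks/cancellation
    volatile NodeSketch next;   // successor hint used when signalling
    volatile Thread thread;     // the enqueued thread; nulled after use
    NodeSketch nextWaiter;      // next condition waiter, or SHARED

    NodeSketch() {}                              // dummy head, or SHARED marker
    NodeSketch(Thread thread, NodeSketch mode) { // used when enqueuing
        this.thread = thread;
        this.nextWaiter = mode;
    }
    NodeSketch(Thread thread, int waitStatus) {  // used by conditions
        this.thread = thread;
        this.waitStatus = waitStatus;
    }
}
```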

Source: blog.csdn.net/sinat_33472737/article/details/105645486