Summary of Java knowledge points

Thread class:

 1. start() is called on the current thread (here the main thread); it creates a new thread, and run() is what that new thread executes. Calling run() directly just runs it on the calling thread.

 2. synchronized on an instance method locks the current instance; static synchronized locks the Class object.

     

public class Something {
         public synchronized void isSyncA(){}
         public synchronized void isSyncB(){}
         public static synchronized void cSyncA(){}
         public static synchronized void cSyncB(){}
     }

 

 

      Given an instance x, x.isSyncA() and Something.cSyncA() do not block each other: the first locks the instance x, the second locks Something.class.

 3. yield() makes the running thread step back and compete for the CPU with other threads again; note that it does not release any locks it holds.

 4. join() makes the calling thread (for example the main thread) wait for the target thread to finish executing before continuing.

 5. interrupt() interrupts the thread. If the thread is blocked in wait()/sleep()/join(), an InterruptedException is thrown and the interrupt status is cleared back to false. If the thread is simply running, interrupt() just sets the flag to true and no exception is thrown.

Each thread has a boolean property associated with it that represents its interrupted status; the status is initially false.
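A minimal sketch of both cases (the class name InterruptDemo is illustrative):

public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) { // running: interrupt() only sets this flag
                try {
                    Thread.sleep(1000);                       // blocked: interrupt() makes sleep() throw
                } catch (InterruptedException e) {
                    // the status was cleared when the exception was thrown; restore it to exit the loop
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();
        Thread.sleep(100);
        worker.interrupt();
        worker.join(); // the main thread waits for the worker to finish (point 4 above)
        System.out.println("worker stopped");
    }
}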

 

Concurrency:

    Based on the CAS (compare-and-swap) mechanism, which is controlled at the hardware level. CAS has three operands: the memory value V, the expected old value A, and the new value B. If and only if A equals V, the memory value is set to B and true is returned; otherwise nothing is done and false is returned.
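A small sketch of the same operands through AtomicInteger, whose methods wrap hardware CAS (class name CasDemo is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        int expected = counter.get(); // old expected value A
        // write expected + 1 (value B) only if the memory value V still equals A
        boolean swapped = counter.compareAndSet(expected, expected + 1);
        System.out.println(swapped + " -> " + counter.get()); // true -> 1
        counter.incrementAndGet(); // wraps the same CAS in a retry loop internally
    }
}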

 

Collections:

HashMap: both key and value can be null (at most one null key). The underlying structure is an array of buckets plus linked lists. The default loadFactor is 0.75 and the default capacity is 16; every resize doubles the capacity and redistributes all entries by hash. When traversing, entrySet is more efficient than keySet.

 With keySet you only get the keys, so you must call get(key) for every entry, which repeats the whole hash lookup (and a walk down the bucket's linked list); entrySet hands back key and value together in a single pass. In JDK 1.8, when a bucket's linked list grows beyond 8 nodes it is converted to a red-black tree.
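A short sketch of the two traversal styles (class name TraverseDemo is illustrative):

import java.util.HashMap;
import java.util.Map;

public class TraverseDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>(16, 0.75f); // default capacity and load factor, spelled out
        map.put("a", 1);
        map.put("b", 2);

        // entrySet: key and value come out of one traversal
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }

        // keySet: every get(key) repeats the hash lookup
        for (String key : map.keySet()) {
            System.out.println(key + "=" + map.get(key));
        }
    }
}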

HashSet: backed by a HashMap (the elements are stored as keys, all sharing one dummy value object).

Hashtable: the default initial capacity is 11, and each resize grows the table to 2n + 1.

LinkedHashMap: also built on HashMap, but head and tail pointers of the Entry type link the entries into a doubly linked list that records insertion order. If the accessOrder flag is true, the list is kept in access order instead, which is the basis of an LRU cache, as sketched below.
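A minimal LRU-cache sketch built on the accessOrder flag plus the removeEldestEntry hook (the class name LruCache and its capacity handling are illustrative):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: iteration order follows access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry on overflow
    }
}

Every get() moves the accessed entry to the tail of the internal linked list, so the head is always the least recently used entry.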

TreeMap: backed by a red-black tree; entries are stored by key comparison, so iteration is ordered.

LinkedHashSet: backed by a LinkedHashMap.

ConcurrentHashMap:

        In JDK 1.7 and earlier, the map was built from Segment objects. Each Segment holds a HashEntry[] array, and HashEntry objects are the nodes that make up the per-bucket linked lists. The unit of locking is a single Segment.

        On put, the key is first hashed to the Segment it belongs to; then, inside that Segment's put method, it is hashed again to find its slot in the HashEntry[] array.

        JDK 1.8 drops the Segment objects and implements the map directly on a Node<K,V>[] array; locking narrows to the head Node of each bucket (empty buckets are written with CAS). As in HashMap, a bucket's linked list is converted to a red-black tree when it grows beyond 8 nodes.
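A small usage sketch; compute() locks only the bucket holding the key rather than the whole map (class name ChmDemo is illustrative):

import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        // atomically create or increment the counter for "page"
        hits.compute("page", (k, v) -> v == null ? 1 : v + 1);
        // insert only if absent; never overwrites a concurrent writer
        hits.putIfAbsent("other", 0);
        System.out.println(hits);
    }
}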

 

CopyOnWriteArrayList: internally also an array, declared volatile. Add and remove operations take a lock and write to a fresh copy of the array, while iteration walks the old snapshot, so it is thread-safe and readers never block.

CopyOnWriteArraySet: backed by a CopyOnWriteArrayList.

ConcurrentSkipListMap: implemented as a skip list. A skip list has many levels, and each level can be seen as an index over the data whose purpose is to speed up lookups. Each level is ordered; the data in one level is a subset of the data in the level below it, and the bottom level (level 1) contains all the data. The higher the level, the larger the jumps and the fewer the entries. The list has a header node, and a search proceeds from top to bottom and from left to right. Thread safety at the bottom is achieved with CAS.
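A brief usage sketch showing the ordering the skip list maintains (class name SkipListDemo is illustrative):

import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        System.out.println(map.firstKey()); // 1: keys are kept sorted
        System.out.println(map.headMap(3)); // {1=a, 2=b}: ordered range views come for free
    }
}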

 

 

 

 

ArrayBlockingQueue: a thread-safe bounded blocking queue backed by an array and implemented with a ReentrantLock and Conditions. put() blocks when the queue is full, offer() returns true or false, and add() is built on offer(), throwing an exception when offer() returns false.
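A minimal sketch of the three insertion methods (class name BoundedQueueDemo is illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        queue.put(1);                  // blocks while the queue is full
        boolean ok = queue.offer(2);   // true; a third offer would return false
        // queue.add(3);               // would throw IllegalStateException: Queue full
        System.out.println(queue.take() + ", offer=" + ok); // take() blocks while empty
    }
}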

 

LinkedBlockingQueue: a blocking queue backed by a singly linked list. The queue orders elements FIFO (first in, first out): new elements are inserted at the tail and take operations remove from the head. It uses two separate locks for head and tail (take and put), so throughput is usually higher than the array-based queue's.

 

LinkedBlockingDeque: a concurrent blocking deque backed by a doubly linked list. It supports both FIFO and LIFO use, that is, insertions and removals can be performed at both the head and the tail of the queue.

ConcurrentLinkedQueue: a non-blocking thread-safe queue implemented with CAS.

PriorityBlockingQueue: a priority blocking queue, again an array guarded by a ReentrantLock; inserted elements are sifted into place so the array forms a binary heap and the head is always the highest-priority element.

 

 

Lock:

ReentrantLock:  

        Acquiring the lock increments a state counter by 1 through CAS. If another thread tries to acquire while the lock is held, it enters the waiting queue.

         The lock can be held by only one thread at any point in time; "reentrant" means the holding thread may acquire it again, pushing the counter above 1. ReentrantLock manages the threads waiting for the lock through a FIFO queue. Internally it contains a Sync object; Sync is a subclass of AQS, and Sync in turn has two subclasses, FairSync (fair lock) and NonfairSync (unfair lock). ReentrantLock is an exclusive lock; whether it is fair or unfair depends on whether sync is an instance of FairSync or of NonfairSync. Threads that fail to acquire the lock wait in the queue until the lock is handed to them or they are interrupted.
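A minimal reentrancy sketch (class name LockDemo is illustrative; the constructor flag selects FairSync over NonfairSync):

import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private static final ReentrantLock lock = new ReentrantLock(true); // true = FairSync, default = NonfairSync

    public static void main(String[] args) {
        lock.lock();         // state 0 -> 1 via CAS
        try {
            lock.lock();     // reentrant: same thread, state 1 -> 2
            try {
                System.out.println("hold count: " + lock.getHoldCount()); // 2
            } finally {
                lock.unlock();
            }
        } finally {
            lock.unlock();   // state back to 0; queued threads can now acquire
        }
    }
}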

 

Condition: Condition gives finer-grained control over a lock. It plays the same role as the monitor methods on Object, but one lock can carry several Conditions, as the table below shows.

 

                         Object        Condition
 block/wait              wait()        await()
 wake up one thread      notify()      signal()
 wake up all threads     notifyAll()   signalAll()

   // One lock can be split into multiple Conditions, so waiting and waking can be controlled more precisely.

final Lock lock = new ReentrantLock();
final Condition notFull  = lock.newCondition();
final Condition notEmpty = lock.newCondition();
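
The snippet above is the opening of the classic bounded buffer; a fuller sketch along the lines of the example in the Condition javadoc (buffer size 16 is illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    private final Object[] items = new Object[16];
    private int putIdx, takeIdx, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();       // producers wait only on "not full"
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();         // wake one consumer, not every waiter
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();      // consumers wait only on "not empty"
            Object x = items[takeIdx];
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();          // wake one producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}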

 

 

LockSupport: LockSupport.park() blocks the current thread and LockSupport.unpark(Thread) wakes a given one. The difference from Object.wait()/notify() is that wait() can only block a thread after the monitor has been acquired with synchronized, while park()/unpark() need no lock (and unpark() may even be called before park()).

CountDownLatch: latch.await() blocks first, then enough countDown() calls bring the count to zero and wake the waiting thread.

The count is fixed by the constructor argument and cannot be reset.
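A minimal sketch (class name LatchDemo and the count of 3 are illustrative):

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3); // count fixed at construction
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                // ... do some work ...
                latch.countDown();     // each worker decrements the count once
            }).start();
        }
        latch.await();                 // blocks until the count reaches 0
        System.out.println("all workers finished");
    }
}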

CyclicBarrier: a fixed number of threads (or tasks) wait at the barrier; only when all of them have reached the barrier state do they continue to execute. Unlike CountDownLatch, the barrier can be reused.

Semaphore: there is a fixed total number of permits shared by multiple threads; when no permit is available, acquire() blocks until one is released.
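A minimal sketch with 2 permits shared by 5 threads (class name SemaphoreDemo and the numbers are illustrative):

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    private static final Semaphore permits = new Semaphore(2); // 2 resources in total

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    permits.acquire();      // blocks while no permit is available
                    try {
                        Thread.sleep(100);  // use the shared resource
                    } finally {
                        permits.release();  // hand the permit back
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}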

 

 

 

JDK thread pool architecture:

    A HashSet<Worker> holds the worker threads (each Worker wraps one thread), and the pool keeps a BlockingQueue of waiting tasks. corePoolSize is the core pool size and maximumPoolSize the maximum pool size.

 -- If the number of running threads < corePoolSize, a new thread is created to handle the task.
 -- If the number of running threads is >= corePoolSize but < maximumPoolSize, the task goes into the blocking queue; a new thread is created only when the queue is full.
 -- If the queue is full and the thread count has already reached maximumPoolSize, the task is rejected (see the rejection policies and the constructor sketch below).
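A constructor sketch showing how the rules above map onto ThreadPoolExecutor's parameters (class name PoolDemo and all sizes are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60, TimeUnit.SECONDS,                // idle timeout for threads beyond the core
                new ArrayBlockingQueue<>(10),        // tasks queue here once core threads are busy
                new ThreadPoolExecutor.AbortPolicy() // rejection policy (see the list below)
        );
        pool.execute(() -> System.out.println("task on " + Thread.currentThread().getName()));
        pool.shutdown(); // RUNNING -> SHUTDOWN
    }
}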

    

    The thread pool has five states: RUNNING, SHUTDOWN, STOP, TIDYING, TERMINATED.

    Once the thread pool is created, it is in the RUNNING state.

    After shutdown() is called it becomes SHUTDOWN: the remaining tasks continue to execute, and when they finish the pool becomes TIDYING.

    After shutdownNow() is called it becomes STOP: unfinished tasks are interrupted, and after the interruption the pool becomes TIDYING.

    From TIDYING the pool becomes TERMINATED once the terminated() hook has run.

 

    The thread pool includes 4 rejection policies: AbortPolicy, CallerRunsPolicy, DiscardOldestPolicy and DiscardPolicy.

AbortPolicy          -- when a task added to the pool is rejected, a RejectedExecutionException is thrown.
CallerRunsPolicy     -- a rejected task is executed directly in the thread that submitted it.
DiscardOldestPolicy  -- the pool discards the oldest unprocessed task in the waiting queue, then adds the rejected task to the queue.
DiscardPolicy        -- the rejected task is silently discarded.

 

   ScheduledThreadPoolExecutor performs better than Timer, because a Timer runs all its tasks on a single thread, and Timer's scheduling is based on absolute time rather than relative time, which makes the Timer class sensitive to changes in the system clock.
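A minimal replacement sketch (class name ScheduleDemo and the delays are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) throws InterruptedException {
        // several worker threads, unlike Timer's single thread
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        // delays are relative, so a system clock change does not shift the schedule
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick"),
                0, 1, TimeUnit.SECONDS);
        Thread.sleep(3000);
        scheduler.shutdown();
    }
}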

 
