Java multi-threading and concurrency knowledge summary

Notes I compiled while job-hunting, organizing common interview questions for quick review ...
References: "Java Multithreaded Programming: Core Techniques", "The Art of Java Concurrency Programming"

1. Processes and Threads

A process is a running instance of a program and the basic unit of resource allocation and scheduling in the operating system; it is part of the operating system's underlying architecture.
A thread is the smallest unit the operating system schedules. It can be understood as an independent sub-task running inside a process, and is also called a lightweight process. A process contains at least one thread.

2. Using multithreading

There are two basic approaches: the first is to extend the Thread class, the second is to implement the Runnable interface. Because Java does not support multiple inheritance, implementing the Runnable interface is the way to work around that limitation when building multithreaded code.
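A minimal sketch of the two approaches, using only the standard library; the class and method names are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadCreation {

    // Approach 1: extend Thread and override run()
    static class MyThread extends Thread {
        private final AtomicInteger counter;
        MyThread(AtomicInteger counter) { this.counter = counter; }
        @Override
        public void run() { counter.incrementAndGet(); }
    }

    public static int runAndCount() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();

        Thread t1 = new MyThread(counter);
        // Approach 2: implement Runnable (here as a lambda) and hand it to a Thread
        Thread t2 = new Thread(() -> counter.incrementAndGet());

        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndCount()); // 2
    }
}
```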

3. Thread safety and non-thread-safety

Thread safety means that the value of an instance variable is always obtained under proper synchronization, so no "dirty read" can occur.
Non-thread-safe behavior occurs when multiple threads access the instance variables of the same object concurrently without synchronization; the result is a "dirty read", i.e. the data read has already been partially modified by another thread. (When multiple threads operate on the same instance variable of the same object without synchronization, its value can be corrupted.)

4. The synchronized keyword

synchronized is a heavyweight lock. It can modify a method or be used in the form of a synchronized block. Its main guarantee is that when multiple threads run at the same time, only one of them can be inside a given synchronized block or method, which provides both exclusive access to the guarded variables and visibility. Every object in Java can act as a lock; the lock that synchronized uses is always an already-existing Java object.
When synchronized is applied to a static method, it locks on the Class object of the class; when applied to a non-static (instance) method, it locks on the current object; for a synchronized block, the lock is the object written inside the parentheses.
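A sketch of the three forms of synchronized described above (class names and counter values are illustrative):

```java
public class SyncForms {
    private static int staticCounter = 0;
    private int counter = 0;
    private final Object lock = new Object();

    // Locks on SyncForms.class
    public static synchronized void incStatic() { staticCounter++; }

    // Locks on this (the current object)
    public synchronized void incInstance() { counter++; }

    // Locks on the object written in the parentheses
    public void incBlock() {
        synchronized (lock) {
            counter++;
        }
    }

    public static int demo() throws InterruptedException {
        SyncForms s = new SyncForms();
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) s.incInstance();
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return s.counter; // always 2000 because incInstance is synchronized
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 2000
    }
}
```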

5. The volatile keyword

If a field is declared volatile, the Java Memory Model guarantees that all threads see a consistent value for that variable. volatile can only modify fields (member variables). Any read of such a variable must fetch it from shared (main) memory, and any write must be flushed back to shared memory, which guarantees the visibility of the variable to all threads. Its fatal flaw is that it does not guarantee atomicity (for example, count++ on a volatile field is still not thread-safe).
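A minimal sketch of the classic use of volatile as a stop flag for cross-thread visibility; the class name and timings are illustrative:

```java
public class VolatileFlag {
    private static volatile boolean stop = false;
    static volatile boolean sawStop = false;

    public static boolean demo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait; without volatile this loop might never observe the write
            }
            sawStop = true;
        });
        worker.start();
        Thread.sleep(50);  // let the worker enter the loop
        stop = true;       // the write is flushed to shared memory, so the worker sees it
        worker.join(1000);
        return sawStop;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // true
    }
}
```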

6. Lock states

There are four lock states in total, from lowest to highest level: unlocked, biased lock, lightweight lock, and heavyweight lock. A lock can be upgraded but never downgraded.

1. Biased lock: locking and unlocking cost no extra overhead; if there is lock contention between threads, revoking the bias adds extra cost; suited to scenarios where only one thread ever enters the synchronized block.
2. Lightweight lock: competing threads are not blocked; if the lock stays contended, threads that keep spinning waste CPU; suited to applications that favor response time.
3. Heavyweight lock: competing threads do not spin and therefore do not waste CPU; threads are blocked instead, so response time is slower; suited to applications that favor throughput.

7. Thread states

  1. NEW: initial state; the thread has been constructed, but start() has not been called yet;
  2. RUNNABLE: runnable/running state;
  3. BLOCKED: blocked state; the thread is blocked waiting for a monitor lock;
  4. WAITING: waiting state; the thread is waiting for another thread to perform some specific action (a notification or an interrupt);
  5. TIMED_WAITING: timed waiting state; the thread returns on its own after the specified time elapses;
  6. TERMINATED: terminated state; the thread has finished executing.
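A sketch observing two of these states at deterministic points (NEW before start(), TERMINATED after join()):

```java
public class ThreadStates {
    public static Thread.State[] demo() throws InterruptedException {
        Thread t = new Thread(() -> { });
        Thread.State before = t.getState(); // NEW: constructed, start() not yet called
        t.start();
        t.join();                           // wait until the thread finishes
        Thread.State after = t.getState();  // TERMINATED
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws Exception {
        for (Thread.State s : demo()) System.out.println(s);
    }
}
```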

8. Wait / notification mechanism

Thread A calls the wait() method of object O and enters the waiting state; meanwhile thread B calls notify() or notifyAll() on the same object O. Thread A, upon receiving the notification, returns from wait() and then performs its subsequent work.
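A minimal sketch of this mechanism on a shared monitor object; the guard flag protects against spurious wakeups:

```java
public class WaitNotify {
    private static final Object monitor = new Object();
    private static boolean ready = false;
    static volatile boolean consumed = false;

    public static boolean demo() throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) {               // guard against spurious wakeups
                    try {
                        monitor.wait();        // releases the monitor while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                consumed = true;
            }
        });
        waiter.start();
        Thread.sleep(50);                      // give the waiter time to call wait()
        synchronized (monitor) {
            ready = true;
            monitor.notify();                  // wake up the waiting thread
        }
        waiter.join(1000);
        return consumed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // true
    }
}
```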

9. Key features the Lock interface provides that synchronized does not

  1. Non-blocking lock acquisition: the current thread attempts to acquire the lock; if no other thread holds it at that moment, the attempt succeeds and the lock is held, otherwise the call returns immediately;
  2. Interruptible lock acquisition: unlike synchronized, a thread waiting for the lock can respond to interruption; when the waiting thread is interrupted, an InterruptedException is thrown and the lock attempt is abandoned;
  3. Timed lock acquisition: the lock is acquired before a given deadline; if it still cannot be acquired by the deadline, the call simply returns.
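The three features above map onto tryLock(), lockInterruptibly(), and tryLock(timeout, unit). A sketch, with an uncontended lock so every attempt succeeds:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockFeatures {
    public static boolean[] demo() throws InterruptedException {
        Lock lock = new ReentrantLock();

        // 1. Non-blocking attempt: succeeds because nobody holds the lock
        boolean got = lock.tryLock();
        if (got) lock.unlock();

        // 3. Timed attempt: also succeeds here, within 100 ms
        boolean gotTimed = lock.tryLock(100, TimeUnit.MILLISECONDS);
        if (gotTimed) lock.unlock();

        // 2. Interruptible acquisition: lockInterruptibly() would throw
        //    InterruptedException if this thread were interrupted while waiting
        lock.lockInterruptibly();
        lock.unlock();

        return new boolean[] { got, gotTimed };
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1]); // true true
    }
}
```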

10. ReentrantLock (a reentrant lock)

This lock allows a thread to repeatedly lock a resource it already holds, and it supports a choice between fair and unfair acquisition.
Fair lock: in absolute time order, the request issued first is satisfied first, i.e. the thread that has waited longest acquires the lock first. A fair lock reduces the chance of "starvation", but it lacks the efficiency of an unfair lock.
Unfair lock: locks are not granted in request order. This may cause thread "starvation", but it requires far fewer thread switches and therefore delivers greater throughput.
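A sketch of both points: the same thread re-acquires the lock (hold count rises to 2), and the constructor flag selects fair vs. unfair (ReentrantLock is unfair by default):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock fairLock = new ReentrantLock(true); // fair
    private static final ReentrantLock lock = new ReentrantLock();        // unfair by default

    public static int demo() {
        lock.lock();
        try {
            lock.lock(); // same thread may re-acquire: hold count becomes 2
            try {
                return lock.getHoldCount();
            } finally {
                lock.unlock();
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());            // 2
        System.out.println(fairLock.isFair()); // true
        System.out.println(lock.isFair());     // false
    }
}
```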

11. ReentrantReadWriteLock (a read-write lock)

A read-write lock maintains a pair of locks: a read lock and a write lock. While the write lock is held, subsequent read and write operations are blocked; after the write lock is released, all of those operations continue. A read-write lock can provide better concurrency and throughput than an exclusive lock, because multiple readers can proceed at the same time.
Lock downgrading means converting a held write lock into a read lock: while holding the (currently owned) write lock, acquire the read lock, and then release the previously held write lock.
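The downgrading sequence can be sketched as follows; the field and method names are illustrative:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private static int data = 0;

    public static int writeThenDowngrade(int value) {
        rwl.writeLock().lock();            // 1. hold the write lock
        try {
            data = value;
            rwl.readLock().lock();         // 2. acquire the read lock while still writing
        } finally {
            rwl.writeLock().unlock();      // 3. release the write lock: now downgraded
        }
        try {
            return data;                   // read the value under the read lock
        } finally {
            rwl.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(writeThenDowngrade(42)); // 42
    }
}
```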

12. The LockSupport utility

LockSupport defines a set of public static methods that provide the most basic thread blocking and wake-up primitives.
The methods whose names begin with park block the current thread, and unpark(Thread) wakes a blocked thread.
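A sketch of park/unpark. An unpark issued first grants a permit, so the subsequent park consumes it and returns immediately instead of blocking:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static boolean demo() {
        LockSupport.unpark(Thread.currentThread()); // permit granted in advance
        LockSupport.park();                         // consumes the permit, returns at once
        return true;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```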

13. The Condition interface

A Condition must be obtained from a Lock's newCondition() method.

Lock lock = new ReentrantLock();
Condition condition = lock.newCondition();

After calling await(), the current thread releases the lock and waits; when another thread calls signal() on the same Condition object to notify it, the current thread returns from await(), and it is guaranteed to have re-acquired the lock before returning.
A bounded queue can be implemented with the Condition interface. Bounded queue: when the queue is empty, a take operation blocks the consuming thread until a new element appears in the queue; when the queue is full, an insert operation blocks the inserting thread until the queue has a free slot.
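A sketch of such a bounded queue built on two Conditions, one for each blocking direction; names and capacity are illustrative:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedQueue<T> {
    private final Object[] items;
    private int head, tail, count;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedQueue(int capacity) { items = new Object[capacity]; }

    public void put(T t) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await(); // block while full
            items[tail] = t;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();                             // wake a blocked taker
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await();           // block while empty
            T t = (T) items[head];
            head = (head + 1) % items.length;
            count--;
            notFull.signal();                              // wake a blocked putter
            return t;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        BoundedQueue<Integer> q = new BoundedQueue<>(2);
        q.put(1);
        q.put(2);
        System.out.println(q.take()); // 1
    }
}
```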

14. ConcurrentHashMap

ConcurrentHashMap is a thread-safe and efficient HashMap. Using HashMap in concurrent code can send the program into an infinite loop, while the thread-safe Hashtable is very inefficient (all threads accessing a Hashtable must compete for the same lock).
ConcurrentHashMap's lock-segmentation technique: the data is stored in segments, and each segment of data has its own lock. When one thread holds a lock and accesses its segment, the data in the other segments remains accessible to other threads.
ConcurrentHashMap (in the JDK 7 implementation described here) is made up of a Segment array and HashEntry arrays. Segment is a reentrant lock and plays the role of the lock in ConcurrentHashMap; HashEntry stores the key-value data. A ConcurrentHashMap contains one Segment array, and each Segment contains one HashEntry array. A Segment has an array-plus-linked-list structure, and each HashEntry is an element of a linked list. Each Segment guards the elements of its HashEntry array: to modify data in that HashEntry array, a thread must first obtain the corresponding Segment's lock. (Since JDK 8, the segment locks have been replaced by CAS operations plus per-bucket synchronized.)
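A sketch of basic thread-safe usage: merge() is atomic, so concurrent increments are never lost, regardless of which JDK implementation is underneath:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static int demo() throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge is atomic, so concurrent increments are never lost
                map.merge("count", 1, Integer::sum);
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return map.get("count"); // always 2000
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 2000
    }
}
```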

15. Blocking queues

The JDK provides 7 blocking queues:

  1. ArrayBlockingQueue: a bounded blocking queue backed by an array.
  2. LinkedBlockingQueue: a bounded blocking queue backed by a linked list (the capacity defaults to Integer.MAX_VALUE).
  3. PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.
  4. DelayQueue: an unbounded blocking queue implemented on top of a priority queue.
  5. SynchronousQueue: a blocking queue that stores no elements.
  6. LinkedTransferQueue: an unbounded blocking queue backed by a linked list.
  7. LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.
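A sketch of the typical producer/consumer pattern on the first of these, a bounded ArrayBlockingQueue; put() blocks when the queue is full and take() blocks when it is empty:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static int demo() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i); // blocks when full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take();   // blocks when empty
        producer.join();
        return sum; // 1+2+3+4+5 = 15
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 15
    }
}
```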

16. The Fork/Join framework

The framework splits a large task into many small tasks and finally combines the results of the small tasks into the result of the large task.
Work-stealing algorithm: a thread steals tasks from another thread's queue and executes them. Advantages: threads are fully used for parallel computation, and competition between threads is reduced. Disadvantages: competition still exists in some cases, for example when a double-ended queue holds only one task; and the algorithm consumes extra system resources, since each thread maintains its own deque.
Fork/Join uses two classes to split tasks, execute them, and merge their results.
ForkJoinTask: RecursiveAction for tasks that return no result, RecursiveTask for tasks that return a result.
ForkJoinPool: a ForkJoinTask must be executed through a ForkJoinPool.
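A sketch of a RecursiveTask that sums a range by splitting it in half until the range is small enough; the threshold value is arbitrary:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 100;
    private final long from, to;

    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {          // small enough: compute directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;            // otherwise fork two subtasks
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                           // submit the left half asynchronously
        return right.compute() + left.join();  // compute right half, then merge
    }

    public static long sum(long n) {
        return new ForkJoinPool().invoke(new SumTask(1, n));
    }

    public static void main(String[] args) {
        System.out.println(sum(1000)); // 500500
    }
}
```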

17. Thread Pool

Benefits of a thread pool: lower resource consumption; faster response; better manageability of threads.
How the thread pool handles a submitted task:

  1. Check whether every thread in the core pool is busy executing a task. If not, create a new worker thread to execute the task. If all core threads are busy, go to the next step.
  2. Check whether the work queue is full. If not, store the newly submitted task in the work queue. If the queue is full, go to the next step.
  3. Check whether every thread in the pool (up to the maximum) is busy. If not, create a new worker thread to execute the task. If the pool is full, hand the task to the saturation (rejection) policy.

Create a thread pool:

new ThreadPoolExecutor(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, handler)

corePoolSize: the basic (core) size of the thread pool;
maximumPoolSize: the maximum number of threads the pool is allowed to create;
keepAliveTime: how long an idle worker thread is kept alive;
unit: the time unit of keepAliveTime;
workQueue: the task queue, a blocking queue that holds tasks waiting to be executed;
handler: the rejection policy; when both the queue and the pool are full, a strategy is needed to handle newly submitted tasks.
The four built-in policies:
AbortPolicy: throw an exception directly (the default);
CallerRunsPolicy: run the task on the caller's own thread;
DiscardOldestPolicy: discard the oldest task in the queue and execute the current task;
DiscardPolicy: do nothing and discard the task.
Two methods can be used to submit a task to the pool: execute() submits a task that needs no return value; submit() submits a task whose return value is needed and returns a Future.
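A sketch putting the constructor parameters and both submission methods together; the pool sizes and queue capacity are arbitrary illustration values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static int demo() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),        // workQueue
                new ThreadPoolExecutor.AbortPolicy() // handler (rejection policy)
        );
        // execute(): no return value
        pool.execute(() -> System.out.println("fire and forget"));
        // submit(): returns a Future carrying the result
        Future<Integer> future = pool.submit(() -> 21 + 21);
        int result = future.get();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 42
    }
}
```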

18. Three common types of ThreadPoolExecutor (via Executors)

FixedThreadPool: a thread pool that reuses a fixed number of threads.
SingleThreadExecutor: an Executor that uses a single worker thread.
CachedThreadPool: a thread pool that creates new threads as needed; corePoolSize is 0 and maximumPoolSize is set to Integer.MAX_VALUE.

19. Deadlock

A deadlock arises when different threads each wait for a lock that can never be released, so none of the tasks can make progress.
The JDK ships with tools that can detect deadlocks. Open a terminal (cmd), change to the bin directory under the JDK installation folder, run the jps command to obtain the Java process id, and then run jstack with that id and inspect the output.
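The classic fix is to make every thread acquire the locks in the same fixed order, so a circular wait cannot form. A sketch (lock names are illustrative):

```java
public class LockOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    private static int counter = 0;

    static void work() {
        synchronized (lockA) {      // always lockA first...
            synchronized (lockB) {  // ...then lockB, in every thread
                counter++;
            }
        }
    }

    public static int demo() throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::work);
        Thread t2 = new Thread(LockOrdering::work);
        t1.start(); t2.start();
        t1.join(); t2.join();       // both finish: no deadlock is possible
        return counter;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 2
    }
}
```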

20. Concurrent Tools

CountDownLatch: lets one or more threads wait for other threads to finish their work.
CyclicBarrier: blocks a group of threads as each reaches a synchronization point; when the last thread arrives, the barrier opens and all the blocked threads continue running.
A CountDownLatch counter can only be used once, whereas a CyclicBarrier counter can be reset with its reset() method.
Semaphore: controls the number of threads that may access a particular resource at the same time; it coordinates the threads so that shared resources are used sensibly.
Exchanger: used for exchanging data between threads. If the first thread executes exchange(), it waits until a second thread also executes exchange(); when both threads reach the synchronization point, they swap data, each passing the data it produced to the other.
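A sketch of the first of these, CountDownLatch: the main thread waits until two workers have counted down:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    public static int demo() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);
        AtomicInteger done = new AtomicInteger();
        Runnable work = () -> {
            done.incrementAndGet();
            latch.countDown();        // signal that this worker has finished
        };
        new Thread(work).start();
        new Thread(work).start();
        latch.await();                // block until the count reaches zero
        return done.get();            // both workers are guaranteed done here
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 2
    }
}
```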

21. ThreadLocal

A thread-local variable: conceptually a per-thread map whose key is the ThreadLocal object and whose value is the stored object. In other words, a thread can look up the value bound to itself via a ThreadLocal object, and different threads see independent values.
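A sketch showing that each thread sees its own copy: the main thread binds 1, while a second thread still sees the initial value:

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<Integer> local = ThreadLocal.withInitial(() -> 0);
    static volatile int otherThreadValue = -1;

    public static int[] demo() throws InterruptedException {
        local.set(1);                            // bound to the main thread only
        Thread t = new Thread(() -> {
            otherThreadValue = local.get();      // sees the initial value 0, not 1
        });
        t.start();
        t.join();
        return new int[] { local.get(), otherThreadValue };
    }

    public static void main(String[] args) throws Exception {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1]); // 1 0
    }
}
```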

22. sleep(), join(), wait(), yield()

join(): waits for the target thread to die; internally it is implemented with wait(), so the lock on the thread object is released while waiting;
wait(): releases the lock after it is called; it is a method of Object and may only be used inside a synchronized block or synchronized method;
sleep(): makes the currently executing thread sleep (pause) for the specified number of milliseconds; it is a static method of Thread, and it does not release any lock;
yield(): gives up the current CPU time slice so that other tasks can use the execution time; the scheduler may nevertheless reschedule the same thread immediately.
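A sketch of the join() behavior described above: the caller blocks until the child thread terminates, so the child's work is always visible afterwards:

```java
public class JoinDemo {
    static volatile boolean childDone = false;

    public static boolean demo() throws InterruptedException {
        Thread child = new Thread(() -> {
            try {
                Thread.sleep(50);   // sleep() pauses but keeps any locks it holds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            childDone = true;
        });
        child.start();
        child.join();               // returns only after child has terminated
        return childDone;           // therefore always true here
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // true
    }
}
```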

23. The Java memory model (JMM)

The JMM defines an abstract relationship between threads and main memory: shared variables between threads are stored in main memory, each thread has a private local (working) memory, and the local memory holds copies of the shared variables that the thread reads/writes. For an update by thread A to become visible to thread B:
1) thread A flushes the updated shared variable from its local memory to main memory;
2) thread B then reads from main memory the shared variable that thread A has updated.

Origin blog.csdn.net/weixin_45341772/article/details/105178667