Java thread-related issues and various locks

Thread-related problems in Java

  1. Thread Basics

Process: A process is a program in execution. When a program is loaded into memory and runs, it becomes a process, with its own independent resources and address space.

Thread: A thread is an execution unit within a process and carries out the process's work. A process can contain multiple threads, and those threads are scheduled onto the CPU by time-slicing and preemptive scheduling.

Single-threaded: also called synchronous execution (not to be confused with thread synchronization, which refers to ensuring thread safety); the code runs from top to bottom, one statement at a time.

Multithreading: also called asynchronous execution. Correct multithreaded code must consider three properties: atomicity, visibility, and ordering.

Atomicity: an operation on shared data either runs to completion or does not run at all; no other thread can observe a half-finished intermediate state, so the data stays consistent throughout the operation.

Visibility: when one thread modifies the value of a shared variable, other threads immediately see the modified value. (See the first paragraph of section 4, Multi-threaded concurrency, for details.)

Ordering: within a single thread, the code appears to execute in program order from top to bottom (the compiler and CPU may reorder instructions, but never in a way that a single thread can observe).

There are many knowledge points about threads, such as thread state, thread pools, multithreading security issues and concurrency, and lock objects.

Several ways to create a thread:

1. Extend Thread and override the run() method.

2. Implement the Runnable interface, override its run() method, and pass the implementation to new Thread(runnable).

3. Use an anonymous inner class: new Thread() { ... } overriding run(), or new Thread(new Runnable() { ... }).

4. Implement Callable<T>: similar to Runnable, but call() returns a result, where T is the result type (typically submitted to an ExecutorService or wrapped in a FutureTask).

5. Obtain threads from a thread pool.
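The creation styles listed above can be sketched in one small program. This is a minimal illustration, not production code; the class name ThreadCreationDemo and the shared counter are made up for the demo.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadCreationDemo {
    static final AtomicInteger counter = new AtomicInteger();

    // 1. Subclass Thread and override run()
    static class MyThread extends Thread {
        @Override public void run() { counter.incrementAndGet(); }
    }

    public static int runAll() throws Exception {
        counter.set(0);

        Thread t1 = new MyThread();

        // 2. Implement Runnable and hand it to a Thread
        Runnable task = () -> counter.incrementAndGet();
        Thread t2 = new Thread(task);

        // 3. Anonymous inner class overriding run()
        Thread t3 = new Thread() {
            @Override public void run() { counter.incrementAndGet(); }
        };

        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join();   // all three increments done here

        // 4. Callable<T> returns a result (T = Integer here),
        // 5. executed on a pool-provided thread
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<Integer> job = () -> counter.incrementAndGet();
        Future<Integer> f = pool.submit(job);
        int result = f.get();              // fourth and final increment
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll());   // prints 4
    }
}
```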

  2. Thread status

1. New state: the thread object has been created, but start() has not been called and no code has run yet.

2. Ready (runnable) state: after start() is called, the thread is ready to run and waits to be scheduled onto a CPU.

3. Blocked state: sleeping and waiting (entered actively) and being blocked on a lock (entered passively) are all forms of the blocked state.

4. Running state: The thread obtains the CPU running time and enters the execution state.

5. Dead (terminated) state: the thread's execution has finished, or its run() method ended with an uncaught exception.
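The life cycle above can be observed directly with Thread.getState(). Note that the JVM's own names differ slightly from the prose: NEW, RUNNABLE, WAITING/TIMED_WAITING/BLOCKED, and TERMINATED. The class name ThreadStateDemo is invented for this sketch.

```java
import java.util.concurrent.CountDownLatch;

public class ThreadStateDemo {
    public static Thread.State[] observe() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            try { latch.await(); } catch (InterruptedException ignored) {}
        });

        Thread.State created = t.getState();        // NEW: not yet started

        t.start();
        while (t.getState() != Thread.State.WAITING) {
            Thread.sleep(1);                        // let it park on the latch
        }
        Thread.State waiting = t.getState();        // WAITING: a blocked state

        latch.countDown();
        t.join();
        Thread.State finished = t.getState();       // TERMINATED: the dead state

        return new Thread.State[] { created, waiting, finished };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observe()) System.out.println(s);
        // NEW, WAITING, TERMINATED
    }
}
```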

  3. Thread pool

The thread pool can be regarded as a small framework built on the factory design pattern; almost all asynchronous, concurrent, multithreaded programs can use a thread pool.

Thread pool benefits: 1. Lower resource consumption: reusing already-created threads avoids the cost of repeatedly creating and destroying threads.

2. Improve the response speed. When the task arrives, it can be executed immediately without waiting for the creation of the thread.

3. Better thread manageability. Threads cannot be created without limit, since unbounded creation consumes system resources and reduces system stability; a thread pool allows threads to be allocated, tuned, and monitored uniformly.

       Java provides java.util.concurrent.ThreadPoolExecutor as the core thread pool class. It implements the ExecutorService interface, which in turn extends Executor: Executor defines the basic execute(Runnable) method, and ExecutorService adds task submission (submit) and lifecycle management (shutdown).

     Thread pool related configuration issues:

A. Core pool size and maximum pool size. ThreadPoolExecutor adjusts its size via setCorePoolSize(int) and setMaximumPoolSize(int). When corePoolSize < maximumPoolSize, threads beyond the core are created only when the work queue is full. When corePoolSize == maximumPoolSize, the pool has a fixed size.

B. On-demand construction. By default, core threads are created and started only when new tasks need them; this can be overridden with prestartCoreThread() or prestartAllCoreThreads().

C. Creating new threads. The pool creates threads through a ThreadFactory, by default Executors.defaultThreadFactory(); new threads have NORM_PRIORITY priority and are non-daemon.

D. Keep-alive time. Non-core threads that stay idle longer than the keep-alive time are terminated, reducing resource consumption while the pool is inactive. (With allowCoreThreadTimeOut(true), the same timeout policy can also be applied to core threads.)

E. Queuing. If fewer than corePoolSize threads are running, the Executor prefers to start a new thread rather than queue the task; if corePoolSize or more threads are running, it prefers to queue the task rather than add a thread; if the task cannot be queued, a new thread is created, and once the thread count would exceed maximumPoolSize, the task is rejected.

                 The Executors factory class provides four common pools:

CachedThreadPool: a cacheable pool. If the pool grows beyond what is needed, idle threads are flexibly recycled; if no idle thread is available, a new one is created.

FixedThreadPool: a fixed-size pool; tasks beyond the maximum concurrency wait in the queue.

ScheduledThreadPool: a fixed-size pool that supports delayed and periodic task execution.

SingleThreadExecutor: a single-threaded pool; one worker thread executes all tasks, guaranteeing they run in submission order (FIFO).
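The knobs from points A through E can be wired together explicitly by constructing a ThreadPoolExecutor by hand. The sizes, queue capacity, and rejection policy below are arbitrary choices for illustration, and the class name PoolConfigDemo is invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolConfigDemo {
    public static int runTasks(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                       // A: corePoolSize
                4,                       // A: maximumPoolSize
                30, TimeUnit.SECONDS,    // D: keep-alive for non-core threads
                new ArrayBlockingQueue<>(10),            // E: bounded work queue
                Executors.defaultThreadFactory(),        // C: NORM_PRIORITY, non-daemon
                new ThreadPoolExecutor.CallerRunsPolicy()); // E: rejection handler

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.execute(done::incrementAndGet);   // B: core threads start on demand
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(20));   // 20 — every task ran exactly once
    }
}
```

CallerRunsPolicy makes the submitting thread run a task itself when both the queue and the pool are full, so no task is silently dropped.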

  4. Multi-threaded concurrency

The JVM has a main memory area whose data is shared by all threads. Each thread also has its own working memory: a thread copies data from main memory into its working memory, operates on the copy, and then writes the result back to main memory. Threads cannot access each other's working memory directly; they communicate through main memory.

Suppose thread A copies variable X from main memory and modifies it in its own working memory, but does not write the updated X back to main memory in time. Thread B then reads the original, stale value of X from main memory, unaware that thread A has already changed it. This situation is called a thread safety problem.

To solve such thread safety problems, Java provides a thread synchronization mechanism: synchronized code blocks, synchronized methods, and the Lock interface.

Synchronized code block:

synchronized (lockObject) { /* code that may have thread-safety issues */ }

The lock object in a synchronized block can be any object (not to be confused with the Lock interface), but to actually solve a multithreading safety problem, all threads involved must use the same lock object.

Synchronization method:

public synchronized void method() { /* code that may have thread-safety issues */ }

The lock object of an instance synchronized method is this; for a static synchronized method it is the class's Class object.
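Both forms can protect the same shared state, since both lock on this. The sketch below (class name SyncCounter is invented) runs two threads against one counter, one through a synchronized method and one through a synchronized block, and loses no updates.

```java
public class SyncCounter {
    private int count = 0;

    public synchronized void incMethod() { count++; }     // lock object: this

    public void incBlock() {
        synchronized (this) { count++; }                  // same lock object
    }

    public synchronized int get() { return count; }

    public static int raceFree(int perThread) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread a = new Thread(() -> { for (int i = 0; i < perThread; i++) c.incMethod(); });
        Thread b = new Thread(() -> { for (int i = 0; i < perThread; i++) c.incBlock(); });
        a.start(); b.start();
        a.join(); b.join();
        return c.get();     // 2 * perThread — no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(raceFree(10_000));   // 20000
    }
}
```

Without the shared lock, the two plain count++ operations could interleave and drop increments.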

The Lock interface: lock() and unlock() can appear in different scopes, which makes Lock more flexible than synchronized blocks and methods. In exchange, care must be taken that all code executed while holding the lock is wrapped in try-finally (or try-catch-finally) so that the lock is always released.
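The try-finally discipline can be sketched as follows: unlock() sits in the finally block, so the lock is released even when the guarded code throws. The class name LockDemo and its update helper are invented for the demo.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock();
    private int value = 0;

    public void update(Runnable work) {
        lock.lock();
        try {
            work.run();           // this may throw
            value++;
        } finally {
            lock.unlock();        // always released, even on exception
        }
    }

    public static boolean lockReleasedAfterException() {
        LockDemo d = new LockDemo();
        try {
            d.update(() -> { throw new IllegalStateException("boom"); });
        } catch (IllegalStateException expected) { /* expected for the demo */ }
        // the finally block ran, so nobody holds the lock any more
        return !((ReentrantLock) d.lock).isLocked();
    }

    public static void main(String[] args) {
        System.out.println(lockReleasedAfterException());  // true
    }
}
```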

Deadlock: a standstill that arises when two or more threads (or processes) contend for resources or wait on each other during execution; without outside intervention, none of them can make progress. The system is then said to be in a deadlocked state, and the threads that are waiting on each other are called deadlocked threads.

  5. Lock mechanism

1. Pessimistic locking: pessimistically assumes that every operation may cause a lost update, i.e. that someone else will modify the data, so an exclusive lock is taken every time the data is read; anyone else who wants the data is blocked until the lock is released. Usage: append for update to the SQL statement, e.g. select * from order for update. Principle: only one connection may operate on the data at a time; when one thread holds the lock, other threads must wait for it to be released. Concurrency is therefore low.

2. Optimistic locking: optimistically assumes that a read will not lead to a lost update, and controls updates with a version field. Principle: each row carries a version number; when data is read, the version is read along with it, and every successful update increments the version by 1. When an update is submitted, the stored version is compared with the version that was read: if they differ, the data is considered stale and the update fails. A second implementation replaces the version number with a timestamp field (an existing column in the table can be reused); the principle is the same.
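The version check can be sketched in memory. Row, its fields, and the update helper below are all hypothetical stand-ins for a database row with a version column; the synchronized keyword stands in for the atomicity the database's UPDATE ... WHERE version = ? would provide.

```java
public class OptimisticDemo {
    static class Row {          // hypothetical stand-in for a DB row
        int value;
        int version;
    }

    // Returns true if the update applied, false if the row changed since we read it.
    static synchronized boolean update(Row row, int readVersion, int newValue) {
        if (row.version != readVersion) {
            return false;       // stale read: someone else updated first
        }
        row.value = newValue;
        row.version++;          // bump version on every successful write
        return true;
    }

    public static void main(String[] args) {
        Row row = new Row();
        int v = row.version;                    // two "transactions" read version 0
        System.out.println(update(row, v, 10)); // true  — first writer wins
        System.out.println(update(row, v, 20)); // false — stale version, rejected
    }
}
```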

3. Reentrant lock: scoped per thread; once a thread holds an object's lock, that same thread can acquire the lock again, while other threads cannot. Principle: each lock is associated with a hold count and an owning thread. When the count is 0, the lock is unowned; when a thread acquires an unowned lock, the JVM records the owner and sets the count to 1. If the same thread acquires the lock again, the count increases by 1; each time the thread exits a synchronized block, the count decreases by 1, and when it reaches 0 the lock is released.
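The hold counter described above can be observed directly with ReentrantLock.getHoldCount(). The class name ReentrancyDemo and the outer/inner split are invented for the sketch.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    static final ReentrantLock lock = new ReentrantLock();

    static int outer() {
        lock.lock();                       // hold count: 0 -> 1
        try {
            return inner();                // same thread re-acquires below
        } finally {
            lock.unlock();                 // hold count: 1 -> 0, lock released
        }
    }

    static int inner() {
        lock.lock();                       // hold count: 1 -> 2, does NOT block
        try {
            return lock.getHoldCount();    // 2
        } finally {
            lock.unlock();                 // hold count: 2 -> 1
        }
    }

    public static void main(String[] args) {
        System.out.println(outer());              // 2
        System.out.println(lock.getHoldCount());  // 0 — fully released
    }
}
```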

4. Read-write lock (a refinement of the mutex): two threads may read a resource at the same time without problems, but while one thread is writing the resource, no other thread may read or write it. In short: read/read coexist; read/write and write/write do not.
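Both halves of that rule can be checked with ReentrantReadWriteLock and tryLock(), which returns immediately instead of blocking. The class name RwDemo is invented for the sketch.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwDemo {
    static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    // read/read coexist: a second thread CAN take the read lock while we read
    public static boolean readersCoexist() throws InterruptedException {
        AtomicBoolean secondReaderGotLock = new AtomicBoolean(false);
        rw.readLock().lock();
        try {
            Thread reader = new Thread(() -> {
                if (rw.readLock().tryLock()) {
                    secondReaderGotLock.set(true);
                    rw.readLock().unlock();
                }
            });
            reader.start();
            reader.join();
        } finally {
            rw.readLock().unlock();
        }
        return secondReaderGotLock.get();   // true
    }

    // read/write do not coexist: no thread can read while we hold the write lock
    public static boolean writerBlocksReader() throws InterruptedException {
        AtomicBoolean readerGotLock = new AtomicBoolean(true);
        rw.writeLock().lock();
        try {
            Thread reader = new Thread(() -> readerGotLock.set(rw.readLock().tryLock()));
            reader.start();
            reader.join();
        } finally {
            rw.writeLock().unlock();
        }
        return readerGotLock.get();         // false
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(readersCoexist());     // true
        System.out.println(writerBlocksReader()); // false
    }
}
```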

5. CAS (lock-free): the CAS mechanism does not lock the code at all, which makes it more efficient than locking. A CAS operation has three operands, V, E, and N: V is the value in memory to be updated, E is the expected value, and N is the new value. When a thread wants to update V, it checks V against E: if V == E, V is set to N; if V != E, another thread has changed the value in the meantime, so nothing is written and the thread can retry. For example, the database holds value = 1 and thread A wants to change it to 2. On submission, thread A expects the stored value to be 1; if it is, the submission succeeds. If the stored value is no longer 1, another thread has already changed the data and the submission fails.
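In Java, CAS(V, E, N) maps directly onto AtomicInteger.compareAndSet(expected, new). The sketch below (class name CasDemo is invented) replays the value = 1 example from the paragraph above.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // One CAS step: succeeds only when the current value V equals the expected E.
    public static boolean casStep(AtomicInteger v, int expected, int newValue) {
        return v.compareAndSet(expected, newValue);
    }

    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(1);   // V = 1, as in the example

        // Thread A expects 1 and writes 2: V == E, so the CAS succeeds.
        System.out.println(casStep(value, 1, 2));     // true
        // A later update still expecting 1 fails: V != E, nothing is written.
        System.out.println(casStep(value, 1, 3));     // false
        System.out.println(value.get());              // 2 — the failed CAS changed nothing
    }
}
```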

6. Spin lock: similar to a mutex, with one difference. Under a mutex, the thread holding the lock runs while waiting threads are put to sleep; under a spin lock, a waiting thread stays where it is, looping in place ("spinning" around and around) until the holding thread releases the lock. Spinning wastes CPU time while it lasts, but avoids the cost of suspending and waking threads, so it can pay off when locks are held only briefly.
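A minimal spin lock can be built on CAS: the flag flip is the lock, and waiting threads busy-wait until it succeeds. This is a teaching sketch (the class name SpinLock and the counter harness are invented), not a replacement for the JDK's locks.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait until we flip the flag from false to true
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // JDK 9+ hint that we are spinning
        }
    }

    public void unlock() {
        locked.set(false);
    }

    private static int counter = 0;   // guarded by the spin lock below

    public static int raceFree(int perThread) throws InterruptedException {
        counter = 0;
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter;   // 2 * perThread — no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(raceFree(10_000));   // 20000
    }
}
```

The CAS on the flag also provides the happens-before edges that make the plain counter++ safe between lock() and unlock().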

7. Segment lock: a lock design rather than a specific lock. It locks only one segment of a larger piece of data, refining the granularity of locking so that threads working on different segments do not contend. To obtain information about the data as a whole, all segment locks must be acquired for the aggregation.

Classification of locks:

By acquisition order: fair locks and non-fair locks. With a fair lock, threads acquire the lock in the order in which they requested it; with a non-fair lock, acquisition order need not match request order. Non-fair locks have higher throughput than fair locks.
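ReentrantLock exposes both policies through its constructor: true builds a fair (FIFO) lock, and the no-arg constructor builds the default non-fair lock. The class name FairnessDemo is invented for the sketch.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // true -> fair lock: threads acquire in request (FIFO) order
    public static boolean fairPolicy() {
        return new ReentrantLock(true).isFair();
    }

    // the default is non-fair, trading strict ordering for higher throughput
    public static boolean defaultPolicy() {
        return new ReentrantLock().isFair();
    }

    public static void main(String[] args) {
        System.out.println(fairPolicy());     // true
        System.out.println(defaultPolicy());  // false
    }
}
```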

By the permission a thread holds: exclusive locks and shared locks. An exclusive lock can be held by only one thread at a time; a shared lock can be held by several threads at once. In a read-write lock, the read lock is a shared lock and the write lock is an exclusive lock.

According to the perspective of concurrent synchronization: pessimistic locking and optimistic locking

By lock state (synchronized only): biased locks, lightweight locks, and heavyweight locks. A biased lock means that once a thread has acquired the lock, subsequent acquisitions by that same thread succeed automatically, reducing the cost of reacquisition. When a second thread contends for a biased lock, it is upgraded to a lightweight lock, and contending threads try to acquire it by spinning. When a spinning thread still fails after a certain number of attempts, it blocks and the lock is upgraded to a heavyweight lock.
