Review: Concurrent Programming

https://yanglinwei.blog.csdn.net/article/details/103760834

Thread states: a thread moves between New, Runnable, Running, Blocked/Waiting, and Terminated.



Thread categories: threads are divided into user threads and daemon threads. The difference is that the JVM exits once all user threads have finished, at which point any remaining daemon threads are stopped. Mark a thread as a daemon with setDaemon(true) before starting it.


Ways to create a thread: extend Thread, implement Runnable (recommended), or use an anonymous inner class / lambda.
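A minimal sketch of the first two approaches (class and method names here are illustrative, not from the original post):

```java
// Two common ways to create a thread: subclassing Thread (shown as an
// anonymous inner class) and implementing Runnable (shown as a lambda).
class CreateThreadDemo {
    static String demo() {
        StringBuilder log = new StringBuilder();
        Thread t1 = new Thread() {                   // way 1: subclass Thread
            @Override public void run() { log.append("thread;"); }
        };
        Thread t2 = new Thread(() -> log.append("runnable;"));  // way 2: Runnable
        try {
            t1.start(); t1.join();
            t2.start(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // thread;runnable;
    }
}
```

Runnable is preferred because the task is decoupled from the thread object and the class stays free to extend something else.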


Thread methods:

  • join: the calling thread waits until the joined thread finishes before continuing. (Relatedly, the number of time slices allocated to a thread determines how much processor time it gets, which is the idea behind thread priority.)
  • yield: hints the scheduler to pause the currently executing thread and let other threads run (may have no effect).
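A small sketch of join (names are illustrative):

```java
// join(): the calling thread blocks until the target thread finishes.
// Here main waits for the worker, so the result is guaranteed visible
// afterwards (join establishes a happens-before edge).
class JoinDemo {
    static int result;

    static int compute() {
        Thread worker = new Thread(() -> result = 21 * 2);
        worker.start();
        try {
            worker.join();   // main pauses here until worker completes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(compute()); // 42
    }
}
```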

Thread safety: when multiple threads share a global or static variable and at least one of them writes to it, data races may occur; this is a thread-safety problem. Concurrent reads alone do not cause conflicts.


Solutions to thread-safety issues: use synchronized or use a Lock.


Intrinsic (built-in) lock: every Java object can act as a lock to achieve synchronization; this is the intrinsic lock used by synchronized. The synchronized keyword has two forms:

  • Modifying a method: an instance method locks on this; a static method locks on the class object (the Class instance of the current class).
  • Modifying a code block: the lock granularity is smaller, and the lock object need not be this.
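A minimal sketch of both forms protecting one counter (class name is illustrative):

```java
// Synchronized method vs. synchronized block. Both forms below lock
// on `this`, so they are mutually exclusive; the block form simply
// lets you shrink the critical section or choose another lock object.
class SyncCounter {
    private int value;

    synchronized void increment() { value++; }   // method form: locks this

    void incrementBlock() {
        synchronized (this) {                    // block form: same lock here
            value++;
        }
    }

    synchronized int get() { return value; }

    static int demo() {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return c.get();   // always 20000 with synchronization
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 20000
    }
}
```

Without the synchronized keyword, the two threads' unsynchronized `value++` operations could interleave and lose updates.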

Multi-thread deadlock: synchronization nested inside synchronization (e.g., thread A holds lock 1 while waiting for lock 2, and thread B holds lock 2 while waiting for lock 1), so neither lock can ever be released.


ThreadLocal: provides each thread that uses a variable with its own independent copy, so every thread can modify its copy without affecting the copies held by other threads.
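A minimal sketch (names are illustrative): one thread writes to its copy, and the main thread's copy is unaffected.

```java
// Each thread sees its own copy of the ThreadLocal value; a write in
// one thread never affects the value observed by another thread.
class ThreadLocalDemo {
    static final ThreadLocal<Integer> LOCAL = ThreadLocal.withInitial(() -> 0);

    static int demo() {
        Thread other = new Thread(() -> LOCAL.set(99)); // touches only that thread's copy
        other.start();
        try { other.join(); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return LOCAL.get();  // main thread's copy is untouched: still 0
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 0
    }
}
```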


Three key properties of multithreading:

  • Atomicity: an operation either executes completely or not at all.
  • Visibility: when one thread modifies a variable, other threads can see the change.
  • Ordering: to improve performance, the compiler and processor may optimize and reorder instructions; the execution order of statements is not guaranteed to match the source order, but the final result is guaranteed to be consistent with sequential execution (within a single thread).


Volatile: solves the inter-thread visibility problem by forcing a thread to read the variable's latest value from main memory on every access.

  • Features: guarantees the variable's visibility to all threads (visibility) and prohibits instruction-reordering optimizations around it (ordering).
  • Differences from synchronized: reads of a volatile variable cost about the same as normal reads, while writes are somewhat slower; volatile guarantees visibility but not atomicity.
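The classic use is a stop flag; a minimal sketch (names illustrative). Without volatile, the spinning thread could legally cache `running` and never observe the write:

```java
// Volatile stop flag: the writer's update to `running` is guaranteed
// visible to the spinning reader, so the reader terminates.
class VolatileDemo {
    static volatile boolean running = true;

    static boolean demo() {
        Thread spinner = new Thread(() -> {
            while (running) { /* busy-wait until the flag flips */ }
        });
        spinner.start();
        try {
            Thread.sleep(10);
            running = false;        // visible to spinner: the field is volatile
            spinner.join(2000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return !spinner.isAlive();  // true: spinner saw the change and exited
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```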

Reordering: compilers and processors may reorder operations, but they respect data dependencies: they will not change the relative order of two operations that depend on each other's data.

  • In a single-threaded program: reordering operations that have a control dependency does not change the execution result (this is also why as-if-serial semantics allow such operations to be reordered).
  • In a multi-threaded program: reordering operations with control dependencies may change the program's execution result.

Java Memory Model (JMM): defines when writes made by one thread become visible to another. Shared variables are stored in main memory, and each thread has its own local (working) memory. When multiple threads access the same data concurrently, a thread's local memory may not be flushed to main memory in time, which is how thread-safety problems arise.



Communication between threads: multiple threads operate on the same resource but perform different operations (for example, producer and consumer threads).

  • wait (defined on Object): suspends the currently executing thread and releases the lock, giving other threads a chance to run;
  • notify/notifyAll (defined on Object): wakes up a thread (or all threads) waiting on the lock so it can run. wait and notify must be called inside synchronized, and both sides must use the same lock object;
  • sleep (defined on Thread): pauses execution for the specified time and yields the CPU to other threads, but the thread keeps any monitors it holds and resumes automatically when the time is up. sleep() does not release object locks.
  • Lock (new in JDK 1.5): provides synchronization similar to the synchronized keyword, but the lock must be acquired and released manually. The Lock interface also supports trying to acquire the lock before a deadline, returning failure if the deadline passes.
  • Condition: plays the role of Object.wait() and Object.notify() from traditional monitor code, as condition.await() and condition.signal().
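The points above can be combined into a minimal producer/consumer on a one-slot buffer (class name is illustrative):

```java
// Producer/consumer with wait/notify on a single monitor. wait()
// releases the monitor while blocked; notifyAll() wakes the other side.
// wait() is always called in a loop to guard against spurious wakeups.
class OneSlotBuffer {
    private Integer slot;   // null = empty

    synchronized void put(int v) throws InterruptedException {
        while (slot != null) wait();   // buffer full: release lock and wait
        slot = v;
        notifyAll();                   // wake a waiting consumer
    }

    synchronized int take() throws InterruptedException {
        while (slot == null) wait();   // buffer empty: release lock and wait
        int v = slot;
        slot = null;
        notifyAll();                   // wake a waiting producer
        return v;
    }

    static int demo() {
        OneSlotBuffer buf = new OneSlotBuffer();
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 3; i++) buf.put(i); }
            catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 3; i++) sum[0] += buf.take(); }
            catch (InterruptedException ignored) { }
        });
        producer.start(); consumer.start();
        try { producer.join(); consumer.join(); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return sum[0];   // 1 + 2 + 3 = 6
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 6
    }
}
```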

Concurrent package related classes:

  • CountDownLatch (counter): task A needs to wait for four other tasks to complete before it can run; CountDownLatch implements exactly this;

  • CyclicBarrier (barrier): all threads must arrive at the barrier before any of them can pass it together;

  • Semaphore (counting semaphore): set a threshold; multiple threads compete for permits and return them when done. Once the threshold is exceeded, threads applying for a permit are blocked;

  • Queue: the JDK provides two families of concurrent queue implementations, one based on ConcurrentLinkedQueue and one based on the BlockingQueue interface; both extend Queue.

  • ConcurrentLinkedQueue (non-blocking queue): suited to high-concurrency scenarios; it achieves high performance through lock-free operations and generally outperforms BlockingQueue. It is an unbounded, thread-safe queue based on linked nodes. Elements follow first-in-first-out order: the head is the element that has been in the queue longest, the tail the one added most recently. Null elements are not allowed.

  • BlockingQueue (blocking queue): when the queue is empty, a thread taking an element waits until the queue becomes non-empty; when the queue is full, a thread inserting an element waits until space is available. Implementations include:

     【ArrayBlockingQueue】: a bounded blocking queue backed internally by an array.

     【LinkedBlockingQueue】: the capacity is optional; if a size is specified at construction it is bounded, otherwise it is "unbounded" (in fact it uses a default capacity of `Integer.MAX_VALUE`). It is backed internally by a linked list.

     【PriorityBlockingQueue】: an unbounded priority queue.

     【SynchronousQueue】: holds at most one element internally; a thread inserting an element blocks until that element is consumed by another thread.
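A minimal CountDownLatch sketch of the "task A waits for four workers" scenario (names illustrative):

```java
import java.util.concurrent.CountDownLatch;

// Task A (here, the main thread) blocks on await() until four worker
// threads have each called countDown(), then proceeds.
class LatchDemo {
    static int demo() {
        CountDownLatch latch = new CountDownLatch(4);
        final int[] done = {0};
        for (int i = 0; i < 4; i++) {
            new Thread(() -> {
                synchronized (done) { done[0]++; }  // one unit of work
                latch.countDown();                  // report completion
            }).start();
        }
        try { latch.await(); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        synchronized (done) { return done[0]; }  // all 4 finished before this point
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 4
    }
}
```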
    

Thread pool: designed for bursts of work. A limited number of fixed threads serves a large number of tasks, reducing the time spent creating and destroying threads and thereby improving efficiency.


Thread pool categories

  • newCachedThreadPool: creates a cacheable thread pool. If the pool grows beyond what is needed, idle threads are flexibly reclaimed; if no idle thread is available, a new one is created.
  • newFixedThreadPool: creates a fixed-size thread pool that caps the number of concurrent threads; excess tasks wait in a queue.
  • newSingleThreadExecutor: creates a single-threaded pool that uses one unique worker thread, guaranteeing tasks execute in submission order (FIFO).
  • newScheduledThreadPool: creates a fixed-size thread pool that supports scheduled and periodic task execution.
  • newSingleThreadScheduledExecutor: creates a single-threaded executor that can schedule commands to run after a given delay or periodically.

How the pools above are created: via the Executors.newXXX factory methods. The top-level implementation of the Executor framework is the ThreadPoolExecutor class; the newScheduledThreadPool, newFixedThreadPool, and newCachedThreadPool factory methods in Executors merely call the ThreadPoolExecutor constructor with different parameters.
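Constructing ThreadPoolExecutor directly makes those parameters explicit; a minimal sketch (the parameter values are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Roughly what Executors.newFixedThreadPool hides: an explicit
// ThreadPoolExecutor with core size, max size, keep-alive, and queue.
class PoolDemo {
    static int demo() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                        // corePoolSize
                4,                                        // maximumPoolSize
                60L, TimeUnit.SECONDS,                    // idle timeout for non-core threads
                new LinkedBlockingQueue<Runnable>(10));   // bounded work queue
        final int[] count = {0};
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> { synchronized (count) { count[0]++; } });
        }
        pool.shutdown();
        try { pool.awaitTermination(5, TimeUnit.SECONDS); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        synchronized (count) { return count[0]; }  // all 5 tasks ran
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 5
    }
}
```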


Thread pool processing flow: when a task is submitted, if fewer than corePoolSize threads are running, a new thread is created to run it; otherwise the task is placed in the work queue; if the queue is full and fewer than maximumPoolSize threads exist, a non-core thread is created; if the queue is full and the pool is at its maximum size, the rejection (saturation) policy is applied.



CPU-intensive: the task performs heavy computation with no blocking, so the CPU runs at full speed. CPU-intensive tasks can only be accelerated by multi-threading on a genuinely multi-core CPU; on a single-core CPU, no matter how many threads you start, the task cannot go faster because the CPU's total computing power is fixed.


IO-intensive: the task performs a lot of IO, i.e., a lot of blocking. Running an IO-intensive task on a single thread wastes a great deal of CPU capacity on waiting, so multi-threading greatly speeds up such programs, even on a single-core CPU; the speedup mainly reclaims time otherwise wasted blocking.


Properly sizing the thread pool: CPU-intensive tasks keep the CPU busy, so the threads spend a long time on the CPU and fewer threads are needed. IO-intensive tasks leave the CPU waiting on blocked IO, so more threads are needed. (In short: the higher the proportion of time a thread spends waiting, the more threads you want; the higher the proportion of CPU time, the fewer.)

Optimal thread count = ((thread wait time + thread CPU time) / thread CPU time) × number of CPUs
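Plugging in example numbers (the figures are illustrative, not from the original post): a task that waits 50 ms on IO for every 50 ms of CPU work, on a 4-core machine.

```java
// The sizing formula above as code:
// threads = ((waitTime + cpuTime) / cpuTime) * cpus
class PoolSizeDemo {
    static int optimalThreads(double waitMs, double cpuMs, int cpus) {
        return (int) (((waitMs + cpuMs) / cpuMs) * cpus);
    }

    public static void main(String[] args) {
        System.out.println(optimalThreads(50, 50, 4)); // (100/50)*4 = 8
        System.out.println(optimalThreads(0, 50, 4));  // pure CPU work: 4
    }
}
```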


Callable and Future: unlike creating threads with Thread and Runnable, Callable and Future allow the result of a task to be retrieved. Common methods:

  • Cancel (and optionally interrupt) a running task: boolean cancel(boolean mayInterruptIfRunning)
  • Determine whether the task is complete: boolean isDone()
  • Get the result once the task finishes (blocking if necessary): V get()
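A minimal sketch (names illustrative): submit a Callable and read its result through the Future.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Unlike Runnable, a Callable returns a value (and may throw); the
// Future exposes that result plus isDone()/cancel().
class CallableDemo {
    static int demo() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(() -> 21 * 2);  // Callable<Integer>
        try {
            int result = future.get();          // blocks until the task finishes
            boolean finished = future.isDone(); // true after get() returns
            pool.shutdown();
            return finished ? result : -1;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42
    }
}
```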

Lock classification (within the same JVM):

  • Reentrant locks: synchronized (intrinsic lock, heavyweight) and ReentrantLock (explicit lock, lightweight);
  • Read-write lock: read/read can coexist; read/write cannot; write/write cannot. Usage: ① ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(); ② Lock r = rwl.readLock(); ③ Lock w = rwl.writeLock();
  • Pessimistic lock: pessimistically assumes every operation may cause a lost update, so an exclusive lock is taken on every access. Row locks, table locks, read locks, and write locks, for example, all lock before operating.
  • Optimistic lock: assumes by default that other threads will not conflict while data is being updated, so no lock is taken during the operation; conflicts are detected only at update time (typically via a version field: read the version before updating, and when updating check whether it has changed).
  • CAS lock-free mechanism: under high concurrency it can outperform lock-based code and is inherently immune to deadlock. The operation takes three parameters, CAS(V, E, N): V is the variable to update, E the expected value, and N the new value. V is set to N only if V equals E; if they differ, another thread has already updated it, and the current thread does nothing. Finally, CAS returns the current true value of V. In Java this calls the native compareAndSet operation.
  • Spin lock: the current thread executes continuously in a loop and can enter the critical section only when the loop condition is changed by another thread. Because a spin lock keeps the current thread running the loop body without a thread state change, its response is fast; but as the number of threads grows, performance drops sharply, since every spinning thread consumes CPU time. Spin locks suit cases where contention is light and locks are held only briefly.
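The read-write lock usage steps ①–③ above, assembled into a runnable sketch (names illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// ReentrantReadWriteLock: many readers may hold the read lock at once,
// but the write lock is exclusive against both readers and writers.
class RwLockDemo {
    private static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(); // ①
    private static final Lock r = rwl.readLock();    // ②
    private static final Lock w = rwl.writeLock();   // ③
    private static int value;

    static void write(int v) {
        w.lock();
        try { value = v; } finally { w.unlock(); }   // always release in finally
    }

    static int read() {
        r.lock();
        try { return value; } finally { r.unlock(); }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read()); // 42
    }
}
```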

Lock classification (across JVMs): to keep data synchronized across different JVMs, use distributed lock techniques, classified as follows:

  • Database implementation: Using exclusive lock technology (rarely used);
  • Memcached: use Memcached's add command, which is an atomic operation; add succeeds only if the key does not exist, and success means the thread has obtained the lock;
  • Redis: similar to Memcached, using Redis's setnx command, which is also atomic and succeeds only if the key does not exist;
  • Zookeeper: Use Zookeeper's sequential temporary nodes to implement distributed locks and waiting queues. The original intention of Zookeeper's design is to implement distributed lock services;
  • Chubby: The coarse-grained distributed lock service implemented by Google uses the Paxos consensus algorithm at the bottom.
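The Memcached add / Redis setnx approaches both hinge on one atomic "set only if absent" operation. A local, in-memory stand-in using ConcurrentHashMap.putIfAbsent illustrates the protocol (the class is a deliberately simplified sketch; a real implementation would talk to Redis/Memcached and add lock expiry):

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: putIfAbsent plays the role of setnx/add — it
// atomically stores the value only if the key is absent, so whichever
// caller succeeds "holds the lock".
class NaiveDistributedLock {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    boolean tryLock(String key, String owner) {
        return store.putIfAbsent(key, owner) == null;  // success iff key was absent
    }

    void unlock(String key, String owner) {
        store.remove(key, owner);  // release only if we still own it
    }

    public static void main(String[] args) {
        NaiveDistributedLock lock = new NaiveDistributedLock();
        System.out.println(lock.tryLock("order:1", "threadA")); // true  (acquired)
        System.out.println(lock.tryLock("order:1", "threadB")); // false (already held)
        lock.unlock("order:1", "threadA");
        System.out.println(lock.tryLock("order:1", "threadB")); // true
    }
}
```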

Atomic classes: support lock-free, thread-safe programming on single variables. An atomic variable behaves like a generalized volatile variable that additionally supports atomic, conditional read-modify-write operations, implemented with lock-free CAS operations in places. Common ones: AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference.
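A minimal sketch showing both the raw CAS(V, E, N) primitive and a lock-free increment (names illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet is Java's CAS(V, E, N): set to N only if the current
// value equals E; incrementAndGet is a lock-free i++ built on a CAS loop.
class AtomicDemo {
    static int demo() {
        AtomicInteger v = new AtomicInteger(5);
        boolean ok1 = v.compareAndSet(5, 6);  // E matches V: set to 6, returns true
        boolean ok2 = v.compareAndSet(5, 7);  // E is stale (V is 6): returns false
        v.incrementAndGet();                  // 6 -> 7, atomically
        return (ok1 && !ok2) ? v.get() : -1;  // 7
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 7
    }
}
```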


Disruptor framework: a high-performance asynchronous processing framework, sometimes described as the fastest messaging framework (a lightweight JMS), or as an implementation of the observer / event-listener pattern. It uses lock-free algorithms, with all memory visibility and correctness achieved through memory barriers and CAS operations, yielding very low latency.


Origin blog.csdn.net/qq_20042935/article/details/134569646