Java interview questions (concurrency)

Threads

1. Briefly describe the concepts of threads, processes, and programs; and the relationship between them

Program: a static file containing instructions and data, stored on disk or another storage device

Process: a running instance of a program, and the basic unit the system uses to run programs; running a program is the life of a process from creation through execution to termination

Thread: an execution unit smaller than a process; a process can spawn multiple threads during its execution, and they share the process's resources

2. Thread life cycle and basic state

Life cycle: new, ready, running, blocked, dead

States: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED (the six states defined in java.lang.Thread.State)

3. Understanding of thread safety, how to achieve thread safety

The essence of thread safety is memory safety: the heap is shared memory accessible to all threads, so unsynchronized access to shared mutable state there causes problems. Common ways to achieve it: synchronization (synchronized / Lock), atomic classes (CAS), immutable objects, and thread confinement (local variables, ThreadLocal).

4. The realization principle of synchronized

The synchronized keyword solves the synchronization of resource access between multiple threads, also known as "synchronization lock".

The function is to ensure that at the same time, only one thread executes the modified code block or method, so as to achieve the effect of ensuring concurrency safety.

  1. Synchronized methods: implicit, controlled by no explicit bytecode instructions; the JVM checks the ACC_SYNCHRONIZED flag in the method's access flags to know whether a method is synchronized

  1. Synchronized code blocks: the compiler emits monitorenter and monitorexit bytecode instructions around the block. On monitorenter the thread tries to acquire the object's monitor: if the monitor is free, or the current thread already owns it, the monitor's counter is incremented by 1. Each monitorexit decrements the counter by 1, and the monitor is released when the counter reaches 0. If acquisition keeps failing, the current thread blocks until the monitor is released by its owner.
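The monitor behavior above can be seen in a minimal sketch (class and field names here are illustrative): the synchronized block below compiles to monitorenter/monitorexit around the critical section, visible via `javap -c`.

```java
public class SyncCounter {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {   // monitorenter: acquire the monitor (or bump its counter if reentering)
            count++;
        }                       // monitorexit: counter -1; released when it reaches 0
    }

    public int get() {
        synchronized (lock) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000 -- no lost updates
    }
}
```

Without the synchronized block, the two threads' `count++` operations could interleave and lose updates.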

5. The difference between synchronized and ReentrantLock
  1. Synchronized can be used to decorate ordinary methods, static methods and code blocks, while ReentrantLock can only be used for code blocks.

  1. Synchronized will automatically lock and release the lock; ReentrantLock needs to manually lock and release the lock

  1. Synchronized is an unfair lock, and ReentrantLock can be either a fair lock or an unfair lock

  1. synchronized is a keyword, ReentrantLock is a class

  1. Synchronized is a lock at the jvm level, and ReentrantLock is a lock at the API level

  1. ReentrantLock can respond to interrupts while waiting for the lock (lockInterruptibly), which can be used to break out of deadlocks; a thread blocked on synchronized cannot be interrupted out of its wait
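Point 2 above is the key usage difference: ReentrantLock must be released manually, conventionally in a finally block. A minimal sketch (names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ManualLockCounter {
    private final ReentrantLock lock = new ReentrantLock(); // unfair by default
    private int value = 0;

    public void increment() {
        lock.lock();            // unlike synchronized, acquisition is explicit
        try {
            value++;
        } finally {
            lock.unlock();      // always release in finally, or the lock leaks on exceptions
        }
    }

    public int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

With synchronized, the JVM releases the monitor automatically even when an exception is thrown; with ReentrantLock, forgetting the finally block leaves the lock held forever.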

6. What is CAS? What are the pros and cons?

CAS (Compare-And-Swap) is an atomic operation used for synchronization and concurrency control in multi-threaded environments. It involves three operands: a memory location V, an expected value A, and a new value B. The value at V is updated to the new value B only if the current value at V equals the expected value A; otherwise the update fails.

Advantages and disadvantages:

Efficiency: Compared with traditional synchronization methods, CAS can perform concurrent operations without locks, avoiding mutual exclusion and blocking between threads, so it is more efficient.

No deadlock: CAS will not cause deadlock, because it does not require locking.

Ensure the atomicity of operations: CAS can ensure the atomicity of operations on the same memory location in a concurrent environment, avoiding the problem of data inconsistency.

ABA problem: When a value changes from A to B, and then to A, if the CAS check only checks whether the value is equal to A, then CAS will think that the value has not changed, which may cause some problems.

Only a single shared variable's atomicity is guaranteed: CAS cannot atomically update multiple variables at once (wrapping them in one object behind a single reference is a common workaround).

Spinning overhead is high: in high concurrency scenarios, if CAS fails consistently, it will keep spinning, occupying CPU resources, resulting in performance degradation.

Possible CPU cache effects: because CAS operates directly on shared memory, heavy contention can generate significant cache-coherence traffic between CPU cores in high-concurrency environments
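In the JDK, CAS is exposed through the java.util.concurrent.atomic classes. A minimal sketch of the V/A/B semantics above, plus AtomicStampedReference as the standard answer to the ABA problem:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class CasDemo {
    public static void main(String[] args) {
        // compareAndSet(expected A, new B): succeeds only if the current value equals A
        AtomicInteger v = new AtomicInteger(10);
        System.out.println(v.compareAndSet(10, 11)); // true, v becomes 11
        System.out.println(v.compareAndSet(10, 12)); // false, v is already 11

        // ABA defense: pair the value with a version stamp, so A -> B -> A
        // is detectable because the stamp has moved on
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();
        boolean ok = ref.compareAndSet(100, 101, stamp, stamp + 1); // value AND stamp must match
        System.out.println(ok + " stamp=" + ref.getStamp());
    }
}
```

AtomicInteger's incrementAndGet() is itself a CAS retry loop, which is exactly the "spinning overhead" cost mentioned above under heavy contention.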

7. How does ReentrantLock realize the fairness and unfairness of locks?

ReentrantLock decides whether to use fair locks according to the parameters passed in, and uses unfair locks by default;

Fair lock: Refers to multiple threads acquiring locks in the order in which they apply for locks. Threads directly enter the queue to queue, and the first thread in the queue can acquire the lock.

Unfair lock: When multiple threads are locked, they directly try to acquire the lock. If they can grab the lock, they will directly occupy the lock. If they cannot grab the lock, they will wait at the end of the waiting queue.
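In code, fairness is chosen via the ReentrantLock constructor, as described above:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();     // no-arg constructor: unfair lock
        ReentrantLock fair = new ReentrantLock(true);   // true: fair, FIFO queue ordering

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

The unfair default usually has higher throughput because an arriving thread may grab a just-released lock without the cost of waking a queued thread; the fair lock trades that throughput for predictable FIFO ordering.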

8. Under what circumstances does a deadlock occur and how to avoid it

When multiple threads compete for shared resources, the data can become inconsistent, so a lock is taken before operating on the shared resource: only the thread that successfully acquires the lock may operate on it, and threads that fail to acquire it must wait until the lock is released. Deadlock occurs when threads end up waiting on each other's locks forever;

The four necessary conditions for deadlock:

  1. Mutual exclusion: a resource can be used by only one thread at a time

  1. Hold and wait: a thread blocked waiting for a resource does not release the resources it already holds

  1. No preemption: a resource a thread has acquired cannot be forcibly taken away before the thread is done with it

  1. Circular wait: several threads form a head-to-tail cycle, each waiting for a resource the next one holds

Breaking any one of these four conditions prevents deadlock; in practice the circular-wait condition is the easiest one to break.

How to avoid:

  1. Ensure that each thread locks in the same order

  1. Set a timeout for each lock

  1. Deadlock Detection Mechanism
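Rule 1 above (lock in the same order) can be sketched as follows: because every thread takes LOCK_A before LOCK_B, the cycle "one thread holds A waiting for B while another holds B waiting for A" cannot form. Names here are illustrative.

```java
public class LockOrdering {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    static void transfer() {
        synchronized (LOCK_A) {       // always A first...
            synchronized (LOCK_B) {   // ...then B, in every thread
                // operate on both shared resources here
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::transfer);
        Thread t2 = new Thread(LockOrdering::transfer);
        t1.start(); t2.start();
        t1.join(); t2.join();         // both finish: no circular wait is possible
        System.out.println("no deadlock");
    }
}
```

If one thread instead locked B first, both orderings running concurrently could each hold one lock and wait on the other forever.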

9. Usage scenarios of volatile and ThreadLocal

volatile: guarantees visibility and a degree of ordering. A shared variable modified by volatile carries two layers of semantics: writes by one thread are immediately visible to other threads operating on the variable, and instruction reordering around it is forbidden.

Usage scenarios:

Shared status flags whose changes must be visible to other threads immediately

A low-overhead read-write strategy (volatile reads combined with locked writes)

Double-checked locking in singletons
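The double-checked-locking singleton mentioned above is the classic case where volatile's no-reordering semantics matter: without it, another thread could observe a non-null but not-yet-constructed instance.

```java
public class Singleton {
    // volatile forbids reordering of "allocate -> initialize -> publish reference",
    // so no thread can see a half-constructed instance
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock (fast path)
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

Once the instance exists, callers take the lock-free fast path; the synchronized block is only entered during the race to initialize.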

ThreadLocal: a thread-local variable. It provides each thread that uses the variable with an independent copy, so each thread can modify its own copy without affecting the copies held by other threads; the object's visibility is confined to a single thread.

Usage scenarios:

Per-thread database connections

Session management
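A minimal sketch of the per-thread-copy behavior (the StringBuilder payload is illustrative): each thread's get() returns its own instance, and writes never leak across threads.

```java
public class ThreadLocalDemo {
    // Each thread that calls get() lazily receives its own independent StringBuilder
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            BUFFER.get().append(Thread.currentThread().getName());
            System.out.println(BUFFER.get()); // each thread sees only its own writes
            BUFFER.remove();                  // clean up: vital when threads are pooled
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

The remove() call is the habit that prevents the memory leak discussed in the next question.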

10. When will ThreadLocal have a memory leak?

Memory leak: each Thread holds a ThreadLocalMap (its threadLocals field). The map's keys are weak references to ThreadLocal objects, but the values are strong references. As long as the Thread has not ended (typical with thread pools), its threadLocals is never released; if many values are set and remove() is never called, the stale Entries remain reachable through the thread and leak.

So while the Thread is alive, the threadLocals in the Thread is not recycled, and any Entry that is not manually removed stays in the map forever; calling remove() after use avoids the leak.

11. The difference between synchronized and volatile
  1. synchronized modifies methods and code blocks; volatile modifies only variables

  1. synchronized may cause threads to block; volatile never causes blocking

  1. the compiler may reorder and optimize code around variables protected by synchronized; accesses to volatile variables are never reordered by the compiler

  1. synchronized guarantees both the visibility and the atomicity of the protected operations; volatile guarantees only visibility, not atomicity

  1. synchronized locks the object so that only the owning thread may access it while other threads block; volatile essentially tells the JVM that the variable's value in working memory (registers/caches) may be stale and must be re-read from main memory

12. Three major characteristics of concurrency

Ordering: the program appears to execute in the order the code is written; the JVM and CPU may reorder instructions, but only so long as single-threaded semantics (and the happens-before rules) are preserved

Atomicity: One or more operations, either all of them are executed or none of them are executed

Visibility: When a thread modifies the value of a shared variable, other threads can see the modified value

13. Parallel, concurrent, serial

Concurrency: within the same period of time, multiple tasks make progress by interleaving their instructions on the CPU, not necessarily at the same instant

Parallelism: At the same time, multiple instructions are executed on the CPU at the same time

Serial: Execute one task at a time, and then continue to execute other tasks after the execution is completed

14. How to solve the problem of concurrent access by multiple users

Distributed is to improve efficiency by shortening the execution time of a single task, while cluster is to improve efficiency by increasing the number of tasks executed per unit time.

Clusters are mainly divided into: high availability clusters, load balancing clusters, scientific computing clusters

Distributed means different services are deployed on different nodes, while a cluster means several servers are aggregated to provide the same service. In a distributed system each service node can itself be a cluster, but a cluster is not necessarily distributed.

15. The way of process communication
  1. Pipes

  1. Message queues

  1. Semaphores

  1. Shared memory

  1. Sockets

16. Thread creation method
  1. Extend the Thread class and override the run method

  1. Implement the Runnable interface and override the run method

  1. Implement Callable, override the call method, and wrap it in a FutureTask

  1. Obtain threads from a thread pool

 Digging down to the bottom layer, all four ultimately execute through Runnable's run()
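Way 3 above is the only one that returns a result: Callable produces a value (and may throw), and FutureTask bridges it to Thread because FutureTask is itself a Runnable. A minimal sketch:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        // Callable returns a value and may throw a checked exception, unlike Runnable
        Callable<Integer> task = () -> 1 + 2;

        // FutureTask implements Runnable, so it can be handed to a Thread
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();

        System.out.println(future.get()); // blocks until the result is ready: 3
    }
}
```

future.get() is where the caller synchronizes with the worker: it blocks until call() has finished, then returns the value (or rethrows the task's exception wrapped in ExecutionException).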

17. The way to terminate the thread
  1. The run method ends normally

  1. Use a shared flag variable (defined externally; modify it to break the loop)

  1. Use the interrupt() method to end the thread cooperatively

  1. Use the stop() method (deprecated and thread-unsafe; not recommended)
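Way 3 above is cooperative: interrupt() only sets a flag, and the thread must check it to exit. A minimal sketch:

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // exit when the interrupt flag is set; blocking calls like sleep()
            // would instead throw InterruptedException, which should also end the loop
            while (!Thread.currentThread().isInterrupted()) {
                // simulate a unit of work
            }
        });
        worker.start();
        worker.interrupt();   // sets the flag; does not forcibly kill the thread
        worker.join(1000);
        System.out.println(worker.isAlive()); // false: the loop saw the flag and returned
    }
}
```

This is why stop() is deprecated: it kills the thread at an arbitrary point and releases its monitors mid-update, whereas interrupt() lets the thread finish or roll back cleanly before exiting.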

18. Java locks and their classification

According to the weight of the lock:

Heavyweight lock: synchronized (in its fully inflated form, backed by OS-level blocking)

Lightweight lock: CAS/spin-based locking, such as the lock implementations built on AQS

According to whether the lock is reentrant:

Reentrant locks: both ReentrantLock and synchronized are reentrant (the same thread can acquire the lock it already holds)

Non-reentrant lock: a simple spin lock built directly on CAS, which deadlocks if its owner tries to acquire it again

According to the fairness of the lock:

Fair lock: Passing true when creating ReentrantLock is a fair lock

Unfair lock: other scenarios including synchronized are unfair locks

Distinguish according to the principle of lock:

Optimistic locking: assumes the probability of multiple threads modifying the shared resource at the same time is low. It works by modifying the shared resource first and then verifying whether a conflict occurred in the meantime: if no other thread modified the resource, the operation completes; if another thread did, the operation is abandoned or retried (CAS is the typical implementation).

Pessimistic locking: assumes the probability of concurrent modification is high, so conflicts are likely, and therefore takes the lock before accessing the shared resource.

Distinguish according to the mutual exclusion of locks:

Mutex: only one thread at a time can access the resource protected by the mutex; lock before accessing the shared object and unlock after the access completes

Shared lock: ReentrantReadWriteLock.readLock() is a shared lock — the read lock does not exclude other threads' reads, only writes
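The shared/exclusive split above can be sketched with ReentrantReadWriteLock (class and field names are illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rw.readLock().lock();      // shared: many readers may hold it simultaneously
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(int v) {
        rw.writeLock().lock();     // exclusive: blocks both readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

This pays off in read-heavy workloads: readers proceed in parallel, and only writes serialize.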

19. The difference between tryLock() and Lock() methods in ReentrantLock

lock(): a blocking acquire with no return value; if the thread does not get the lock, it blocks until it does.

tryLock(): attempts to acquire the lock and may or may not succeed; it does not block the thread, returning true if the lock was acquired and false otherwise (an overload accepts a timeout). Spinning on tryLock is flexible, but the busy retries consume CPU.
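A minimal sketch of the tryLock pattern (method name is illustrative): the caller gets an immediate true/false instead of blocking, and can fall back to skipping or retrying.

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    static boolean doWork() {
        // non-blocking attempt; tryLock(timeout, unit) is the timed variant
        if (LOCK.tryLock()) {
            try {
                return true;        // got the lock: do the protected work here
            } finally {
                LOCK.unlock();
            }
        }
        return false;               // lock busy: skip, retry later, or report
    }

    public static void main(String[] args) {
        System.out.println(doWork()); // true when uncontended
    }
}
```

The timed overload tryLock(100, TimeUnit.MILLISECONDS) is also the standard way to break condition 2 of deadlock (hold-and-wait): give up and release what you hold if the second lock does not arrive in time.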

20. The difference between sleep and wait
  1. sleep belongs to the method in the Thread class, and wait belongs to the method of the Object class

  1. sleep wakes up automatically when its time elapses; wait (without a timeout) must be woken explicitly via notify/notifyAll

  1. a thread that sleeps while holding a lock does not release the lock; wait releases the lock while waiting

  1. sleep can be executed while holding a lock or not holding a lock; wait must hold a lock to execute
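Points 3 and 4 above can be demonstrated in one sketch: wait() must be called while holding the monitor, and it releases that monitor while parked, which is exactly why the notifier can enter the same synchronized block.

```java
public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static boolean ready = false;

    public static boolean demo() throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {          // wait() requires holding the monitor
                while (!ready) {
                    try {
                        LOCK.wait();       // releases LOCK while parked (sleep would keep it)
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        waiter.start();
        Thread.sleep(50);                  // give the waiter time to park
        synchronized (LOCK) {              // enterable only because wait() released the monitor
            ready = true;
            LOCK.notify();                 // wake the waiter; it reacquires LOCK, sees ready, exits
        }
        waiter.join(2000);
        return !waiter.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

The while (not if) around wait() guards against spurious wakeups, which the Object.wait documentation explicitly allows.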

21. The difference between Thread and Runnable
  1. Creating a thread by extending Thread uses up the class's single inheritance slot; a class implementing Runnable can still extend another class and implement other interfaces

  1. Runnable is an interface while Thread is a class

  1. Runnable makes it easy for multiple threads to share the same task object and its resources

22. Understanding of daemon threads

Also called a background thread, it provides services for the non-daemon threads. The JVM exits once only daemon threads remain, so a daemon thread must tolerate being terminated at any moment. setDaemon(true) must be called before start(); a thread that is already running cannot be turned into a daemon thread.

Such as: GC garbage collection thread

23. The difference between start and run
  1. start() starts a new thread; run() is just an ordinary method on the Thread object, and calling it directly executes in the current thread

  1. the code that should run in parallel goes in run(); run() is invoked automatically in the new thread after start() is called

  1. run must be public, return type is void

Thread Pool

1. Why shouldn't the Executors factory methods be used to create a thread pool

Create pools with the ThreadPoolExecutor constructor instead: the developer then states the pool's running rules explicitly and avoids the risk of resource exhaustion.

The Executors factories hide dangerous defaults: newFixedThreadPool and newSingleThreadExecutor use unbounded queues, so pending requests can pile up until memory is exhausted, while newCachedThreadPool can create a huge number of threads.

2. How to customize your own thread pool according to actual needs

Choose the number of threads: for CPU-bound tasks, roughly 1-2 times the number of CPU cores; for IO-bound tasks, more than twice the cores, since threads spend much of their time waiting.

Custom thread factory: give the pool's threads a descriptive name, which helps locate the specific pool when reading logs and thread dumps.

Choose an appropriate rejection policy: the default throws an exception; if the task must be executed and cannot be discarded, CallerRunsPolicy makes the submitting thread run the task itself.

3. Synchronized biased locks, lightweight locks, and heavyweight locks

Biased lock: the id of the thread currently holding the lock is recorded in the lock object's header (mark word); if the same thread requests the lock again, it acquires it directly without any CAS

Lightweight lock: upgraded from a biased lock — while one thread holds the biased lock, a second thread competing for it triggers the upgrade; it is implemented with CAS spinning and does not block the competing thread

Heavyweight lock: if the lock is still not obtained after too many spins, it inflates to a heavyweight lock, and waiting threads block

4. The principle of thread pool
  1. Threads are scarce resources, using the thread pool can reduce the number of threads created and destroyed, and each worker thread can be reused

  1. The number of worker threads in the thread pool can be adjusted according to the capacity of the system to prevent the server from crashing due to excessive memory consumption

5. Why create a thread pool? common thread pool
  1. Reduce resource consumption and improve thread utilization

  1. Improve response speed

  1. Improve thread controllability

Commonly used thread pools:

  1. Single-thread pool (Executors.newSingleThreadExecutor)

  1. Fixed-size pool (Executors.newFixedThreadPool)

  1. Cacheable pool (Executors.newCachedThreadPool)

  1. Scheduled pool with an effectively unbounded thread limit (Executors.newScheduledThreadPool)

6. Briefly describe several parameters for creating a thread pool
  1. Core pool size (corePoolSize)

  1. Maximum pool size (maximumPoolSize)

  1. Maximum idle time for non-core threads (keepAliveTime)

  1. Time unit (unit)

  1. Task queue (workQueue)

  1. Thread factory (threadFactory)

  1. Rejection policy (handler — executed when the queue is full, the pool has reached its maximum size, and another task is submitted)

Rejection policies:

Throw an exception directly (AbortPolicy, the default)

Silently discard the new task (DiscardPolicy)

Discard the oldest task waiting at the head of the queue, then accept the new one (DiscardOldestPolicy)

The thread that submitted the task runs it itself, guaranteeing the task is not lost (CallerRunsPolicy)
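The seven parameters above map one-to-one onto the ThreadPoolExecutor constructor. A minimal sketch — the sizes and the "biz-pool" thread name are illustrative values, not from the original text:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    private static final AtomicInteger SEQ = new AtomicInteger(1);

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // 1. corePoolSize
                4,                                  // 2. maximumPoolSize
                60L,                                // 3. keepAliveTime
                TimeUnit.SECONDS,                   // 4. unit
                new ArrayBlockingQueue<>(100),      // 5. workQueue (bounded!)
                r -> new Thread(r, "biz-pool-" + SEQ.getAndIncrement()), // 6. threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy());              // 7. handler

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(done.get()); // 10: every submitted task ran
    }
}
```

The bounded queue plus CallerRunsPolicy is the conservative combination: overflow work slows the submitter down instead of being dropped or piling up without limit.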

7. Implementation principle of thread pool
8. The process of adding worker threads to the thread pool
9. Execution process of thread pool
  1. Create a certain number of threads

  1. Add task to task queue

  1. The thread gets the task from the task queue and executes it

  1. The finished thread returns to the thread pool

10. Why does the thread pool build non-core threads for empty tasks

If corePoolSize is set to 0, execute() puts the task straight into the blocking queue even though no worker thread exists yet. So, when the pool's state is normal, addWorker is called with an empty (null) first task to create a non-core worker whose job is simply to drain and execute the tasks in the blocking queue.

11. The role of blocking queues in thread pools

Compared with an ordinary queue, a blocking queue can block: a thread offering to a full queue waits until there is room, and a thread taking from an empty queue waits until an element arrives;

  1. It buffers tasks that cannot be executed immediately

  1. It guarantees that when the task queue is empty, threads trying to take a task block, entering the waiting state and releasing CPU resources

  1. Its blocking and wake-up behavior is built in, so no extra coordination is needed: when there are no tasks to execute, the pool parks worker threads in the queue's take() method, keeping the core threads alive without constantly occupying the CPU

12. The principle of thread reuse in the thread pool

The core principle is that the thread pool wraps worker threads around tasks rather than creating a new thread for each task: each worker runs a loop that repeatedly checks whether a task is waiting in the queue and, if so, calls the task's run() method directly, just like any ordinary method call. Each task's run() is thereby executed in series on the same worker thread, so the number of threads does not grow.

13. Where is the thread pool used in the project?

It decouples the main business flow from secondary work, so the main flow does not have to wait.

1. Announcements send SMS notifications asynchronously (build a task, submit it to the thread pool, and it is sent asynchronously)

Steps for usage:

Define the SMS sending interface

SMS sending implementation class

Build thread pool configuration

The controller module triggers

2. Record the login log (need to process the login log, but do not want to delay the login)

14. High concurrency solution
  1. Static HTML

Serve pages as static HTML. This simplest method is in fact the most effective, because a purely static HTML page is the cheapest and fastest thing to serve;

2. Image server separation

Images are the most resource-hungry assets, so it is worth separating them from the pages; large sites run dedicated image servers, often many of them. This architecture reduces the load on the servers handling page requests and ensures the system does not go down because of image traffic.

3. Database clustering and sharding (by database and by table)

Under heavy traffic the database bottleneck shows up quickly, and a single database soon cannot keep up with the application, so database clusters or database/table hashing (sharding) become necessary.

4. Load balancing

Load balancing is the high-end solution large sites adopt for heavy load and large numbers of concurrent requests.

5. CDN acceleration

CDN stands for Content Delivery Network. Its goal is to add a new layer of network architecture on top of the existing Internet and publish the site's content to the network "edge" closest to users, so users can fetch the content they need nearby, improving the response speed of the site.


Origin blog.csdn.net/qq_35056891/article/details/129652162