Java concurrent programming interview questions

Table of contents

1. Thread, process, program

2. Thread status

3. Seven parameters of threads

4. What are the advantages and disadvantages of threads?

5. What is the difference between the start and run methods?

6. What is the difference between wait and sleep?

7. The difference between Lock and synchronized

8. Is the volatile keyword thread-safe? What is the underlying principle?

9. What are the functions and underlying principles of synchronized?

11. Is ThreadLocal thread-safe? What is the underlying principle? Will there be a memory leak?

12. What is the difference between HashMap and ConcurrentHashMap?

13. What is the difference between HashMap and HashTable?

1. Thread, process, program

Process: We call a running program a process. Each process occupies memory and CPU resources. Processes are independent of each other.

Thread: A thread is an execution unit in a process and is responsible for the execution of programs in the current process. A process can contain multiple threads. Multi-threading can improve the parallel running efficiency of programs.

Program: It is a file containing instructions and data, which is stored on a disk or other data storage device. That is to say, the program is a static code.

2. Thread status

1. New: the thread object has been created, but start() has not been called yet.

2. Ready (Runnable): start() has been called; the thread is eligible to run but has not yet been given CPU time.

3. Running: the thread has been granted the CPU in the ready state and is executing.

4. Blocked: the thread failed to acquire a lock and waits in the blocked state until the lock becomes available.

5. Waiting: the thread waits indefinitely until another thread wakes it with notify() / notifyAll().

6. Timed waiting (sleep): the thread sleeps for a period of time and returns to the ready state when the time is up.

7. Terminated: the thread has finished executing (thread death).
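The lifecycle above can be observed directly with Thread.getState(). A minimal sketch (the class and method names are illustrative, the API is standard JDK):

```java
public class ThreadStateDemo {
    /** Returns the states observed before start() and after completion. */
    static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {});      // created but not started
        Thread.State before = t.getState();   // NEW
        t.start();                            // now RUNNABLE (ready/running)
        t.join();                             // wait for it to finish
        Thread.State after = t.getState();    // TERMINATED
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.State[] s = observe();
        System.out.println(s[0] + " -> " + s[1]); // NEW -> TERMINATED
    }
}
```

Note that Java's Thread.State merges "ready" and "running" into RUNNABLE, and models sleep as TIMED_WAITING.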

3. Seven parameters of threads

1. Core pool size (corePoolSize): the minimum number of threads kept alive.

2. Maximum pool size (maximumPoolSize): the maximum number of threads the pool is allowed to create.

3. Keep-alive time (keepAliveTime): how long surplus idle threads may stay alive when the thread count exceeds the core pool size.

4. Time unit (unit): the unit for keepAliveTime, such as milliseconds or seconds (a TimeUnit value).

5. Blocking queue (workQueue): the queue that holds tasks waiting to be executed.

6. Thread factory (threadFactory): the factory used to create new worker threads.

7. Rejection strategy (rejectedExecutionHandler): the policy applied when a task cannot be accepted by the pool.

When a task is submitted, the pool first tries to create a core thread to run it. Once the number of threads has reached corePoolSize, new tasks are placed into the blocking queue instead.

If the blocking queue is full, the pool tries to create additional threads, up to maximumPoolSize.

If the number of threads has reached the maximum and the blocking queue is also full, the rejection strategy is executed.

Example: suppose the core pool size is 10, the maximum pool size is 20, and the blocking queue holds 50 tasks.

When 100 tasks enter the pool, the 10 core threads start executing immediately and 50 tasks enter the blocking queue.

Then 10 additional threads are created to execute the next 10 tasks. The remaining 30 tasks can be neither queued nor executed, so they are rejected.
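The seven parameters map one-to-one onto the ThreadPoolExecutor constructor. A minimal sketch using the numbers from the example above (the class name is illustrative, the constructor is standard JDK API):

```java
import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10,                                    // 1. corePoolSize
                20,                                    // 2. maximumPoolSize
                60L,                                   // 3. keepAliveTime
                TimeUnit.SECONDS,                      // 4. unit
                new ArrayBlockingQueue<>(50),          // 5. workQueue (capacity 50)
                Executors.defaultThreadFactory(),      // 6. threadFactory
                new ThreadPoolExecutor.AbortPolicy()); // 7. rejection strategy

        pool.execute(() ->
                System.out.println("task on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```

AbortPolicy throws RejectedExecutionException for rejected tasks; CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy are the other built-in strategies.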

4. What are the advantages and disadvantages of threads?

Advantages:

1. On a multi-core CPU, parallel execution improves program performance. For example, if a method is time-consuming, its logic can be split across several threads running concurrently to improve efficiency.

2. Threads can hide the latency caused by network waits and I/O responses.

3. Better CPU utilization, and better utilization of network resources.

Disadvantages:

1. Each thread occupies memory (for its stack and bookkeeping); the more threads, the more memory used.

2. Access to shared resources across threads raises thread-safety issues.

3. Multi-threading introduces context switching: the CPU rotates between tasks via time-slice scheduling, and each switch itself consumes CPU resources.

5. What is the difference between the start and run methods?

        Calling start() actually starts the thread: the JVM creates a new thread, which then invokes run(). Calling run() directly is just an ordinary method call on the thread object and executes in the current (e.g., main) thread.
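The difference is easy to see by checking which thread actually executes the body. A small sketch (class and method names are illustrative):

```java
public class StartVsRun {
    /** Calls run() directly; the body executes on the calling thread. */
    static String callRun() {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        t.run();          // ordinary method call, no new thread is created
        return name[0];
    }

    /** Calls start(); the JVM spawns a new thread which invokes run(). */
    static String callStart() throws InterruptedException {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        t.start();        // new call stack on a new thread
        t.join();
        return name[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("run()   executed on: " + callRun());   // main
        System.out.println("start() executed on: " + callStart()); // e.g. Thread-1
    }
}
```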

6. What is the difference between wait and sleep?

Similarities: both wait(long) and sleep(long) make the current thread temporarily give up the CPU and enter a waiting state.

Differences:

sleep is a static method of Thread, while wait() is an instance method of Object, so every object has it.

sleep suspends the current thread for a period of time and then wakes it up automatically; wait() must be woken by notify() or notifyAll() (unless a timeout was given), otherwise it stays blocked.

wait() must be called while holding the lock of the object being waited on, and it releases that lock while waiting, allowing other threads to acquire it ("I give up the CPU, and you may take the lock").

sleep has no such requirement, and if it is called inside a synchronized block it does not release the lock ("I give up the CPU, but you still can't take the lock").
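A minimal wait/notify sketch of the points above. The 100 ms sleep only encourages the waiter to block first; the while loop makes the code correct even if the notify happens before the wait (names are illustrative):

```java
public class WaitNotifyDemo {
    /** Runs a waiter that blocks in wait() until notified; returns true if it woke up. */
    static boolean demo() throws InterruptedException {
        final Object lock = new Object();
        final boolean[] ready = {false};
        final boolean[] woke = {false};

        Thread waiter = new Thread(() -> {
            synchronized (lock) {              // must hold the lock before calling wait()
                while (!ready[0]) {
                    try {
                        lock.wait();           // releases the lock and blocks
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                woke[0] = true;
            }
        });
        waiter.start();
        Thread.sleep(100);                     // give the waiter time to block

        synchronized (lock) {                  // notify must also hold the lock
            ready[0] = true;
            lock.notify();                     // wakes one thread waiting on lock
        }
        waiter.join(1000);
        return woke[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("waiter woke up: " + demo());
    }
}
```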

7. The difference between Lock and synchronized

At the syntax level:

1. synchronized is a keyword; its implementation lives inside the JVM and is written in C++.

2. Lock is an interface; its source code is provided by the JDK and implemented in Java.

3. With synchronized, the JVM releases the lock automatically when the thread exits the synchronized block. With Lock, you must call unlock() manually; if you forget, other threads may block on the lock forever.

At the functional level:

1. Lock offers more choices of locking mechanism, such as fair and unfair locks, reentrant locks, and read-write locks; synchronized provides only one kind, a reentrant exclusive lock.

2. Lock is preferable in high-concurrency scenarios for better performance and flexibility, while synchronized suits simple thread-synchronization scenarios and is easier to use and maintain.
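A typical Lock usage sketch, showing the manual unlock() in finally that synchronized does not need (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final ReentrantLock lock = new ReentrantLock(); // default: unfair lock
    private int count = 0;

    void increment() {
        lock.lock();        // unlike synchronized, the lock is NOT released automatically
        try {
            count++;
        } finally {
            lock.unlock();  // always unlock in finally, or the lock may never be released
        }
    }

    int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        LockCounter c = new LockCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 4000
    }
}
```

new ReentrantLock(true) would give a fair lock instead, one of the choices synchronized cannot offer.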

8. Is the volatile keyword thread-safe? What is the underlying principle?

What is the visibility problem?

        When a thread modifies a variable but the new value has not yet been written back to main memory, other threads may read the stale value of the variable, causing program errors.

There are three important features in concurrent programming:

Atomicity:

An operation, or a group of operations, either succeeds completely or fails completely; an operation that satisfies atomicity cannot be interrupted partway through.

Visibility :

When multiple threads jointly access a shared variable, if a thread modifies the variable, other threads can immediately see the modified value.

Orderliness :

        Program execution follows the order of the code. (The JMM allows the compiler and processor to reorder instructions for efficiency. Reordering preserves serial semantics within a single thread but can appear out of order across threads, so multi-threaded programming must allow some reordering for performance while still guaranteeing order where it matters.)

So how does the volatile keyword ensure visibility and order ?

Ensuring visibility: when a variable is declared volatile, a write to it forces the JVM to flush the latest value from working memory to main memory, and it invalidates the cached copies of that variable in other threads. When another thread finds that the copy in its local working memory is invalid, it re-reads the variable from main memory, so the value it obtains is always the latest one, achieving visibility between threads.

Thread :

A thread is the smallest unit of program execution, and multiple threads can execute at the same time. In Java, each thread has its own working memory

Working memory :

        Each thread has its own working memory, also known as the thread's local memory . The working memory is private to the thread and is used to store information such as stack frames and local variables during thread execution

Main memory :

        Main memory is a memory area shared by all threads . All variables are stored in main memory , including static variables, instance variables, etc. Main memory is the storage area for data interaction between threads .

Ensuring orderliness: when the compiler generates bytecode, it inserts memory barriers into the instruction sequence to prohibit instruction reordering, thereby ensuring order (shielding CPU instruction reordering in a multi-threaded environment).

In short, the volatile keyword guarantees visibility and ordering but not atomicity, so volatile alone does not make code thread-safe.
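A small sketch of the visibility guarantee: the worker spins on a flag until another thread's write becomes visible. Without volatile on the flag, the worker could in principle spin forever on a stale cached value (the class name and the 100 ms sleep are illustrative):

```java
public class VolatileFlag {
    volatile boolean stop = false; // volatile: writes become visible to other threads

    /** Starts a worker that spins until stop becomes true; returns true if it terminated. */
    boolean runDemo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { } // busy-wait; each iteration re-reads the volatile flag
        });
        worker.start();
        Thread.sleep(100);    // let the worker start spinning
        stop = true;          // volatile write: flushed to main memory, visible to worker
        worker.join(1000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker terminated: " + new VolatileFlag().runDemo());
    }
}
```

Note that `stop = true` is a single write; something like `count++` (read-modify-write) would still race even on a volatile field, which is why volatile does not provide atomicity.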

9. What are the functions and underlying principles of synchronized?

1. The Java keyword synchronized is a mechanism for implementing multi-thread synchronization, ensuring that multiple threads will not create competition and conflicts when accessing shared resources, thereby avoiding data inconsistencies. Its main principle is based on the internal lock of the Java object, that is, the monitor lock (Monitor Lock), which ensures that only one thread can access the protected code block or method at the same time .

2. When a thread tries to acquire a resource protected by the synchronized keyword, if the resource is already occupied by other threads, the thread will enter a blocked waiting state. When the thread occupying the resource releases the resource, the threads in the waiting queue will compete to obtain the resource, and only one thread will successfully obtain the resource, while other threads continue to wait.

3. The synchronized keyword guarantees visibility and atomicity. Visibility is achieved through the underlying memory barrier of the JVM, and atomicity is achieved through the mutual exclusion of monitor locks .

Inside a synchronized block, when a thread acquires the lock it clears its working memory, so the variables it uses are re-read from main memory; when it releases the lock, the modified variables in working memory are written back to main memory. Other threads can then read the latest values, which ensures visibility.
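A minimal sketch of synchronized providing both atomicity and visibility for a shared counter (the class name is illustrative):

```java
public class SyncCounter {
    private int count = 0;

    // Locks on `this` (the object's monitor); only one thread at a time can enter,
    // and the JVM releases the lock automatically when the method exits.
    synchronized void increment() { count++; }

    synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 2000
    }
}
```

Without synchronized, the two threads' `count++` operations could interleave and lose updates, giving a result below 2000.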

11. Is ThreadLocal thread-safe? What is the underlying principle? Will there be a memory leak?

Weak reference: an object that is reachable only through weak references is reclaimed as soon as the garbage collector runs, regardless of whether JVM memory is scarce.

ThreadLocal: creates a per-thread copy of a shared variable; each thread accesses only its own internal copy, so thread safety is guaranteed through ThreadLocal.

The ThreadLocal class contains a static inner class, ThreadLocalMap (similar to a Map), which stores each thread's variable copy as key-value pairs. The key of each entry in ThreadLocalMap is the ThreadLocal object itself, and the value is the thread's variable copy. ThreadLocal does not store values itself; it only serves as the key in ThreadLocalMap. Note that the key is held as a weak reference: with no strong reference chain, the key may be reclaimed during GC. This leaves entries in ThreadLocalMap whose key is null. Because the key has become null, those entries can no longer be reached, but the entries themselves are not cleared. If they are not deleted manually, their memory can be neither reclaimed nor accessed, which is a memory leak. So after using a ThreadLocal, remember to call its remove() method.

1. Each Thread maintains a ThreadLocalMap. The key of this ThreadLocalMap is the ThreadLocal instance itself, and the value is the real value Object to be stored.

2. Each Thread thread has a ThreadLocalMap inside, and the Map stores the ThreadLocal object (key) and the variable copy (value) of the thread

3. The Map inside Thread is maintained by ThreadLocal, and ThreadLocal is responsible for obtaining and setting thread variable values ​​from the map

4. Each thread reads only its own copy; no thread can obtain another thread's copy, so the copies are isolated from each other and do not interfere with one another.

Note: without a thread pool, even if remove() is never called, the thread's ThreadLocalMap becomes unreachable once the thread terminates, so its "variable copies" are garbage-collected and no memory leak occurs. The leak risk arises with pooled threads, which stay alive and keep their maps.
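A minimal ThreadLocal sketch showing per-thread copies and the remove() call recommended above (the class name and the StringBuilder payload are illustrative):

```java
public class ThreadLocalDemo {
    // Each thread lazily gets its own StringBuilder; no sharing between threads.
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    static String useBuffer(String name) {
        try {
            return BUFFER.get().append(name).toString(); // touches only this thread's copy
        } finally {
            BUFFER.remove(); // important with thread pools: clears this thread's entry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> System.out.println(useBuffer("A"))); // prints A
        Thread b = new Thread(() -> System.out.println(useBuffer("B"))); // prints B
        a.start(); b.start();
        a.join(); b.join();
    }
}
```

Each thread sees only its own appended value, never the other thread's, which is exactly the copy isolation described in point 4.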

12. What is the difference between HashMap and ConcurrentHashMap?

First of all, HashMap and ConcurrentHashMap are both implementation classes of the Map interface . Their differences can be analyzed from the following three aspects:

Thread safety: HashMap has no locking mechanism, so it is not thread-safe; ConcurrentHashMap adds internal locking (segment locks in JDK 1.7, CAS + synchronized in JDK 1.8), making it thread-safe.

Concurrency performance: thanks to its fine-grained locking, ConcurrentHashMap supports simultaneous reads and writes from multiple threads and performs better in high-concurrency scenarios. HashMap does no locking by default; if multiple threads modify the same HashMap object concurrently without external locking, data races and corrupted state can occur (in JDK 1.7, even infinite loops during resize).

Underlying data structure: since JDK 1.8, both use array + linked list + red-black tree at the bottom; the difference is the concurrency control ConcurrentHashMap layers on top (segments in 1.7, per-bucket CAS + synchronized in 1.8).

The underlying segmentation lock principle of ConcurrentHashMap :

In JDK 1.7: ConcurrentHashMap ensures thread safety with a segmented-lock mechanism: the data is divided into segments for storage, and each segment is assigned its own lock. While one thread holds the lock on one segment, data in the other segments can still be accessed. However, this design still hits a performance bottleneck under high concurrency, because threads accessing the same segment must compete for the same lock.

From JDK 1.8: to improve efficiency, the segmented-lock design was abandoned in favor of Node + CAS + synchronized for concurrency safety. The underlying table is an array of Node buckets, each heading an independent linked list or red-black tree, and different threads can operate on different buckets at the same time, avoiding contention over a single lock and improving concurrency performance. synchronized locks only the head node of the current bucket's linked list or red-black tree, so as long as hashes do not collide, there is no contention and efficiency improves.
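A small sketch of this fine-grained safety in practice: four threads update the same key concurrently through merge(), which ConcurrentHashMap performs atomically, so no updates are lost (the class name, key, and counts are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    /** Four threads each add 1000 to the same counter key. */
    static int countConcurrently() throws InterruptedException {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counts.merge("hits", 1, Integer::sum); // atomic read-modify-write
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counts.get("hits");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countConcurrently()); // 4000
    }
}
```

The same loop against a plain HashMap (without external locking) would race and typically lose updates.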

13. What is the difference between HashMap and HashTable?

1. Thread safety :

        Hashtable is thread-safe and HashMap is not: Hashtable's core methods are all marked synchronized, while HashMap does no locking.

2. Handling of null values:

        HashMap allows keys and values to be null: it permits one null key and any number of null values. Hashtable allows neither; attempting to store a null key or value throws a NullPointerException.

3. Traversal method:

        HashMap is traversed with an Iterator, which is fail-fast and supports safe removal during iteration via Iterator.remove(); Hashtable additionally supports the legacy Enumeration, which is not fail-fast and provides no way to remove elements.

4. Initial capacity and expansion mechanism:

The default initial capacity of Hashtable is 11. Each time the capacity is expanded, the capacity becomes twice the original capacity plus one.

The default initial capacity of HashMap is 16 , and the capacity doubles each time it is expanded.
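The null-handling difference from point 2 can be demonstrated in a few lines (the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value");   // one null key is allowed
        hashMap.put("key", null);     // null values are allowed
        System.out.println(hashMap.get(null)); // value

        Map<String, String> table = new Hashtable<>();
        try {
            table.put(null, "value"); // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable threw NullPointerException");
        }
    }
}
```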


Origin blog.csdn.net/weixin_71921932/article/details/131568566