[Continuously updated] Day 2 of the basics of classic computer interview questions


1. What is the singleton pattern and is it thread safe?

Singleton pattern is a design pattern that aims to ensure that a class has only one instance and provides a global access point. By using the singleton pattern, you can avoid creating the same object multiple times, save memory resources, and ensure the consistency of the object.

In Java, the common ways to implement the singleton pattern are to use the lazy style and the hungry style.

Lazy initialization (lazy style): the instance is created on first use. Whether a lazy singleton is thread-safe depends on the implementation. A naive lazy implementation is unsafe in a multi-threaded environment: several threads may pass the null check at the same time and each create its own instance. A thread-safe lazy singleton can be achieved by locking (synchronized) or by double-checked locking.

public class LazySingleton {

    private static LazySingleton instance;

    private LazySingleton() {
        // private constructor prevents outside instantiation
    }

    public static synchronized LazySingleton getInstance() {
        if (instance == null) {
            instance = new LazySingleton();
        }
        return instance;
    }
}
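The synchronized method above takes the lock on every call, even after the instance exists. The double-checked locking variant mentioned earlier only synchronizes during first creation; a minimal sketch (the class name is illustrative):

```java
public class DclSingleton {

    // volatile prevents a thread from observing a partially constructed instance
    private static volatile DclSingleton instance;

    private DclSingleton() {
        // private constructor prevents outside instantiation
    }

    public static DclSingleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (DclSingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }
}
```

Note that the volatile modifier is required; without it, instruction reordering can publish a not-yet-initialized object to other threads.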

Eager initialization (hungry style): the instance is created when the class is loaded. Eager initialization is thread-safe because class loading guarantees that the instance is created exactly once. However, since the instance is created up front even if it is never used, it may waste resources.

public class EagerSingleton {

    private static final EagerSingleton instance = new EagerSingleton();

    private EagerSingleton() {
        // private constructor prevents outside instantiation
    }

    public static EagerSingleton getInstance() {
        return instance;
    }
}

2. What is Vector, is it thread safe?

Vector is a synchronized container class in Java that implements a dynamically sized array. It is similar to ArrayList, but thread-safe.

Thread safety means that, in a multi-threaded environment, access to and operations on shared data do not conflict or produce inconsistent results. Vector's methods, such as add(), remove() and get(), are synchronized, which guarantees that individual operations on a Vector are thread-safe.

However, although Vector's individual operations are thread-safe, using Vector can still cause performance problems under high concurrency: every method call locks the entire collection, so other threads must wait, which reduces throughput.

Therefore, if you only need a dynamic array in a single-threaded environment, ArrayList is recommended. If you need one in a multi-threaded environment, consider Vector or another thread-safe container class, such as CopyOnWriteArrayList (or, for maps, ConcurrentHashMap).
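As a small illustration of Vector's synchronized methods, the sketch below has two threads append 1000 elements each; because add() is synchronized, no insertions are lost (the class and method names here are illustrative):

```java
import java.util.List;
import java.util.Vector;

public class VectorDemo {

    // two threads each add 1000 elements; Vector's synchronized add() loses none
    static int concurrentFill() throws InterruptedException {
        List<Integer> list = new Vector<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                list.add(i);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return list.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(concurrentFill());   // 2000
    }
}
```

With a plain ArrayList in place of the Vector, the same run could lose elements or throw, since ArrayList.add() is not synchronized.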

3. Please tell me the difference between synchronized and lock

synchronized and Lock are both mechanisms for achieving thread synchronization in Java.

  • synchronized is a keyword that can be applied directly to methods or code blocks; the lock is acquired and released automatically.
  • Lock is an interface (java.util.concurrent.locks.Lock, commonly implemented by ReentrantLock); you must call lock() and unlock() manually to acquire and release the lock.

the difference:

  • How locks are acquired and released: synchronized acquires and releases the lock automatically, while Lock requires calling lock() and unlock() manually.
  • Reentrancy: both synchronized and ReentrantLock are reentrant, but a Lock must be released as many times as it was acquired.
  • Interruptible waiting: a thread blocked waiting for a synchronized lock cannot be interrupted, while Lock offers lockInterruptibly() for interruptible acquisition.
  • Condition variables: synchronized uses wait(), notify() and notifyAll(); Lock uses the Condition interface, which provides a more flexible wait/notify mechanism (for example, multiple conditions per lock).

In general, synchronized is easy to use and fits simple synchronization needs, while Lock offers more features and flexibility for complex scenarios. On modern JVMs the performance of the two is comparable for short, lightly contended critical sections, so the choice is usually driven by the features required rather than raw speed.
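A minimal sketch of the manual acquire/release pattern with ReentrantLock, the most common Lock implementation (the LockCounter class is illustrative). The unlock() call goes in a finally block so the lock is released even if the critical section throws:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();           // acquire manually (unlike synchronized)
        try {
            count++;
        } finally {
            lock.unlock();     // always release in finally, even on exception
        }
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        LockCounter c = new LockCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get());   // 2000
    }
}
```

The same counter written with a synchronized method needs no try/finally, which is exactly the trade-off described above: less control, less ceremony.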

4. Please tell me what heap memory is and how it is allocated.

Heap memory is the runtime data area in which the Java virtual machine stores objects and arrays. Allocating heap memory is the responsibility of the JVM: when we create an object, the JVM reserves a block of memory in the heap to store the object's header and instance fields. The size of the heap can be specified through startup parameters (for example -Xms for the initial size and -Xmx for the maximum), or adjusted automatically based on the machine's physical memory.

In heap memory, objects are managed and released through the garbage collection mechanism. When an object is no longer referenced, the garbage collector automatically marks and reclaims the memory space occupied by the object so that subsequent objects can continue to be allocated in this space.

The allocation method of heap memory is dynamic, and it is allocated and released according to the needs of the program at runtime. Since the allocation and release of heap memory are time-consuming operations, the performance and algorithm of the garbage collection mechanism have a great impact on program performance and memory usage efficiency.

It should be noted that the heap memory is shared by multiple threads, and multiple threads can allocate and access objects in the heap memory at the same time. Therefore, in multi-threaded programming, you need to pay attention to the issues of access synchronization and thread safety to shared objects.
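The current heap sizes can be inspected at runtime through the Runtime API; a small sketch (the printed values depend on the JVM and its startup flags, so no expected output is shown):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // current heap size, upper limit (-Xmx), and free space, in MiB
        System.out.println("total: " + rt.totalMemory() / (1024 * 1024) + " MiB");
        System.out.println("max:   " + rt.maxMemory()   / (1024 * 1024) + " MiB");
        System.out.println("free:  " + rt.freeMemory()  / (1024 * 1024) + " MiB");
    }
}
```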

5. Please tell me what off-heap memory is and how it is allocated.

Off-heap memory refers to memory used by a Java program that lies outside the Java heap. Unlike heap memory, off-heap memory is not managed by Java's garbage collector.

The allocation and release of off-heap memory are controlled manually by the programmer rather than handled automatically by the Java virtual machine. Typically this is done through native methods or low-level interfaces provided by the JDK or the operating system (for example, direct ByteBuffers).

Common scenarios for using off-heap memory include:

  1. Processing large amounts of data, such as big arrays or image data, that cannot fit entirely in the heap.
  2. Memory-resident data structures, such as caches or database connection pools.
  3. Interacting with the underlying system, such as direct file operations or network data transfers, where direct buffers avoid an extra copy.
  4. Reducing the impact of garbage collection: large, long-lived buffers kept off-heap are not scanned or moved by the collector, so they do not lengthen GC pauses.
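A common way to allocate off-heap memory from plain Java is a direct ByteBuffer; a minimal sketch:

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // allocateDirect reserves memory outside the Java heap; the buffer's
        // contents are not moved by the garbage collector's compaction
        ByteBuffer buf = ByteBuffer.allocateDirect(1024);

        buf.putInt(42);
        buf.flip();                          // switch from writing to reading
        System.out.println(buf.getInt());    // 42
        System.out.println(buf.isDirect());  // true
    }
}
```

The small ByteBuffer wrapper object itself still lives on the heap; only the backing storage is off-heap, and it is released by native code when the buffer becomes unreachable.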

6. Talk about the difference between process and thread

A process is a running instance of a program and has its own memory space and system resources. A thread is an execution unit within a process; threads of the same process share its resources.

  • Resource usage: each process owns independent resources; threads within a process share them.
  • Switching cost: switching between processes is expensive; switching between threads is cheap.
  • Communication and synchronization: inter-process communication requires specific mechanisms (pipes, sockets, shared memory), while threads in the same process can communicate and synchronize directly through shared memory.
  • Independence: processes are isolated from each other; threads share the same address space.
  • Fault tolerance: a crashed process does not affect other processes, but an uncaught error in one thread can bring down the entire process.

7. Talk about the understanding of multithreading

Multi-threading means running multiple threads concurrently within one program; on a multi-core processor they can execute in parallel, improving the program's concurrency and efficiency.

The understanding of multi-threading can be considered from the following perspectives:

  1. Concurrency and responsiveness: Multi-threading allows the program to perform multiple tasks at the same time, improves the concurrency of the program, and increases the throughput of the system. At the same time, multi-threading can improve the responsiveness of the program. By executing time-consuming operations in the background thread, the main thread can quickly respond to user operations.
  2. Resource sharing and synchronization: Multiple threads share the resources of the same process, such as memory, files, network connections, etc. But at the same time, it is also necessary to consider the synchronization problem between threads to ensure that access to shared resources is safe and to avoid problems such as race conditions.
  3. Concurrency control and thread safety: Multi-thread programming needs to deal with concurrency control issues, such as mutexes, semaphores, condition variables, etc., to ensure the safe execution of threads. Thread safety means that multiple threads accessing a shared resource will not produce incorrect results.
  4. Scheduling and priority: Multithreads are allocated and scheduled on multi-core processors through the operating system's scheduling mechanism. The priority of the thread can be specified. High-priority threads may get the opportunity to execute earlier, but the absolute order is not guaranteed.
  5. Deadlock and livelock: multi-threaded programming carries the risk of deadlock and livelock. Deadlock means that multiple threads wait for each other to release resources and none can continue. Livelock means that threads are not blocked, but keep reacting to one another's state changes and make no real progress.
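A minimal sketch of point 1: a background thread does the heavy computation while the main thread stays free to respond (the sumTo helper is illustrative):

```java
public class MultiThreadDemo {

    // heavy computation to run off the main thread
    static long sumTo(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] result = new long[1];
        Thread worker = new Thread(() -> result[0] = sumTo(1_000_000));
        worker.start();                  // background thread does the heavy work

        System.out.println("main thread stays responsive while the worker runs");
        worker.join();                   // wait for the worker to finish
        System.out.println(result[0]);   // 500000500000
    }
}
```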

8. Talk about your understanding of thread pools

The thread pool is a thread management mechanism that can create a certain number of threads in advance and submit tasks to the thread pool for execution. The thread pool maintains a set of reusable threads, which can effectively manage and control the number of threads and improve system performance and stability.

The understanding of the thread pool can be considered from the following aspects:

  1. Thread reuse: the thread pool creates a certain number of threads when it is initialized and keeps them in the pool. When a task is submitted, the pool picks an idle thread to run it. After the task completes, the thread is not destroyed but kept in the pool for subsequent tasks, avoiding the overhead of repeatedly creating and destroying threads.
  2. Thread management and control: The thread pool can control the number of threads, thread priority, thread idle time, etc. by setting parameters. The size and configuration of the thread pool can be dynamically adjusted according to the load of the system to adapt to different application scenarios and resource requirements.
  3. Task queuing and scheduling: The thread pool will maintain a task queue, queue the tasks submitted to the thread pool in order, and select the appropriate thread to execute the task through the thread scheduling algorithm. The task queue can prevent system resources from being exhausted due to too many tasks. It can also implement task priority scheduling, task rejection policies, etc.
  4. Exception handling and monitoring: The thread pool can handle thread exceptions to prevent the entire system from crashing due to exception delivery. At the same time, the thread pool can also provide monitoring and statistical information, such as the number of active threads in the thread pool, the number of completed tasks, the length of the task queue, etc., to monitor the running status and performance indicators of the thread pool.
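The knobs described in points 1-3 map directly onto the parameters of the ThreadPoolExecutor constructor; a sketch (the sizes, queue capacity, and rejection policy below are arbitrary examples, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPool {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                // core pool size (kept alive)
                4,                                // maximum pool size
                60, TimeUnit.SECONDS,             // idle time before extra threads die
                new ArrayBlockingQueue<>(10),     // bounded task queue
                new ThreadPoolExecutor.CallerRunsPolicy());  // rejection policy

        pool.execute(() ->
                System.out.println("task ran on " + Thread.currentThread().getName()));

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The bounded queue plus rejection policy is what keeps a flood of submissions from exhausting memory, as point 3 describes.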

9. What thread pools does Java provide?

  1. FixedThreadPool: A fixed-size thread pool. The number of threads is specified when it is created. The number of threads in the thread pool always remains unchanged. Applicable to scenarios that need to control the number of concurrent threads.
  2. CachedThreadPool: A cacheable thread pool. The number of threads in the thread pool is automatically adjusted as needed. If there are idle threads, they will be reused. If there are no idle threads, new threads will be created. Suitable for scenarios where a large number of short-term asynchronous tasks are executed.
  3. ScheduledThreadPool: Scheduled task thread pool, used to execute tasks after a given delay or at scheduled times. Applicable to scenarios that require scheduled task execution.
  4. SingleThreadExecutor: A single-threaded thread pool with only one worker thread executing tasks to ensure that tasks are executed in the specified order. Suitable for scenarios where tasks need to be performed in sequence.
  5. WorkStealingPool: Work-stealing thread pool, each thread maintains its own task queue, and idle threads will steal task execution from the queues of other threads to improve parallelism. Suitable for scenarios where a large number of independent tasks are performed.

These thread pools all implement the ExecutorService interface; for finer control, a pool's behavior can be customized by constructing a ThreadPoolExecutor directly. Using the thread pools Java provides avoids manually creating and managing threads, improving system performance and maintainability.
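A brief usage sketch of one of these factory methods (the task is a trivial placeholder):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);  // fixed-size pool

        // submit returns a Future; get() blocks until the task completes
        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get());   // 42

        pool.shutdown();               // stop accepting tasks, finish queued ones
    }
}
```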

10. Talk about the understanding of hashmap

HashMap is a data structure in Java that is implemented on top of a hash table and stores key-value mappings. HashMap provides fast lookup, insertion and deletion operations and is one of the most commonly used collection classes.

The understanding of HashMap can be considered from the following aspects:

  1. Storage structure: HashMap is implemented internally using arrays and linked lists (or red-black trees). Arrays are used to store buckets, and each bucket stores a linked list (or red-black tree). The linked list (or red-black tree) is used to resolve hash conflicts, that is, when multiple keys map to the same bucket.
  2. Key-value pair mapping: HashMap maps keys to array index positions through a hash function to achieve fast search operations. Each key-value pair is stored in a linked list (or red-black tree) in a bucket. The bucket is located by the hash value of the key, and then the corresponding value is found by comparing the equals method of the key.
  3. Hash conflicts: since different keys may map to the same bucket, HashMap must resolve hash collisions. When a bucket's linked list grows too long (8 nodes by default since Java 8), it is converted into a red-black tree to speed up lookups.
  4. Thread-unsafe: HashMap is not thread-safe. Simultaneous read and write operations by multiple threads may cause data inconsistency. If you need to use it in a multi-threaded environment, you can use ConcurrentHashMap or perform external synchronization through a synchronization mechanism.
  5. Efficiency and capacity: HashMap's performance is affected by its initial capacity and load factor. The initial capacity is the number of buckets when the table is created, and the load factor is the fill ratio at which the table is resized (rehashed). Too high a load factor increases hash collisions; too low a load factor wastes storage space.

In short, HashMap is a data structure used to store key-value pairs. The key is mapped to the index position of the array through a hash function to achieve fast lookup operations. It provides efficient insertion, deletion, and lookup operations, but requires attention to thread safety and reasonable capacity settings.
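A short usage sketch of the points above (the keys and values are arbitrary):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    public static void main(String[] args) {
        // default initial capacity is 16 buckets, default load factor 0.75
        Map<String, Integer> map = new HashMap<>();

        map.put("apple", 3);
        map.put("pear", 5);
        map.put("apple", 7);              // same key: the old value is replaced

        System.out.println(map.get("apple"));            // 7
        System.out.println(map.getOrDefault("plum", 0)); // 0
        System.out.println(map.size());                  // 2
    }
}
```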


Origin blog.csdn.net/godnightshao/article/details/132741030