[Continuously updated] Basic Computer Classic Interview Questions, Day 1

1. The composition of the JVM

  1. Class Loader: loads compiled Java class files into the JVM and dynamically loads required classes at runtime.
  2. Runtime Data Area: the JVM's memory-management area, mainly comprising the method area, heap, stacks, and program counters.
  3. Execution Engine: executes bytecode instructions, via an interpreter, a just-in-time (JIT) compiler, or both.
  4. Garbage Collector: automatically manages the allocation and release of heap memory and reclaims the memory of unreachable objects.
  5. Native Interface (JNI, Java Native Interface): allows Java code to interact with native code and call methods written in C, C++, and other languages.
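As a small illustration of the runtime data area, the heap's current dimensions can be queried through the standard `Runtime` API. This is a minimal sketch; the exact numbers depend on JVM flags such as `-Xmx`:

```java
// A minimal sketch: querying the JVM's heap (part of the runtime data area)
// via the standard Runtime API.
public class JvmAreasDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // upper bound of the heap (-Xmx)
        long total = rt.totalMemory(); // heap currently reserved from the OS
        long free = rt.freeMemory();   // unused part of the reserved heap
        System.out.println("max=" + max + " total=" + total + " free=" + free);
    }
}
```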

2. Understanding of the heap and stack in the JVM

  1. Method Area: part of the JVM that stores class structure information, such as fields, methods, the constant pool, and static variables. It is shared by all threads. The JVM specification does not mandate a particular implementation, so different JVMs may implement the method area differently.
  2. Heap: the area of the JVM where object instances are stored; memory for objects created by a Java program is allocated here. The heap is shared by all threads. It is created when the JVM starts, and its size can be adjusted through startup parameters. The heap is divided into the young generation and the old generation, and the young generation is further divided into Eden space and two Survivor spaces (From and To).
  • Eden space: objects are initially allocated in Eden. When Eden fills up, a Minor GC is triggered and surviving objects are copied to a Survivor space.
  • Survivor space: holds objects that survived collection in Eden. When a Survivor space fills up, a Minor GC copies the surviving objects to the other Survivor space and clears the current one.
  • Old generation: holds long-lived objects. When the old generation fills up, a Full GC is triggered.
  3. Stack: a thread-private area used to store a method's local variables, operand stack, return value, and exception-handling information. Each method invocation creates a stack frame holding that method's local variables, operand stack, and related bookkeeping; as methods are called and return, stack frames are pushed and popped.
  4. Program Counter: a thread-private area that records the address of the bytecode instruction currently being executed by the thread. Each thread has its own program counter, which is saved and restored across thread switches. Because it is thread-private and tiny, the program counter is the one area where no OutOfMemoryError can occur, and it is not garbage collected.
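The heap/stack split above can be seen in a few lines: objects live on the heap, while each method call pushes a frame onto the calling thread's stack. A minimal sketch:

```java
// A minimal sketch contrasting heap and stack: the array object lives on the
// heap; each recursive call to depth() pushes a new frame onto the stack.
public class HeapVsStackDemo {
    static int depth(int n) {          // each call adds one stack frame
        return n == 0 ? 0 : 1 + depth(n - 1);
    }
    public static void main(String[] args) {
        int[] onHeap = new int[1_000]; // array object allocated on the heap
        System.out.println("frames counted: " + depth(100));
        System.out.println("heap array length: " + onHeap.length);
    }
}
```

Unbounded recursion would eventually exhaust the stack and throw `StackOverflowError`, while allocating too many objects exhausts the heap and throws `OutOfMemoryError`.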

3. Three ways to load Java classes

Java class loading is the process of reading a class's bytecode file into the JVM, then linking and initializing the class. There are three ways Java classes get loaded:

  1. Implicit loading: when the program creates an object with the new keyword or accesses a static method or static field, the JVM automatically loads the corresponding class. This is the most common way classes are loaded, and the default.
  2. Explicit loading: use the Class.forName() method to load a class explicitly. Class.forName() loads the class by its fully qualified name (including the package name) and returns the corresponding Class object. This allows classes to be loaded dynamically, deciding which class to load based on runtime conditions.
  3. Passive loading: a class that is referenced but not actually used is not loaded; it is loaded only when it is actively used for the first time. Common passive scenarios include accessing a parent class's static field through a subclass (only the parent is initialized), referencing a class through an array definition, and referencing compile-time constants, which are copied into the calling class's constant pool at compile time.
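Explicit loading can be sketched in a couple of lines; here the standard library class `java.util.ArrayList` stands in for a class chosen at runtime:

```java
// A minimal sketch of explicit class loading with Class.forName(), using the
// standard class java.util.ArrayList as the dynamically chosen target.
public class ExplicitLoadDemo {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName("java.util.ArrayList");
        Object list = clazz.getDeclaredConstructor().newInstance();
        System.out.println(clazz.getName());                // "java.util.ArrayList"
        System.out.println(list instanceof java.util.List); // "true"
    }
}
```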

4. Tell me about your understanding of the parent delegation mechanism in Java.

The parent delegation model is part of Java's class-loading mechanism. Its core idea: when a class loader needs to load a class, it first delegates the request to its parent loader; only if the parent (and its ancestors, up to the bootstrap loader) cannot load the class does the current loader attempt to load it itself. This guarantees that each class is loaded only once and by a trusted loader, avoiding duplicate loading and preventing malicious code from replacing core classes. The model thus plays an important role in Java, enabling safe sharing and reuse of classes.
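The delegation logic can be sketched as a custom loader. This is a simplified model of what `ClassLoader.loadClass` does, not the JDK's actual code (locking and the resolve step are glossed over), and it assumes a non-null parent loader is passed in:

```java
// A hedged sketch of parent delegation: try the cache, then the parent,
// and only load locally if the parent chain fails.
class DelegatingLoader extends ClassLoader {
    DelegatingLoader(ClassLoader parent) { super(parent); }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);       // 1. already loaded here?
        if (c == null) {
            try {
                c = getParent().loadClass(name);  // 2. delegate to the parent first
            } catch (ClassNotFoundException e) {
                c = findClass(name);              // 3. parent failed: load locally
            }
        }
        if (resolve) resolveClass(c);
        return c;
    }
}
```

Loading `java.lang.String` through such a loader always yields the bootstrap loader's `String.class`, which is exactly the safety property the model provides.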

5. Please tell me your understanding of GC.

GC (Garbage Collection) is the JVM's automatic memory-management mechanism: it reclaims the memory occupied by objects that are no longer in use. Collectors typically work through steps such as marking, sweeping, and compacting. Programmers do not trigger collection manually; the JVM's garbage collector runs it automatically. Reasonable coding and memory management let GC do its job well and improve program performance and stability.
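Reachability is the key idea: once the last strong reference to an object is dropped, the object becomes eligible for collection. A `WeakReference` lets us observe this; note that `System.gc()` is only a hint, so the reference being cleared is likely but not strictly guaranteed:

```java
import java.lang.ref.WeakReference;

// A minimal sketch: dropping the only strong reference makes an object
// eligible for GC; a WeakReference does not keep it alive.
public class GcDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        strong = null;   // object is now only weakly reachable
        System.gc();     // request a collection (hint only, not guaranteed)
        System.out.println("collected? " + (weak.get() == null));
    }
}
```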

6. How does HashMap expand its capacity?

When HashMap inserts an element, it checks whether a resize is needed based on the load factor. The load factor is the ratio of stored entries to table capacity at which resizing is triggered.

When the number of entries exceeds capacity × load factor (the default load factor is 0.75), a resize is triggered. Resizing creates a new, larger table and redistributes the existing entries into it, reducing hash collisions and keeping lookups efficient.

The expansion process of HashMap roughly includes the following steps:

  1. Create a new table with twice the capacity of the old one. Since HashMap's capacity is always a power of two, doubling keeps it a power of two.
  2. Traverse each bucket of the old table and move its entries to the new table. Because the capacity exactly doubles, an entry's position need not be fully recomputed: each entry either stays at its old index or moves to oldIndex + oldCapacity.
  3. Set the new table as the current one; the old table becomes garbage awaiting collection.

Resizing has a performance cost, because entries must be redistributed. To reduce the frequency of resizes, you can balance capacity against performance by tuning the initial capacity and load factor: a smaller load factor makes the table resize sooner and use more memory, while a larger load factor resizes less often but causes more hash collisions.
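One practical consequence: when the expected entry count is known, sizing the map up front avoids resizes entirely. The sketch below also re-implements the power-of-two rounding HashMap performs internally (our own helper here, not a call into the JDK's private method):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch: pre-sizing a HashMap so no resize happens while filling it.
public class HashMapResizeDemo {
    // Smallest power of two >= cap (mirrors HashMap's internal rounding).
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : n + 1;
    }
    public static void main(String[] args) {
        // Expecting ~1000 entries: capacity 2048 keeps the load under 0.75.
        Map<String, Integer> map = new HashMap<>(2048);
        for (int i = 0; i < 1000; i++) map.put("k" + i, i);
        System.out.println(map.size());       // prints "1000"
        System.out.println(tableSizeFor(17)); // prints "32"
    }
}
```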

7. Talk about your understanding of Hashtable and ConcurrentHashMap

Hashtable and ConcurrentHashMap are both thread-safe hash table implementations in Java. They have some similarities in functionality and usage, but there are some differences in internal implementation and performance.

  1. Hashtable: the earliest thread-safe hash table in Java. Every method is synchronized (via the synchronized keyword), so only one thread can access the table at a time. This coarse locking makes Hashtable's performance in multithreaded environments relatively poor.
  2. ConcurrentHashMap: a high-performance thread-safe hash table introduced in Java 5. In Java 7 and earlier it used segment locks (Segment), with each Segment acting as a small independent hash table so that different segments could be read and written concurrently. Since Java 8 it instead uses CAS operations plus per-bucket synchronized blocks, giving even finer-grained locking. Either way, different threads can usually operate on different parts of the table simultaneously, so ConcurrentHashMap offers high performance and scalability in concurrent environments.
  3. The differences:
  • Thread safety: Hashtable locks the entire table with synchronized methods; ConcurrentHashMap uses fine-grained locking (Segments in Java 7 and earlier, CAS plus per-bucket locks since Java 8).
  • Performance: ConcurrentHashMap outperforms Hashtable under multithreaded load and supports higher concurrency.
  • Iterators: Hashtable's iterator is fail-fast: modifying the table during iteration (other than through the iterator itself) throws ConcurrentModificationException. ConcurrentHashMap's iterator is weakly consistent: the map may be modified during iteration without an exception, though the iterator may or may not reflect those modifications.

If you need a hash table in a multithreaded environment with high performance requirements, use ConcurrentHashMap.

In a single-threaded environment, a plain HashMap is usually the better choice; Hashtable is a legacy class kept mainly for backward compatibility.
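A typical use of ConcurrentHashMap is concurrent counting. `merge()` is atomic per key, so increments from multiple threads are not lost, with no external locking:

```java
import java.util.concurrent.ConcurrentHashMap;

// A minimal sketch: two threads incrementing a shared counter safely.
public class ConcurrentCountDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) counts.merge("hits", 1, Integer::sum);
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("hits")); // prints "2000"
    }
}
```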

8. Let’s talk about TCP’s three-way handshake and four-way wave.

TCP (Transmission Control Protocol) is a reliable, connection-oriented transport protocol. Establishing a TCP connection requires a three-way handshake, and closing one requires a four-way wave.

The process of Three-way Handshake is as follows:

  1. First handshake: The client sends a TCP segment with a SYN (synchronization) flag to the server, requesting to establish a connection. At this point the client enters the SYN_SENT state.
  2. Second handshake: After receiving the client's request, the server replies to the client with a message segment with SYN and ACK (confirmation) flags. At this point the server enters the SYN_RECEIVED state.
  3. Third handshake: After receiving the server's reply, the client sends a message segment with the ACK flag to the server again, indicating that the connection has been established. At this point, the connection is established, both the client and the server enter the ESTABLISHED state, and data transmission can begin.

The process of the Four-way Wave (connection termination) is as follows:

  1. The first wave: the client sends a segment with the FIN (finish) flag to the server, indicating that it will send no more data. The client enters the FIN_WAIT_1 state.
  2. The second wave: on receiving the FIN, the server replies with an ACK segment, acknowledging the close request. The server enters the CLOSE_WAIT state, and the client, on receiving this ACK, enters FIN_WAIT_2.
  3. The third wave: when the server has no more data to send, it sends its own FIN segment to the client. The server enters the LAST_ACK state.
  4. The fourth wave: on receiving the server's FIN, the client replies with an ACK segment. The client enters the TIME_WAIT state and waits for a period (typically 2 MSL, twice the maximum segment lifetime) before closing. The server closes the connection and enters the CLOSED state as soon as it receives the ACK.

Through the three-way handshake, the client and the server establish a reliable connection; through the four-way wave, both parties complete the data transmission and safely close the connection. This ensures reliable transmission of data and normal release of connections.
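In application code these exchanges are handled for you by the OS TCP stack: `Socket.connect()` performs the three-way handshake, and `close()` starts the four-way wave. A minimal loopback sketch:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// A minimal sketch: the handshake happens inside connect(), the wave inside
// close(); no segment-level code is needed at the application layer.
public class TcpHandshakeDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {       // ephemeral port
            try (Socket client = new Socket()) {
                // three-way handshake happens here
                client.connect(new InetSocketAddress("127.0.0.1",
                        server.getLocalPort()), 1000);
                System.out.println("connected: " + client.isConnected());
            } // close() initiates the four-way wave (FIN/ACK exchange)
        }
    }
}
```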

9. Talk about the difference between TCP and UDP

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two commonly used transport layer protocols. They have the following differences in characteristics and applicable scenarios.

  1. Connectivity:
  • TCP is connection-oriented: it establishes a connection through a three-way handshake and provides reliable, ordered, byte-stream-oriented data transmission. TCP guarantees integrity and reliability, suiting scenarios that demand data accuracy.
  • UDP is connectionless: it sends datagrams directly without establishing a connection. UDP offers a simple, low-overhead transmission model, suiting scenarios that demand real-time delivery but can tolerate lower reliability.
  2. Data transmission characteristics:
  • TCP provides reliable transmission: sequence numbers and acknowledgements ensure data arrives in order without loss. TCP also provides flow control and congestion control to avoid network congestion and data loss.
  • UDP provides unreliable transmission: datagrams may be lost, duplicated, or arrive out of order. UDP has no flow or congestion control, so transmission is faster but delivery is not guaranteed.
  3. Message size:
  • TCP is a byte stream with no message boundaries and no fixed size limit, so it can carry large amounts of data and suits large file transfers.
  • A single UDP datagram is limited to 64 KB (at most 65,507 bytes of payload over IPv4), so UDP suits smaller packets.
  4. Efficiency:
  • TCP's reliability mechanisms add latency, so its transmission is comparatively slow.
  • UDP lacks TCP's congestion control and retransmission, so it is more efficient; but given its lower reliability, it is unsuitable for scenarios that demand data accuracy.
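The connectionless model is visible in code: a UDP sender simply hands a datagram to the network, with no handshake and no acknowledgement. A loopback sketch:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// A minimal sketch: one UDP datagram over loopback, no connection setup.
public class UdpDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // ephemeral port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = "ping".getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[64];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(1000); // loopback delivery is fast, but hedge
            receiver.receive(in);
            System.out.println(new String(in.getData(), 0, in.getLength(),
                    StandardCharsets.UTF_8)); // prints "ping"
        }
    }
}
```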

10. The difference between ArrayList and LinkedList

ArrayList and LinkedList are both commonly used collection classes in Java. They have the following differences:

  1. Internal structure:
  • ArrayList is backed by an array, so elements can be accessed and modified quickly by index.
  • LinkedList is backed by a doubly linked list; each node holds the element plus references to the previous and next nodes.
  2. Insertion and deletion:
  • For ArrayList, inserting or deleting in the middle is O(n), because subsequent elements must be shifted; appending at the end is amortized O(1).
  • For LinkedList, inserting or deleting at a known position (at either end, or through an iterator) is O(1), since only node references change; reaching an arbitrary index first still costs O(n).
  3. Random access:
  • ArrayList supports random access: elements are reached directly by index in O(1).
  • LinkedList does not support random access: reaching an index requires traversal from one end of the list, with time complexity O(n).
  4. Memory usage:
  • ArrayList needs contiguous storage, so growing it may require allocating and copying a larger array, and unused spare capacity can waste space.
  • LinkedList needs no contiguous block, but each element carries a separate node with two extra references, so its per-element memory overhead is actually higher than ArrayList's.

If you need frequent random access and do not have high performance requirements for insertion and deletion operations, you can choose ArrayList. If frequent insertion and deletion operations are required and the performance requirements for random access are not high, you can choose LinkedList. When choosing which collection class to use, you need to weigh it based on specific application scenarios and needs.
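The trade-off above maps to access patterns: indexed reads favor ArrayList, while insertion at a cursor favors LinkedList. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

// A minimal sketch of the access pattern each list favors.
public class ListChoiceDemo {
    public static void main(String[] args) {
        List<Integer> array = new ArrayList<>(List.of(1, 2, 3, 4));
        System.out.println(array.get(2));  // O(1) random access: prints "3"

        LinkedList<Integer> linked = new LinkedList<>(List.of(1, 2, 4));
        ListIterator<Integer> it = linked.listIterator();
        while (it.hasNext()) {
            if (it.next() == 2) it.add(3); // O(1) insert at the cursor
        }
        System.out.println(linked);        // prints "[1, 2, 3, 4]"
    }
}
```

Note that `linked.add(2, 3)` would give the same result but first walks the list to index 2; the iterator version avoids that second traversal when you are already scanning.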

Origin blog.csdn.net/godnightshao/article/details/132722601