Compilation of Java interview questions (with answers)

Table of contents

The difference between TCP and UDP

The difference between get and post

The difference between cookies and sessions

What are the basic types of Java?

What is the difference between abstract class and interface? 

Understanding the stack

The difference between == and equals

How to understand Java polymorphism?

What are the ways to create threads?

What are dirty reads, non-repeatable reads, and phantom reads?

Java's garbage collection mechanism

Why does TCP need three handshakes, but not two?

Why HashMap's load factor is 0.75

The difference between the resize mechanisms of HashMap 1.7 and 1.8

Introduction to ConcurrentHashMap


The difference between TCP and UDP

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two different transport layer protocols used to transmit data in computer networks. The main differences between them are as follows:

  1. Connectivity:

    • TCP is a connection-oriented protocol: it establishes a connection before data transmission to ensure the reliability and ordering of data. It uses a three-way handshake to establish connections and a four-step FIN/ACK exchange (the "four waves") to terminate them.
    • UDP is a connectionless protocol, and data packets can be sent directly without establishing a connection. This means that UDP transfer speed is faster, but the reliability and sequence of data are not guaranteed.
  2. Reliability:

    • TCP provides reliable data transmission, which ensures that data reaches its destination and is retransmitted when needed. If data is lost or corrupted, TCP automatically recovers it.
    • UDP does not provide reliability guarantees, data packets may be lost or arrive in a different order, and applications need to handle the integrity and sequence of data themselves.
  3. Typical use cases:

    • TCP is usually used in applications that require reliable data transmission, such as web browsing, email, file transfer, etc., and scenarios that require high data integrity.
    • UDP is commonly used for real-time applications such as audio and video streaming, online gaming, and applications that require low latency, which can transfer data faster but can tolerate some data loss.
  4. Header overhead:

    • The TCP header is larger and contains a lot of control information, so it takes up more bandwidth.
    • UDP has a smaller header, so it takes up less bandwidth and is suitable for situations where bandwidth is limited.

In short, TCP and UDP are suitable for different network application scenarios. Which protocol to choose depends on the requirements of the application, whether reliable transmission or faster transmission speed is required.

Larger TCP header: each TCP segment carries a relatively large amount of control information used to manage and maintain the connection and guarantee reliable delivery, including fields such as the sequence number, acknowledgment number, window size, and checksum. This extra information makes the TCP header larger, consuming part of the transmission bandwidth.
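To make the connectionless model concrete, here is a minimal sketch (the class name UdpDemo and the message text are illustrative) that sends one datagram to a local receiver with java.net.DatagramSocket. Note there is no connect or accept step, which a TCP Socket would require:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicReference;

public class UdpDemo {
    // Sends one datagram to a local receiver and returns the received text.
    static String roundTrip(String message) {
        AtomicReference<String> received = new AtomicReference<>();
        try (DatagramSocket receiver = new DatagramSocket(0)) { // bind an ephemeral port
            int port = receiver.getLocalPort();
            Thread listener = new Thread(() -> {
                try {
                    byte[] buf = new byte[1024];
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    receiver.receive(p); // blocks until one datagram arrives
                    received.set(new String(p.getData(), 0, p.getLength(), StandardCharsets.UTF_8));
                } catch (Exception ignored) {
                }
            });
            listener.start();
            // Sender side: no connect(), no handshake -- just fire the datagram.
            try (DatagramSocket sender = new DatagramSocket()) {
                byte[] data = message.getBytes(StandardCharsets.UTF_8);
                sender.send(new DatagramPacket(data, data.length,
                        InetAddress.getLoopbackAddress(), port));
            }
            listener.join();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return received.get();
    }

    public static void main(String[] args) {
        System.out.println("received: " + roundTrip("hello"));
    }
}
```

With TCP, the receiver would need a ServerSocket.accept() and the sender a Socket connect, i.e. the three-way handshake, before any data could flow.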

The difference between get and post

GET and POST in HTTP (Hypertext Transfer Protocol) are two commonly used request methods for transferring data between the client and the server. The main differences between them are as follows:

  1. Data transmission method:

    • GET: Attach data to the request through the URL, and the data appears in the query string of the URL in the form of key-value pairs. Therefore, GET requests expose data in the URL, visible and easily cached, bookmarked, etc.
    • POST: Include the data in the body of the request rather than exposing it in the URL. This makes POST requests more suitable for transmitting sensitive data or large amounts of data, since the data will not appear in the URL.
  2. Data size limit:

    • GET: Since the data is appended to the URL, a GET request has a practical limit on how much data it can carry, imposed by browser and server URL-length limits.
    • POST: Since the data is included in the request body, POST requests have a relatively high data size limit and can usually transfer larger amounts of data.
  3. Security:

    • GET: Because the data is exposed in the URL, the GET request is less secure and suitable for transmitting non-sensitive data.
    • POST: POST requests carry data in the request body, so it is not exposed in the URL, browser history, or typical access logs, making POST more appropriate for sensitive data such as login credentials or payment information. (Note that the body is still plaintext on the wire unless HTTPS is used.)
  4. Cacheability:

    • GET: GET requests can usually be cached by browsers because they are idempotent, i.e. multiple identical GET requests should produce the same result.
    • POST: POST requests are generally not cached by browsers because they may cause data changes or produce different results.
  5. Idempotence:

    • GET: GET requests are idempotent, and multiple identical GET requests should not change the server's state or data.
    • POST: POST requests are usually not idempotent, and multiple identical POST requests may result in server state changes or multiple submissions of the same data.

In short, both GET and POST are used for HTTP communication, but they differ in data transmission method, data size limits, security, cacheability, and idempotence. Choose the method based on the specific need: GET is suitable for retrieving data, while POST is suitable for sending or modifying data.
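As a small illustration, the java.net.http API (Java 11+) makes the difference visible already when building requests; the URLs and form fields below are made up:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class GetVsPost {
    public static void main(String[] args) {
        // GET: parameters travel in the URL's query string.
        HttpRequest get = HttpRequest.newBuilder(
                URI.create("https://example.com/search?q=java")).GET().build();

        // POST: the payload goes in the request body, not the URL.
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://example.com/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("user=alice&pass=secret"))
                .build();

        System.out.println(get.method() + " query=" + get.uri().getQuery());   // GET query=q=java
        System.out.println(post.method() + " query=" + post.uri().getQuery()); // POST query=null
    }
}
```

The GET request's data rides in the URI's query string, while the POST request's body is attached separately and its URI carries no query at all.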

The difference between cookies and sessions

Cookies and Sessions are both mechanisms used to track user state and maintain session information in web applications, but there are some important differences between them:

  1. Storage location:

    • Cookies: Cookies are small text files stored on the client (user's browser), sent to the client by the server and stored locally. Each time the browser sends a request to the server, relevant cookie data is automatically appended to the request header so that the server can identify the user.
    • Session: Session data is usually stored on the server, not the client. The server creates a unique identifier (Session ID) for each session, which is usually stored in a cookie, but the actual session data is stored on the server.
  2. Data storage:

    • Cookies: Cookies usually contain a small amount of text data and are used to store small information about the user, such as user preferences, login credentials, etc. The size of a cookie is limited by the browser's cookie size limit.
    • Session: Session can store larger and more complex data because session data is stored on the server and is not limited by cookie size. Usually, Session is used to store user login status, shopping cart content, user session information, etc.
  3. Security:

    • Cookies: Cookies are stored on the client side and can therefore be viewed and modified by the user. While cookies can be marked Secure and HttpOnly for added protection, there is still a risk of theft and misuse.
    • Session: Since Session data is stored on the server, users cannot directly view or modify session data, which improves security. But Session data on the server also needs to be properly protected to prevent unauthorized access.
  4. Life cycle:

    • Cookies: Cookies can be set to expire, which can be session level (expires when the browser is closed) or persistent (expires after a period of time).
    • Session: Session usually expires after the user closes the browser or after a period of inactivity. The specific expiration policy can be controlled by the developer.

In short, both Cookies and Sessions are used to manage user state and maintain session information in web applications, but they differ in storage location, data storage, security, and life cycle, and the appropriate mechanism should be chosen based on specific needs and security considerations. In practice they are often used together, for example by storing the Session ID in a cookie so that the server can identify the session.
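As a sketch of the cookie side, java.net.HttpCookie models the attributes mentioned above; the cookie name and value here are made up:

```java
import java.net.HttpCookie;

public class CookieDemo {
    public static void main(String[] args) {
        // A hypothetical session-ID cookie, as a server might set it.
        HttpCookie cookie = new HttpCookie("JSESSIONID", "abc123");
        cookie.setHttpOnly(true); // not readable from client-side JavaScript
        cookie.setSecure(true);   // only sent over HTTPS
        cookie.setMaxAge(-1);     // negative max-age: session cookie, dies with the browser

        System.out.println(cookie.getName() + "=" + cookie.getValue()
                + " httpOnly=" + cookie.isHttpOnly()
                + " secure=" + cookie.getSecure());
    }
}
```

This is only the client-visible half: the session data itself would live on the server, keyed by the JSESSIONID value.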

What are the basic types of Java?

Java's basic data types, also known as primitive data types, are used to store single values. Java has the following basic data types:

  1. Integer type:

    • byte: 8 bits, ranging from -128 to 127.
    • short: 16 bits, ranging from -32,768 to 32,767.
    • int: 32 bits, range from about -2.1 billion to 2.1 billion.
    • long: 64 bits, ranging from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (roughly ±9.2 × 10^18, i.e. about ±9.2 quintillion).

  2. Floating point type:

    • float: 32 bits, used to store decimals, with an accuracy of approximately 6-7 significant digits.
    • double: 64 bits, used to store double-precision decimals, with a precision of approximately 15-16 significant digits.
  3. Character type:

    • char: 16 bits, used to store a character, such as letters, numbers, symbols, etc.
  4. Boolean type:

    • boolean: Indicates a true or false value.
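These ranges can be checked directly from the wrapper classes, for example:

```java
public class PrimitiveRanges {
    public static void main(String[] args) {
        // Each wrapper class exposes its primitive type's exact bounds.
        System.out.println("byte:  " + Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.MIN_VALUE + " .. " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.MIN_VALUE + " .. " + Long.MAX_VALUE);
        System.out.println("char is " + Character.SIZE + " bits (a UTF-16 code unit)");
    }
}
```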

What is the difference between abstract class and interface? 

Abstract Class and Interface are two different concepts in object-oriented programming that are used to achieve polymorphism and code abstraction, but there are some key differences between them:

  1. How to define:

    • Abstract class: An abstract class is defined with the abstract keyword and can contain both abstract methods (methods without an implementation) and concrete methods (methods with an implementation).
    • Interface: An interface is defined with the interface keyword. Traditionally it contains only abstract methods (no method bodies); since Java 8, interfaces may also declare default and static methods with bodies.

    (The original post illustrated this with a code example: an abstract class, a Student subclass inheriting from it, a call from the main method, and the run output. The code and screenshot were not preserved in this copy.)

  2. Inheritance relationship:

    • Abstract class: A class can only inherit from one abstract class, which is single inheritance in Java. Abstract classes can have member variables, constructors, and non-abstract methods.
    • Interface: A class can implement multiple interfaces, which is how Java supports a form of multiple inheritance. An interface contains only abstract methods and constant fields (implicitly public static final), with no constructors and, traditionally, no concrete methods.
  3. Construction method:

    • Abstract class: It can have a constructor method to initialize the member variables of the abstract class.
    • Interface: Cannot have a constructor because interfaces cannot be instantiated.
  4. Member variables:

    • Abstract class: can contain instance variables (member variables).
    • Interface: can only contain constant fields, which are implicitly public, static, and final.
  5. How they are implemented:

    • Abstract class: extended through inheritance; the subclass uses the extends keyword and must implement the abstract methods of the abstract class (or itself be declared abstract).
    • Interface: implemented with the implements keyword; the implementing class must provide concrete implementations of all abstract methods defined in the interface.
  6. Usage:

    • Abstract class: Usually used to represent a basic abstract concept that can contain shared code and properties. They are often used to establish base classes in a class hierarchy.
    • Interface: used to define a set of contracts that require implementation classes to provide specific behaviors. Interfaces are often used to achieve polymorphism, allowing a class to implement multiple interfaces.

In short, abstract classes and interfaces are both important mechanisms for achieving code abstraction and polymorphism in Java, but their usage and syntax are obviously different. We should choose the appropriate abstraction method according to our needs and design goals. Generally, if you need to represent shared code and properties, use abstract classes; if you need to define a set of specifications or behavioral contracts, use interfaces. Sometimes abstract classes and interfaces can be used together to implement more complex inheritance structures.
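The code example referenced in item 1 did not survive in this copy; the following is a reconstruction in the same spirit (the Person, Greeter, and Student names are invented for illustration):

```java
abstract class Person {
    private final String name;
    Person(String name) { this.name = name; } // abstract classes may have constructors
    String getName() { return name; }         // concrete (shared) method
    abstract String introduce();              // abstract method: no body
}

interface Greeter {
    String GREETING = "Hi";                   // implicitly public static final
    String greet();                           // implicitly public abstract
}

class Student extends Person implements Greeter { // single inheritance, multiple interfaces
    Student(String name) { super(name); }
    @Override String introduce() { return "I am student " + getName(); }
    @Override public String greet() { return GREETING + ", " + getName(); }
}

public class AbstractVsInterface {
    public static void main(String[] args) {
        Student s = new Student("Tom");
        System.out.println(s.introduce()); // I am student Tom
        System.out.println(s.greet());     // Hi, Tom
    }
}
```

Person carries shared state and code; Greeter only specifies a contract that Student must fulfill.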

Understanding the stack

Stacks usually follow the last-in, first-out (LIFO) principle: the last element pushed onto the stack is the first to be removed, and the first element pushed is the last to be removed.

Here is a fuller description of the stack:

A stack is a common data structure in computer science used to store and manage data. It enforces the LIFO discipline described above: elements are added and removed only at one end, called the top.

Basic operations of the stack include:

  1. Push: When an element is placed on the stack, the element is added to the top of the stack and becomes the new top element of the stack.

  2. Pop: When the top element of the stack is removed, the element is removed from the stack and the next element becomes the new top element of the stack.

  3. Peek (view top): The element at the top of the stack can be examined without removing it.

  4. Empty Stack: If the stack does not have any elements, it is called an "empty stack".

Common applications of stacks include function call stacks, expression evaluation, memory management, and backtracking algorithms.
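In Java these operations map directly onto java.util.Deque, which is the recommended stack implementation; a short sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>(); // preferred over legacy java.util.Stack
        stack.push(1);                       // push: 1 is now on top
        stack.push(2);
        stack.push(3);                       // top -> 3, 2, 1 <- bottom
        System.out.println(stack.peek());    // 3 (view the top without removing)
        System.out.println(stack.pop());     // 3 (last in, first out)
        System.out.println(stack.pop());     // 2
        System.out.println(stack.isEmpty()); // false (1 is still there)
    }
}
```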

The difference between == and equals

In Java, == and equals() are two different ways to compare objects. They have the following differences:

  1. Object types compared:

    • == compares the references (memory addresses) of two objects, that is, whether the two objects are the same object.
    • equals() compares the contents of two objects, that is, whether the two objects are logically equal.
  2. Scope of action:

    • == can be used to compare any two objects, regardless of their type.
    • equals() typically needs to be overridden in a class to compare object contents according to that class's definition, so its behavior can vary from class to class.
  3. Default behavior:

    • By default, == compares object references: even if the contents of two objects are identical, == returns false if they are different instances.
    • By default, the equals() method inherited from Object also compares references rather than contents, so if a class does not override it, equals() behaves the same as ==.

    As the source code of Object's equals method shows, by default equals() compares the object's reference, not its content.

  4. Custom comparison logic:

    • By overriding equals() in a class, you can customize the comparison logic so that equality is judged by object contents rather than references.
    • For primitive types (int, double, etc.) use ==, because primitives are compared by value and have no references.
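To see the default behavior, the sketch below uses a hypothetical Point class that does not override equals(); the comment shows Object's actual implementation, which is a bare reference comparison:

```java
// Object.equals, as defined in java.lang.Object, is simply:
//     public boolean equals(Object obj) { return (this == obj); }
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    // equals() deliberately NOT overridden here
}

public class DefaultEqualsDemo {
    public static void main(String[] args) {
        Point p1 = new Point(1, 2);
        Point p2 = new Point(1, 2);
        System.out.println(p1.equals(p2)); // false: same content, different references
        System.out.println(p1.equals(p1)); // true: same reference
    }
}
```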

 Example:

String str1 = new String("Hello");
String str2 = new String("Hello");

System.out.println(str1 == str2);          // false: different objects, compares references
System.out.println(str1.equals(str2));     // true: compares contents

Integer num1 = 5;
Integer num2 = 5;

System.out.println(num1 == num2);          // true: values in -128..127 are autoboxed
                                           // to the same cached Integer object
System.out.println(num1.equals(num2));     // true: compares contents

Why does the String comparison not compare addresses? Because the String class overrides the equals method. The following is the source code from the String class (JDK 8):

public boolean equals(Object anObject) {
    if (this == anObject) {
        return true;
    }
    if (anObject instanceof String) {
        String anotherString = (String) anObject;
        int n = value.length;
        if (n == anotherString.value.length) {
            char v1[] = value;
            char v2[] = anotherString.value;
            int i = 0;
            while (n-- != 0) {
                if (v1[i] != v2[i])
                    return false;
                i++;
            }
            return true;
        }
    }
    return false;
}

How to understand Java polymorphism?

Polymorphism in Java is an important concept in object-oriented programming, which allows objects of different classes to respond differently to the same method name. Polymorphism is one of the three major features of object-oriented programming, the other two being encapsulation and inheritance. Polymorphism allows you to handle different subclass objects through common interfaces or parent class references, thereby achieving code reuse and unified interfaces.

The following is an understanding and explanation of Java polymorphism:

  1. The concept of polymorphism: Polymorphism means that the same method or operation has different manifestations on different objects. Specifically, polymorphism allows different classes to implement the same method name, but call specific methods in different classes based on the type of object. This capability makes the code more flexible, scalable, and maintainable.

  2. How polymorphism is implemented: Polymorphism is mainly achieved through method overriding (Override, resolved at run time) and method overloading (Overload, resolved at compile time).

    • Method override (Override): Subclasses can override the methods of the parent class and provide their own implementation. When the reference of the parent class points to the object of the subclass, calling the same method name will execute the method of the subclass.
    • Method overloading (Overload): In the same class, you can define multiple methods with the same name but different parameter lists. The compiler chooses the appropriate method to call based on the type and number of method parameters.
  3. Application of polymorphism: Polymorphism makes the code more flexible and maintainable. It is often used in the following scenarios:

    • Unified interface for methods: Polymorphism allows different classes to implement the same interface or abstract class and call these methods in a consistent manner.
    • Code reuse: Polymorphism promotes code reuse because a common parent class reference can be used to handle different child class objects.
    • Run-time binding: Polymorphism allows the actual type of an object to be determined at run-time and the corresponding methods to be called. This is called run-time polymorphism or dynamic binding.
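A minimal sketch of run-time polymorphism (the Animal, Dog, and Cat classes are invented for illustration):

```java
class Animal {
    String speak() { return "..."; }
}

class Dog extends Animal {
    @Override String speak() { return "woof"; } // method overriding
}

class Cat extends Animal {
    @Override String speak() { return "meow"; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // The static type is Animal; the method that runs is chosen
        // by each object's actual type at run time (dynamic binding).
        Animal[] animals = { new Dog(), new Cat() };
        for (Animal a : animals) {
            System.out.println(a.speak()); // woof, then meow
        }
    }
}
```

The caller only knows the Animal interface, yet each subclass supplies its own behavior, which is exactly the "same method name, different manifestations" described above.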

What are the ways to create threads?

In Java, there are many ways to create threads. The following are common ways to create threads:

1. Inherit the Thread class:

  • Create a subclass of Thread.
  • Override the run() method, which defines the thread's execution logic.
  • Create an instance of the subclass and call its start() method to start the thread.
class MyThread extends Thread {
    public void run() {
        // the thread's execution logic
    }
}

MyThread myThread = new MyThread();
myThread.start();

2. Implement the Runnable interface:

  • Create a class that implements Runnable.
  • Implement the run() method, which defines the thread's execution logic.
  • Create an instance of that class and pass it to the Thread constructor.
  • Call the Thread's start() method to start the thread.
class MyRunnable implements Runnable {
    public void run() {
        // the thread's execution logic
    }
}

MyRunnable myRunnable = new MyRunnable();
Thread thread = new Thread(myRunnable);
thread.start();

3. Use an anonymous inner class:

  • Threads can also be created with an anonymous inner class.
  • Pass an anonymous Runnable implementation directly into the Thread constructor.
Thread thread = new Thread(new Runnable() {
    public void run() {
        // the thread's execution logic
    }
});
thread.start();

4. Use the Executor framework:

  • Use the java.util.concurrent.Executor interface and its implementations (obtained via the Executors factory) to create and manage thread pools.
  • Runnable tasks can then be submitted to the pool for execution.
ExecutorService executor = Executors.newFixedThreadPool(2);
executor.execute(new Runnable() {
    public void run() {
        // the thread's execution logic
    }
});
executor.shutdown(); // stop accepting new tasks so the pool's threads can exit

What are dirty reads, non-repeatable reads, and phantom reads?

For details, please see my blog: MySQL transaction isolation level_Humble Jingnan Mango’s Blog-CSDN Blog

Java's garbage collection mechanism

For details, please see my blog: An in-depth discussion of the Java Virtual Machine (JVM): execution process, memory management and garbage collection mechanism_Humble Jingnan Mango’s Blog-CSDN Blog

Why does TCP need three handshakes, but not two?

TCP (Transmission Control Protocol) uses a three-way handshake to establish a reliable connection instead of a two-way handshake, mainly to ensure that both parties can communicate normally and synchronize initial sequence numbers. Here are the reasons why a three-way handshake is required:

  1. Confirm that both parties are ready: Before data transfer can occur, both the client and the server must know that the other side can send and receive. With only two handshakes, the server would get no confirmation that the client ever received its reply: the server could consider the connection established while the client does not, and any data the server then sent would be ignored. The third ACK confirms to the server that the client is ready, so both sides agree the connection exists.

  2. Prevent problems from old connections: With only two handshakes, a delayed SYN from an old, already-closed connection could arrive at the server and be mistaken for a new connection request, causing the server to open a useless connection. With three handshakes, the client simply never acknowledges the stale reply, so the server abandons it.

  3. Prevent duplicate connection issues: Two handshakes may cause a closed connection to be re-established under certain circumstances, which may lead to unnecessary waste of resources. A three-way handshake reduces the likelihood of this happening.

The following is the basic process of the three-way handshake:

  1. The client sends a connection request (SYN) to the server.
  2. The server replies with its own SYN plus an acknowledgment of the client's request (SYN-ACK).
  3. After receiving the SYN-ACK, the client sends a final acknowledgment (ACK) to the server.

This process ensures that both parties know that the other is ready and that the sequence numbers have been synchronized and data transfer can begin. If there are only two handshakes, this two-way confirmation effect cannot be achieved, which may cause the connection to be unstable or unreliable. Therefore, TCP uses a three-way handshake to ensure the reliability and correctness of the connection.

Why HashMap's load factor is 0.75

HashMap's load factor is set to 0.75 by default as a trade-off between memory usage and performance. Together with the capacity it determines the resize threshold (threshold = capacity × load factor), which keeps the number of elements from getting too close to the array's capacity and so maintains good performance.

If the load factor were too small, say 0.5, the HashMap would resize more frequently because the threshold is reached sooner, adding memory overhead and resize cost. If it were too large, say 1.0, the table would only resize when nearly saturated, increasing hash collisions and degrading lookup performance.

0.75 is an empirically chosen compromise between these two extremes: it keeps resizes reasonably infrequent without letting the table get so full that collisions dominate, and it usually performs well in practice. When the number of elements exceeds 75% of the capacity, the HashMap automatically resizes to maintain good performance.
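The threshold arithmetic can be sketched in a few lines (the numbers mirror HashMap's documented defaults):

```java
public class LoadFactorDemo {
    public static void main(String[] args) {
        // threshold = capacity * loadFactor: a resize happens when size exceeds it.
        int capacity = 16;        // HashMap's default initial capacity
        float loadFactor = 0.75f; // HashMap's default load factor
        int threshold = (int) (capacity * loadFactor);
        System.out.println("resize at " + threshold + " entries"); // resize at 12 entries

        // After a resize the capacity doubles, and so does the threshold.
        System.out.println("next threshold: " + (int) (capacity * 2 * loadFactor)); // next threshold: 24
    }
}
```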

The difference between the resize mechanisms of HashMap 1.7 and 1.8

In Java's HashMap there are indeed some differences between the JDK 7 and JDK 8 implementations, especially in the resize (expansion) mechanism. Here are the main differences between the two versions:

HashMap 1.7 resize mechanism:

  1. Resize trigger: In HashMap 1.7, a resize is triggered when the number of entries reaches the threshold, computed as capacity (array size) times load factor (default 0.75).

  2. Resize method: In 1.7, resizing creates a new array of twice the size and transfers every entry into it, recomputing each entry's bucket index and inserting at the head of the destination list. This is relatively expensive, and the head insertion reverses each list's order; under (incorrect) concurrent use it can even create a cycle in a bucket's linked list.

  3. Resize timing: In 1.7, the resize is triggered during a put operation once the threshold is exceeded.

The resize mechanism of HashMap 1.8:

  1. Resize trigger: HashMap 1.8 keeps the same threshold rule (capacity times load factor) and adds treeification: when a bucket's linked list reaches a threshold length (default 8) and the table has at least 64 buckets, the list is converted into a red-black tree, which improves lookup performance under heavy collisions. (If the table is smaller than 64 buckets, a resize is performed instead of treeifying.)

  2. Resize method: Unlike 1.7, version 1.8 does not recompute each entry's bucket from scratch. Because the capacity is always a power of two, an entry at index j in the old table either stays at j or moves to j + oldCapacity, decided by a single bit of its hash. Each bucket (linked list or tree) is split into a "low" and a "high" part and appended in order, which avoids rehashing, preserves relative order, and eliminates the 1.7 cycle hazard.

  3. Resize timing: In 1.8 the table is allocated lazily: the constructor only records the initial capacity and threshold, and the actual array is created on the first put. Subsequent resizes are triggered during put when the threshold is exceeded.

To sum up, HashMap 1.8 introduced tree bins and a more efficient bucket-splitting resize to improve performance, which keeps HashMap's behavior more stable under high load.
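The 1.8 bucket-split rule can be sketched directly; because the capacity is a power of two, a single bit of the hash decides where an entry lands after a resize (newIndex is an illustrative helper, not HashMap API):

```java
public class ResizeSplitDemo {
    static int newIndex(int hash, int oldCap) {
        int oldIndex = hash & (oldCap - 1); // index in the old table
        // If the "oldCap bit" of the hash is 0, the entry stays put;
        // otherwise it moves to oldIndex + oldCap. No re-hashing needed.
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16; // resizing 16 -> 32
        System.out.println(newIndex(0b0_0101, oldCap)); // 5  (bit 4 is 0: stays at 5)
        System.out.println(newIndex(0b1_0101, oldCap)); // 21 (bit 4 is 1: moves to 5 + 16)
    }
}
```

The result matches a full recomputation against the new mask (hash & 31), which is why 1.8 can skip it.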

Introduction to ConcurrentHashMap

ConcurrentHashMap is a thread-safe hash table implementation in the Java collection framework. It is designed to support multi-threaded concurrent operations, so it is very efficient to use in a multi-threaded environment and provides better performance than the traditional HashMap.

Here are some important features and information about ConcurrentHashMap:

  1. Thread safety: ConcurrentHashMap is thread-safe, multiple threads can read and modify it simultaneously without requiring additional synchronization measures. This makes it ideal for use in multi-threaded applications, especially in high-concurrency environments.

  2. Fine-grained locking: In JDK 7, ConcurrentHashMap was implemented with segment locks: the internal data structure was divided into multiple Segments, each with its own lock, so threads operating on different segments did not contend. In JDK 8 the Segment design was replaced by CAS operations plus a synchronized lock on the first node of each bucket (along with red-black tree bins for long chains), which makes the locking even finer-grained and further improves concurrency.

  3. High performance: ConcurrentHashMap performs well in high-concurrency scenarios and can provide better performance than traditional synchronous hash tables. It allows multiple threads to perform simultaneous read operations, as well as a certain degree of concurrent write operations, thereby reducing lock contention.

  4. Scalability: Thanks to this fine-grained locking, ConcurrentHashMap scales well to multi-core processors and large-scale multi-threaded applications without performance collapsing under lock contention.

  5. Null keys and values are not allowed: Unlike HashMap, ConcurrentHashMap does not allow null keys or null values, because in a concurrent context a null would make it impossible to distinguish "key absent" from "key mapped to null".

  6. Traversal performance: ConcurrentHashMap provides efficient traversal operations, including traversal of key sets, value sets and key-value pairs. These operations do not block during iteration, allowing concurrent reads.

In general, ConcurrentHashMap is a powerful multi-threaded concurrent collection, suitable for scenarios that require high-performance and thread-safe hash table operations. In Java concurrent programming, it is an important tool that can help developers deal with complex concurrent access problems. However, it should be noted that although ConcurrentHashMap provides efficient concurrency support, in some specific scenarios, it is still necessary to select an appropriate collection type based on actual needs.
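A short usage sketch of its atomic update methods (the map contents here are made up):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();

        // merge() is an atomic read-modify-write: safe even when many
        // threads bump the same counter concurrently.
        hits.merge("home", 1, Integer::sum);
        hits.merge("home", 1, Integer::sum);

        // computeIfAbsent() atomically inserts a value only if the key is missing.
        hits.computeIfAbsent("about", k -> 0);

        System.out.println(hits.get("home"));  // 2
        System.out.println(hits.get("about")); // 0

        try {
            hits.put(null, 1); // null keys are rejected
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
    }
}
```

With a plain HashMap, the get-then-put equivalent of merge() would be a race condition; here the whole update is one atomic operation.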

I hope you all support me. I will compile more interview questions in the next issue!

Origin blog.csdn.net/m0_62468521/article/details/132977391