Java Advanced Interview Questions and Answers

Foreword

Whether you work in development, testing, or operations, every engineer dreams, to some degree, of becoming a technical expert. It is the pursuit of that dream that drives us to keep striving and improving ourselves.

Comparison of List and Set, and of their respective subclasses

Comparison 1: ArrayList vs. LinkedList

1. ArrayList is backed by a dynamic array. Because its elements sit in contiguous memory, random-access queries (get by index) are very efficient.

2. For the same reason, inserting into or deleting from the middle of an ArrayList forces the elements after that position to be shifted, so those operations are relatively slow.

3. LinkedList is based on a doubly linked list, whose nodes can live anywhere in memory, so no contiguous block needs to be allocated. For add and remove operations, LinkedList has the advantage.

4. Because LinkedList must walk the node pointers to reach a given index, random-access queries are relatively slow.

Applicable scenario analysis:

Use ArrayList when the workload is dominated by reads (random access); use LinkedList when you frequently insert and delete elements.
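The trade-off above can be seen in a minimal sketch (class name is illustrative): the array-backed list answers get(i) in constant time, while inserting at the head of a linked list only relinks one node.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListComparison {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 5; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        // O(1) random access on the array-backed list
        System.out.println("arrayList.get(3) = " + arrayList.get(3));

        // O(1) insertion at the head of the linked list (relinks one node);
        // the same call on an ArrayList would shift every element right
        linkedList.add(0, 99);
        System.out.println("linkedList head = " + linkedList.get(0));
    }
}
```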

Comparison 2: ArrayList vs. Vector

1. Vector's methods are all synchronized and therefore thread-safe, while ArrayList's are not. Since synchronization inevitably costs performance, ArrayList generally performs better than Vector.

2. When the number of elements exceeds the current capacity, Vector doubles its capacity by default, while ArrayList grows by only 50%, so ArrayList tends to save memory.

3. In most cases Vector is avoided because of its performance, but it does provide thread synchronization: only one thread can modify the Vector at a time, avoiding the inconsistency caused by concurrent writes.

4. Vector lets you set a growth increment (capacityIncrement) through its constructor; ArrayList does not.

Applicable scenario analysis:

1. Vector is synchronized and therefore thread-safe; ArrayList is unsynchronized and not thread-safe. If thread safety is not a concern, ArrayList is usually the more efficient choice.

2. If the collection will grow beyond its current capacity and holds a relatively large amount of data, Vector's larger growth steps can give it a certain advantage.
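A small sketch of points 1 and 4 (class name is illustrative): Vector's two-argument constructor sets the growth increment, and Collections.synchronizedList is the usual modern alternative to Vector when thread safety is needed.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Vector;

public class VectorDemo {
    public static void main(String[] args) {
        // Vector lets you set a growth factor (capacityIncrement):
        // starting at capacity 2, it grows by 3 each time (2 -> 5 -> 8 -> 11)
        // instead of doubling
        Vector<Integer> vector = new Vector<>(2, 3);
        for (int i = 0; i < 10; i++) {
            vector.add(i);
        }
        System.out.println("vector size = " + vector.size());
        System.out.println("vector capacity = " + vector.capacity());

        // The usual modern alternative to Vector: a synchronized wrapper
        List<Integer> syncList = Collections.synchronizedList(new ArrayList<>());
        syncList.add(42);
        System.out.println("syncList = " + syncList);
    }
}
```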

Comparison 3: HashSet vs. TreeSet

1. TreeSet is implemented on top of a self-balancing binary search tree (a red-black tree). Its elements are kept automatically sorted, and null values are not allowed.

2. HashSet is implemented with a hash table. Its elements are unordered; it permits at most one null element. In both sets, values cannot repeat, much like a unique constraint in a database.

3. Objects placed in a HashSet must implement hashCode() (and equals()). Elements are located by their hash code, so two String objects with the same content have the same hash code and cannot both be stored. Distinct instances of the same class with different contents can, of course, coexist.

Applicable scenario analysis:

HashSet is based on hashing and usually outperforms TreeSet. Use HashSet by default, and switch to TreeSet only when you need sorted iteration.
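The behaviors above can be demonstrated in a few lines (class name is illustrative): duplicates are rejected by both sets, TreeSet keeps its elements sorted, and HashSet tolerates a single null where TreeSet would throw.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {
    public static void main(String[] args) {
        Set<String> hashSet = new HashSet<>();
        Set<String> treeSet = new TreeSet<>();
        for (String s : new String[]{"banana", "apple", "cherry", "apple"}) {
            hashSet.add(s);  // duplicate "apple" is rejected; order unspecified
            treeSet.add(s);  // kept sorted (natural ordering) on every insert
        }
        hashSet.add(null);   // HashSet allows exactly one null
        // treeSet.add(null); // would throw NullPointerException

        System.out.println("treeSet = " + treeSet);            // sorted
        System.out.println("hashSet size = " + hashSet.size()); // 3 strings + null
    }
}
```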

The difference between HashMap and ConcurrentHashMap

1. HashMap is not thread-safe, while ConcurrentHashMap is thread-safe.

2. ConcurrentHashMap (before Java 8) used lock segmentation: the large hash bucket array was split into several segments, each guarded by its own lock. To insert an element, a thread first located the segment the key belongs to, acquired that segment's lock, and then performed the insertion there. (Since Java 8 the implementation instead uses CAS operations plus per-bin synchronized blocks, which shrinks the lock granularity even further.)

3. Either way, ConcurrentHashMap's finer lock granularity yields better concurrency.
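A minimal sketch of the thread-safety difference (class name is illustrative): merge() on a ConcurrentHashMap is atomic per key, so two threads incrementing the same counter never lose an update, whereas a plain HashMap under the same load could lose updates or corrupt its table.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> map = new ConcurrentHashMap<>();

        // Two threads incrementing the same counter; merge() is atomic
        // per key in ConcurrentHashMap, so no updates are lost
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                map.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("hits = " + map.get("hits")); // always 2000
    }
}
```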

JVM memory structure

According to the JVM specification, JVM runtime memory is divided into five areas: the virtual machine stack, the heap, the method area, the program counter, and the native method stack.

1. Java virtual machine stack:

Thread-private. Each method invocation creates a stack frame that stores the local variable table, operand stack, dynamic linking information, method return address, and so on. Each method's lifetime, from invocation to completion, corresponds to one stack frame being pushed onto and popped off the virtual machine stack.

2. Heap:

Thread-shared. A memory area shared by all threads, created when the virtual machine starts, used to store object instances.

3. Method area:

Thread-shared. Used to store class metadata, constants, static variables, and other data loaded by the virtual machine.

4. Program counter:

Thread-private. It acts as the line-number indicator of the bytecode the current thread is executing. Since each thread needs its own independent counter, this memory is also called "thread-private" memory.

5. Local method stack:

Thread-private. It mainly serves the native methods invoked by the virtual machine.

Difference between strong reference, soft reference and weak reference

Strong reference:

The object is released only after the reference itself is released; as long as a strong reference to an object exists, the garbage collector will never reclaim it. This is the ordinary reference produced by a plain new.

Soft reference:

A reference whose referent is reclaimed just before a memory overflow would occur. Soft references are mainly used to implement cache-like behavior: while memory is sufficient, the value is fetched directly through the soft reference instead of querying the slow real source, which improves speed; when memory runs low, this cached data is automatically discarded and later re-fetched from the real source.

Weak reference:

A reference whose referent is reclaimed at the next garbage collection. Data reachable only through a weak reference can still be obtained for a short time, but once a collection runs, get() returns null. Weak references are mainly used to observe whether an object has been marked as garbage by the collector; the reference's isEnqueued() method reports whether the collector has enqueued it.
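A short sketch of both reference types (class name is illustrative). Note that the post-GC behavior is only typical, not guaranteed by the spec: System.gc() is a hint, and soft references may survive it while memory is plentiful.

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        byte[] data = new byte[1024];

        SoftReference<byte[]> soft = new SoftReference<>(data);
        WeakReference<byte[]> weak = new WeakReference<>(data);

        // While the strong reference 'data' still exists, neither is cleared
        System.out.println("soft alive = " + (soft.get() != null));
        System.out.println("weak alive = " + (weak.get() != null));

        // Drop the strong reference: a weakly reachable object is eligible
        // for collection on the next GC; a softly reachable one is cleared
        // only under memory pressure
        data = null;
        System.gc();
        // Typically the weak reference is now cleared and the soft one
        // survives, but the spec does not strictly guarantee either
        System.out.println("after gc, weak ref cleared? " + (weak.get() == null));
    }
}
```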

What is the core of Spring MVC, how is a request processed, and how is Inversion of Control implemented?

Core:

Inversion of Control (IoC) and Aspect-Oriented Programming (AOP)

Request processing flow:

1. The user sends a request to the front controller (DispatcherServlet). Based on the request information (such as the URL), the front controller chooses a page controller to handle it and delegates the request to it; this is the control-logic part of a traditional controller.

2. After receiving the request, the page controller performs the functional processing: it first collects and binds the request parameters to a command object and validates them, then delegates that object to the business layer for processing. When processing finishes, it returns a ModelAndView (model data plus a logical view name).

3. The front controller regains control, selects the corresponding view according to the returned logical view name, and passes the model data in for view rendering.

4. The front controller regains control once more and returns the response to the user.

How Inversion of Control is implemented:

Whenever we use the Spring framework, we configure an XML file that declares each bean's id and class.

By default, a Spring bean is a singleton, and its instance is created through the reflection mechanism from the bean's class attribute.

So the Spring framework creates instances for us via reflection and maintains them for us.

If class A needs a reference to class B, the Spring framework injects the B instance into A's member variable according to the XML configuration.
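The idea can be sketched as a toy container, not Spring's real API (all names here are illustrative): like Spring, it creates singleton beans by reflection from a name-to-class mapping, which in Spring would come from the XML bean definitions, and hands out the same instance on every lookup.

```java
import java.util.HashMap;
import java.util.Map;

// A toy IoC container: creates singleton beans by reflection and
// returns the same instance on every lookup, mimicking Spring's
// default singleton scope. Names are illustrative, not Spring's API.
public class TinyContainer {
    private final Map<String, Object> singletons = new HashMap<>();

    // In Spring this mapping would come from <bean id="..." class="...">
    public void register(String id, String className) throws Exception {
        Class<?> clazz = Class.forName(className);
        // Reflectively invoke the no-arg constructor, as Spring does by default
        singletons.put(id, clazz.getDeclaredConstructor().newInstance());
    }

    public Object getBean(String id) {
        return singletons.get(id);
    }

    public static void main(String[] args) throws Exception {
        TinyContainer ctx = new TinyContainer();
        ctx.register("builder", "java.lang.StringBuilder");

        // Both lookups return the same instance: singleton scope
        Object a = ctx.getBean("builder");
        Object b = ctx.getBean("builder");
        System.out.println("singleton = " + (a == b));
        System.out.println("class = " + a.getClass().getSimpleName());
    }
}
```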

The difference between BIO, NIO and AIO

Java BIO: synchronous and blocking. The server model is one thread per connection: whenever a client connects, the server must start a thread to handle it. If the connection does nothing, that thread is pure overhead. This can be mitigated with a thread pool.

Java NIO: synchronous and non-blocking. The server model is one thread per request: client connections are registered with a multiplexer (Selector), which polls them and starts a thread only when a connection has an actual I/O request.

Java AIO: asynchronous and non-blocking. The server model is one thread per valid request: the client's I/O is first completed by the OS, which then notifies the server application to start a thread for the follow-up processing.

NIO's improvement over BIO is that idle connections are screened out before any thread is started, eliminating that waste of resources (every thread created must be allocated some memory of its own, such as its stack).

AIO's further improvement over NIO is that temporarily unready requests are also screened out before a thread is started. In the NIO model, a thread starts processing as soon as a request arrives; if the resources the request needs are not yet ready, the thread must wait for the back end to supply them and is blocked during that time.
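The NIO multiplexer described above can be sketched in a few lines (class name is illustrative): one Selector watches a non-blocking server channel, and selectNow() polls for readiness without ever blocking a thread.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class NioSketch {
    public static void main(String[] args) throws Exception {
        // One selector (multiplexer) watches many channels; a single thread
        // can then service only the connections that are actually ready
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0)); // any free port
        server.configureBlocking(false);       // required before register()
        server.register(selector, SelectionKey.OP_ACCEPT);

        // selectNow() polls without blocking: no client yet, so 0 keys ready
        int ready = selector.selectNow();
        System.out.println("ready channels = " + ready);
        System.out.println("registered keys = " + selector.keys().size());

        server.close();
        selector.close();
    }
}
```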

Applicable scenario analysis:

BIO suits architectures with a small, fixed number of connections. It is demanding on server resources and its concurrency is limited by the application, but the code is straightforward and easy to understand. It was the only choice before JDK 1.4 and was used, for example, in early Apache.

NIO suits architectures with many, relatively short-lived connections (light operations), such as chat servers. Concurrency is handled in the application and the programming is more complex. It has been supported since JDK 1.4; examples include Nginx and Netty.

AIO suits architectures with many long-lived connections (heavy operations), such as a photo-album server. It fully involves the OS in the concurrency, and the programming is even more complex.

Why use a thread pool

To answer that, first understand what a thread pool is.

A thread pool is a collection of threads created when a multithreaded application initializes; when new tasks need to be performed, these threads are reused instead of creating new ones.

The benefits of using thread pools

1. A thread pool improves the application's response time: its threads are already created and waiting to be assigned work, so the application can use them directly rather than paying the cost of creating a new thread.

2. A thread pool saves the JVM the overhead of creating and destroying a full thread for every short-lived task, and lets resources be recycled after a task completes.

3. A thread pool can optimize how thread time slices are used according to the current load on the system.

4. A thread pool lets us start multiple tasks without configuring properties for each individual thread.

5. A thread pool lets us pass an object reference carrying state information as the parameter of the task being executed.

6. A thread pool can be used to cap the maximum number of threads handling a particular kind of request.
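Benefits 1, 2, and 6 are visible in a minimal sketch (class name is illustrative): four pooled threads are reused across twenty short tasks, and the pool size itself is the cap on concurrency.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // A fixed pool of 4 threads, reused for all 20 tasks; the pool size
        // also caps the maximum number of threads handling these requests
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            final int n = i;
            futures.add(pool.submit((Callable<Integer>) () -> n * n));
        }

        int sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get(); // blocks until that task's result is ready
        }
        pool.shutdown();

        System.out.println("sum of squares = " + sum); // 0^2 + ... + 19^2 = 2470
    }
}
```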

The difference between pessimistic locking and optimistic locking, and how to implement them

Pessimistic lock: a piece of execution logic guarded by a pessimistic lock is entered by only one thread at a time; when several threads arrive together, one executes while the others wait at the entry until the lock is released.

Optimistic lock: all threads may enter the guarded logic concurrently. When the data is finally updated, each thread checks whether it was modified by another thread in the meantime (i.e. whether the version is still the one read at the start); if not, it applies the update, otherwise it abandons this attempt.

Implementation of pessimistic locking (a MySQL row-lock example):

 
begin; (or: begin work; / start transaction;)
-- 1. Query the goods record and lock the row
select status from t_goods where id=1 for update;
-- 2. Generate the order from the goods information
insert into t_orders (id, goods_id) values (null, 1);
-- 3. Set the goods status to 2
update t_goods set status=2 where id=1;
-- 4. Commit the transaction
commit; (or: commit work;)

Implementation of optimistic locking (a version-number check):

 
-- 1. Query the goods information, including the current version
select status, version from t_goods where id=#{id};
-- 2. Generate the order from the goods information
-- 3. Set the goods status to 2 only if the version has not changed
update t_goods set status=2, version=version+1
where id=#{id} and version=#{version};
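The same read-check-retry pattern exists in memory as compare-and-swap (class name is illustrative): compareAndSet plays the role of the SQL "where version = #{version}" check, and the loop retries whenever another thread got in between.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) throws InterruptedException {
        // compareAndSet is the in-memory analogue of the SQL
        // "where version = #{version}" check: update only if unchanged
        AtomicInteger counter = new AtomicInteger(0);

        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                int current;
                do {
                    current = counter.get(); // read the current "version"
                    // retry if another thread modified it in between
                } while (!counter.compareAndSet(current, current + 1));
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("counter = " + counter.get()); // always 20000
    }
}
```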

What is thread deadlock? How does a deadlock occur? How to avoid thread deadlock?

Introduction to deadlock:

Thread deadlock occurs when two or more threads each hold resources the others need, so all of them sit waiting and none can proceed. When a thread enters a synchronized block, it holds that object's monitor and does not release it until it exits the block or calls wait(); during that time no other thread can enter the block. When threads hold the resources each other needs and wait for each other to release them, and none actively releases what it already holds, a deadlock results.

The necessary conditions for deadlock to occur:

1. Mutual exclusion: a resource is held exclusively; it can be occupied by only one process until that process releases it.

2. Hold and wait: a process blocked while requesting occupied resources keeps holding the resources it has already acquired.

3. No preemption: until a process releases a resource itself, no other process can take that resource away from it.

4. Circular wait: when deadlock occurs, the waiting processes form a cycle, each waiting for the next, resulting in permanent blocking.

How to avoid:

1. Lock ordering:

Deadlocks arise easily when multiple threads need the same locks but acquire them in different orders. If you can ensure that all threads always acquire the locks in the same order, deadlock cannot occur. This requires knowing in advance all the locks that may be used, which is not always predictable.

2. Lock timeout:

With a timeout, a thread that fails to acquire all required locks within the limit backs off, releases every lock it has already obtained, waits a random interval, and retries. However, if many threads compete for the same batch of resources at once, even with timeouts and back-off they may keep retrying without ever all succeeding.

3. Deadlock detection:

Deadlock detection records, in a data structure such as a map or graph, every lock each thread acquires and every lock each thread requests; cycles in this structure reveal deadlocks. Detection is the better prevention mechanism for scenarios where ordered locking is impossible and lock timeouts are impractical.
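The lock-timeout strategy above can be sketched with ReentrantLock.tryLock (class name is illustrative): while another thread holds lockB, a plain lock() could block this thread indefinitely, but a timed tryLock lets it back off and release lockA instead of deadlocking.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lockA = new ReentrantLock();
        ReentrantLock lockB = new ReentrantLock();
        CountDownLatch bHeld = new CountDownLatch(1);

        // Another thread grabs lockB and holds it for a while
        Thread holder = new Thread(() -> {
            lockB.lock();
            bHeld.countDown();
            try {
                Thread.sleep(500);
            } catch (InterruptedException ignored) {
            } finally {
                lockB.unlock();
            }
        });
        holder.start();
        bHeld.await(); // make sure lockB is really held before proceeding

        // A plain lockB.lock() here could block indefinitely; tryLock with a
        // timeout lets this thread back off and release lockA instead
        lockA.lock();
        try {
            if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    System.out.println("acquired both locks");
                } finally {
                    lockB.unlock();
                }
            } else {
                System.out.println("timed out on lockB, backing off");
            }
        } finally {
            lockA.unlock();
        }
        holder.join();
    }
}
```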

