[Year-end summary] 2020 Java job interview question collection, spring and autumn recruitment (with answers)

Preface

Whether you are going through campus recruitment or social recruitment, written tests and interviews cannot be avoided, so knowing how to prepare for them is extremely important. Both follow patterns you can study: by "patterns" I only mean that technical questions can be prepared for in advance, which is the principle of never fighting an unprepared battle. The following walks through everything from interview preparation to the final offer in a very detailed list. It is recommended to read it from the beginning; if your foundation is solid, you can also jump straight to the chapters you need.

The difference between Redis and memcached;

  • Storage method: Memcached keeps all of its data in memory, so the data is lost on a power failure and the data set cannot exceed the available memory. Redis can also persist part of its data to the hard disk, which guarantees durability.

  • Supported data types: Memcached supports only simple key-value pairs, while Redis supports five data types (string, list, hash, set, sorted set).

  • Underlying model: their underlying implementations, and the protocols they use to communicate with clients, are different. Redis built its own VM mechanism, because calling general system functions wastes a certain amount of time moving and requesting data.

  • Value size: a Redis value can be as large as 512MB (for strings), while Memcached is limited to 1MB.

What garbage collection algorithms do you know?

  • Copying: first the program is paused, then all surviving objects are copied from the current heap to another heap; anything not copied is garbage. Because objects are copied into the new heap one after another, the new heap stays compactly arranged, and new allocations can then be made simply and directly as described earlier. The drawbacks are that it wastes space and has to shuttle between the two heaps, and that once a program reaches a steady state it may produce very little or even no garbage, yet the copying collector still copies all the live memory from one place to the other.

  • Mark-Sweep: it also starts from the stack and static storage area, traverses all references, and finds every surviving object. Each surviving object it finds is given a mark; nothing is reclaimed during this phase. Only when all marking work is complete does the sweep begin: unmarked objects are released, and no copying occurs. The remaining heap space is therefore not contiguous; if the garbage collector wants contiguous space, it has to rearrange the surviving objects.

  • Mark-Compact: its first phase is exactly the same as in mark-sweep, namely traversing from the GC Roots and marking the surviving objects. It then moves all surviving objects, arranging them in order of memory address, and reclaims all memory beyond the end address. Hence the second phase is called the compacting phase.

  • Generational collection: divide the Java heap into the young generation and the old generation, then use the most appropriate collection algorithm according to the characteristics of each generation. In the young generation the survival rate of objects is low, so the copying algorithm is chosen. In the old generation the survival rate is high and there is no extra space to guarantee the copy, so the mark-sweep or mark-compact algorithm is used instead.

How to control the priority of tasks in a thread pool?

  • Answer: replace the thread pool's work queue with a priority queue (PriorityBlockingQueue), so that the highest-priority task waiting in the queue is executed first.
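A minimal sketch of the idea: hand ThreadPoolExecutor a PriorityBlockingQueue and make the tasks Comparable. The PrioritizedTask class and the "larger number is more urgent" convention are illustrative choices, not part of any standard API.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolDemo {

    // A task that carries a priority; larger number = more urgent (our own convention).
    static class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
        final int priority;
        final Runnable body;
        PrioritizedTask(int priority, Runnable body) { this.priority = priority; this.body = body; }
        @Override public void run() { body.run(); }
        // Reverse the natural order so the queue hands out the highest priority first.
        @Override public int compareTo(PrioritizedTask o) { return Integer.compare(o.priority, priority); }
    }

    static String demo() throws InterruptedException {
        // One worker thread, so queued tasks drain strictly in priority order.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());
        StringBuilder order = new StringBuilder();
        CountDownLatch gate = new CountDownLatch(1);
        // Occupy the only worker so the next two tasks pile up in the queue.
        pool.execute(new PrioritizedTask(0, () -> {
            try { gate.await(); } catch (InterruptedException ignored) { }
        }));
        pool.execute(new PrioritizedTask(1, () -> order.append("low ")));
        pool.execute(new PrioritizedTask(5, () -> order.append("high ")));
        gate.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return order.toString().trim();   // "high low": priority 5 ran before priority 1
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

Note that this only orders the tasks actually waiting in the queue, and it requires execute() rather than submit(): submit() wraps each task in a FutureTask, which is not Comparable and would make the priority queue throw a ClassCastException.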

Spring's post processor;

  • BeanPostProcessor: the bean post processor; it mainly works before and after bean initialization.

  • InstantiationAwareBeanPostProcessor: extends BeanPostProcessor; it mainly works before and after bean instantiation. AOP creates proxy objects through this interface.

  • BeanFactoryPostProcessor: the bean factory post processor; it executes after the bean definitions have been loaded but before any bean is instantiated.

  • BeanDefinitionRegistryPostProcessor: extends BeanFactoryPostProcessor. Its own method, postProcessBeanDefinitionRegistry, runs while bean definitions are still being registered and before any bean has been instantiated, that is, before the postProcessBeanFactory method of BeanFactoryPostProcessor is called.

When does the query not go to the (expected) index?

  • Fuzzy queries with a leading wildcard: LIKE '%...'

  • The indexed column participates in a calculation or is wrapped in a function

  • The query does not follow the leftmost-prefix order of a composite index

  • The WHERE clause checks for NULL

  • The WHERE clause uses a not-equal comparison (!= or <>)

  • An OR condition in which at least one field has no index

  • The result set that must be fetched back from the table is too large (it exceeds the configured threshold), so a full scan is cheaper

Stop the World (STW) during GC

  • When the garbage collection algorithm executes, every thread of the Java application except the garbage collector threads is suspended. The system only lets the GC threads run; all other threads wait until the GC threads finish before resuming. These collections are initiated and completed automatically by the virtual machine in the background, stopping all of the user's working threads at a moment the user cannot see. For many applications, especially those with strict real-time requirements, such pauses are unacceptable.

  • But GC does not have to be entirely STW: you can also choose a collection algorithm that runs concurrently with the application at the cost of some speed. It depends on your business.

What is the difference between G1 collector and CMS collector?

  • The CMS collector aims for the shortest possible collection pauses: while CMS is working, the GC worker threads and the user threads execute concurrently, which reduces collection pause time (only the initial mark and the remark phases are STW). But the CMS collector is very sensitive to CPU resources: during the concurrent phases, although user threads are not stalled, the collector occupies CPU and slows the application down, so overall throughput drops.

  • CMS works only on the old generation and is based on the mark-sweep algorithm, so it leaves a lot of space fragmentation behind after cleaning.

  • The CMS collector cannot handle floating garbage. Because user threads are still running during CMS's concurrent sweep phase, new garbage keeps being generated as the program runs. Garbage that appears after the marking phase cannot be processed in the current collection and has to wait to be cleaned up in the next GC.

  • G1 is a garbage collector aimed at server applications, suitable for machines with multi-core processors and large memory. G1 makes full use of the hardware advantages of multi-CPU and multi-core environments, using multiple CPUs (or CPU cores) to shorten the STW pause time. It achieves short pauses while maintaining high throughput.

  • Since JDK 9, G1 has been the default garbage collector. G1 is a good fit when the application has any of the following characteristics: Full GCs last too long or happen too often; object allocation and survival rates vary greatly; the application cannot tolerate long pauses (longer than 0.5s or even 1s).

  • G1 divides the heap into many regions (Region) and reclaims them separately. It works well when the heap is relatively large, and because it evacuates (copies) live objects between regions, fragmentation is not severe. Overall it behaves like a mark-compact algorithm, while locally (between regions) it is a copying algorithm.

  • G1 needs remembered sets (concretely, card tables) to record reference relationships between regions, including between the young generation and the old generation. This data structure can consume a large amount of memory in G1, possibly reaching 20% or more of the entire heap capacity. Maintaining the remembered sets in G1 is also more expensive, which adds execution overhead and affects efficiency. Therefore CMS performs better than G1 in small-memory applications, while G1 has the advantage in large-memory applications; the break-even point is roughly a heap of 6GB to 8GB.
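As a quick configuration sketch, these are the HotSpot flags used to choose between the two collectors (the application class name MyApp and the heap sizes are only placeholders; the CMS flag applies to JDK 8 through 13, since CMS was removed in JDK 14):

```shell
# Select CMS for the old generation (deprecated in JDK 9, removed in JDK 14)
java -XX:+UseConcMarkSweepGC -Xmx4g MyApp

# Select G1 (the default since JDK 9) with a soft pause-time goal of 200ms
java -XX:+UseG1GC -Xmx8g -XX:MaxGCPauseMillis=200 MyApp
```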

Data structure of AQS theory

  • AQS is built from three things: an int state field holding the synchronization state (updated with CAS, and usable as a counter, e.g. for reentrancy or permits), an exclusive-owner-thread marker recording which thread currently holds the lock, and a FIFO blocking queue (a variant of the CLH queue) holding the threads waiting for the lock.
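To see how those three pieces fit together, here is a minimal non-reentrant mutex built on AbstractQueuedSynchronizer. The class name SimpleMutex is made up for illustration; the pattern follows the mutex example in the AQS documentation.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant mutex: state 0 = free, state 1 = held.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override protected boolean tryAcquire(int acquires) {
            if (compareAndSetState(0, 1)) {                        // CAS the state from free to held
                setExclusiveOwnerThread(Thread.currentThread());   // record the owning thread
                return true;
            }
            return false;   // failure: AQS parks this thread in its FIFO wait queue
        }
        @Override protected boolean tryRelease(int releases) {
            setExclusiveOwnerThread(null);   // clear the owner marker
            setState(0);                     // mark the lock as free; AQS wakes the next waiter
            return true;
        }
        boolean isLocked() { return getState() == 1; }
    }

    private final Sync sync = new Sync();
    public void lock()        { sync.acquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isLocked(); }
}
```

AQS supplies the queuing, parking, and waking of threads; the subclass only has to define what the state means via tryAcquire/tryRelease. ReentrantLock, Semaphore, and CountDownLatch are all built this same way.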

Parental Delegation Model

  • Parental delegation means that when a class loader needs to load a class, it first delegates the request to its parent class loader. This happens at every level, recursing up to the top; only when a parent loader cannot complete the request does the child loader attempt to load the class itself.
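A small runnable illustration of the delegation chain (the exact loader names printed vary between JDK versions):

```java
public class DelegationDemo {
    public static void main(String[] args) {
        // Core classes such as java.lang.String are loaded by the bootstrap loader,
        // which getClassLoader() reports as null.
        System.out.println(String.class.getClassLoader());   // null

        // Our own class is loaded by the application class loader, whose parent
        // chain leads up toward the bootstrap loader; each loader delegates
        // upward before trying to load a class itself.
        ClassLoader cl = DelegationDemo.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);   // application loader, then platform/extension loader
            cl = cl.getParent();      // walk up the delegation chain
        }
    }
}
```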

Relationship between JDBC and Parental Delegation Model

  • Because each class loader has a limited loading scope, in some cases the parent class loader cannot load the required class file. JDBC is the classic example: DriverManager lives in the core library and is loaded by the bootstrap loader, but the vendor driver implementations sit on the application classpath, which the bootstrap loader cannot see. Loading them therefore has to be delegated back down to a child loader (via the thread context class loader), which breaks the plain parental delegation model.

Talk about the difference between synchronized and ReentrantLock

  • First of all, synchronized is a built-in Java keyword implemented at the JVM level, while Lock is a Java class.

  • synchronized cannot tell you whether the lock was acquired; Lock can tell you whether the lock was acquired, and it can actively try to acquire the lock.

  • synchronized releases the lock automatically (a. a thread releases the lock after the synchronized code finishes; b. the lock is released if an exception occurs during execution). A Lock must be released manually in a finally block (by calling unlock()), otherwise it is easy to cause a deadlock.

  • With the synchronized keyword and two threads 1 and 2: if thread 1 obtains the lock, thread 2 waits; if thread 1 blocks, thread 2 waits forever. A Lock does not necessarily wait forever: if the lock cannot be acquired, the thread can give up (for example with tryLock) instead of waiting.

  • synchronized locks are reentrant, uninterruptible while waiting, and unfair; a ReentrantLock is also reentrant, but its state can be queried, its waits can be interrupted, and it can be configured as either fair or unfair.

  • Lock is suitable for synchronizing large amounts of code; synchronized is suitable for synchronizing small amounts of code.
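The "does not necessarily wait forever" point can be shown concretely with tryLock and a timeout. This is a sketch: the class name, the 100ms timeout, and the contendOnce helper are all illustrative choices.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    // Returns whether a second thread could acquire the lock while the main thread held it.
    static boolean contendOnce() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                         // main thread holds the lock
        try {
            final boolean[] acquired = new boolean[1];
            Thread t = new Thread(() -> {
                try {
                    // Unlike synchronized, tryLock lets the thread give up after a
                    // timeout instead of blocking forever.
                    acquired[0] = lock.tryLock(100, TimeUnit.MILLISECONDS);
                    if (acquired[0]) lock.unlock();
                } catch (InterruptedException ignored) { }
            });
            t.start();
            t.join();
            return acquired[0];              // false: the lock was held, tryLock timed out
        } finally {
            lock.unlock();                   // always release in finally, as the answer notes
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(contendOnce());
    }
}
```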

Parameters of the core thread pool ThreadPoolExecutor;

  • corePoolSize: the number of core threads kept in the pool.

  • maximumPoolSize: the maximum number of threads allowed in the pool.

  • keepAliveTime: how long the pool keeps idle threads (beyond the core size) alive before reclaiming them.

  • unit: the unit of keepAliveTime.

  • workQueue: Task queue, tasks that have been submitted but not yet executed.

  • threadFactory: Thread factory, used to create threads, generally use the default.

  • handler: Rejection strategy. When there are too many tasks to deal with, how to reject them.
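Putting the seven parameters together in one constructor call; the sizes and policies here are illustrative choices, not recommendations.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {

    // Builds a pool with every constructor parameter spelled out.
    static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit (for idle non-core threads)
                new LinkedBlockingQueue<>(100),       // workQueue: bounded task queue
                Executors.defaultThreadFactory(),     // threadFactory: the usual default
                new ThreadPoolExecutor.AbortPolicy()  // handler: reject by throwing an exception
        );
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        System.out.println(pool.getCorePoolSize() + " " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```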

ThreadPoolExecutor workflow

  • The thread execution rules of the thread pool have a lot to do with the task queue.

  • The following assumes that there is no size limit on the task queue:

  • If the number of threads <= the number of core threads, a core thread is started directly to run the task; the task is not placed in the queue.

  • If the number of threads > the number of core threads but <= the maximum number of threads, and the task queue is a LinkedBlockingDeque, tasks exceeding the number of core threads are queued in the task queue.

  • If the number of threads > the number of core threads but <= the maximum number of threads, and the task queue is a SynchronousQueue, the thread pool creates new threads to run the tasks, and these tasks are not placed in the task queue. These threads are non-core threads; after a task completes and the idle time reaches the timeout, the thread is reclaimed.

  • If the number of threads > the number of core threads and > the maximum number of threads, and the task queue is a LinkedBlockingDeque, tasks that exceed the core threads are queued in the task queue. In other words, when the task queue is an unbounded LinkedBlockingDeque, the thread pool's maximum-thread setting has no effect, and the number of threads never exceeds the number of core threads.

  • If the number of threads > the number of core threads and > the maximum number of threads, and the task queue is a SynchronousQueue, the thread pool refuses the task and (with the default AbortPolicy) throws a RejectedExecutionException.

When the task queue size is limited:

  • When a bounded LinkedBlockingDeque is full, a new task directly causes a new (non-core) thread to be created; once the number of threads would exceed the maximum, the task is rejected and an exception is thrown.

  • A SynchronousQueue has no capacity limit to speak of, because it does not hold tasks at all: it hands each one directly to the thread pool for execution. When the number of tasks exceeds the maximum number of threads, the task is rejected and an exception is thrown directly.
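The bounded-queue rules above can be sketched as follows; the pool sizes (1 core, 2 max, queue capacity 1) are chosen only to make each branch observable.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolFlowDemo {

    // 1 core thread, 2 max, bounded queue of 1: the pool accepts at most
    // 1 (running on the core thread) + 1 (queued) + 1 (non-core thread) tasks.
    static String run() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(1));
        CountDownLatch gate = new CountDownLatch(1);
        Runnable blocker = () -> { try { gate.await(); } catch (InterruptedException ignored) { } };

        pool.execute(blocker);   // runs on the core thread
        pool.execute(blocker);   // sits in the queue (capacity 1)
        pool.execute(blocker);   // queue full -> a non-core thread is created

        boolean rejected = false;
        try {
            pool.execute(blocker);   // at max threads, queue full -> default AbortPolicy rejects
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        String result = pool.getPoolSize() + " " + rejected;   // "2 true"

        gate.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```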

Summary

Finally, a reminder of the Java architecture topics worth studying: Spring, Dubbo, MyBatis, RPC, source-code analysis, high concurrency, high performance, distributed systems, performance optimization, and advanced microservice architecture. As a developer, you are not required to be at the top of the industry, but you must make sure the market does not leave you behind. Continuous learning is the most basic duty of a programmer.



Origin blog.csdn.net/jiagouwgm/article/details/111798553