Interviewer: I'm not exaggerating; 9 out of 10 companies will ask these questions.

Yesterday, I went to dinner with a friend after work. One of the people who came along turned out to be an interviewer at JD, so I chatted with him briefly and caught up on how things have been going. He said more candidates have been coming in for interviews lately, more than he expected, and many of them have around three years of work experience.

Finally, I asked which questions he had been asking, combined them with the interview questions I have been asked at other companies, and put together today's article: the most frequently asked interview questions of the past month.




1. What is FutureTask

This has actually been mentioned before: FutureTask represents a task for an asynchronous operation. A FutureTask can be constructed from a concrete implementation of Callable, and it can then be used to wait for the result of the asynchronous operation, check whether it has completed, and cancel the task. And since FutureTask also implements the Runnable interface, a FutureTask can be submitted to a thread pool as well.
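A minimal sketch of how this might look (the 500 ms sleep and the return value 42 are just placeholders):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // Wrap a Callable in a FutureTask
        FutureTask<Integer> task = new FutureTask<>(new Callable<Integer>() {
            @Override
            public Integer call() throws Exception {
                Thread.sleep(500);          // simulate some work
                return 42;
            }
        });

        // Because FutureTask also implements Runnable, it can be submitted to a thread pool
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(task);

        System.out.println("done? " + task.isDone()); // probably false at this point
        System.out.println("result = " + task.get()); // blocks until the result is ready
        pool.shutdown();
    }
}
```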

2. How to find the thread that uses the most CPU time in a Linux environment

This is a more practical question, and I think this kind of question is quite meaningful. You can do it like this:

Get the pid of the project with jps or ps -ef | grep java, as I have mentioned before

top -H -p pid (keep the options in this order)

This prints out, for the current project, the percentage of CPU time each thread occupies. Note that what is printed here is the LWP, that is, the thread number of the operating system's native thread. My laptop does not have a Java project deployed in a Linux environment, so there is no way to take a screenshot to demonstrate; if your company deploys projects on Linux, you can give it a try.

Using "top -H -p pid" together with "jstack pid" makes it easy to find the stack of a thread with high CPU usage and locate the cause of the high CPU usage, which is usually an improper code operation such as an infinite loop.

One last note: the LWP printed by "top -H -p pid" is in decimal, while the native thread id printed by "jstack pid" is in hexadecimal. After a quick conversion you can locate the thread with high CPU usage and see what its current stack is doing.
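The conversion itself is a one-liner; for example (the LWP value 12345 here is made up for illustration):

```java
// LWP 12345 (decimal, from top -H) corresponds to nid=0x3039 (hex, in the jstack output)
public class LwpToNid {
    public static void main(String[] args) {
        System.out.println(Integer.toHexString(12345)); // prints "3039"
    }
}
```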

3. Write a Java program that causes a deadlock

The first time I saw this question, I thought it was a very good one. Many people know what a deadlock is: thread A and thread B wait for each other's locks, so the program hangs indefinitely. But if the understanding stops there and you cannot actually write a program that deadlocks, then you don't really understand what a deadlock is, because knowing only the theory is not enough; in practice, deadlock problems are basically invisible until you run into them.

Once you truly understand what a deadlock is, this problem is not difficult; it only takes a few steps:

Have two threads share two Object instances, lock1 and lock2, which are used as the locks for synchronized blocks;

In thread 1's run() method, the synchronized block first acquires the object lock of lock1, then calls Thread.sleep(xxx); it does not need to be long, about 50 milliseconds is enough, and then acquires the object lock of lock2. The main purpose of the sleep is to prevent thread 1 from acquiring the object locks of both lock1 and lock2 in one go;

In thread 2's run() method, the synchronized block first acquires the object lock of lock2 and then tries to acquire the object lock of lock1. Of course, by then the object lock of lock1 is already held by thread 1, so thread 2 must wait for thread 1 to release the object lock of lock1.

In this way, after thread 1 finishes sleeping, thread 2 has already acquired the object lock of lock2, so when thread 1 then tries to acquire the object lock of lock2 it is blocked, and a deadlock is formed. I won't write out the full code here because it takes up a lot of space; the article "Java Multithreading 7: Deadlock" contains a code implementation of the above steps.
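Still, a minimal sketch of those steps might look like this (the 50 ms sleep is just a small arbitrary delay):

```java
public class DeadlockDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {
                try { Thread.sleep(50); } catch (InterruptedException ignored) {}
                synchronized (lock2) {                 // blocked: thread-2 already holds lock2
                    System.out.println("thread-1 got both locks");
                }
            }
        }, "thread-1").start();

        new Thread(() -> {
            synchronized (lock2) {
                try { Thread.sleep(50); } catch (InterruptedException ignored) {}
                synchronized (lock1) {                 // blocked: thread-1 already holds lock1
                    System.out.println("thread-2 got both locks");
                }
            }
        }, "thread-2").start();
    }
}
```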

4. How to wake up a blocked thread

If the thread is blocked because it called wait(), sleep() or join(), you can interrupt the thread and wake it up through the thrown InterruptedException; if the thread is blocked on IO, there is nothing you can do, because IO is implemented by the operating system, and Java code has no way to directly touch the operating system.
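A minimal sketch of the interrupt case (the 60-second sleep is just a stand-in for a long blocking call):

```java
public class WakeUpDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000);                  // blocked in sleep()
            } catch (InterruptedException e) {
                System.out.println("woken up by interrupt");
            }
        });
        worker.start();
        worker.interrupt();                            // sleep() throws InterruptedException, the thread wakes up
    }
}
```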

5. How do immutable objects help with multithreading

As mentioned in an earlier question, immutable objects guarantee memory visibility of the object's state. Reading an immutable object requires no additional synchronization, which improves the efficiency of the code.
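For example, a minimal immutable class might look like this (Point is just an illustrative name):

```java
// All fields are final and set exactly once in the constructor; the object never changes after
// construction, so any thread can read it without extra synchronization.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }
}
```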

6. What is multi-threaded context switching

Multi-threaded context switching refers to the process in which CPU control is switched from an already running thread to another thread that is ready and waiting to obtain CPU execution rights.

7. What happens if the thread pool queue is full when you submit a task

If you use LinkedBlockingQueue, that is, an unbounded queue, it does not matter: tasks simply keep being added to the blocking queue to wait for execution, because LinkedBlockingQueue can be regarded as a nearly infinite queue that can hold tasks without limit. If you use a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; when the queue is full, the pool tries to grow the number of threads up to maximumPoolSize, and if that is still not enough, the rejection policy RejectedExecutionHandler is used to handle the overflow tasks. The default policy is AbortPolicy.
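A small sketch of the bounded-queue case (the pool sizes, queue capacity and task count here are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                                  // core and maximum pool size
                60, TimeUnit.SECONDS,                  // keep-alive time for idle non-core threads
                new ArrayBlockingQueue<>(10),          // bounded queue: at most 10 waiting tasks
                new ThreadPoolExecutor.AbortPolicy()); // default rejection policy: throw RejectedExecutionException

        // With 4 threads busy and 10 tasks queued, any further submissions are rejected.
        for (int i = 0; i < 20; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdown();
    }
}
```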

8. What is the thread scheduling algorithm used in Java

Preemptive. After a thread uses up its CPU time, the operating system calculates a total priority based on data such as thread priority and thread starvation, and allocates the next time slice to a specific thread to execute.

9. What is the role of Thread.sleep(0)

This question is related to the previous one, which is why I put them next to each other. Because Java uses a preemptive thread scheduling algorithm, it can happen that a certain thread frequently gets control of the CPU. To let lower-priority threads also get some CPU time, you can use Thread.sleep(0) to manually trigger one allocation of time slices by the operating system, which is a way of balancing control of the CPU.

10. What is spin

Many synchronized blocks contain only simple code that executes very quickly. In that case, blocking the waiting threads may not be worthwhile, because blocking a thread involves switching between user mode and kernel mode. Since the code inside synchronized executes very quickly, it may be better not to block the thread waiting for the lock, but to let it do a busy loop at the boundary of the synchronized block; this is spinning. If after several busy loops the lock still has not been obtained, then blocking is the better strategy.
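To make the idea of spinning concrete, here is a toy spin lock built on CAS. Note that this is only an illustration of the busy-loop idea, not how the JVM actually implements spinning inside synchronized:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A toy spin lock: a thread that fails to get the lock keeps retrying instead of blocking.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // spin until the CAS succeeds
        while (!locked.compareAndSet(false, true)) {
            // busy loop; a real implementation would give up and block after some number of spins
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```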


11. What is the Java memory model

The Java memory model defines a specification for multi-threaded access to Java memory. Fully explaining the Java memory model is not something that can be done in a few sentences here, so let me briefly summarize its main parts:

The Java memory model divides memory into main memory and working memory. Class state, that is, variables shared between threads, is stored in main memory. Whenever a Java thread uses such a variable, it reads the variable from main memory into a copy in its own working memory; while the thread's code runs, it operates on the copy in its working memory, and after the code has executed, the latest value is written back to main memory

Several atomic operations are defined to manipulate variables in main memory and working memory

Defines the rules for the use of volatile variables

Happens-before, the "happens-before" principle, defines rules under which operation A must happen before operation B. For example, within the same thread, code earlier in the control flow happens-before code later in the control flow; an unlock action on a lock happens-before a subsequent lock action on the same lock; and so on. As long as these rules are satisfied, no additional synchronization is required. If the ordering between operations in a piece of code cannot be derived from any happens-before rule, that piece of code is not thread-safe
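A small sketch of the volatile / happens-before rule in action (the field names are just for illustration):

```java
public class VisibilityDemo {
    private volatile boolean ready = false;   // a write to a volatile happens-before subsequent reads of it
    private int value;

    public void writer() {                     // runs in thread A
        value = 42;
        ready = true;                          // also publishes value, because of the volatile rule
    }

    public void reader() {                     // runs in thread B
        if (ready) {
            System.out.println(value);         // guaranteed to see 42 once ready is observed as true
        }
    }
}
```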

12. What is CAS

CAS, whose full name is Compare and Swap, means compare-and-swap. Suppose there are three operands: the memory value V, the old expected value A, and the value to be written B. If and only if the expected value A equals the memory value V, the memory value is changed to B and true is returned; otherwise nothing is done and false is returned. Of course, CAS must be used together with volatile variables to ensure that the value read each time is the latest value in main memory; otherwise the old expected value A would, for a given thread, stay stuck at some stale value, and once a CAS operation failed it would never succeed.
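A minimal sketch using AtomicInteger, which is built on CAS; the second part also shows the retry loop typical of optimistic locking, discussed in the next question:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // A single CAS: succeeds only if the current value equals the expected value (0 here)
        boolean swapped = counter.compareAndSet(0, 1);
        System.out.println(swapped + ", counter = " + counter.get()); // true, 1

        // The typical optimistic pattern: read the latest value, then retry the CAS until it succeeds
        int oldValue;
        do {
            oldValue = counter.get();
        } while (!counter.compareAndSet(oldValue, oldValue + 1));
        System.out.println("counter = " + counter.get()); // 2
    }
}
```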

13. What are optimistic locking and pessimistic locking

Optimistic locking: just as its name suggests, it is optimistic about the thread-safety problems caused by concurrent operations. Optimistic locking assumes that contention does not always occur, so it does not need to hold a lock. It treats compare-and-swap as a single atomic operation to try to modify a variable in memory; if that fails, it means there was a conflict, and there should be corresponding retry logic.

Pessimistic locking: again as its name suggests, it is pessimistic about the thread-safety problems caused by concurrent operations. Pessimistic locking assumes that contention will always occur, so it holds an exclusive lock every time it operates on a resource, just like synchronized: regardless of anything else, it locks first and then operates on the resource.

14. What is AQS

Briefly, the full name of AQS is AbstractQueuedSynchronizer, which can be translated as abstract queued synchronizer.

If the foundation of java.util.concurrent is CAS, then AQS is the core of the whole Java concurrency package; ReentrantLock, CountDownLatch, Semaphore and others are built on it. AQS actually links all waiting entries into a doubly linked queue. In ReentrantLock, for example, every waiting thread is wrapped in an entry and linked into this queue; when the thread currently holding the ReentrantLock releases it, the queue is processed starting from the first entry.

AQS defines all the operations on this doubly linked queue and only exposes methods such as tryAcquire and tryRelease for developers to override. Developers can override tryAcquire and tryRelease according to their own needs to implement their own concurrency utilities.
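A minimal sketch of a custom synchronizer built on AQS, in the spirit of the Mutex example from the JDK documentation (non-reentrant, state 0 = unlocked, 1 = locked):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);    // grab the lock with a single CAS
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);                        // release the lock
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // parks in the AQS queue if tryAcquire fails
    public void unlock() { sync.release(1); }   // wakes up the next waiting thread, if any
}
```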

15. Thread safety of the singleton pattern

It's a commonplace question. The first thing to say is that the thread safety of the singleton mode means that an instance of a certain class will only be created once in a multithreaded environment. There are many ways to write singleton mode, let me summarize:

Eager initialization (hungry-style) singleton: thread-safe

Lazy initialization singleton (naive version): not thread-safe

Double-checked locking singleton: thread-safe (a sketch follows below)
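A sketch of the double-checked locking version (note the volatile on the instance field):

```java
public class Singleton {
    // volatile is required so that other threads never see a partially constructed instance
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                      // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {              // second check, with the lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```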

16. What is the role of Semaphore

Semaphore is a semaphore whose role is to limit the number of threads that can execute a block of code concurrently. Semaphore has a constructor that takes an int n, meaning that a given piece of code can be accessed by at most n threads at the same time. If that number is exceeded, a thread must wait until one of the threads inside finishes executing the block before the next one can enter. It follows that if n=1 is passed to the Semaphore constructor, it is equivalent to a synchronized.
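A minimal sketch (3 permits and 10 threads are arbitrary numbers):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(3);      // at most 3 threads inside the guarded block

        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    semaphore.acquire();             // blocks if all 3 permits are already taken
                    System.out.println(Thread.currentThread().getName() + " is working");
                    Thread.sleep(500);
                } catch (InterruptedException ignored) {
                } finally {
                    semaphore.release();             // give the permit back
                }
            }).start();
        }
    }
}
```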

17. The size() method of Hashtable clearly contains only one statement, "return count"; why does it still need to be synchronized?

This used to confuse me, and I don't know whether you have thought about it. If a method contains multiple statements that all operate on the same class variable, then not locking in a multithreaded environment will inevitably cause thread-safety problems; that is easy to understand. But the size() method clearly has only one statement, so why does it still need a lock?

Over time, through working and studying, I came to understand two main reasons:

Only one thread at a time can execute the synchronized methods of a given object, but non-synchronized methods of the class can be accessed by multiple threads concurrently. So a problem arises: thread A may be executing Hashtable's put method to add data while thread B can still call the size() method normally to read the current number of elements, and the value it reads may not be the latest. Thread A may have finished adding the element while the matching size++ has not happened yet, so the size read by thread B is inaccurate. By making the size() method synchronized, thread B can only call size() after thread A has finished calling put, which guarantees thread safety.

The CPU executes assembly code, not Java code. This is very important and you must remember it. Java code is ultimately translated into assembly code for execution, and assembly code is the code that can really interact with the hardware circuits. Even if you see only one line of Java code, and even if the compiled bytecode for that line is only one instruction, it does not mean that it is only one operation for the underlying hardware. Suppose "return count" is translated into three assembly instructions; it is entirely possible for the thread to be switched out after the first instruction has executed.

18. Which thread calls the constructor and static block of a Thread class

This is a very tricky and cunning question. Remember: the constructor and static block of a thread class are called by the thread in which the thread class is new'ed, while the code in the run() method is executed by the thread itself.

If the statement above is confusing, let me give an example. Suppose Thread1 is new'ed inside Thread2, and Thread2 is new'ed in the main function. Then:

The construction method and static block of Thread2 are called by the main thread, and the run() method of Thread2 is called by Thread2 itself.

The construction method and static block of Thread1 are called by Thread2, and the run() method of Thread1 is called by Thread1 itself.
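A small demo of this (the thread name "thread-2" is arbitrary and stands in for Thread2 in the text):

```java
public class WhoCallsWhat {
    static class Thread1 extends Thread {
        static {
            // runs when Thread1 is first initialized, i.e. in the thread that first news it (thread-2 here)
            System.out.println("Thread1 static block runs in: " + Thread.currentThread().getName());
        }

        Thread1() {
            // runs in whichever thread executes "new Thread1()", i.e. thread-2 here
            System.out.println("Thread1 constructor runs in: " + Thread.currentThread().getName());
        }

        @Override
        public void run() {
            // runs in the new thread itself
            System.out.println("Thread1.run() runs in: " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        // "thread-2" news and starts Thread1, so the constructor and static block print "thread-2"
        new Thread(() -> new Thread1().start(), "thread-2").start();
    }
}
```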

19. Synchronized method or synchronized block: which is the better choice

The synchronized block, because the code outside the block runs without holding the lock, which improves efficiency compared with synchronizing the entire method. Remember one principle: the smaller the scope of synchronization, the better.

On this point, one more thing is worth mentioning. Although the smaller the synchronization scope the better, the Java virtual machine also has an optimization called lock coarsening, which enlarges the synchronization scope. This is useful in cases like StringBuffer: it is a thread-safe class, and its most commonly used append() method is naturally a synchronized method. When we write code we often append strings repeatedly, which means repeatedly locking and unlocking, and that hurts performance because the JVM has to repeatedly switch the thread between kernel mode and user mode. So the JVM coarsens the lock over the code that calls append multiple times, extending it from before the first append to after the last one and turning them into one large synchronized block. This reduces the number of lock/unlock operations and effectively improves the efficiency of the code.
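A small sketch of the "keep the synchronized scope small" principle (Counter and the helper method are made-up names):

```java
public class Counter {
    private final Object lock = new Object();
    private int count;

    public void handleRequest() {
        doSomethingSlowButThreadLocal();   // no shared state touched here, so keep it outside the lock
        synchronized (lock) {
            count++;                       // only the shared mutation is synchronized
        }
    }

    private void doSomethingSlowButThreadLocal() {
        // expensive work that does not touch shared state
    }
}
```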

20. How should a thread pool be used for business with high concurrency and short task execution time? For business with low concurrency and long task execution time? And for business with both high concurrency and long execution time?

This is a question I saw on a concurrent-programming website. I put it last because I hope everyone sees it and thinks about it; it is very good, very practical and very professional. My personal view is:

For business with high concurrency and short task execution time, the number of threads in the thread pool can be set to roughly the number of CPU cores + 1 to reduce thread context switching

Business with low concurrency and long task execution time should be split into two cases:

a) If the time is mostly spent on IO operations, that is, IO-intensive tasks, then because IO operations do not occupy the CPU, you should not let the CPU sit idle: increase the number of threads in the thread pool so the CPU can handle more work

b) If the time is mostly spent on computation, that is, CPU-intensive tasks, there is not much that can be done: as in case (1), keep the number of threads in the pool small to reduce thread context switching

For business with both high concurrency and long execution time, the key is not the thread pool but the overall architecture. The first step is to see whether some of the data used by this business can be cached, and the second step is to add servers. The thread pool itself can be configured as in case (2). Finally, for business with long execution time, it may also be worth analyzing whether middleware can be used to split and decouple the tasks. (A small sizing sketch follows below.)
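A tiny sketch of the sizing rules of thumb mentioned above (the 2 * cores figure for the IO-bound pool is just a common starting point, not a rule):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound tasks: roughly one thread per core (plus one) to limit context switching
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores + 1);

        // IO-bound tasks: threads spend most of their time waiting, so a larger pool keeps the CPU busy
        ExecutorService ioPool = Executors.newFixedThreadPool(cores * 2);

        cpuPool.shutdown();
        ioPool.shutdown();
    }
}
```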

Finally, two more remarks on multithreading:
Multithreading refers to the technique of executing multiple threads concurrently, whether implemented in software or hardware. A computer with multithreading capability can, thanks to hardware support, execute more than one thread at the same time, thereby improving overall processing performance.

Systems with this capability include symmetric multiprocessors, multi-core processors, and chip-level multithreading or simultaneous multithreading processors. In a program, these independently running fragments are called "threads", and the concept of programming with them is called "multithreading".



Origin blog.csdn.net/weixin_48011329/article/details/109537841