40 Java multithreading interview questions and answers

1. What is multithreading used for?

Many people may find this a pointless question: I already use multithreading, so why should I care what it's for? In my opinion, that answer is itself pointless. As the saying goes, "know what and know why": being "able to use" something is only "knowing what"; understanding "why to use it" is "knowing why"; only when you know both can you say you use a piece of knowledge with ease. OK, here is my view on this question:

1) Taking advantage of multi-core CPUs

As the industry advances, laptops, desktops, and even commercial application servers are now at least dual-core, and 4-core, 8-core, or even 16-core machines are not uncommon. A single-threaded program wastes 50% of a dual-core CPU and 75% of a 4-core CPU. The so-called "multithreading" on a single-core CPU is fake multithreading: at any moment the processor only handles one piece of logic, and the threads merely switch so quickly that they appear to run "simultaneously". Multithreading on a multi-core CPU is real multithreading; it lets multiple pieces of logic work at the same time, truly exploits the advantage of multiple cores, and achieves full use of the CPU.

2) Preventing blocking

From the point of view of program efficiency, a single-core CPU not only gains nothing from multithreading; running multiple threads causes thread context switches, which reduces the program's overall efficiency. Yet we still use multithreading on a single-core CPU, precisely to prevent blocking. Imagine a single-threaded program on a single-core CPU: if that one thread blocks, say on a remote data read where the peer never responds and no timeout is set, your entire program stops working until the data comes back. Multithreading prevents this problem: several threads run concurrently, so even if one thread blocks while reading data, the other tasks keep executing.

3) Easier modeling

This advantage is less obvious. Suppose you have a large task A. With single-threaded programming you have a lot to consider, and building a model for the entire program is troublesome. But if you break this big task A into several small tasks, task B, task C, and task D, build a program model for each, and run them as separate threads, everything becomes much simpler.

2. Ways to create a thread

A fairly common question; there are generally two ways:

1) Extend the Thread class

2) Implement the Runnable interface

As to which is better, the latter, without question: implementing an interface is more flexible than extending a class and reduces the coupling between parts of the program. "Programming to an interface" is also at the core of the six design-pattern principles. Both approaches are sketched below.
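
A minimal sketch of the two approaches; the class and variable names are only illustrative:

```java
// A minimal sketch of the two common ways to create a thread.
public class CreateThreadDemo {

    // 1) Extend Thread and override run()
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("running in " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        new MyThread().start();

        // 2) Implement Runnable (here as a lambda) and hand it to a Thread
        Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
        new Thread(task).start();
    }
}
```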

3. The difference between the start() method and the run() method

Only calling the start() method exhibits multithreaded behavior, with the code inside different threads' run() methods executing alternately. If you only call the run() method, the code executes synchronously: one thread must finish all the code inside its run() method before another thread can execute the code inside its own run() method.

4. The difference between the Runnable interface and the Callable interface

A slightly deeper question, and one that shows the breadth of a Java programmer's knowledge.

The run() method of the Runnable interface returns void; it simply executes the code in run() and nothing more. The call() method of the Callable interface returns a value, and it is generic; combined with Future or FutureTask it can be used to obtain the result of asynchronous execution.

This is actually a very useful feature, because an important reason multithreading is harder and more complex than single-threading is that multithreading is full of unknowns. Did a certain thread execute? How long did it execute? Had the data we expected already been assigned when it executed? We cannot know; all we can do is wait for the multithreaded task to finish. With Callable + Future/FutureTask, however, you can obtain the result of a task running in another thread, and if you have waited too long for the data you need, you can cancel that task. A sketch is shown below.
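
A minimal sketch of Callable + FutureTask, assuming an illustrative slow computation and a one-second timeout:

```java
import java.util.concurrent.*;

// A minimal sketch of getting (or cancelling) an asynchronous result with FutureTask.
public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> task = () -> {
            Thread.sleep(500);           // simulate a slow computation
            return 42;
        };

        FutureTask<Integer> futureTask = new FutureTask<>(task);
        new Thread(futureTask).start();  // FutureTask is also a Runnable

        try {
            // wait at most 1 second for the result
            Integer result = futureTask.get(1, TimeUnit.SECONDS);
            System.out.println("result = " + result);
        } catch (TimeoutException e) {
            futureTask.cancel(true);     // waited too long: cancel the task
        }
    }
}
```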

5. The difference between CyclicBarrier and CountDownLatch

These two classes look somewhat alike; both live in java.util.concurrent and both can be used to make threads wait at a certain point in the code. Their differences:

1) After a thread reaches a CyclicBarrier's barrier point, it stops running until all threads have reached that point, and only then do all of them continue. CountDownLatch is different: after a thread reaches a certain point it simply decrements the count by 1 and keeps running.

2) CyclicBarrier can only trigger one follow-up task (its barrier action), while CountDownLatch can wake up multiple waiting tasks.

3) CyclicBarrier is reusable, while CountDownLatch is not: once a CountDownLatch's count reaches 0, it cannot be used again. A small sketch of both is shown below.
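
A minimal sketch contrasting the two classes; the thread counts are only illustrative:

```java
import java.util.concurrent.*;

// CountDownLatch: workers count down and keep running; CyclicBarrier: threads wait for each other.
public class LatchVsBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        // CountDownLatch: the main thread waits until 3 workers have counted down.
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();        // decrement and keep running
            }).start();
        }
        latch.await();                    // blocks until the count reaches 0
        System.out.println("all workers finished");

        // CyclicBarrier: each of 3 threads stops at the barrier until all arrive.
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("barrier action runs once all arrive"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await();      // wait here until all 3 threads arrive
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```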

6. The role of the volatile keyword

A very important question, one that every Java programmer who studies or uses multithreading must master. Understanding the role of the volatile keyword requires understanding the Java memory model first; the Java memory model is not covered here, so see question 31. The volatile keyword has two main roles:

1) Multithreading revolves mainly around two properties, visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads, that is, every read of a volatile variable returns the latest data.

2) At the bottom, code does not execute as simply as the high-level Java program we see. Execution proceeds as: Java code --> bytecode --> the corresponding C/C++ code executed according to the bytecode --> that C/C++ code compiled into assembly --> interaction with the hardware. In reality, to obtain better performance the JVM may reorder instructions, which can cause unexpected problems under multithreading. volatile semantics forbid such reordering, which of course also reduces code execution efficiency to some degree.

From a practical point of view, an important role of volatile is to combine with CAS to guarantee atomicity; for details see the classes under the java.util.concurrent.atomic package, such as AtomicInteger. The visibility effect is sketched below.
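
A minimal sketch of volatile visibility, assuming an illustrative flag variable: without volatile the worker thread might never observe the update; with it, the write becomes visible to all threads.

```java
// Demonstrates that a write to a volatile flag is visible to another thread.
public class VolatileDemo {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; reads the latest value of the volatile flag on each iteration
            }
            System.out.println("worker observed running = false and stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // this write is visible to the worker thread
        worker.join();
    }
}
```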

7. What is thread safety?

This is a theoretical question with many possible answers, but I think one explanation is best: if your code always produces the same result when executed under multiple threads as it does under a single thread, then your code is thread-safe.

What is worth mentioning about this question is that there are several levels of thread safety:

1) immutable

Classes like String, Integer, and Long are final; no thread can change their value, short of creating a new object. Such immutable objects can be used directly in a multithreaded environment without any synchronization.

2) Absolute thread safety

Regardless of the runtime environment, the caller needs no additional synchronization. Achieving this usually comes at a large extra cost; most of the classes Java labels thread-safe are in fact not absolutely thread-safe, but Java does have some that are, such as CopyOnWriteArrayList and CopyOnWriteArraySet.

3) Relatively thread-safe

Relative thread safety is what we usually mean by thread-safe. For example, Vector's add and remove methods are atomic operations and will not be interrupted, but only up to that point: if one thread is traversing a Vector while another thread adds to it at the same time, a ConcurrentModificationException occurs in 99% of cases; this is the fail-fast mechanism.

4) Not thread-safe

Nothing much to say here: ArrayList, LinkedList, and HashMap are all non-thread-safe classes.

8. How to obtain a thread dump file in Java

For infinite loops, deadlocks, blocking, slow page loads, and similar problems, taking a thread dump is the best way to diagnose the issue. A thread dump is the set of thread stacks, and obtaining it takes two steps:

1) Get the pid of the Java process, using the jps command, or ps -ef | grep java in a Linux environment

2) Print the thread stacks, using the jstack pid command, or kill -3 pid in a Linux environment

It is also worth mentioning that the Thread class provides a getStackTrace() method, which can likewise be used to obtain a thread stack. It is an instance method, so it is bound to a specific thread instance, and each call returns the stack of that specific thread as it is currently running.

9. What happens if an exception occurs while a thread is running?

If the exception is not caught, the thread stops executing. Another important point: if the thread holds the monitor of some object, that object's monitor is released immediately.

10. How to share data between two threads

Simply share an object between the threads, then use wait/notify/notifyAll or await/signal/signalAll to wait and wake up. The blocking queue BlockingQueue, for example, is designed for sharing data between threads; a sketch using it follows.
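
A minimal sketch of sharing data between two threads via a BlockingQueue; the capacity and item count are only illustrative.

```java
import java.util.concurrent.*;

// A producer puts items on a shared BlockingQueue and a consumer takes them off.
public class SharedQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);                                     // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take());   // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```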

11. What is the difference between the sleep method and the wait method?

This question is asked often. Both the sleep method and the wait method can be used to give up CPU time for a while; the difference is that if the thread holds an object's monitor, the sleep method does not release that monitor, whereas the wait method does.

12. What is the role of the producer-consumer model?

This question is very theoretical, but very important:

1) It improves the overall efficiency of the system by balancing the producer's production capacity against the consumer's consumption capacity; this is the most important role of the producer-consumer model.

2) Decoupling, which is a side effect of the producer-consumer model. Decoupling means fewer connections between producers and consumers, and the fewer the connections, the more each side can evolve on its own without being constrained by the other.

13. What is ThreadLocal used for?

Simply put, ThreadLocal trades space for time. Each Thread maintains a ThreadLocal.ThreadLocalMap, implemented with open addressing, which isolates data per thread. Since the data is not shared, there is naturally no thread-safety problem. A minimal sketch follows.
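
A minimal sketch of ThreadLocal, assuming an illustrative per-thread counter: each thread sees and mutates only its own copy.

```java
// Each thread gets its own independent copy of the ThreadLocal value.
public class ThreadLocalDemo {
    // every thread's counter starts at 0
    private static final ThreadLocal<Integer> COUNTER = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            COUNTER.set(COUNTER.get() + 1);   // only affects the current thread's copy
            System.out.println(Thread.currentThread().getName() + " -> " + COUNTER.get());
            COUNTER.remove();                 // avoid leaks when threads are pooled
        };
        new Thread(task, "thread-1").start();
        new Thread(task, "thread-2").start();
    }
}
```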

14. Why must the wait() method and the notify()/notifyAll() methods be called inside synchronized blocks?

This is mandated by the JDK: the wait(), notify(), and notifyAll() methods must acquire the object's lock before being called.

15. How do the wait() method and the notify()/notifyAll() methods differ in when they give up the object monitor?

The difference is: the wait() method releases the object monitor immediately, whereas a thread calling notify()/notifyAll() only gives up the object monitor after it has finished executing its remaining code.

16. Why use a thread pool?

To avoid frequently creating and destroying threads and to reuse thread objects. In addition, a thread pool lets you flexibly control the number of concurrent threads according to the needs of the project.

17. How to detect whether a thread holds an object's monitor

I only learned there is a way to tell whether a thread holds an object's monitor when I saw this interview question online: the Thread class provides a holdsLock(Object obj) method, which returns true if and only if the monitor of object obj is held by a certain thread. Note that this is a static method, which means "a certain thread" refers to the current thread. A tiny sketch follows.
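
A tiny sketch of Thread.holdsLock; the lock object is only illustrative.

```java
// Thread.holdsLock reports whether the *current* thread holds the given object's monitor.
public class HoldsLockDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) {
        System.out.println(Thread.holdsLock(LOCK));     // false: monitor not held yet
        synchronized (LOCK) {
            System.out.println(Thread.holdsLock(LOCK)); // true: current thread holds it
        }
    }
}
```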

18. The difference between synchronized and ReentrantLock

synchronized is a keyword, just like if, else, for, and while; ReentrantLock is a class. That is the essential difference between the two. Because ReentrantLock is a class, it offers more flexible features than synchronized: it can be subclassed, it can have methods, and it can have various class variables. ReentrantLock's advantages in extensibility over synchronized show in several points:

(1) ReentrantLock can set a waiting time when acquiring the lock, thus avoiding deadlock

(2) ReentrantLock can report various information about the lock

(3) ReentrantLock can flexibly implement multi-way notification

In addition, the two use different locking mechanisms underneath. ReentrantLock at the bottom calls Unsafe's park method to block, while synchronized presumably operates on the mark word in the object header; I cannot say for certain. Point (1) above is sketched below.
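
A minimal sketch of point (1): trying to acquire a ReentrantLock with a timeout instead of blocking forever. The timeout value is only illustrative.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// tryLock with a timeout lets the thread give up instead of waiting indefinitely.
public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        if (LOCK.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("got the lock, doing work");
            } finally {
                LOCK.unlock();        // always release in finally
            }
        } else {
            System.out.println("could not get the lock in time, giving up");
        }
    }
}
```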

19. What is the concurrency level of ConcurrentHashMap?

The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, which means that up to 16 threads can operate on a ConcurrentHashMap at the same time. This is ConcurrentHashMap's biggest advantage over Hashtable: in no case can two threads fetch data from a Hashtable at the same time.

20. What is ReadWriteLock?

First be clear: it is not that ReentrantLock is bad, only that ReentrantLock sometimes has limitations. You may use ReentrantLock to prevent data inconsistency caused by thread A writing data while thread B is reading it; but if thread C is reading data and thread D is also reading data, reading does not change the data, so there is no need to lock, yet the lock is still taken, which reduces the program's performance.

That is why the read-write lock ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it. It separates reads from writes: the read lock is shared and the write lock is exclusive; read and read do not exclude each other, while read and write, write and read, and write and write are mutually exclusive. This improves read and write performance. A sketch is shown below.
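
A minimal sketch of ReentrantReadWriteLock, assuming an illustrative cached value: many readers may hold the read lock at once, while a writer needs exclusive access.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Readers share the read lock; a writer takes the exclusive write lock.
public class ReadWriteLockDemo {
    private static final ReentrantReadWriteLock RW_LOCK = new ReentrantReadWriteLock();
    private static int value;

    static int read() {
        RW_LOCK.readLock().lock();       // shared: concurrent readers allowed
        try {
            return value;
        } finally {
            RW_LOCK.readLock().unlock();
        }
    }

    static void write(int newValue) {
        RW_LOCK.writeLock().lock();      // exclusive: blocks readers and writers
        try {
            value = newValue;
        } finally {
            RW_LOCK.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println("read " + read());
    }
}
```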

21. What is FutureTask?

This was actually mentioned earlier. FutureTask represents a task for asynchronous computation. A FutureTask can be given an implementation of Callable, and it lets you wait for and obtain the result of the asynchronous computation, check whether it has completed, and cancel the task. Since FutureTask is also an implementation of the Runnable interface, it can likewise be submitted to a thread pool.

22. How to find the thread with the highest CPU usage in a Linux environment

This is a rather hands-on question, and I find it quite meaningful. You can do the following:

(1) Get the pid of the project, with jps or ps -ef | grep java; this was covered earlier

(2) top -H -p pid; the order of the options cannot be changed

This prints out the CPU usage percentage of each thread in the current project. Note that what is printed here is the LWP, i.e., the thread number of the operating system's native thread. I have not deployed a Java project in a Linux environment on my laptop, so I cannot capture a demo screenshot; if your company deploys projects on Linux, you can give it a try.

Use "top -H -p pid" + "jps pid" you can easily find a bar with a high CPU thread thread stack, thereby positioning occupy high CPU reasons, usually because of improper operation of the code results in an infinite loop.

One last note: the LWP printed by "top -H -p pid" is in decimal, while the native thread id printed by "jstack pid" is in hexadecimal; convert between them and you can locate the current thread stack of the thread that is consuming the CPU.

23. Write a program that causes a deadlock

The first time I saw this question, I thought it was an excellent one. Many people know what a deadlock is: thread A and thread B each wait forever for the lock held by the other, causing the program to loop endlessly. But knowing only that much, without knowing how to write a program that deadlocks, means one does not really understand what deadlock is; understanding the theory alone does not go far, and you will basically be unable to spot deadlock problems when you hit them in practice.

If you truly understand what deadlock is, this question is not hard; it takes just a few steps:

1) The two threads hold two Object objects, lock1 and lock2, which serve as the locks for synchronized code blocks;

2) In thread 1's run() method, the synchronized block first acquires the object lock on lock1, then calls Thread.sleep(xxx) — it does not need long, around 50 milliseconds is enough — and then acquires the object lock on lock2. This is mainly to prevent thread 1 from starting up and grabbing the object locks of both lock1 and lock2 in one go;

3) In thread 2's run() method, the synchronized block first acquires the object lock on lock2 and then tries to acquire the object lock on lock1; by then, of course, the lock on lock1 is already held by thread 1, so thread 2 must wait for thread 1 to release it.

This way, when thread 1 finishes "sleeping", thread 2 has already acquired the lock on lock2; thread 1 now tries to acquire the lock on lock2 and blocks, and a deadlock is formed. I won't write out the code here, as it would take too much space; the article "Java multithreading 7: Deadlock" contains code implementing the steps above. A minimal sketch of those steps is shown below.
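
A minimal sketch of the steps described above; the lock names and the 50 ms sleep are only illustrative.

```java
// Two threads acquire the same two locks in opposite orders, producing a deadlock.
public class DeadlockDemo {
    private static final Object LOCK1 = new Object();
    private static final Object LOCK2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LOCK1) {                       // thread 1: take lock1 first
                sleep(50);                               // give thread 2 time to take lock2
                synchronized (LOCK2) {                   // then wait forever for lock2
                    System.out.println("thread 1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (LOCK2) {                       // thread 2: take lock2 first
                synchronized (LOCK1) {                   // then wait forever for lock1
                    System.out.println("thread 2 got both locks");
                }
            }
        }).start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```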

24. How to wake up a blocked thread

If the thread is blocked because it called wait(), sleep(), or join(), you can interrupt the thread, which wakes it up by throwing an InterruptedException. If the thread is blocked on IO, there is nothing you can do, because IO is implemented by the operating system and Java code has no way to reach into the operating system directly.

25. How do immutable objects help with multithreading?

As mentioned in an earlier question, immutable objects guarantee memory visibility; reading an immutable object needs no additional synchronization, which improves code execution efficiency.

26. What is a thread context switch?

A thread context switch is the process in which control of the CPU is handed over from an already running thread to another thread that is waiting to get the CPU.

27. What happens if the thread pool queue is already full when you submit a task?

Here we need to distinguish two cases:

1) If you use an unbounded queue such as LinkedBlockingQueue, it does not matter: tasks keep being added to the blocking queue to wait for execution, because LinkedBlockingQueue can be regarded as a nearly infinite queue that can hold an unlimited number of tasks.

2) If you use a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; once it is full, the number of threads is increased up to maximumPoolSize; if increasing the number of threads still cannot keep up and the ArrayBlockingQueue stays full, then the RejectedExecutionHandler rejection policy is used to handle the tasks that do not fit, and the default policy is AbortPolicy. A sketch follows.
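
A minimal sketch of a thread pool with a bounded queue and a rejection policy; all pool sizes and the queue capacity here are only illustrative.

```java
import java.util.concurrent.*;

// Once the bounded queue and maximumPoolSize are exhausted, AbortPolicy rejects new tasks.
public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60, TimeUnit.SECONDS,                // keep-alive for the extra threads
                new ArrayBlockingQueue<>(2),         // bounded work queue
                new ThreadPoolExecutor.AbortPolicy() // default policy: throw on overflow
        );

        for (int i = 0; i < 10; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try {
                        Thread.sleep(1000);          // simulate a slow task
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("task " + id + " done");
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected: pool and queue are full");
            }
        }
        pool.shutdown();
    }
}
```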

28. What thread scheduling algorithm does Java use?

Preemptive. After a thread uses up its CPU time, the operating system computes a total priority from the thread's priority, thread starvation, and other data, and assigns the next time slice to a particular thread to execute.

29. What is Thread.sleep(0) for?

This question is related to the previous one, so I put them together. Because Java uses a preemptive thread scheduling algorithm, one thread may frequently get control of the CPU; to let some lower-priority threads also get a chance at the CPU, you can use Thread.sleep(0) to manually trigger one round of the operating system's time-slice allocation, which balances control of the CPU.

30. What is spinning?

Much of the code inside synchronized blocks is very simple and executes very quickly, so blocking the thread that is waiting for the lock may not be worth it, because thread blocking involves switching between user mode and kernel mode. Since the code inside the synchronized block executes so quickly, instead of blocking the waiting thread, we let it busy-loop at the boundary of the synchronized block; this is spinning. If many busy-loop iterations still fail to obtain the lock and the thread then blocks, that may be a better strategy.

31. What is the Java memory model?

The Java memory model defines a specification for how multithreaded Java programs access memory. Explaining the whole Java memory model cannot be done in a few sentences here, so let me briefly summarize some parts of it:

1) The Java memory model divides memory into main memory and working memory. Class state, that is, variables shared between classes, is stored in main memory. Whenever a Java thread uses these variables, it reads them from main memory and keeps a copy in its own working memory; while running its own code, the thread operates on the copies in its own working memory. After the thread's code finishes, it writes the latest values back to main memory.

2) It defines several atomic operations for manipulating variables in main memory and working memory.

3) It defines the rules for using volatile variables.

4) happens-before, the "happens before" principle, defines the rules under which operation A is guaranteed to happen before operation B. For example, within the same thread, code earlier in the control flow happens before code later in the control flow; an unlock action on a lock happens before a subsequent lock action on the same lock; and so on. As long as these rules are satisfied, no additional synchronization is needed; if a piece of code does not satisfy any of the happens-before rules, then that code must be thread-unsafe.

32. What is CAS?

CAS stands for Compare and Swap, i.e., compare-and-replace. Suppose there are three operands: the memory value V, the old expected value A, and the new value B. If and only if the expected value A equals the memory value V is the memory value set to B and true returned; otherwise nothing is done and false is returned. Of course, a variable used with CAS must be volatile, to guarantee that every read gets the latest value from main memory; otherwise the old expected value A would, for a given thread, always be a value A that never changes, and once one CAS operation failed it would never succeed. A sketch using AtomicInteger follows.
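
A minimal sketch of CAS using AtomicInteger.compareAndSet; the values are only illustrative.

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet(expected, update) succeeds only when the current value equals "expected".
public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);     // memory value V = 10

        // expected value A = 10, new value B = 20: succeeds because V == A
        System.out.println(value.compareAndSet(10, 20)); // true
        System.out.println(value.get());                 // 20

        // expected value A = 10 again: fails because V is now 20, nothing changes
        System.out.println(value.compareAndSet(10, 30)); // false
        System.out.println(value.get());                 // still 20
    }
}
```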

33. What are optimistic locking and pessimistic locking?

1) Optimistic locking: as the name suggests, it takes an optimistic attitude toward the thread-safety problems created by concurrent operations. Optimistic locking assumes contention does not always happen, so it does not need to hold a lock; it treats compare-and-replace as a single atomic operation that attempts to modify the variable in memory, and a failure indicates a conflict, which should be handled with corresponding retry logic.

2) Pessimistic locking: again as the name suggests, it takes a pessimistic attitude toward the thread-safety problems created by concurrent operations. Pessimistic locking assumes contention will always happen, so every time it operates on a resource it holds an exclusive lock, just like synchronized: no matter what, it locks the resource before operating on it.

34. What is AQS?

Briefly, AQS stands for AbstractQueuedSynchronizer, which can be translated as abstract queued synchronizer.

If CAS is the foundation of java.util.concurrent, then AQS is the core of the whole concurrency package; ReentrantLock, CountDownLatch, Semaphore, and others all use it. AQS links all Entry nodes together in the form of a doubly linked queue. Take ReentrantLock: all waiting threads are placed in Entry nodes linked into a doubly linked queue; once the previous thread is finished with the ReentrantLock, the Entry at the actual head of the queue starts to run.

AQS defines all the operations on this doubly linked queue, but only opens methods such as tryAcquire and tryRelease to developers; developers can override tryAcquire and tryRelease according to their own implementation to build their own concurrency functionality.

35. Thread-safe singletons

A commonplace question. The first thing to say is what thread safety means for the singleton pattern: an instance of a given class is created only once in a multithreaded environment. There are many ways to write a singleton; I summarize them as:

1) The eager (hungry-style) singleton: thread-safe

2) The lazy singleton: not thread-safe

3) The double-checked locking singleton: thread-safe. A sketch of this variant is shown below.
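
A minimal sketch of the double-checked locking singleton; the class name is only illustrative.

```java
// Double-checked locking: only one instance is ever created, even under multithreading.
public class Singleton {
    // volatile prevents reordering, so other threads never see a half-constructed instance
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, while holding the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```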

36. What is Semaphore for?

Semaphore is a semaphore; its role is to limit the number of threads that can run a block of code concurrently. Semaphore has a constructor that takes an int n, meaning at most n threads may access that block of code at once; beyond n, a thread must wait until some thread finishes the block before the next one can enter. It follows that if you pass n = 1 to the Semaphore constructor, it becomes equivalent to a synchronized block. A sketch follows.
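
A minimal sketch of a Semaphore limiting a code block to 3 concurrent threads; the permit count, thread count, and sleep time are only illustrative.

```java
import java.util.concurrent.Semaphore;

// At most 3 threads may be inside the "limited" block at the same time.
public class SemaphoreDemo {
    private static final Semaphore PERMITS = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    PERMITS.acquire();               // blocks if 3 threads are already inside
                    try {
                        System.out.println(Thread.currentThread().getName() + " entered");
                        Thread.sleep(200);           // simulate work inside the limited block
                    } finally {
                        PERMITS.release();           // let the next waiting thread in
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```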

37. Hashtable's size() method is clearly just one statement, "return count"; why is it synchronized?

This confused me before, and I wonder whether you have thought about it too. If a method has several statements that all operate on the same class variable, then not locking in a multithreaded environment inevitably causes thread-safety problems; that is easy to understand. But the size() method is clearly only one statement, so why lock?

Through work and study over time I gradually came to understand it; there are two main reasons:

1) Only one thread at a time can execute a class's synchronized methods, but multiple threads can access the class's non-synchronized methods simultaneously. So a problem can arise: thread A may be adding data in Hashtable's put method while thread B can still call the size() method normally and read the current number of elements in the Hashtable, and the value it reads may not be up to date. Thread A may have finished adding the data but not yet executed size++, while thread B has already read size, so the size thread B reads is certainly inaccurate. Making the size() method synchronized means thread B can only call size() after thread A's call to put has completed, which guarantees thread safety.

2) The CPU executes machine code, not Java code; this point is crucial and must be remembered. Java code is eventually translated into machine code, and machine code is what really executes and interacts with the hardware. Even if you see only one line of Java code, and even if the bytecode generated from it is only one line, that does not mean the operation is a single step at the lowest level. Suppose "return count" is translated into three assembly statements, each corresponding to its own machine code; it is entirely possible for the thread to be switched out after only the first statement has executed.

38. The Thread class's constructor and static block: which thread calls them?

This is a very tricky and cunning question. Remember: the Thread class's constructor and static block are called by the thread in which the Thread object is created with new, while the code inside the run() method is called by the thread itself.

If that statement confuses you, let me give an example. Suppose Thread1 is created with new inside Thread2, and Thread2 is created with new inside the main function. Then:

1) Thread2's constructor and static block are called by the main thread, and Thread2's run() method is called by Thread2 itself

2) Thread1's constructor and static block are called by Thread2, and Thread1's run() method is called by Thread1 itself

39. Synchronized block or synchronized method: which is the better choice?

The synchronized block, because the code outside the synchronized block executes asynchronously, which improves efficiency more than synchronizing the whole method. Please remember one principle: the smaller the scope of synchronization, the better.

On this point let me add one extra thing: although a smaller synchronization scope is better, the Java virtual machine still has an optimization called lock coarsening, which enlarges the synchronization scope. This is useful. Take StringBuffer, for example: it is a thread-safe class, and its most commonly used append() method is synchronized. When we write code we append strings repeatedly, which means repeated lock -> unlock cycles and hurts performance, because it makes the Java virtual machine repeatedly switch this thread between kernel mode and user mode. So the Java virtual machine coarsens the lock over code that makes multiple consecutive append() calls, extending the lock from the first append to the last and turning them into one large synchronized block; this reduces the number of lock -> unlock cycles and effectively improves code execution efficiency.

40. How should a thread pool be used for high-concurrency business with short task execution times? For low-concurrency business with long task execution times? For high-concurrency business with long task execution times?

This is a question I saw on a concurrent programming website. I put it last in the hope that everyone will see it and think about it, because it is a very good, very practical, very professional question. My personal view on it:

1) For high concurrency with short task execution times, the number of threads in the pool can be set to the number of CPU cores + 1, to reduce thread context switching

2) For low concurrency with long task execution times, look at where the time is spent:

a) If the time is mostly spent on IO operations, i.e., IO-intensive tasks, then since IO does not occupy the CPU, don't let the CPU sit idle: increase the number of threads in the pool so the CPU can handle more business

b) If the time is mostly spent on computation, i.e., CPU-intensive tasks, then there is not much to be done: handle it as in (1), keeping the number of threads in the pool small to reduce thread context switching

c) For high concurrency with long task execution times, the key to solving this kind of task lies not in the thread pool but in the overall architecture design. The first step is to see whether some of the data used by these tasks can be cached; the second is to add servers. As for configuring the thread pool itself, refer to other articles on thread pool settings. Finally, a problem of long execution times may also need further analysis to see whether middleware can be used to split and decouple the tasks.

