2020 Spring Recruitment Java Interview Questions Summary (3)

Foreword

After N days of working from home, today's update comes a little late, so please bear with me. What I'm sharing with you today is the third installment of our interview series!

For those who read the previous two articles: today's content is just as important! Without further ado, let's get straight to the good stuff!!!

I hope you enjoy today's content.

1. What are threads and processes?

A process is an execution of a program and the basic unit by which the system runs programs. Running a program is the process of a process being created, running, and terminating.

In Java, when we start the main function, we start a JVM process, and the thread in which the main function runs is one thread of that process, also known as the main thread.

Threads are similar to processes, but a thread is a smaller execution unit than a process, and a process can spawn multiple threads during its execution. Unlike separate processes, multiple threads of the same process share the process's heap and method area, while each thread has its own program counter, virtual machine stack, and native method stack. Therefore, the system incurs far less overhead when creating a thread or switching between threads than when working with processes, which is also why threads are called lightweight processes.

2. Please briefly describe the relationship between threads and processes, as well as their differences and respective advantages and disadvantages.

Illustration of the relationship between processes and threads

As the figure shows: a process can contain multiple threads, and those threads share the process's heap and method area (the metaspace since JDK 1.8), while each thread has its own program counter, virtual machine stack, and native method stack.

Summary: A thread is a smaller unit of execution carved out of a process. The biggest difference is that processes are largely independent of each other, while threads are not necessarily so, because threads in the same process can easily affect one another. Threads have low execution overhead but make resource management and protection harder; for processes, the opposite holds.

Why is the program counter thread-private?

The program counter serves two purposes:

The bytecode interpreter changes the program counter to read instructions in sequence, thereby implementing the code's control flow: sequential execution, branching, looping, and exception handling.

In a multithreaded environment, the program counter records the execution position of the current thread, so that when the thread is switched back in, it knows where it last left off.

Note that when a native method is executed, the program counter records an undefined value; only while Java code is being executed does the program counter record the address of the next instruction.

== "thread context switching?

The CPU achieves multithreading by allocating a time slice to each thread. A time slice is the amount of time the CPU assigns to a thread; because each slice is very short (typically tens of milliseconds), the CPU creates the illusion that multiple threads run simultaneously by constantly switching between them.

The CPU cycles through tasks via a time-slice allocation algorithm: the current task runs for one time slice and then the CPU switches to the next task. Before switching, however, the state of the current task is saved, so that it can be restored the next time the task is switched back in. The process from saving a task's state to reloading it is a context switch.

In a multithreaded environment, the program counter records the execution position of the current thread. It is precisely because the program counter is thread-private that execution can resume at the correct position after a thread switch.

Why are the virtual machine stack and the native method stack thread-private?

Virtual machine stack: every time a Java method is executed, a stack frame is created to store the local variable table, operand stack, constant pool reference, and other information. From the moment a method is called until its execution completes, a stack frame is pushed onto and popped off the Java virtual machine stack.

Native method stack: very similar in role to the virtual machine stack, except that the virtual machine stack serves the execution of Java methods (i.e., bytecode), while the native method stack serves the native methods used by the virtual machine (methods declared in Java but implemented outside Java, for example in C). In the HotSpot virtual machine, the native method stack and the Java virtual machine stack are merged.

Therefore, to ensure that a thread's local variables are not accessible to other threads, the virtual machine stack and the native method stack are thread-private.

3. What is a thread deadlock? How can deadlocks be avoided?

A deadlock occurs when multiple threads block forever because they compete for resources and end up waiting for each other.

Example: thread A handles business A and thread B handles business B. While processing, thread A needs a resource held by thread B, and at the same time thread B needs a resource held by thread A. Both sides wait for the other to release its resource, resulting in a deadlock.
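The scenario above can be reproduced in a few lines. The sketch below (class and field names such as DeadlockDemo and resourceA are invented for illustration) has two threads take two locks in opposite orders, then uses the standard ThreadMXBean API to confirm that the JVM really sees a deadlock:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockDemo {
    private static final Object resourceA = new Object();
    private static final Object resourceB = new Object();

    // Starts two threads that lock resourceA/resourceB in opposite orders,
    // then returns true if the JVM reports deadlocked threads.
    public static boolean demonstrateDeadlock() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (resourceA) {
                sleepQuietly(100);           // give t2 time to lock resourceB
                synchronized (resourceB) { } // waits forever: t2 holds resourceB
            }
        }, "thread-A");
        Thread t2 = new Thread(() -> {
            synchronized (resourceB) {
                sleepQuietly(100);           // give t1 time to lock resourceA
                synchronized (resourceA) { } // waits forever: t1 holds resourceA
            }
        }, "thread-B");
        t1.setDaemon(true);                  // daemon threads let the JVM exit
        t2.setDaemon(true);
        t1.start();
        t2.start();
        Thread.sleep(500);                   // let both threads reach the inner lock

        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        return mxBean.findDeadlockedThreads() != null;
    }

    private static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlock detected: " + demonstrateDeadlock());
    }
}
```

The two inner synchronized blocks never return; marking the threads as daemons is what allows the demo process to exit anyway.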

How to avoid deadlock:

We only need to break any one of the four necessary conditions for deadlock.

Breaking the mutual exclusion condition

This condition cannot be broken, because the whole point of using a lock is mutual exclusion (critical resources require exclusive access).

Breaking the hold-and-wait condition

Request all required resources at once.

Breaking the no-preemption condition

If a thread holding some resources fails to acquire additional resources, it can proactively release the resources it already holds.

Breaking the circular wait condition

Prevent circular waits by acquiring resources in a fixed order and releasing them in the reverse order.
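The last rule is the one most often applied in practice. A minimal sketch (class and field names are invented for illustration): if every thread acquires the two locks in the same global order, no circular wait can form and both threads always finish:

```java
public class LockOrderingDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    private static int counter = 0;

    // Both threads acquire lockA before lockB, so no circular wait can form.
    private static void doWork() {
        synchronized (lockA) {
            synchronized (lockB) {
                counter++;
            }
        }
    }

    public static int runBothThreads() throws InterruptedException {
        Thread t1 = new Thread(LockOrderingDemo::doWork);
        Thread t2 = new Thread(LockOrderingDemo::doWork);
        t1.start();
        t2.start();
        t1.join();   // both joins return, because neither thread can deadlock
        t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("counter = " + runBothThreads());
    }
}
```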

4. What are the similarities and differences between a thread's sleep() method and wait() method?

The main difference: sleep() does not release the lock, while wait() releases the lock.

Both can pause the execution of a thread.

wait() is typically used for inter-thread interaction/communication, while sleep() is typically used to pause execution.

After wait() is called, the thread does not wake up on its own; another thread must call notify() or notifyAll() on the same object. After sleep() completes, the thread wakes up automatically. Alternatively, wait(long timeout) can be used so that the thread wakes automatically after the timeout.
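The difference is easy to see in code. A minimal sketch (class and field names are invented for illustration): the waiting thread releases the monitor inside wait(), which is exactly what allows the notifying thread to enter the same synchronized block and wake it:

```java
public class WaitNotifyDemo {
    private static final Object monitor = new Object();
    private static boolean ready = false;

    // The waiting thread calls wait() and releases the monitor; the notifying
    // thread can therefore enter the same synchronized block and wake it up.
    public static boolean waitForSignal() throws InterruptedException {
        Thread notifier = new Thread(() -> {
            synchronized (monitor) {
                ready = true;
                monitor.notifyAll();   // wake up any thread waiting on monitor
            }
        });
        synchronized (monitor) {
            notifier.start();
            while (!ready) {           // guard against spurious wakeups
                monitor.wait(1000);    // releases the lock while waiting
            }
        }
        notifier.join();
        return ready;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("signalled: " + waitForSignal());
    }
}
```

Note that the notifier thread is started while the main thread still holds the monitor: it can only enter its synchronized block once wait() has released the lock.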

5. Why do we call the start() method, which then executes the run() method? Why can't we just call run() directly? (classic)

When a Thread is created with new, it is in the NEW state. Calling start() starts the thread, moving it to the READY state, and it begins running once it is allocated a time slice. start() performs the necessary preparation for the thread and then automatically executes the contents of run(); this is true multithreading. Calling run() directly, however, executes run() as an ordinary method in the main thread rather than in a new thread, so it is not multithreading.

Summary: calling start() starts a thread and moves it to the READY state, while calling run() directly is just an ordinary method call executed in the current (main) thread.
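A small sketch makes the difference concrete (class and thread names are invented for illustration): run() called directly executes in the calling thread, while start() executes the body in a new thread:

```java
public class StartVsRunDemo {
    // Returns the names of the threads that executed run() directly vs. via start().
    public static String[] compare() throws InterruptedException {
        String[] names = new String[2];

        Thread direct = new Thread(() -> names[0] = Thread.currentThread().getName());
        direct.run();    // ordinary method call: runs in the CALLING thread

        Thread started = new Thread(() -> names[1] = Thread.currentThread().getName(),
                                    "worker-thread");
        started.start(); // true multithreading: runs in a NEW thread
        started.join();
        return names;
    }

    public static void main(String[] args) throws InterruptedException {
        String[] names = compare();
        System.out.println("run() executed in:   " + names[0]);
        System.out.println("start() executed in: " + names[1]);
    }
}
```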

6. The synchronized keyword

1. Describe your understanding of the synchronized keyword

The synchronized keyword solves the synchronization of resource access among multiple threads: it guarantees that a method or code block it modifies can be executed by only one thread at any given time.

In addition, synchronized was inefficient before JDK 1.6, when it was a purely heavyweight lock. JDK 1.6 introduced many optimizations to the lock implementation, such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locking, and lightweight locks, all of which reduce the overhead of lock operations, so synchronized is now well optimized and quite efficient.

2. Describe how you use the synchronized keyword. Have you used it in a project?

The synchronized keyword is mainly used in three ways:

Modifying an instance method: locks the current object instance; a thread must acquire the instance's lock before entering the synchronized code.

Modifying a static method: locks the current class, which affects all instances of the class, because static members do not belong to any particular instance but to the class itself (static marks the member as a class-level resource: no matter how many objects are created with new, there is only one copy). So if thread A calls a non-static synchronized method on an instance while thread B calls a static synchronized method of the same class, that is allowed and no mutual exclusion occurs, because the static synchronized method takes the lock of the current class, while the non-static synchronized method takes the lock of the current object instance.

Modifying a code block: locks the specified object; a thread must acquire that object's lock before entering the synchronized block.

Summary: synchronized on a static method and synchronized(SomeClass.class) blocks both lock the class's Class object. synchronized on an instance method locks the object instance. Try not to use synchronized(String a), because string constants are cached in the JVM's string constant pool!
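The three usages, and the fact that the class lock and the instance lock are different monitors, can be sketched as follows (class and method names are invented for illustration):

```java
public class SynchronizedUsageDemo {
    private int instanceCount = 0;
    private static int staticCount = 0;

    // 1. Instance method: locks the current object instance (this).
    public synchronized void incrementInstance() {
        instanceCount++;
    }

    // 2. Static method: locks the Class object (SynchronizedUsageDemo.class).
    public static synchronized void incrementStatic() {
        staticCount++;
    }

    // 3. Code block: locks exactly the object given in parentheses.
    public void incrementViaBlock() {
        synchronized (this) {
            instanceCount++;
        }
    }

    public int instanceCount() { return instanceCount; }
    public static int staticCount() { return staticCount; }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedUsageDemo demo = new SynchronizedUsageDemo();
        // The instance lock and the class lock are different monitors, so a
        // thread holding one does not block a thread taking the other.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) demo.incrementInstance(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) incrementStatic(); });
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(demo.instanceCount() + " " + staticCount());
    }
}
```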

Implementing a singleton with double-checked locking (thread-safe)

The first check: since the singleton instance is created only once, subsequent calls to getInstance() simply return the previously created instance, so most calls never need to enter the synchronized block, which greatly improves performance. Without the first check, this would be no different from the plain lazy-initialization approach, where every call competes for the lock.

The second check: without it, suppose thread t1 performs the first check and finds the instance null, and at that moment t2 gets the CPU and also performs the first check, also finding null. Next, t2 acquires the lock and creates the instance. Then t1 regains the CPU; since its earlier check already returned null (and it does not check again), it acquires the lock and creates the instance too, resulting in multiple instances. Hence the second check inside the synchronized block: create the instance only if it is still null.

Note that the declaration private volatile static Singleton uniqueInstance; needs the volatile keyword, otherwise errors can occur. The cause of the problem is that the JVM may reorder instructions as an optimization. When creating the singleton object, before the constructor is called, memory is allocated for the object and its fields are set to default values. At this point the memory address can already be assigned to the uniqueInstance field even though the object is not yet initialized. If another thread then calls getInstance(), it can observe an object in an incorrect state, and the program will misbehave.
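Putting the two checks and the volatile field together, the double-checked-locking singleton described above looks like this (a minimal sketch following the text's uniqueInstance naming):

```java
public class Singleton {
    // volatile prevents instruction reordering during object construction,
    // so other threads can never observe a half-initialized instance.
    private static volatile Singleton uniqueInstance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (uniqueInstance == null) {              // first check: skip locking once initialized
            synchronized (Singleton.class) {
                if (uniqueInstance == null) {      // second check: only one thread creates it
                    uniqueInstance = new Singleton();
                }
            }
        }
        return uniqueInstance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance());
    }
}
```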

3. What optimizations did JDK 1.6 make to synchronized under the hood? Can you explain them in detail?

JDK 1.6 introduced many optimizations to the lock implementation, such as biased locking, lightweight locks, spin locks, adaptive spin locks, lock elimination, and lock coarsening, to reduce the overhead of lock operations.

There are four main lock states, in escalating order: lock-free, biased lock, lightweight lock, and heavyweight lock. A lock escalates through these states as contention intensifies. Note that locks can be upgraded but not downgraded; this strategy improves the efficiency of acquiring and releasing locks.

1. Biased locking

The purpose of introducing biased locking is much like that of lightweight locks: in the absence of multithreaded contention, to reduce the performance cost of traditional heavyweight locks, which use operating system mutexes. The difference is that a lightweight lock uses CAS operations to replace the mutex in the uncontended case, while biased locking eliminates the entire synchronization in the uncontended case.

The "bias" in biased locking means favoring one side: the lock is biased toward the first thread that acquires it. If, during subsequent execution, the lock is never acquired by any other thread, the thread holding the biased lock never needs to synchronize at all! For the principles behind biased locking, see "Understanding the Java Virtual Machine: JVM Advanced Features and Best Practices", 2nd edition, Chapter 13, the section on lock optimization.

However, for heavily contended locks, biasing fails, because in that situation each acquisition of the lock is likely made by a different thread, so biased locking should not be used there; otherwise the effort is wasted. Note that when a biased lock fails, it does not immediately inflate to a heavyweight lock but first upgrades to a lightweight lock.

2. Lightweight locks

If the biased lock fails, the virtual machine does not immediately upgrade to a heavyweight lock; it first tries an optimization called the lightweight lock (added in 1.6). The premise of the lightweight lock is not to replace the heavyweight lock but, in the absence of multithreaded contention, to reduce the performance overhead of traditional heavyweight locks that use operating system mutexes, because a lightweight lock does not need to request a mutex. In addition, both locking and unlocking of a lightweight lock use CAS operations. For the locking and unlocking principles of lightweight locks, see "Understanding the Java Virtual Machine: JVM Advanced Features and Best Practices", 2nd edition, Chapter 13, the section on lock optimization.

The basis on which lightweight locks improve synchronization performance is the empirical observation that "for the vast majority of locks, there is no contention during the entire synchronization cycle." Without contention, a lightweight lock uses CAS operations and avoids the overhead of a mutex. But if there is contention, in addition to the mutex cost, extra CAS operations occur, so under contention a lightweight lock is actually slower than a traditional heavyweight lock! If a lock is heavily contended, the lightweight lock quickly inflates into a heavyweight lock.

3. Spin locks and adaptive spinning

After a lightweight lock fails, to avoid actually suspending the thread at the operating system level, the virtual machine also employs an optimization known as spinning (spin locks).

The biggest performance impact of blocking-based mutex synchronization is that suspending and resuming a thread must be completed in kernel mode (switching from user mode to kernel mode takes time).

Usually a lock is not held for long, so suspending and resuming a thread for such a short wait is not worth the cost. The virtual machine developers therefore reasoned: "Can we have the threads requesting the lock wait a moment, without being suspended, to see whether the thread holding the lock releases it soon?" To make a thread wait like this, we simply have it execute a busy loop (spin); this technique is called spinning.

4. Lock elimination

Lock elimination is easy to understand: when the virtual machine's just-in-time compiler runs, if it detects that shared data cannot possibly be contended, it eliminates the locks on that data. Lock elimination saves the time spent on pointless lock requests.

5. Lock coarsening

In principle, when writing code we always recommend keeping the scope of synchronized blocks as small as possible, synchronizing only over the actual scope of the shared data. That keeps the number of operations that need to synchronize as small as possible, so that if there is lock contention, waiting threads can acquire the lock as soon as possible.

In most cases this principle is sound. But if a series of consecutive operations repeatedly locks and unlocks the same object, it incurs a lot of unnecessary performance overhead; in that case the virtual machine coarsens the lock, extending its scope to cover the whole sequence of operations.

4. Talk about the differences between synchronized and ReentrantLock

1. Both are reentrant locks

Both are reentrant locks. "Reentrant" means a thread can acquire the same lock again while already holding it. For example, after a thread acquires an object's lock and before releasing it, it can acquire that same lock again without blocking; with a non-reentrant lock this would cause a deadlock. Each time the same thread acquires the lock, a lock counter is incremented, and the lock is only released when the counter drops back to zero.

2. synchronized relies on the JVM, while ReentrantLock relies on the JDK API

3. ReentrantLock adds some advanced features: ① interruptible waiting; ② fair locks; ③ selective notification (one lock can be bound to multiple conditions)

ReentrantLock provides a mechanism for a thread waiting on the lock to be interrupted, implemented via lock.lockInterruptibly(). That is, a waiting thread can choose to give up waiting and do something else instead.

ReentrantLock can be configured as a fair or an unfair lock. synchronized can only be an unfair lock. A fair lock grants the lock to the thread that has waited the longest. ReentrantLock is unfair by default; fairness can be specified through the ReentrantLock(boolean fair) constructor.

4. Performance is no longer a selection criterion: since JDK 1.6, synchronized and ReentrantLock perform roughly the same.
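The extra features above can be sketched with the standard java.util.concurrent.locks API (class and variable names are invented for illustration):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    public static boolean demo() throws InterruptedException {
        ReentrantLock fairLock = new ReentrantLock(true);   // fair lock via constructor
        Condition condition = fairLock.newCondition();      // selective notification handle

        fairLock.lock();                                    // reentrant: hold count = 1
        fairLock.lock();                                    // same thread again: count = 2
        boolean reentered = fairLock.getHoldCount() == 2;
        fairLock.unlock();
        fairLock.unlock();                                  // released only when count hits 0

        // Interruptible waiting: a thread blocked in lockInterruptibly()
        // can be woken by interrupt() instead of waiting forever.
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                                        // main thread holds the lock
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();
                lock.unlock();
            } catch (InterruptedException e) {
                // expected: the wait was abandoned when interrupted
            }
        });
        waiter.start();
        Thread.sleep(100);          // let the waiter block on the lock
        waiter.interrupt();         // cancel its wait
        waiter.join(1000);
        lock.unlock();
        return reentered && !waiter.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("features demonstrated: " + demo());
    }
}
```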

7. The volatile keyword

Characteristics of volatile

It guarantees visibility when different threads operate on the variable: when one thread modifies the variable's value, the new value is immediately visible to other threads. (Visibility)

It prohibits instruction reordering. (Ordering)

volatile only guarantees atomicity for a single read or write. Compound operations such as i++ are not guaranteed to be atomic.
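The visibility guarantee is the basis of the common volatile stop-flag pattern. A minimal sketch (class and field names are invented for illustration): without volatile on the flag, the worker loop might never observe the write and could spin forever:

```java
public class VolatileFlagDemo {
    // volatile guarantees the writer's update is immediately visible
    // to the reader thread on its next read of the flag.
    private static volatile boolean running;

    public static boolean startAndStop() throws InterruptedException {
        running = true;
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; re-reads the volatile flag on every iteration
            }
        });
        worker.start();
        Thread.sleep(100);
        running = false;        // visible to the worker at once
        worker.join(1000);      // the worker should exit promptly
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + startAndStop());
    }
}
```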

Talk about the differences between the synchronized keyword and the volatile keyword

volatile is a lightweight mechanism for thread synchronization, so its performance is certainly better than that of synchronized. However, volatile can only be applied to variables, while synchronized can modify methods and code blocks. Since Java SE 1.6 introduced biased locking, lightweight locks, and other optimizations to reduce the cost of acquiring and releasing locks, synchronized's efficiency has improved significantly, and in real development synchronized is still used in more scenarios.

Multithreaded access to a volatile variable never blocks, whereas synchronized can cause blocking.

volatile guarantees the visibility of data but not its atomicity; synchronized guarantees both.

volatile mainly solves the visibility of a single variable across multiple threads, while synchronized solves the synchronization of resource access among multiple threads.

8. ThreadLocal

Normally, any variable we create can be accessed and modified by any thread. What if we want each thread to have its own dedicated copy of a variable? The JDK provides the ThreadLocal class to solve exactly this problem. ThreadLocal lets each thread bind its own value; a vivid metaphor is a box for storing data, with each thread getting its own private box.

Under the hood, ThreadLocal uses a ThreadLocalMap, which can be understood as a customized HashMap implemented for the ThreadLocal class. The data ultimately lives in the ThreadLocalMap, not in the ThreadLocal itself; the ThreadLocal can be understood as a wrapper around the ThreadLocalMap.

Each Thread holds its own ThreadLocalMap, and the ThreadLocal object serves as the key of the key-value pairs stored in that map.
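A minimal sketch of the per-thread binding (class and variable names are invented for illustration): a value set in one thread is invisible to another thread reading the same ThreadLocal:

```java
public class ThreadLocalDemo {
    // Each thread that touches this ThreadLocal gets its own independent copy,
    // stored in that thread's own ThreadLocalMap with this object as the key.
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static int incrementInNewThread() throws InterruptedException {
        counter.set(counter.get() + 100);          // modifies the current thread's copy only
        final int[] seenByOther = new int[1];
        Thread other = new Thread(() -> {
            seenByOther[0] = counter.get();        // fresh copy: the initial value 0
            counter.remove();                      // good practice: avoid leaks
        });
        other.start();
        other.join();
        return seenByOther[0];                     // unaffected by the main thread's set()
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("other thread saw: " + incrementInNewThread());
        System.out.println("this thread has:  " + counter.get());
        counter.remove();
    }
}
```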

ThreadLocal memory leaks

The keys in a ThreadLocalMap are weak references to ThreadLocal objects, while the values are strong references. So if a ThreadLocal has no external strong references, its key is cleared during garbage collection while its value is not. As a result, entries with null keys accumulate in the ThreadLocalMap. If we take no measures, those values can never be reclaimed by the GC, which can cause a memory leak. The ThreadLocalMap implementation anticipates this: calls to set(), get(), and remove() clean out records whose keys are null. Even so, it is best to call remove() manually after you are done using a ThreadLocal.

9. Thread Pool

Why use a thread pool?

Pooling technology is commonplace by now: thread pools, database connection pools, and HTTP connection pools are all applications of the same idea. The main idea of pooling is to reduce the cost of acquiring a resource each time it is needed and to improve resource utilization.

Borrowing from "The Art of Java Concurrency Programming", the benefits of using a thread pool are:

Reduced resource consumption: reusing already-created threads lowers the cost of thread creation and destruction.

Improved response speed: when a task arrives, it can execute immediately without waiting for a thread to be created.

Improved thread manageability: threads are a scarce resource; creating them without limit not only consumes system resources but also reduces system stability. A thread pool enables unified allocation, tuning, and monitoring.

Differences between the Runnable interface and the Callable interface

The Runnable interface does not return a result and cannot throw checked exceptions, while the Callable interface can do both. So if a task does not need to return a result or throw an exception, use the Runnable interface; the code will be more concise.

The Executors utility class can convert between Runnable and Callable objects.
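A minimal sketch of the distinction and of the conversion (class and variable names are invented for illustration); Executors.callable() wraps a Runnable as a Callable with a fixed result:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

public class RunnableVsCallableDemo {
    public static Object convertAndCall() throws Exception {
        Runnable runnable = () -> System.out.println("no return value");
        Callable<String> callable = () -> "has a return value";   // may also throw

        // Executors can wrap a Runnable as a Callable with a fixed result.
        Callable<String> adapted = Executors.callable(runnable, "done");
        System.out.println(callable.call());
        return adapted.call();      // runs the Runnable, then returns "done"
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convertAndCall());
    }
}
```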

What is the difference between the execute() and submit() methods?

execute(): used to submit tasks that do not need a return value; there is no way to tell whether the thread pool executed the task successfully.

submit(): used to submit tasks that need a return value. The thread pool returns a Future object, through which you can determine whether the task executed successfully.
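A minimal sketch of both submission styles (class and variable names are invented for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ExecuteVsSubmitDemo {
    public static int compare() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, no way to observe the result.
        pool.execute(() -> System.out.println("ran via execute()"));

        // submit(): returns a Future through which success and result are visible.
        Future<Integer> future = pool.submit(() -> 21 * 2);
        int result = future.get();          // blocks until the task finishes

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("submit() result: " + compare());
    }
}
```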

How do you create a thread pool?

The "Alibaba Java Development Manual" mandates that thread pools must not be created via Executors, but via ThreadPoolExecutor instead. This approach forces developers to be explicit about the thread pool's operating rules and avoids the risk of resource exhaustion.

The drawbacks of the thread pool objects returned by Executors are as follows:

FixedThreadPool and SingleThreadExecutor: the allowed request queue length is Integer.MAX_VALUE, so a large number of requests may pile up, causing an OOM.

CachedThreadPool and ScheduledThreadPool: the allowed number of threads is Integer.MAX_VALUE, so a large number of threads may be created, causing an OOM.

Approach 1: via the ThreadPoolExecutor constructor.

Approach 2: via the Executors utility class of the Executor framework.

With Executors we can create three typical kinds of ThreadPoolExecutor:

FixedThreadPool: returns a thread pool with a fixed number of threads, which always stays the same. When a new task is submitted, an idle thread in the pool executes it immediately; if none is idle, the task is stored in a task queue until a thread becomes free to process it.

SingleThreadExecutor: returns a thread pool with only one thread. If more tasks are submitted than the thread can handle, they are saved in a task queue and executed in first-in, first-out order when the thread becomes idle.

CachedThreadPool: returns a thread pool whose size adjusts to actual demand. The number of threads is not fixed; if idle reusable threads exist, they are used first. If all threads are busy when a new task arrives, a new thread is created to handle it; after finishing its current task, a thread returns to the pool for reuse.
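A minimal sketch of the constructor-based approach recommended above (the parameter values and class name are invented for illustration; the bounded queue and explicit rejection policy are what the manual's rule is after):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualPoolDemo {
    // Creating the pool through the ThreadPoolExecutor constructor makes every
    // operating rule explicit, unlike the unbounded Executors factory methods.
    public static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // idle keep-alive for extra threads
                new ArrayBlockingQueue<>(100),        // BOUNDED queue: no silent buildup, no OOM
                new ThreadPoolExecutor.AbortPolicy()  // reject loudly when saturated
        );
    }

    public static int runTasks() throws InterruptedException {
        ThreadPoolExecutor pool = newBoundedPool();
        final int[] sum = {0};
        for (int i = 1; i <= 10; i++) {
            final int n = i;
            pool.execute(() -> {
                synchronized (sum) { sum[0] += n; }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum of 1..10 = " + runTasks());
    }
}
```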


Origin blog.csdn.net/WANXT1024/article/details/104384503