Java server-side development: autumn recruitment interview preparation (Part 5)

Looking back after years of hard work, it has been mostly long stretches of setbacks and struggle. For most people, smooth sailing is the exception; frustration, hardship, anxiety, and confusion are the main theme. We take a stage we did not choose and play a script we did not choose. Keep going!

Table of contents

1. What is the difference between ArrayList and LinkedList, and what is the application scenario?

2. Is LinkedList a singly or doubly linked list? Is LinkedList equally efficient at finding the second element and the second-to-last element? Why?

3. What are the thread startup methods and what is the difference?

4. How to achieve thread safety besides locking?

5. Is volatile locked? What is spin? What is CPU idling?

6. What is the difference between a process and a thread? difference in communication?

7. What are the basic data types of Java? What are the constant pools? Where?

8. ThreadLocal: why can it cause a memory leak? How to avoid memory leaks?

9. What is the difference between HashMap and Hashtable?

10. What is the expansion mechanism of HashMap?

11. ArrayList expansion mechanism?

12. How does spring start a transaction?

13. Explain the BIO, NIO, and AIO models? The three major components of NIO?

14. Tell me about the difference between a clustered index and a non-clustered index?

15. How to start a thread? What is the difference between calling the start and run methods?

16. What is the difference between InnoDB and MyISAM?

17. Database isolation level? MySQL and Oracle default isolation level?

18. AOP explain?

19. How do CAS, synchronized, ReentrantLock, and Lock work? What is synchronized lock upgrading?

20. Algorithm question: Preorder, inorder and postorder traversal of binary tree


1. What is the difference between ArrayList and LinkedList, and what is the application scenario?

Usually, the differences between ArrayList and LinkedList are as follows:
1. Data structure: ArrayList is backed by a dynamic array, while LinkedList is backed by a linked list.

2. Random access: for random-access get and set, ArrayList beats LinkedList, because LinkedList has to walk the list node by node.

3. Insertion and deletion: for add and remove, the common claim is that LinkedList is faster than ArrayList because ArrayList has to shift elements. In practice it is not that clear-cut: neither is universally faster. With large data sets, starting at roughly 1/10 of the capacity, LinkedList begins to fall behind ArrayList; for insertions around the middle and in the second half it is clearly slower, and the gap widens as the data grows, because locating the insertion point requires traversal.

Application scenario:

(1) If the application has more random access to the data, the ArrayList object is better than the LinkedList object;

  (2) If the application has more insertion or deletion operations and less random access, the LinkedList object is better than the ArrayList object;

(3) However, the insertion and deletion operations of ArrayList are not necessarily slower than LinkedList.
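The trade-offs above can be seen at the API level: both classes implement List and are interchangeable, only the cost profiles differ. A minimal sketch (class and method names here are mine, for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListComparison {

    // Random access: O(1) for ArrayList, O(n/2) on average for LinkedList.
    public static int middleElement(List<Integer> list) {
        return list.get(list.size() / 2);
    }

    // Head insertion: O(n) for ArrayList (shifts every element),
    // O(1) for LinkedList (relinks two nodes).
    public static void prepend(List<Integer> list, int value) {
        list.add(0, value);
    }

    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>(List.of(1, 2, 3));
        List<Integer> linkedList = new LinkedList<>(List.of(1, 2, 3));
        prepend(arrayList, 0);   // both lists are now [0, 1, 2, 3]
        prepend(linkedList, 0);
        System.out.println(middleElement(arrayList));
        System.out.println(middleElement(linkedList));
    }
}
```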

2. Is LinkedList a singly or doubly linked list? Is LinkedList equally efficient at finding the second element and the second-to-last element? Why?

LinkedList is a doubly linked list. Its strength is that insertion and deletion are very cheap, but because there is no index, index-based access is awkward: an element can only be reached by traversal. On every indexed access, LinkedList first checks whether the index falls in the front half or the back half of the list, so at most half of the list is traversed, never all of it.
Each node of the doubly linked list has a previous and a next pointer, and the list itself keeps first and last references to the head and the tail. Adding or deleting a node only requires rewiring one previous and one next, which is why LinkedList is so convenient for insertions and deletions.
So yes: finding the second element and the second-to-last element costs the same, because a doubly linked list can be traversed from either end.

3. What are the thread startup methods and what is the difference?

1. Extend the Thread class, override its run method, create an instance of the subclass, and call start to launch the thread.
2. Implement the Runnable interface, override run, create a Thread object, pass the Runnable implementation to it, and call start.
3. Create a Callable implementation, override call (the counterpart of run), wrap it in a FutureTask (which itself acts as a Runnable), pass the FutureTask to a Thread object, and call start. This variant lets you retrieve the thread's return value after it finishes.
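The three startup routes above, side by side, in a minimal sketch (the class name ThreadStartDemo and the 21 + 21 payload are mine):

```java
import java.util.concurrent.FutureTask;

public class ThreadStartDemo {

    // Way 1: subclass Thread and override run().
    static class MyThread extends Thread {
        @Override public void run() { System.out.println("from a Thread subclass"); }
    }

    // Way 3: a Callable wrapped in a FutureTask gives the thread a return value.
    public static int runCallable() {
        FutureTask<Integer> task = new FutureTask<>(() -> 21 + 21);
        new Thread(task).start();
        try {
            return task.get();   // blocks until the worker thread finishes
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        new MyThread().start();

        // Way 2: implement Runnable (here as a lambda) and hand it to a Thread.
        new Thread(() -> System.out.println("from a Runnable")).start();

        System.out.println("callable returned " + runCallable());
    }
}
```

Only the FutureTask route can hand a result back to the caller; the other two must communicate through shared state.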
 

4. How to achieve thread safety besides locking?

Using thread-safe classes is, under the hood, still a form of locking.

To balance performance and safety, the idea of lock-free concurrency was introduced. Strictly speaking the name is misleading: guaranteeing atomicity ultimately requires some hardware- or software-level synchronization.

The first approach is a spin lock: instead of blocking on contention, the thread retries (spins) a bounded number of times trying to acquire the lock.

The second is optimistic locking: attach a version number to each piece of data and bump it on every modification, so an update only succeeds if the version it originally read is still current.

The third is to minimize the use of shared objects in the business, to achieve isolation and reduce concurrency.

The fourth is to use ThreadLocal to give each thread its own copy of the shared variable.
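The second approach above (a version number per datum) maps directly onto java.util.concurrent.atomic.AtomicStampedReference, whose "stamp" plays the role of the version. A minimal sketch (the class VersionedValue and its method names are mine):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Optimistic update via a version stamp: a writer must present both the value
// and the version it read, so any concurrent modification (even A -> B -> A)
// bumps the stamp and causes the stale update to be rejected.
public class VersionedValue {
    // Small Integers compare by cached identity here; a reference type of your
    // own avoids the autoboxing caveat of AtomicStampedReference.
    private final AtomicStampedReference<Integer> ref =
            new AtomicStampedReference<>(100, 0);

    // Returns true only if both the expected value and version still match.
    public boolean tryUpdate(int expectedValue, int expectedVersion, int newValue) {
        return ref.compareAndSet(expectedValue, newValue,
                                 expectedVersion, expectedVersion + 1);
    }

    public int value()   { return ref.getReference(); }
    public int version() { return ref.getStamp(); }
}
```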

5. Is volatile locked? What is spin? What is CPU idling?

volatile does not lock. It is lighter than synchronized and never blocks a thread. volatile guarantees visibility and ordering, but not atomicity under multithreading; at most it guarantees the atomicity of a single read or write. Ordering is enforced by inserting memory barriers around reads and writes of the volatile variable, which forbids instruction reordering. Visibility means the Java memory model guarantees that all threads see a consistent, up-to-date value of the volatile variable.

Spin: looping in place until the goal is achieved. The CAS algorithm is a spin mechanism that never blocks the thread: if the swap fails, the thread keeps spinning and retrying.

CPU idling: if CAS keeps failing, the thread keeps retrying without yielding the CPU, which can waste a great deal of CPU time (the idling problem, a form of lock starvation).
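Spinning can be made concrete with the classic CAS increment loop over AtomicInteger (the class name SpinCounter is mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SpinCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Classic CAS loop: read, compute, try to swap. A failed compareAndSet
    // means another thread won the race, so we loop ("spin") and try again.
    // The thread never blocks, but under heavy contention this loop is the
    // CPU idling described above.
    public int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public int get() { return value.get(); }
}
```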

6. What is the difference between a process and a thread? difference in communication?

Summary of the difference between process and thread:

Essential difference: Process is the basic unit of operating system resource allocation, while thread is the basic unit of processor task scheduling and execution.

Containment relationship: a process has at least one thread, and a thread belongs to a process, which is why a thread is also called a lightweight process.

Resource overhead: each process has its own address space, so switching between processes is expensive; threads in the same process share the process's address space, each owning only its own stack and program counter, so switching between threads is cheap.

Impact relationship: after a process crashes, other processes are unaffected (in protected mode), but a crashing thread can take down its whole process, so multi-process is more robust than multi-thread.
 

Inter-process communication:

①Pipeline

A pipe carries data in one direction only (half-duplex); two-way communication requires creating two pipes.

②Message queue:

Basic principle: Process A wants to send a message to process B. Process A can return the data after putting the data in the corresponding message queue, and process B can read the data when it needs it.

③ Shared memory:

Shared memory solves the problem of copying messages between user mode and kernel mode during the process of reading and writing message queues.

It is to take out a virtual address space and map it to the same physical memory. This shared memory is created by one process, but can be accessed by multiple processes. In this way, the things written by this process can be seen by another process immediately without copying, which improves the speed of inter-process communication.

④ Semaphore:

  • To prevent data corruption when multiple processes compete for a shared resource, a protection mechanism is needed so that the resource is accessed by only one process at a time; semaphores provide that mechanism.
  • A semaphore is essentially an integer counter, used for mutual exclusion and synchronization between processes, not for carrying data between them.

⑤Socket:

Communicating with a process on a different host across the network requires Socket communication; sockets can also connect processes on the same host, over TCP or UDP.

Inter-thread communication:

The purpose of communication between threads is mainly for thread synchronization. So threads do not have a communication mechanism for data exchange like in process communication.

Different threads of the same process share the same memory area, so threads can share information conveniently and quickly. Just copy the data into a shared (global or heap) variable. But it is necessary to avoid multiple threads trying to modify the same information at the same time.

1. Mutex locks
Lock before accessing the shared resource and release the mutex when done. While it is locked, any other thread that wants the resource blocks until the current holder releases it. Take care to avoid deadlocks.

2. Read-write lock
Only one thread can occupy the read-write lock in write mode at a time, but multiple threads can simultaneously occupy the read-write lock in read mode.

When the read-write lock is in the write-locked state, all threads attempting to lock the lock will be blocked until the lock is unlocked.
When the read-write lock is read-locked, further read-mode lock attempts succeed, but any thread that wants the write lock blocks until all readers have released their read locks.
3. Condition variables
The mutex is used for locking and the condition variable for waiting: a condition variable is always used together with a mutex, and it lets a thread wait for a specific condition to occur without busy competition.

The condition variable itself is protected by a mutex, and the thread must first add the mutex before changing the condition variable. (When a shared data reaches a certain value, wake up the thread waiting for the shared data)

4. Semaphore
Using thread semaphore can efficiently complete thread-based resource counting. A semaphore is actually a non-negative integer counter used to control public resources.

When the public resource increases, the semaphore increases; when the public resource decreases, the semaphore decreases; only when the value of the semaphore is greater than 0, can the public resource represented by the semaphore be accessed.
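Several of these primitives have direct counterparts in java.util.concurrent. As one minimal sketch (the class name GuardedValue is mine), a read-write lock protecting a single value:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hold the read lock at the same time; the write lock is exclusive.
public class GuardedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rw.readLock().lock();
        try {
            return value;
        } finally {
            rw.readLock().unlock();   // always release in finally
        }
    }

    public void write(int v) {
        rw.writeLock().lock();
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

java.util.concurrent also provides Semaphore and Condition for the semaphore and condition-variable patterns described above.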
 

7. What are the basic data types of Java? What are the constant pools? Where?

There are 8 basic data types in Java, namely:

6 types of numbers: byte, short, int, long, float, double

1 character type: char

1 boolean type: boolean

String Constant Pool

class constant pool (Class Constant Pool)

Runtime Constant Pool

Before Java6, the constant pool was stored in the method area (permanent generation).

In Java7, the constant pool is stored in the heap.

After Java8, the entire permanent generation area was canceled and replaced by metaspace. The runtime constant pool and the static constant pool are stored in the metaspace, while the string constant pool is still stored in the heap.
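The string constant pool in particular can be probed directly with reference comparisons (the class name PoolDemo is mine):

```java
// String literals are pooled; `new String` always allocates a fresh heap
// object; intern() returns the pooled instance.
public class PoolDemo {
    public static void main(String[] args) {
        String a = "hello";               // placed in the string constant pool
        String b = "hello";               // reuses the pooled instance
        String c = new String("hello");   // fresh object on the heap

        System.out.println(a == b);            // true  (same pooled object)
        System.out.println(a == c);            // false (distinct objects)
        System.out.println(a == c.intern());   // true  (intern returns the pooled one)
    }
}
```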
 

8. ThreadLocal: why can it cause a memory leak? How to avoid memory leaks?

When multiple threads access the same shared variable, if no synchronization control is performed, the problem of "data inconsistency" will often occur. Usually, the synchronized keyword is used to lock to solve it. ThreadLocal changes the way of thinking.

ThreadLocal does not store values itself; it relies on the ThreadLocalMap held by the Thread object. When set(T value) is called, the ThreadLocal instance is used as the key and the value is stored in the current thread's ThreadLocalMap. Every thread therefore reads and writes its own private copy; the data is isolated between threads and they do not affect each other, so there is no thread-safety problem.

Entry uses the ThreadLocal as its key and the stored value as its value, and it extends WeakReference: the first line of its constructor, super(k), makes the ThreadLocal key a weak reference. Consequently, once no strong external reference points at the ThreadLocal, it is reclaimed by GC and the Entry's key becomes null. The value then becomes unreachable through the map and ought to be collected too, but the Entry still holds a strong reference to it, so it cannot be. That is the memory leak: a value that can never be accessed again, yet can never be reclaimed.

How to avoid memory leaks?
When using ThreadLocal, it is generally recommended to declare it static final, so ThreadLocal instances are not created over and over.
Avoid storing large objects in it; if you must, call remove() promptly once the access is finished.
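The two recommendations above, sketched as code (the class RequestContext and the "handled for" payload are mine, for illustration):

```java
// Recommended ThreadLocal usage: a static final holder, and remove() in
// finally so the Entry's value does not outlive the operation. This matters
// most in thread pools, where threads (and their ThreadLocalMaps) live long.
public class RequestContext {
    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    public static String handle(String userId) {
        USER.set(userId);
        try {
            return "handled for " + USER.get();  // per-thread copy, no locking
        } finally {
            USER.remove();  // break the Entry -> value strong reference
        }
    }
}
```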

9. What is the difference between HashMap and Hashtable?

1. HashMap is the lightweight, non-thread-safe counterpart of Hashtable. Both implement the Map interface. HashMap additionally allows null keys, and in single-threaded use it is more efficient than Hashtable.

2. HashMap allows null as an entry key or value; Hashtable does not. HashMap also dropped Hashtable's contains method in favour of containsValue and containsKey, because contains was easily misread.

3. Hashtable inherits from the legacy Dictionary class; HashMap is an implementation of the Map interface introduced in Java 1.2.

4. The biggest difference: Hashtable's methods are synchronized, HashMap's are not. Multiple threads can use a Hashtable without external synchronization, whereas HashMap requires the caller to synchronize access themselves.
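The null-key difference is easy to verify directly (the class name NullKeyDemo is mine):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    // Returns true if the map accepted a null key.
    public static boolean acceptsNullKey(Map<String, String> map) {
        try {
            map.put(null, "value");
            return true;
        } catch (NullPointerException e) {
            return false;   // Hashtable rejects null keys with an NPE
        }
    }

    public static void main(String[] args) {
        System.out.println(acceptsNullKey(new HashMap<>()));   // true
        System.out.println(acceptsNullKey(new Hashtable<>())); // false
    }
}
```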
 

10. What is the expansion mechanism of HashMap?

Under the hood, HashMap is an array of buckets, each bucket holding a linked list (or, since JDK 8, possibly a red-black tree). The array size can be set in the constructor and defaults to 16. Before JDK 8, new elements were inserted at the head of a bucket's list; since JDK 8 they are appended at the tail. As a list grows, lookups in it slow down, so once certain conditions are met the list is converted to a red-black tree. As elements accumulate, the backing array must also expand. If no load factor is given in the constructor, it defaults to 0.75. Expansion works as follows:

1: After an insertion, if the total number of elements exceeds array length x 0.75 (the default load factor, configurable), the array doubles in length. (For example, a fresh HashMap has an array of length 16 and a threshold of 16 x 0.75 = 12; once the element count exceeds 12, the array grows to 32 and the threshold becomes 24.)

2: If, after an insertion, some bucket's list exceeds 8 elements while no red-black tree exists yet and the array is still shorter than 64, the array doubles instead of treeifying. (For example: suppose all elements of a fresh HashMap land in one bucket. When that list holds 8 elements and another is added, and the array has no tree yet, the array doubles to 32. If the list stays intact and yet another element is added, the array doubles again to 64. With the list still intact at 10 elements, the maximum a HashMap list reaches, adding one more element now satisfies both treeification conditions (1: the array length has reached 64; 2: the list length exceeds 8), and the list is converted to a red-black tree.)
 

11. ArrayList expansion mechanism?

The bottom layer of ArrayList is a dynamic array. ArrayList first inspects the initialCapacity constructor argument:

  • if the argument is 0 (or the no-argument constructor is used), the backing array starts out as an empty array;
  • a positive argument is used as the initial capacity directly; the empty array from the no-argument constructor is only expanded to the default capacity of 10 when the first element is added.

Expansion time

Expansion happens when adding an element would exceed the current capacity (for example, the capacity is 10 and the 11th element arrives); the new capacity is 1.5 times the old one.

Expansion method

When expanding, a new, larger array is created and the elements of the original array are copied into it; the internal reference is then pointed at the new array, and the old array becomes garbage for the GC to reclaim.

12. How does spring start a transaction?

Annotated declarative transactions

To use Spring transaction management through annotations, you first need to enable this function. There are two ways.

  1. Configured in the Spring XML configuration file  <tx:annotation-driven/>.
  2. Add annotations to Spring configuration classes  @EnableTransactionManagement .

Once annotations are enabled, a TransactionManager must be configured as a Spring bean; the commonly used implementation is DataSourceTransactionManager. If spring-boot-starter-jdbc is on the classpath, no explicit TransactionManager is needed, only a DataSource.

After enabling the annotation support, you need to use the @Transactional annotation on the Spring Bean class or method.
 

13. Explain the BIO, NIO, and AIO models? The three major components of NIO?

Java supports three I/O models for network programming: BIO, NIO, and AIO.

BIO:

Synchronous and blocking (the traditional model): the server dedicates one thread per connection, i.e. every client connection request spawns a server thread. If a connection then does nothing, its thread is pure overhead. A thread-pool mechanism can soften this (many clients multiplexed onto a bounded pool of threads).

Outline of the BIO programming flow:

  1. The server starts a ServerSocket bound to a port and calls accept to wait for client Socket connections.
  2. The client opens a Socket to communicate with the server; by default the server creates one thread per client to talk to it.

NIO

Synchronous and non-blocking: the server handles multiple requests (connections) with a single thread. Client connections are registered on a multiplexer (selector), which polls them and dispatches only those connections that actually have I/O ready.

  • NIO has three core parts: Channel (channel), Buffer (buffer), Selector (selector)

Java NIO's non-blocking mode lets a thread request a read from a channel and receive only whatever data is currently available; if nothing is available, it gets nothing rather than blocking. Until data becomes readable, the thread is free to do other work. Writes are non-blocking in the same way: a thread asks the channel to write some data but does not have to wait for the write to complete, and can do other things in the meantime.
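Of the three components, Buffer is the easiest to show in isolation; channels fill and drain buffers on a thread's behalf. A minimal sketch of the write-flip-read cycle (the class name BufferDemo is mine):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BufferDemo {
    public static String roundTrip(String message) {
        ByteBuffer buffer = ByteBuffer.allocate(64);
        buffer.put(message.getBytes(StandardCharsets.UTF_8)); // write mode
        buffer.flip();                // switch to read mode: limit = position, position = 0
        byte[] out = new byte[buffer.remaining()];
        buffer.get(out);              // drain the readable region
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("non-blocking"));
    }
}
```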
 

AIO

Asynchronous and non-blocking: the server uses one thread per effective request. The operating system completes the client's I/O first and only then notifies the server application to start a thread for processing. AIO generally suits applications with many long-lived connections.

14. Tell me about the difference between a clustered index and a non-clustered index?

In InnoDB, the MySQL default engine, indexes can be roughly divided into two categories: clustered indexes and non-clustered indexes.

A table can have only one clustered index, which generally means the primary key index (if one exists); it is also called a clustering index. The leaf nodes of a clustered index are the data nodes themselves: index and row data live in the same structure, so finding the index entry means finding the data.

In a non-clustered index, the leaf nodes are still index nodes holding a pointer to the corresponding data. Index storage and data storage are separate: the index is searched to find the location, then the data is fetched from that location, a second step known as a table lookup (back-to-table query). A table can have multiple non-clustered indexes, which are also called secondary indexes.

With a clustered index, table data is stored in index order: the order of the index entries matches the physical order of the records in the table.

With a non-clustered index, the physical storage order of the table data is unrelated to the order of the index.

15. How to start a thread? What is the difference between calling the start and run methods?

1) Inherit the Thread class and override the run() method

2) Implement the Runnable interface and rewrite the run() method

3) Implement the Callable interface and rewrite the call() method

Starting a thread means calling start(), which makes the thread ready so the scheduler can run it later. A thread must be associated with some specific code to execute, and the run() method is that code.
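The start-versus-run difference is observable: start() executes the body on a new call stack, while calling run() directly is an ordinary method call on the current thread. A minimal sketch (the class StartVsRun and the thread name "worker" are mine):

```java
public class StartVsRun {
    // Returns the name of the thread that actually executed the body.
    public static String executingThreadOf(boolean useStart) {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName(), "worker");
        if (useStart) {
            t.start();   // new call stack: the body runs on "worker"
            try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        } else {
            t.run();     // plain method call: the body runs on the caller's thread
        }
        return name[0];
    }

    public static void main(String[] args) {
        System.out.println(executingThreadOf(true));   // worker
        System.out.println(executingThreadOf(false));  // name of the calling thread
    }
}
```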

16. What is the difference between InnoDB and MyISAM?

1. InnoDB supports transactions, MyISAM does not. In InnoDB every SQL statement is by default wrapped in its own auto-committed transaction, which costs performance, so related statements are best grouped between begin and commit into one transaction.

2. InnoDB supports foreign keys, MyISAM does not, and converting an InnoDB table that has foreign keys to MyISAM will fail. (Foreign keys are rarely used nowadays because they couple tables too tightly: deleting from one table can fail because of a foreign-key reference. Joins such as table a join table b on a.id = b.id usually stand in for the role of foreign keys.)

3. InnoDB uses a clustered index with a B+Tree structure, and the data file is bound together with the (primary-key) index. MyISAM uses non-clustered indexes: also B+Trees, but the index and data files are separate, and the index leaves hold pointers into the data file.

4. InnoDB must have a primary key; MyISAM can do without one. If we do not explicitly create a primary key index, InnoDB silently generates a hidden 6-byte row ID to serve as the primary key index.

5. InnoDB supports both table-level and row-level locks, defaulting to row-level; MyISAM supports only table-level locks. InnoDB's row locks are implemented on indexes, not on physical rows, so if a query misses the index, the row lock cannot be used and it degrades into a table lock.
 

17. Database isolation level? MySQL and Oracle default isolation level?

MySQL supports four transaction isolation levels, the default transaction isolation level is repeatable read, and the default isolation level of Oracle database is read committed.

The four isolation levels are read uncommitted, read committed, repeatable read, and serializable; read uncommitted is the lowest level and serializable the highest.

1. Read uncommitted: a transaction can read data that another transaction has written but not yet committed.
2. Read committed: a transaction only reads data that has already been committed.
3. Repeatable read: within one transaction, repeated reads of the same data return the same snapshot taken at the start, even if other transactions commit changes in the meantime.
4. Serializable: transactions effectively queue up and execute without concurrency; every read sees the latest committed data.

When multiple transactions run concurrently and access the same data in the database, the absence of a proper isolation mechanism leads to various concurrency problems.
1-Dirty read: For two transactions T1 and T2, T1 reads the fields that have been updated by T2 but not yet committed. After that, if T2 is rolled back, the data read by T1 is temporary and invalid.
2-Non-repeatable read: For two transactions T1 and T2, T1 reads the field, but T2 updates the field, T1 reads the field again, and the value is different.
3-Phantom reading: For two transactions T1 and T2, T1 reads some fields from the table, T2 inserts some new rows in the table, T1 reads the table again and finds a few more rows.

read uncommitted: dirty reads, non-repeatable reads, and phantom reads can all occur.
read committed: prevents dirty reads; non-repeatable reads and phantom reads can still occur.
repeatable read: prevents dirty reads and non-repeatable reads; phantom reads can still occur in the standard model (InnoDB largely prevents them via MVCC and gap locks).
serializable: prevents dirty reads, non-repeatable reads, and phantom reads.
 

18. AOP explain?

AOP (Aspect Oriented Programming) is a programming paradigm, in the realm of software engineering, that guides how program structure is organized. It complements OOP by developing along a horizontal axis on top of it: cross-cutting code that repeats across many methods is extracted into one place, which simplifies development, and at runtime the extracted common functionality is woven back in so the complete code runs normally. That development model is AOP.

Joinpoint (join point): the ordinary methods we write are the join points in AOP.

Pointcut: the selection of methods out of which common functionality has been extracted, i.e. where advice will apply.

Advice (notification): the extracted common functionality itself, ultimately taking the form of a method.

Aspect: the binding between the advice (common functionality) and its pointcuts.

Target (target object): an object of a class whose methods had functionality extracted; on its own it can no longer complete the full job.

Weaving: the dynamic process of filling the extracted functionality back in at runtime.

Proxy: since the target object cannot finish the work by itself, the missing functionality is supplied through a generated proxy object.

Introduction: adding member variables or member methods to an existing class out of thin air.
 

19. How do CAS, synchronized, ReentrantLock, and Lock work? What is synchronized lock upgrading?

CAS (compare and swap) is one way to implement optimistic locking. It involves three operands:

the memory value V to be read and written, the expected value A to compare against, and the new value B to write.
The thread first reads the current value; the write of B happens only if V equals A. If they differ, another thread has modified the value in the meantime, and the thread spins and retries the update.
Problems with CAS: the ABA problem, high CPU overhead under contention, and it can only make operations on a single shared variable atomic.

CAS suits read-heavy, low-contention workloads (few thread conflicts):

  • using synchronized there would cause frequent switches between user mode and kernel mode and waste a lot of resources;
  • with little contention, CAS rarely has to re-spin, so its performance is higher.

synchronized is a type of pessimistic lock, suited to write-heavy, high-contention, strong-consistency scenarios where CAS would spin often and waste CPU. It is a built-in JVM lock implemented with a monitor: code-block synchronization rests on the monitorenter and monitorexit instructions, and every synchronization object has its own monitor lock. A thread must acquire the object's monitor to enter the synchronized block and execute the guarded logic; otherwise it enters the synchronization queue and waits.

ReentrantLock usage scenario:
synchronized lock upgrading is irreversible. In something like a ride-hailing app, the lock stays a heavyweight lock even after the rush-hour peak has passed, which hurts efficiency; ReentrantLock is the better fit there.
Reentrancy means a thread can re-acquire a lock it already holds: if a thread owns an object's lock and, before releasing it, requests the same lock again, the request still succeeds. Without reentrancy this would deadlock. Each acquisition by the same thread increments a hold counter, and the lock is only released when the counter drops back to 0.

ReentrantLock is more flexible to use, but the lock must be released explicitly as a matching step.
ReentrantLock requires manually acquiring and releasing the lock; synchronized acquires and releases automatically.
The two locking mechanisms also differ underneath: ReentrantLock ultimately parks threads via Unsafe's park method, while synchronized operates on the mark word in the object header.
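Manual release and reentrancy can both be shown in a few lines; getHoldCount exposes the nesting counter described above (the class name ReentrancyDemo is mine):

```java
import java.util.concurrent.locks.ReentrantLock;

// The same thread may re-acquire a ReentrantLock it already holds; the hold
// count tracks the nesting, and the lock is only free again at count 0.
// Every lock() must be paired with an unlock(), typically in finally.
public class ReentrancyDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static int nestedHoldCount() {
        LOCK.lock();
        try {
            LOCK.lock();                    // re-entry by the same thread: no deadlock
            try {
                return LOCK.getHoldCount(); // 2 while doubly held
            } finally {
                LOCK.unlock();
            }
        } finally {
            LOCK.unlock();                  // count back to 0: lock released
        }
    }

    public static boolean isHeld() { return LOCK.isLocked(); }
}
```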
 

Lock upgrade path: no lock -> biased lock -> lightweight lock -> heavyweight lock. Biased lock: only a single thread ever enters the critical section. Lightweight lock: mild contention, and the synchronized block executes quickly. Heavyweight lock: real multi-thread contention, or long-running synchronized blocks.

1 No lock
When an object is first created and no thread has entered it yet, the object is in the lock-free state, with the corresponding information recorded in its Mark Word as shown in the table above.

2 Biased lock
When a thread A accesses the synchronized block while the lock is in the lock-free state, the thread ID is recorded in the lock record in the object header and in the stack frame. From then on, thread A needs no CAS operation to lock and unlock when entering and leaving the block; it simply checks that the thread ID in the object header matches the current thread.

3 Lightweight lock
Starting from a biased lock, suppose another thread B arrives. The thread ID stored in the object header belongs to thread A, not B, so B competes for the lock with CAS and the lock is upgraded to a lightweight lock. A lock record (Lock Record) is created in the thread's stack, the Mark Word is copied into it, and the thread then tries to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If this succeeds, the current thread acquires the lock; if it fails, another thread holds the lock and the current thread spins, retrying the CAS.

4 Heavyweight lock
A thread that fails to acquire the lightweight lock spins, retrying the CAS. If it still has not acquired the lock after about 10 spins, the lock is upgraded to a heavyweight lock. Once the lock is heavyweight, waiting threads enter a blocking queue (EntryList) and no longer spin; they are parked, woken by the scheduler, and execute serially.
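The CAS spin mentioned above can be sketched as a toy spin lock built on `AtomicBoolean` (illustrative only: the JVM's real lightweight lock lives in the object header, and this version is not reentrant):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // CAS spin: keep retrying until the flag flips from false to true.
        // This burns CPU while waiting, which is why the JVM eventually
        // inflates to a heavyweight lock and parks threads instead.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint that we are busy-waiting (Java 9+)
        }
    }

    public void unlock() {
        held.set(false);
    }
}
```

Under light, short-lived contention this spinning is cheaper than blocking a thread; under heavy or long contention it wastes CPU, matching the upgrade rationale above.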
 

20. Algorithm question: Preorder, inorder and postorder traversal of binary tree

Store the result of the traversal into a list recursively (taking care with the recursive exit), then copy the elements from the list into the result array.

Preorder traversal topic: Preorder traversal of binary tree

import java.util.*;

/*
 * public class TreeNode {
 *   int val = 0;
 *   TreeNode left = null;
 *   TreeNode right = null;
 *   public TreeNode(int val) {
 *     this.val = val;
 *   }
 * }
 */

public class Solution {
    /**
     * The class, method, and parameter names are fixed by the judge; do not modify them,
     * just return the value the method requires.
     *
     * @param root TreeNode
     * @return one-dimensional int array
     */
    public int[] preorderTraversal (TreeNode root) {
        ArrayList<Integer> arrayList = new ArrayList<>();
        preOrder(root, arrayList);
        int[] ans = new int[arrayList.size()];
        for (int i = 0; i < ans.length; i++) {
            ans[i] = arrayList.get(i);
        }
        return ans;
    }

    public static void preOrder(TreeNode root, ArrayList<Integer> arrayList) {
        if (root == null) { // recursive exit
            return;
        }
        arrayList.add(root.val);         // visit root first
        preOrder(root.left, arrayList);  // then left subtree
        preOrder(root.right, arrayList); // then right subtree
    }
}
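For contrast with the recursive version above, preorder traversal can also be written iteratively with an explicit stack, which avoids stack-overflow on very deep trees. This is a self-contained sketch, so it includes its own minimal `TreeNode` mirroring the problem's class:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal TreeNode, same shape as the one in the problem statement.
class TreeNode {
    int val;
    TreeNode left = null;
    TreeNode right = null;
    TreeNode(int val) { this.val = val; }
}

public class IterativePreorder {
    public static List<Integer> preorder(TreeNode root) {
        List<Integer> result = new ArrayList<>();
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            TreeNode node = stack.pop();
            result.add(node.val);                           // visit root first
            if (node.right != null) stack.push(node.right); // right pushed first...
            if (node.left != null) stack.push(node.left);   // ...so left is popped first
        }
        return result;
    }
}
```

The stack replaces the implicit call stack of the recursive version; pushing the right child before the left preserves root -> left -> right order.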

Inorder traversal topic: Inorder traversal of a binary tree

import java.util.*;

/*
 * public class TreeNode {
 *   int val = 0;
 *   TreeNode left = null;
 *   TreeNode right = null;
 *   public TreeNode(int val) {
 *     this.val = val;
 *   }
 * }
 */

public class Solution {
    /**
     * The class, method, and parameter names are fixed by the judge; do not modify them,
     * just return the value the method requires.
     *
     * @param root TreeNode
     * @return one-dimensional int array
     */
    public int[] inorderTraversal (TreeNode root) {
        ArrayList<Integer> arraylist = new ArrayList<>();
        inOrder(root, arraylist);
        int[] ans = new int[arraylist.size()];
        for (int i = 0; i < ans.length; i++) {
            ans[i] = arraylist.get(i);
        }
        return ans;
    }

    public static void inOrder(TreeNode root, ArrayList<Integer> arraylist) {
        if (root == null) { // recursive exit
            return;
        }
        inOrder(root.left, arraylist);  // left subtree first
        arraylist.add(root.val);        // then root
        inOrder(root.right, arraylist); // then right subtree
    }
}

Post-order traversal topic: post-order traversal of binary tree

import java.util.*;

/*
 * public class TreeNode {
 *   int val = 0;
 *   TreeNode left = null;
 *   TreeNode right = null;
 *   public TreeNode(int val) {
 *     this.val = val;
 *   }
 * }
 */

public class Solution {
    /**
     * The class, method, and parameter names are fixed by the judge; do not modify them,
     * just return the value the method requires.
     *
     * @param root TreeNode
     * @return one-dimensional int array
     */
    public int[] postorderTraversal (TreeNode root) {
        ArrayList<Integer> arraylist = new ArrayList<>();
        postOrder(root, arraylist);
        int[] ans = new int[arraylist.size()];
        for (int i = 0; i < ans.length; i++) {
            ans[i] = arraylist.get(i);
        }
        return ans;
    }

    public void postOrder(TreeNode root, ArrayList<Integer> arraylist) {
        if (root == null) { // recursive exit
            return;
        }
        postOrder(root.left, arraylist);  // left subtree first
        postOrder(root.right, arraylist); // then right subtree
        arraylist.add(root.val);          // root last
    }
}


Origin blog.csdn.net/nuist_NJUPT/article/details/129230225