The latest JAVA interview questions in 2023

1 JavaWeb Basics

1.1 What is the underlying implementation principle of HashMap?

The underlying implementation of HashMap, taking JDK 7 as an example:
HashMap map = new HashMap();
After instantiation, the underlying layer creates a one-dimensional Entry[] table array with a length of 16. Suppose put has already been executed several times, and now we call map.put(key1, value1):

  • First, hashCode() of key1's class is called to compute key1's hash value. After that hash value is processed by a further algorithm, the storage position in the Entry[] array is obtained.
    • If the data at this position is empty, key1-value1 is added successfully. ----Case 1
    • If the data at this position is not empty (meaning one or more entries already exist at this position, stored as a linked list),
      compare key1's hash value with the hash values of the existing entries:
      • If key1's hash value differs from the hash values of all existing entries, key1-value1 is added successfully. ----Case 2
      • If key1's hash value is the same as the hash value of an existing entry (key2-value2), continue comparing by calling key1's equals(key2) method:
        • If equals() returns false: key1-value1 is added successfully. ----Case 3
        • If equals() returns true: value1 replaces value2.
        • Note: in cases 2 and 3, key1-value1 and the original data are stored together in a linked list.

During continuous addition, expansion (resizing) comes into play. When the threshold is exceeded (and the position to be stored is not empty), the capacity is expanded. The default expansion behavior: double the original capacity and copy the original data over.
Differences in the underlying implementation between JDK 8 and JDK 7:

    1. new HashMap(): in JDK 8 the underlying array of length 16 is not created right away
    2. JDK 8's underlying array is Node[], not Entry[]
    3. The array of length 16 is created the first time put() is called
    4. JDK 7's underlying structure is only array + linked list; in JDK 8 it is array + linked list + red-black tree
    • 4.1 Note the insertion order when forming a linked list (JDK 7: the new element points to the old one, i.e. head insertion; JDK 8: the old element points to the new one, i.e. tail insertion)
    • 4.2 In JDK 8, when the number of elements in the linked list at some index of the array is > 8 and the current array length is > 64, all data at that index is stored in a red-black tree instead.
      DEFAULT_INITIAL_CAPACITY: the default capacity of HashMap, 16
      DEFAULT_LOAD_FACTOR: the default load factor of HashMap, 0.75
      threshold: the resize threshold, = capacity * load factor: 16 * 0.75 => 12
      TREEIFY_THRESHOLD: when a bucket's linked list grows longer than this value, it is converted to a red-black tree: 8
      MIN_TREEIFY_CAPACITY: the minimum hash table capacity for the Nodes in a bucket to be treeified: 64
      
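The bucket-index computation described above can be sketched as follows. This is a simplified illustration, not HashMap's actual source; the class `HashIndexDemo` and its `hash` helper are hypothetical names, though the `(length - 1) & hash` trick mirrors what JDK 8's HashMap does:

```java
public class HashIndexDemo {
    // Simplified version of JDK 8 HashMap's supplemental hash:
    // XOR the high 16 bits into the low 16 bits to spread entropy.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int tableLength = 16; // default capacity
        String key = "key1";
        // Because tableLength is a power of two, (length - 1) & hash
        // is equivalent to hash % length but faster.
        int index = (tableLength - 1) & hash(key);
        System.out.println("bucket index for \"key1\": " + index);
    }
}
```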

1.2 What are the similarities and differences between HashMap and HashTable?

HashMap : the main implementation class of Map; thread-unsafe, high efficiency; can store null keys and values; on expansion the capacity is doubled.
Hashtable : an ancient implementation class; thread-safe, low efficiency; cannot store null keys or values; on expansion the capacity becomes double the original + 1.
Properties : commonly used to process configuration files; both keys and values are of type String.
HashMap underlying structure : array + linked list (JDK 7 and before), array + linked list + red-black tree (JDK 8)

Similarities:
(1) Both are classes under the java.util package
(2) Both implement the Map interface and store data in key-value form
(3) Both also implement the Serializable and Cloneable interfaces
(4) Both have a load factor of 0.75

Load Factor (loadFactor):

When we first create a HashMap, we can specify its capacity (if not explicitly specified, the default is 16). As we keep putting elements into the HashMap, the number of elements may exceed the threshold, so an expansion mechanism is needed. "Expansion" simply means enlarging the capacity of the HashMap. While adding elements to a HashMap, if the number of elements (size) exceeds the threshold, the map automatically expands (resize), and after the expansion the original elements must also be rehashed, i.e. redistributed from the old buckets into the new buckets.
In HashMap, threshold = loadFactor * capacity. loadFactor is the load factor, indicating how full the HashMap is allowed to get. The default value is 0.75f, which means that by default, when the number of elements reaches 3/4 of the capacity, the map automatically expands.
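The threshold arithmetic can be verified with a tiny sketch (the `ThresholdDemo` class and its `threshold` helper are illustrative names, not JDK code):

```java
public class ThresholdDemo {
    // threshold = capacity * loadFactor; a resize happens once size exceeds it
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16, 0.75f)); // 12: default capacity * default load factor
        System.out.println(threshold(32, 0.75f)); // 24: after one doubling
    }
}
```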

Differences:
(1) HashMap is non-thread-safe and efficient; Hashtable is thread-safe and inefficient
(2) HashMap allows null as a key or value; Hashtable does not, and throws a NullPointerException at runtime
(3) HashMap perturbs the key's hashCode with its own supplemental hash algorithm; Hashtable uses the key's hashCode directly
(4) JDK 8's HashMap introduces a red-black tree into the array + linked list structure; Hashtable does not
(5) HashMap's initial capacity is 16; Hashtable's is 11
(6) HashMap expands to double the current capacity; Hashtable expands to double the current capacity + 1
(7) HashMap supports only Iterator traversal; Hashtable supports both Iterator and Enumeration
(8) Hashtable has some methods that HashMap lacks, such as contains
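Difference (2) can be demonstrated in a few lines (the class name `NullKeyDemo` is illustrative):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok");  // HashMap allows one null key
        hashMap.put("k", null);   // and null values
        System.out.println(hashMap.get(null)); // ok

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom"); // Hashtable rejects null keys (and values)
        } catch (NullPointerException e) {
            System.out.println("Hashtable threw NullPointerException");
        }
    }
}
```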

1.2 The thread-unsafety of HashMap is mainly reflected in two aspects:

  • In JDK 1.7, concurrent resize operations can produce a circular linked list (causing an infinite loop on lookup) and lose data.
  • In JDK 1.8, concurrent put operations can overwrite each other's data.

1.3 What is the difference between Collection and Collections?

Similarities : both live in the java.util package.
Differences :

  • Collection is an interface (the root of the single-column collection hierarchy)
  • Collections is a utility class (static helper methods for operating on collections)
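A short sketch of the distinction (the class name `CollectionsDemo` is illustrative): `Collection` is a type you implement, while `Collections` offers static helpers:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(Arrays.asList(3, 1, 2));
        Collections.sort(nums);                    // static utility method: sorts in place
        System.out.println(nums);                  // [1, 2, 3]
        System.out.println(Collections.max(nums)); // 3

        // Collections can also wrap a collection in a thread-safe view:
        List<Integer> safe = Collections.synchronizedList(nums);
        System.out.println(safe.size());           // 3
    }
}
```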

1.4 The difference between Collection and Map interfaces

  • Collection interface: single-column collection, used to store objects one by one

    • List interface: stores ordered and repeatable data. -->"Dynamic" array, replace the original array

      • ArrayList : the main implementation class of the List interface; thread-unsafe, highly efficient; uses Object[] elementData storage underneath and is suitable for frequent lookups.
      • LinkedList : thread-unsafe; more efficient than ArrayList for frequent insertions and deletions; uses a doubly linked list underneath.
      • Vector : an ancient implementation class of the List interface; thread-safe, inefficient; uses Object[] elementData storage underneath
    • Set interface: stores unordered, non-repeatable data --> the "set" talked about in high school (unordered mutual exclusion)

      • HashSet : the main implementation class of the Set interface; thread-unsafe; can store null values
      • LinkedHashSet : a subclass of HashSet; when traversing its internal data, elements come out in insertion order.
        While adding data, each element also maintains two references recording the previous and the next element. For frequent traversal operations, LinkedHashSet is more efficient than HashSet.
      • TreeSet : can sort elements according to specified attributes of the added objects.
  • Map : double-column data, storing key-value pairs: similar to the high-school function y = f(x)

    • HashMap : the main implementation class of Map; thread-unsafe, highly efficient; can store null keys and values; capacity doubles on expansion.
    • LinkedHashMap : guarantees that when traversing the map, elements come out in insertion order.
      Reason: on top of the original HashMap underlying structure, a pair of pointers is added to each entry, pointing to the previous and the next element.
      For frequent traversal operations, this class is more efficient than HashMap.
    • TreeMap : keeps the added key-value pairs sorted, enabling sorted traversal; keys are ordered by natural ordering or a custom Comparator.
      The bottom layer uses a red-black tree.
    • Hashtable : an ancient implementation class; thread-safe, inefficient; cannot store null keys or values; expands to double the original size + 1
    • Properties : commonly used to process configuration files; keys and values are both of type String

1.5 What are the similarities and differences between ArrayList and LinkedList?

Similarities:

  • Both implement the List interface and store ordered and repeatable data.
  • Both are thread-unsafe, and both execute more efficiently than the thread-safe Vector.
  • ArrayList is based on the data structure of dynamic arrays, and LinkedList is based on the data structure of linked lists.

Differences:

  • For random access (get and set), ArrayList beats LinkedList, because LinkedList has to walk its pointers to reach an index.
  • For insertion and deletion (add at a position, and remove), LinkedList has the advantage, because ArrayList has to shift data.

1.6 How does HashMap resolve hash conflicts?

(1) What is a hash conflict?
The possible inputs to a hash algorithm are unlimited, while the range of its outputs is limited, so there will always be different inputs that hash to the same value. This is a hash conflict.
(2) How does HashMap resolve hash conflicts?

  • Linear probing,
    also known as open addressing: starting from the position where the conflict occurs, search the hash table in a fixed order for a free slot, and store the conflicting element there.
  • Chained addressing,
    a very common method: simply put, keys that collide are stored in a singly linked list at that slot. HashMap, for example, is implemented with chained addressing.
  • The rehash method
    means that when the key calculated by a certain hash function conflicts, another hash function is used to hash the key, and the operation is continued until no more conflicts occur. This method will increase the calculation time and have a greater impact on performance.
  • Establishing a public overflow area
    means dividing the hash table into two parts: the basic table and the overflow table. All conflicting elements are put into the overflow table.
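As a rough illustration of chained addressing, here is a toy hash table where colliding keys share a bucket's linked list. This is not HashMap's real implementation; all names (`ChainedHashDemo`, `Entry`) are hypothetical:

```java
import java.util.LinkedList;

// Toy hash table using separate chaining: each bucket holds a linked
// list of entries, so keys that hash to the same index coexist.
public class ChainedHashDemo {
    static class Entry {
        final String key;
        String value;
        Entry(String k, String v) { key = k; value = v; }
    }

    @SuppressWarnings("unchecked")
    private final LinkedList<Entry>[] table = new LinkedList[8];

    void put(String key, String value) {
        int index = (table.length - 1) & key.hashCode();
        if (table[index] == null) table[index] = new LinkedList<>();
        for (Entry e : table[index]) {
            if (e.key.equals(key)) { e.value = value; return; } // equal key: replace
        }
        table[index].add(new Entry(key, value)); // collision: chain the entry
    }

    String get(String key) {
        int index = (table.length - 1) & key.hashCode();
        if (table[index] == null) return null;
        for (Entry e : table[index]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }

    public static void main(String[] args) {
        ChainedHashDemo map = new ChainedHashDemo();
        map.put("a", "1");
        map.put("b", "2");
        map.put("a", "3"); // equal key: value replaced, not chained
        System.out.println(map.get("a")); // 3
        System.out.println(map.get("b")); // 2
    }
}
```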

1.7 Comparison of String, StringBuffer and StringBuilder

  • String: immutable character sequence; uses char[] storage underneath
  • StringBuffer: mutable character sequence; thread-safe, inefficient; uses char[] storage underneath
  • StringBuilder: mutable character sequence; new in JDK 5.0; thread-unsafe, efficient; uses char[] storage underneath
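A small sketch contrasting the three (the class name is illustrative):

```java
public class StringBuilderDemo {
    public static void main(String[] args) {
        // String: every concatenation creates a brand-new object
        String s = "a";
        s = s + "b"; // a new String "ab" is created; the old "a" is unchanged

        // StringBuilder: mutates one internal buffer (thread-unsafe, fast)
        StringBuilder sb = new StringBuilder("a");
        sb.append("b").append("c");
        System.out.println(sb.toString()); // abc

        // StringBuffer: same API, but its methods are synchronized (thread-safe, slower)
        StringBuffer sbf = new StringBuffer("x");
        sbf.append("y");
        System.out.println(sbf.toString()); // xy
    }
}
```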

1.8 Why is the length of String immutable?

(1) Immutability: when you "reassign" a string, the old value is not modified in memory; instead, new space is allocated to store the new value.
(2) The String class uses the final keyword on the character array holding the string, private final char value[], so a String object cannot be changed after creation.
(3) Immutability also makes String safe to share across threads and lets it cache its hash code.
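A minimal demonstration of immutability (the class name is illustrative):

```java
public class ImmutableDemo {
    public static void main(String[] args) {
        String old = "hello";
        String alias = old;        // both variables point at the same object
        old = old + " world";      // "reassignment" creates a NEW String object
        System.out.println(alias); // hello -- the original object is untouched
        System.out.println(old);   // hello world
    }
}
```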

1.9 New features in JDK1.8

  • 1. Lambda expressions and functional interfaces
    Lambda expressions, also called closures, are the biggest and most anticipated language change in Java 8. They allow us to pass a function as a parameter to a method, or treat code itself as data: typical functional programming. The simplest lambda expression consists of a comma-separated parameter list, the -> symbol, and a statement block.

A lambda expression is divided into several parts. The following line is a lambda expression:
() -> System.out.println("using a lambda expression");
The format breaks down as follows:

(1) Left of the arrow: the lambda's formal parameter list, which mirrors the parameter list of the single abstract method of the interface being implemented.

(2) The arrow: the lambda operator; whenever you see this arrow, you know you are looking at a lambda expression.

(3) Right of the arrow: the lambda body, i.e. the implementation of the interface's abstract method.
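Putting the three parts together (the `Calculator` interface here is a hypothetical functional interface defined just for the demo):

```java
public class LambdaDemo {
    // A functional interface: exactly one abstract method
    interface Calculator {
        int apply(int a, int b);
    }

    public static void main(String[] args) {
        // Left of ->: parameter list; right of ->: the body implementing apply()
        Calculator add = (a, b) -> a + b;
        Calculator mul = (a, b) -> a * b;
        System.out.println(add.apply(2, 3)); // 5
        System.out.println(mul.apply(2, 3)); // 6

        // No parameters: empty parentheses on the left
        Runnable r = () -> System.out.println("using a lambda expression");
        r.run();
    }
}
```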

  • 2. Default and static methods on interfaces
    allow developers to add new methods to existing interfaces without breaking binary compatibility, i.e. without forcing classes that implement the interface to also implement the new method.
    Simply put, a default method lets an interface carry an implementation, so implementing classes are not required to provide one.

  • 3. Method references
    Method references let developers refer directly to an existing method, a Java class constructor, or an instance object. Used together with lambda expressions, method references make code look more compact and concise, reducing redundancy.
    A method reference points to a method by its name.
    A method reference is written with a pair of colons: ::

    public class Test {

        public static void main(String[] args) {

            // Traditional way to get an object implementing MyFunction:
            // an anonymous inner class
            //MyFunction<Desk, String> hf = new MyFunction<Desk, String>() {
            //    @Override
            //    public String apply(Desk desk) {
            //        return "hello desk";
            //    }
            //};
            //String val = hf.apply(new Desk());
            //System.out.println("val-" + val);

            MyFunction<Desk, String> hf2 = Desk::getBrand;

            String val2 = hf2.apply(new Desk());
            System.out.println("val2-" + val2); // prints: val2-Beijing brand
        }
    }

    // Define a functional interface: an interface with exactly one abstract method.
    // @FunctionalInterface can be used to mark a functional interface.
    // MyFunction is a functional interface (a custom generic interface).

    @FunctionalInterface
    interface MyFunction<T, R> {

        R apply(T t); // the abstract method: from a parameter of type T, produce a result of type R

        //public void hi();

        // a functional interface may still have default methods
        default public void ok() {
            System.out.println("ok");
        }
    }

    @FunctionalInterface
    interface MyInterface {
        public void hi();
    }

    class Desk {
        // Bean
        private String name = "my desk";
        private String brand = "Beijing brand";
        private Integer id = 10;

        public String getBrand() { // getter used by the Desk::getBrand method reference
            return brand;
        }
        // remaining getters / setters / toString omitted
    }
    
  • 4. Repeating annotations.
    Java 5 annotations had the limitation that the same annotation could be declared only once at a given position. Java 8 introduces repeating annotations, so the same annotation can be declared multiple times in the same place; at the compiler level the repeated annotations are collected into a container, so the underlying mechanism has not changed.

  • 5. Extended support for annotations.
    Java 8 has expanded the context of annotations. Annotations can be added to almost anything, including local variables, generic classes, parent classes, and interface implementations. Annotations can also be added to method exceptions.

  • 6. Optional class
    The Optional class is a container object that may or may not hold a value. isPresent() returns true if a value is present, and get() returns the value.
    Optional is a container: it can hold a value of type T, or be empty. Optional provides many useful methods so that we do not have to check for null explicitly.
    The most common bug in Java applications is the null pointer exception. The Optional class addresses null handling well, keeping source code free of scattered null checks and letting developers write cleaner code.
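A short sketch of the usual Optional idioms (the class name is illustrative):

```java
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("value");
        Optional<String> empty = Optional.empty();

        System.out.println(present.isPresent()); // true
        System.out.println(present.get());       // value

        // orElse replaces an explicit null check with a default
        System.out.println(empty.orElse("default")); // default

        // ofNullable wraps a possibly-null reference safely
        String maybeNull = null;
        System.out.println(Optional.ofNullable(maybeNull).orElse("fallback")); // fallback
    }
}
```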

  • 7. Stream

What is Stream?

The Stream API (java.util.stream) brings a true functional programming style into the Java library. It is the largest improvement to the Java library so far, enabling developers to write efficient, clean, and concise code. In simple terms, Stream can be understood in the spirit of MapReduce (Google's MapReduce was itself inspired by functional programming). A stream is a sequence of elements supporting sequential and parallel aggregate operations. Syntactically it also resembles Linux pipes, or chained programming: the code reads concisely and clearly.
Stream is a queue of elements from a data source that supports aggregate operations. It provides a high-level abstraction for Java collection operations in an intuitive way, similar to querying a database with SQL statements. This style treats the collection of elements to be processed as a stream transmitted through a pipeline; the elements can be processed at the nodes of the pipeline, e.g. filtered,
sorted, or aggregated. The element stream is processed by intermediate operations in the pipeline, and finally
a terminal operation obtains the result of the preceding processing.
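A small pipeline sketch (the class name is illustrative) showing intermediate operations followed by a terminal operation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(5, 1, 4, 2, 3, 2);
        // Pipeline: filter -> distinct -> sorted are intermediate operations;
        // collect is the terminal operation that produces the result.
        List<Integer> result = nums.stream()
                .filter(n -> n > 1)   // keep elements greater than 1
                .distinct()           // remove duplicates
                .sorted()             // ascending order
                .collect(Collectors.toList());
        System.out.println(result); // [2, 3, 4, 5]

        int sum = nums.stream().mapToInt(Integer::intValue).sum();
        System.out.println(sum); // 17
    }
}
```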

  • 8. Date and Time API
    Java 8 further enhances date and time handling with the new Date-Time API (JSR 310). Older versions of Java had many problems with the date and time API, including:
    Non-thread-safety: java.util.Date is not thread-safe, and all the date classes are mutable. This is one of the biggest problems with the old Java date classes.
    Poor design: the definitions of Java's date/time classes are inconsistent. There are date classes in both the java.util and java.sql packages, while classes for formatting and parsing live in java.text. java.util.Date contains both date and time, while java.sql.Date contains only the date; putting it in the java.sql package is not reasonable, and both classes sharing the same name is itself bad design.
    Awkward time zone handling: the date classes provide no internationalization and no time zone support, so Java introduced java.util.Calendar and java.util.TimeZone, which share all the problems above.
    Java 8 provides many new APIs in the java.time package, covering all operations on dates, times, date/times, time zones, instants, durations, and clocks. The following introduces just two important concepts:

    • Local: Simplifies date and time processing without time zone issues.
    • Zoned (time zone): Process date and time through the specified time zone.
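A brief sketch of the Local and Zoned flavors (the class name is illustrative):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DateTimeDemo {
    public static void main(String[] args) {
        // Local: date handling without time-zone concerns
        LocalDate date = LocalDate.of(2023, 1, 15);
        System.out.println(date.plusDays(20)); // 2023-02-04

        // Immutable: plusDays returns a NEW object, fixing the old Date's mutability
        LocalDate same = date;
        date.plusDays(1); // result discarded; 'date' itself is unchanged
        System.out.println(same); // 2023-01-15

        // Zoned: the same date/time interpreted in a specific time zone
        ZonedDateTime tokyo = ZonedDateTime.of(
                LocalDateTime.of(2023, 1, 15, 12, 0), ZoneId.of("Asia/Tokyo"));
        System.out.println(tokyo);
    }
}
```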
  • 9. JavaScript engine Nashorn
    Starting from JDK 1.8, Nashorn replaced Rhino (JDK 1.6, JDK 1.7) as Java's embedded JavaScript engine. Nashorn fully supports the ECMAScript 5.1 specification as well as some extensions. It uses new language features based on JSR 292, including invokedynamic introduced in JDK 7, to compile JavaScript into Java bytecode. Compared with the previous Rhino implementation, this brings a 2 to 10 times performance improvement.
    The main purpose of Nashorn is to allow JavaScript applications to be developed and run on the JVM, and to allow Java and JavaScript to call each other.

  • 10. Base64
    In Java 8, Base64 encoding has become the standard for Java class libraries. The Base64 class also provides URL- and MIME-friendly encoders and decoders.
    In addition to these ten new features, there are other new features:

  • 11. Better type inference:
    Java 8 greatly improves type inference, which makes code cleaner and avoids many explicit casts.

  • 12. Compiler optimization:
    Java 8 adds the method parameter names to the bytecode, so that the parameter names can be obtained through reflection at runtime. You only need to use the -parameters parameter at compile time.

  • 13. Parallel array:
    Supports parallel processing of arrays, mainly the parallelSort() method, which can greatly improve the speed of array sorting on multi-core machines.

  • 14. Concurrency:
    Based on the new Stream mechanism and Lambda, some new methods have been added to support aggregation operations.

  • 15. Nashorn engine jjs:
    A command line tool based on Nashorn engine. It accepts some JavaScript source code as parameters and executes these source codes.

  • 16. Class dependency analyzer jdeps:
    can display package-level or class-level dependencies of Java classes.

  • 17. The JVM's PermGen space was removed:
    it was replaced by Metaspace (JEP 122).

1.10 Character stream and byte stream

  • Character stream getWriter(): often used to return strings (commonly used)
  • Byte stream getOutputStream(): often used for downloads (transmitting binary data)

In a servlet response, only one of the two streams can be used at a time: if you use the byte stream you cannot use the character stream, and vice versa, otherwise an error is reported.

1.11 Serialization and Deserialization

  • Serialization: The process of converting Java objects into byte streams
  • Deserialization: The process of converting a byte stream into a Java object.

Why do you need serialization and deserialization?

When two processes communicate remotely, they can send each other various types of data: text, pictures, audio, video, and so on, all of which travel over the network as binary sequences. When two Java processes communicate, Java serialization and deserialization are needed to transfer objects between them. In other words, the sender converts the Java object into a byte sequence and transmits it over the network; the receiver then restores the Java object from the byte sequence.

Advantages:

  • Data persistence: serialization can save data permanently to disk (usually in a file).
  • Transmitting and receiving objects over the network in the form of byte streams.
  • Passing objects between processes.
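A minimal round-trip sketch (the `User` class is a hypothetical serializable bean):

```java
import java.io.*;

public class SerializationDemo {
    // A class must implement Serializable to be converted to a byte stream
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // Serialization: object -> bytes
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new User("alice"));
        }

        // Deserialization: bytes -> object
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            User restored = (User) in.readObject();
            System.out.println(restored.name); // alice
        }
    }
}
```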

1.12 How many methods are there to implement multi-threading, and what are they? How many methods are there to implement synchronization, and what are they?

There are two classic ways to implement multithreading:

  • Inherit the Thread class and override run()
  • Implement the Runnable interface and pass it to a Thread

(Since JDK 5, implementing Callable and using thread pools are also common.)

There are two basic tools for achieving synchronization:

  • the synchronized keyword (synchronized methods and blocks)
  • coordination via wait() and notify()
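Both ways can be sketched in a few lines (the class names are illustrative):

```java
public class ThreadDemo {
    // Way 1: extend Thread and override run()
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("from Thread subclass");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new MyThread();

        // Way 2: implement Runnable (here as a lambda) and hand it to a Thread
        Thread t2 = new Thread(() -> System.out.println("from Runnable"));

        t1.start();
        t2.start();
        t1.join(); // wait for both to finish
        t2.join();
    }
}
```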

1.13 Making collections thread-safe (ArrayList as an example)

Add a lock around the shared data: either Lock or synchronized can turn a thread-unsafe collection into a thread-safe one.
Lock can do everything that synchronized achieves;
Collections.synchronizedList() can also provide thread safety.
Main differences:

  • Lock has more precise thread semantics and better performance than synchronized.
  • synchronized releases its lock automatically, while with Lock the programmer must release it manually, and must do so in a finally clause.
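A sketch of the Lock idiom with manual release in finally (the class name is illustrative; `Collections.synchronizedList` would be an alternative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private static final Lock lock = new ReentrantLock();
    private static final List<Integer> list = new ArrayList<>(); // thread-unsafe on its own

    static void add(int value) {
        lock.lock(); // unlike synchronized, a Lock is NOT released automatically
        try {
            list.add(value);
        } finally {
            lock.unlock(); // must be released manually, in finally
        }
    }

    static int size() {
        lock.lock();
        try {
            return list.size();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) add(i); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) add(i); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(size()); // 2000: no lost updates
    }
}
```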

1.14 Is there any other way to create an object besides new?

  • Use the new keyword

  • The newInstance() method of a Class object
    Class's newInstance() method creates a new instance of a class at runtime. It is equivalent to using the new operator with the no-arg constructor, but more dynamic. (Since Java 9 it is deprecated in favor of getDeclaredConstructor().newInstance().)

  • The newInstance() method of a Constructor object
    Constructor's newInstance() method creates a new instance at runtime and can pass arguments to the constructor. It is more flexible than Class's newInstance() because any constructor can be chosen.

  • Object Deserialization
    Deserialization is the process of recovering an object from a byte stream. After serialization, the object can be stored in a file or network, and then restored to the object through deserialization.

  • The object's clone() method
    clone() creates a copy of the object; it requires implementing Cloneable, and can be overridden to implement deep cloning.

  • Using the factory pattern,
    which decouples the creation of objects from their use. By defining an object factory, objects can be produced more flexibly.
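The reflective and clone-based routes can be sketched together (the `Point` class is hypothetical; the sketch uses `getDeclaredConstructor().newInstance()` since `Class.newInstance()` itself is deprecated since Java 9):

```java
import java.lang.reflect.Constructor;

public class CreateDemo {
    public static class Point implements Cloneable {
        int x;
        public Point() { this.x = 1; }
        public Point(int x) { this.x = x; }
        @Override
        public Point clone() throws CloneNotSupportedException {
            return (Point) super.clone();
        }
    }

    public static void main(String[] args) throws Exception {
        Point p1 = new Point();                                        // 1. new
        Point p2 = Point.class.getDeclaredConstructor().newInstance(); // 2. via Class (no-arg ctor)
        Constructor<Point> ctor = Point.class.getConstructor(int.class);
        Point p3 = ctor.newInstance(42);                               // 3. via Constructor (with args)
        Point p4 = p3.clone();                                         // 4. clone (requires Cloneable)
        System.out.println(p1.x + " " + p2.x + " " + p3.x + " " + p4.x); // 1 1 42 42
    }
}
```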

1.15 What is deadlock? What are the necessary conditions for deadlock to occur? How to avoid deadlock?

1) Deadlock definition:
Deadlock is a situation in which two or more processes (or threads) each hold a resource while requesting a resource locked by the other, so that all of them wait forever in a cycle.

2) Necessary conditions for deadlock to occur

  • Mutually exclusive condition : A process requires that the allocated resource (such as a printer) be occupied by only one process for a period of time. At this time, if other processes request the resource, the requesting process can only wait.
  • No-preemption condition : resources a process has obtained cannot be forcibly taken away by other processes before it is finished with them; they can only be released voluntarily by the process holding them.
  • Request and hold conditions : The process has held at least one resource, but has made a new resource request, and the resource is already occupied by another process. At this time, the requesting process is blocked, but it will not let go of the resources it has obtained.
  • Circular waiting condition : There is a circular waiting chain for process resources. The resources obtained by each process in the chain are simultaneously requested by the next process in the chain.

3) How to avoid deadlock

  • If different programs will access multiple tables concurrently, try to agree to access the tables in the same order, which can greatly reduce the chance of deadlock.
  • In the same transaction, try to lock all the resources required at once to reduce the probability of deadlock.
  • For business parts that are very prone to deadlocks, you can try to use upgraded lock granularity to reduce the probability of deadlocks through table-level locking.
  • If the business processing is not good, you can use distributed transaction locks or optimistic locks.
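The "lock in the same order" advice can be sketched as follows: a hypothetical `LockOrderDemo` where both threads always lock resource A before resource B, so the circular-wait condition can never hold:

```java
public class LockOrderDemo {
    private static final Object RES_A = new Object();
    private static final Object RES_B = new Object();
    private static int counter = 0;

    // Both threads acquire the locks in the SAME order (A, then B),
    // so a circular wait can never form.
    static void work() {
        synchronized (RES_A) {
            synchronized (RES_B) {
                counter++;
            }
        }
    }

    static int getCounter() {
        synchronized (RES_A) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) work(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) work(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(getCounter()); // 2000, with no deadlock
    }
}
```

If instead one thread locked A then B while the other locked B then A, the circular-wait condition could be satisfied and both threads could block forever.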

1.16 Commonly used design patterns in Java? Explain factory mode?

Answer: Java commonly lists 23 design patterns: Builder, Factory, Factory Method, Prototype, Singleton, Facade, Adapter, Bridge, Composite, Decorator, Flyweight, Proxy, Command, Interpreter, Visitor, Iterator, Mediator, Memento, Observer, State, Strategy, Template Method, and Chain of Responsibility.
Factory pattern :
A class implemented according to the factory pattern can produce an instance of one class out of a group of classes based on the data provided. Usually this group of classes shares a common abstract parent and implements the same method, but each implementation performs a different operation on the data.
First, define a base class whose subclasses implement its methods in different ways. Then define a factory class that produces different subclass instances depending on a condition. After obtaining a subclass instance, callers can invoke the base-class methods without caring which subclass instance was returned.
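A minimal factory sketch of the description above (all names, `Shape`, `Circle`, `Square`, `createShape`, are hypothetical):

```java
public class FactoryDemo {
    // Common abstract parent with a shared method
    interface Shape {
        String draw();
    }

    static class Circle implements Shape {
        public String draw() { return "circle"; }
    }

    static class Square implements Shape {
        public String draw() { return "square"; }
    }

    // The factory picks the concrete subclass based on the provided data
    static Shape createShape(String type) {
        switch (type) {
            case "circle": return new Circle();
            case "square": return new Square();
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }

    public static void main(String[] args) {
        // Callers use only the base type, without caring which subclass came back
        Shape s = createShape("circle");
        System.out.println(s.draw()); // circle
    }
}
```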

1.17 What is thread safety?

A piece of code is thread-safe if it guarantees that shared data is manipulated correctly when accessed by multiple threads.
If, when multiple threads access an object, the calls behave correctly regardless of how the runtime schedules or interleaves those threads, and without any extra synchronization or coordination on the caller's side, then the object is thread-safe.

1.18 Understanding Parallelism and Concurrency

  • Parallelism: multiple CPUs execute multiple tasks at the same moment. For example: several people doing different things at the same time.
  • Concurrency: one CPU executes multiple tasks in the same period (via time slices). For example: a flash sale, where many people do the same thing "at once".

1.19 The difference between overloading and overriding

Both are manifestations of Java polymorphism.
(1) Overloading:
"two sames, one different": the same method name in the same class, with
different parameter lists (different number of parameters and/or different parameter types)
(2) Overriding:
a subclass redefines a method of its parent class (same name, same parameter list).
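Both forms can be sketched together (the class names are hypothetical):

```java
public class PolymorphismDemo {
    static class Animal {
        String speak() { return "..."; }

        // Overloading: same name "speak", different parameter list, same class
        String speak(int times) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < times; i++) sb.append(speak());
            return sb.toString();
        }
    }

    static class Dog extends Animal {
        // Overriding: the subclass redefines the parent's method, same signature
        @Override
        String speak() { return "woof"; }
    }

    public static void main(String[] args) {
        Animal a = new Dog();
        System.out.println(a.speak());  // woof      (override: resolved at runtime)
        System.out.println(a.speak(2)); // woofwoof  (overload: chosen at compile time)
    }
}
```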

1.20 Array sorting algorithm

  • Selection sorts:
    simple selection sort, heap sort
  • Exchange sorts:
    bubble sort, quick sort
  • Insertion sorts:
    direct insertion sort, binary insertion sort, Shell sort
  • Others:
    merge sort, bucket sort, radix sort
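As one concrete example from the list, a quick sort sketch (Lomuto partitioning; the class name is illustrative):

```java
import java.util.Arrays;

public class QuickSortDemo {
    // Quick sort: pick a pivot, partition into smaller/larger, recurse on each side
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t; // put the pivot in its final place
        quickSort(a, lo, i - 1);
        quickSort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 8, 1, 9, 3};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8, 9]
    }
}
```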

1.21 Extracting substrings and splitting strings

  • Take a substring with substring()
    • Passing only one parameter, substring(int beginIndex),
      takes the substring from index beginIndex to the end of the string. Note that the index of the first character is zero, and the character at index beginIndex is included; sample code:

      String oldStr = "zifu截取练习ing";
      String str = oldStr.substring(5);
      System.out.println(str);
      // Output:
      // 取练习ing
      
    • Passing two parameters, substring(int beginIndex, int endIndex),
      takes the substring starting at index beginIndex and ending at index endIndex. The result includes the character at index beginIndex and excludes the character at index endIndex;

       String oldStr = "zifu截取练习ing";
       String str = oldStr.substring(0,5);
       System.out.println(str);
       // Output:
       // zifu截
      
  • Split a string with split(), which returns the result as a string array
    • Passing only one parameter, split(String regex):
      the parameter accepts a regular expression or a plain string, and the string is split wherever the given pattern matches. Sample code:

      String oldStr = "China,Japan,美国,俄罗斯";
      String[] strs = oldStr.split(","); // split the string on ","
      for (int i = 0; i < strs.length; i++) {
          System.out.println(strs[i]);
      }
      
    • Passing two parameters, split(String regex, int limit):
      regex is the regular-expression delimiter and limit is the number of parts to split into. The string is split on the pattern into at most the requested number of parts. Sample code:

      String oldStr = "China,Japan,美国,俄罗斯";
      String[] strs = oldStr.split(",", 2); // split on "," into at most two parts
      for (int i = 0; i < strs.length; i++) {
          System.out.println(strs[i]);
      }
      // Output:
      // China
      // Japan,美国,俄罗斯

1.22 The Lock interface

1.23 synchronized lock escalation and mitigation solutions

1.24 List deduplication

Reference article: Six ways to deduplicate List elements
1. Use the characteristics of Set to deduplicate
Since a Set is unordered and rejects duplicates, we exploit this with two steps:
(1) Put the list into a set
(2) Put the set back into a list

public static List<String> distinct(List<String> list) {
    List<String> result = new ArrayList<>();
    if (list != null && list.size() > 0) { // the null check must come FIRST, or a null list throws NPE
        Set<String> set = new HashSet<>(list); // putting everything into the set removes duplicates
        result.addAll(set); // put the deduplicated elements back into a list
    }
    return result;
}

Original: 22 11 33 55 66 22 33 66
After deduplication: 22 11 33 55 66

2. Use a LinkedHashSet to deduplicate.
Testing shows that LinkedHashSet removes duplicates, but by its nature it does not sort the data; it only preserves the original insertion order.

public static List<String> delRepeat1(List<String> list) {
    List<String> listNew2 = new ArrayList<String>(new LinkedHashSet<String>(list));
    return listNew2;
}

Original: 22 11 33 55 66 22 33 66
After deduplication: 22 11 33 55 66
3. Check each element with list.contains().
To see what the contains() method actually does, I tried it on both a String and a List;
the results look like this:

String aaa = "aaa";                   // declare a String
String aa = "aa";                     // declare a different String
boolean b1 = aaa.contains(aa);        // compare: the result is true

List<String> listA = new ArrayList<String>(); // create a list
listA.add("aaa");                     // add a String element
boolean b2 = listA.contains("aa");    // compare: the result is false

For a String, contains() checks whether the string holds a matching substring;
for a List, it checks whether the list holds an equal element.
Knowing this, we can use list.contains() to decide whether to add each element to a new list. The order of the elements does not change.

public static List<String> delRepeat(List<String> list) {
    List<String> listNew = new ArrayList<String>();
    for (String str : list) {
        if (!listNew.contains(str)) {
            listNew.add(str);
        }
    }
    return listNew;
}

Original: 22 11 33 55 66 22 33 66
After deduplication: 22 11 33 55 66

4. Deduplicate with Java 8 features.
I haven't gone into detail here; the general idea is to turn the list into a Stream, call distinct() on the stream to drop duplicates, then collect() the result back into a list.

public static List<String> delRepeat(List<String> list) {
    List<String> myList = list.stream().distinct().collect(Collectors.toList());
    return myList;
}

Original: 22 11 33 55 66 22 33 66
After deduplication: 22 11 33 55 66

5. Use the list's own remove() method (not recommended).
This is the fallback when your list is more complex, e.g. in List<Map<String,Object>> form. The approach iterates the same list with two nested for loops, compares elements with equals(), and removes matches with remove(), leaving a list without duplicate data.

public static List<Map<String, Object>> distinct(List<Map<String, Object>> list) {
    if (null != list && list.size() > 0) {
        // loop over the list
        for (int i = 0; i < list.size() - 1; i++) {
            // iterate backwards so removals don't skip elements
            for (int j = list.size() - 1; j > i; j--) {
                // object comparison; change this condition for a different dedup rule
                if (list.get(j).equals(list.get(i))) {
                    list.remove(j);
                }
            }
        }
    }
    // return the list with duplicates removed
    return list;
}

2 Spring related frameworks

2.1 Do you know about AOP in Spring?

SSM study notes—AOP explains AOP in detail.
The role of AOP:
AOP (Aspect-Oriented Programming) can enhance certain functions of a program without changing the original code. It reduces coupling between modules and improves the readability and maintainability of the code. Spring applies AOP in scenarios such as transaction handling, log management, and permission control.

Spring AOP is based on dynamic proxies. If the target object implements an interface, Spring AOP uses a JDK dynamic proxy to create the proxy object; for objects that implement no interface, a JDK dynamic proxy cannot be used, so Spring generates a subclass of the target with CGLIB and uses it as the proxy. You can also use AspectJ: Spring AOP integrates AspectJ, which is arguably the most complete AOP framework in the Java ecosystem. With AOP we can extract common functionality and use it directly wherever it is needed, which greatly reduces code duplication, makes it convenient to add new behavior, and improves the system's extensibility. Typical AOP scenarios are logging, transaction management, and permission management.
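The JDK dynamic-proxy mechanism described above can be sketched in plain Java. This is a minimal illustration, not Spring code: UserService, UserServiceImpl, and LoggingHandler are made-up names, and the "advice" is just two println calls around the real invocation.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// The target must implement an interface for a JDK dynamic proxy to work.
interface UserService {
    String save(String name);
}

class UserServiceImpl implements UserService {
    public String save(String name) { return "saved:" + name; }
}

// The InvocationHandler plays the role of the "aspect": it runs advice
// around every call before delegating to the real target object.
class LoggingHandler implements InvocationHandler {
    private final Object target;
    LoggingHandler(Object target) { this.target = target; }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("before " + method.getName()); // "before" advice
        Object result = method.invoke(target, args);      // invoke the real target
        System.out.println("after " + method.getName());  // "after" advice
        return result;
    }
}

public class ProxyDemo {
    static UserService proxyOf(UserService target) {
        return (UserService) Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                new Class<?>[]{UserService.class},
                new LoggingHandler(target));
    }

    public static void main(String[] args) {
        UserService service = proxyOf(new UserServiceImpl());
        System.out.println(service.save("tom")); // the proxy logs, then delegates
    }
}
```

CGLIB-based proxying works analogously, except that it generates a subclass of the target instead of implementing its interface, which is why it can proxy classes without interfaces.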

2.2 What is the difference between Spring AOP and AspectJ AOP?

  • Spring AOP is a run-time enhancement, while AspectJ is a compile-time enhancement.
  • Spring AOP is based on Proxying, while AspectJ is based on Bytecode Manipulation.
  • Spring AOP has integrated AspectJ, which should be regarded as the most complete AOP framework in the Java ecosystem.
  • AspectJ is more powerful than Spring AOP, but Spring AOP is simpler. With few aspects there is little performance difference between the two; with many aspects, AspectJ is the better choice, since it is much faster than Spring AOP.

2.3 Talk about your understanding of SpringBoot

Disadvantages of Spring

  • Dependency setup is cumbersome
  • Configuration is cumbersome

Advantages of SpringBoot

  • Starter dependencies (simplified dependency configuration)
  • Auto-configuration (simplifies common project-related configuration)
  • Embedded server and other conveniences (no external deployment needed)

2.4 Annotations in Controller (SpringMVC presentation layer) in Spring

  • @Controller
    Registers the class as a Spring bean and hands it to Spring for management.

1) Two annotations above the class:

  • @RestController, a composite annotation
    @RestController = @ResponseBody + @Controller; the effect is that the object returned by a method is rendered directly as JSON in the browser.

  • @RequestMapping
    maps HTTP requests to the processing methods of MVC and REST controllers, provides routing information, and is responsible for mapping URLs to specific functions in the Controller.

2) Four REST style request methods

  • @PostMapping (create)
    method annotation that maps HTTP POST requests to specific handlers

  • @DeleteMapping (delete)
    method annotation that maps HTTP DELETE requests to specific handlers

  • @PutMapping (modify)
    method annotation that maps HTTP PUT requests to specific handlers

  • @GetMapping (query)
    method annotation that maps HTTP get requests to specific handlers

3) Several request parameter annotations

  • @PathVariable
    obtains parameter values from the "/"-separated segments of the URL path.
    This is the REST-style way of passing values in Spring MVC.
  • @PathParam
    obtains parameter values as key-value pairs.
    This annotation is relatively simple: it just takes parameter values from the address bar, using the traditional ?key=value style.
    For example: http://localhost:8080/HNZGDXSYS/ImgbyNumber?name=李思&name1=张三
  • @RequestParam() request parameters
  • @RequestBody (request body parameter)
    reads the request body through an HttpMessageConverter and deserializes it into an object.
    @RequestBody is mainly used to receive JSON data sent from the front end in the request body. Since request bodies are most commonly used with POST, @RequestBody is generally used with POST submissions. Within a single handler method on the back end, @RequestBody and @RequestParam() can be used together; there can be at most one @RequestBody, but multiple @RequestParam() parameters.
  • @ResponseBody (response body)
    The @ResponseBody annotation is usually used on methods in the control layer (controller). It writes the method's return value, in a specific format, into the body of the response, and then returns the data to the client. When @ResponseBody is not present on the method, the bottom layer wraps the return value into a ModelAndView object.
    If the return value is a string, the string is written directly to the client.
    If it is an object, the object is converted to a JSON string and then written to the client. Note that a returned object is encoded as UTF-8, while a returned String defaults to ISO-8859-1, so the page may show garbled text. We can set the encoding manually in the annotation, e.g. @RequestMapping(value="/cat/query", produces="text/html;charset=utf-8"): the first part is the request path, the second the encoding format.
    The conversion to JSON is done by a method of HttpMessageConverter; because that is an interface, the conversion is performed by its implementation classes. For a bean object, its getXXX() methods are called to obtain the property values, which are wrapped as key-value pairs and converted to a JSON string; for a map collection, get(key) is used to obtain the values, which are wrapped the same way.
    It is generally used when fetching data asynchronously. With @RequestMapping alone, the return value is usually parsed as a view path to jump to; after adding @ResponseBody, the result is no longer parsed as a path but written directly into the HTTP response body. For example, JSON data fetched asynchronously is returned directly when @ResponseBody is present.

2.5 Common annotations for SpringBoot

1) Start the annotation @SpringBootApplication

  • @SpringBootConfiguration annotation, inherits @Configuration annotation, mainly used to load configuration files

  • @EnableAutoConfiguration annotation turns on the automatic configuration function

  • @ComponentScan annotation, mainly used for component scanning and automatic assembly

2) Controller related notes

  • @Controller

  • @RestController compound annotation

  • @RequestBody

  • @RequestMapping

  • @PostMapping method annotation for mapping HTTP POST requests to specific handlers

  • @DeleteMapping method annotation for mapping HTTP DELETE requests to specific handlers

  • @PutMapping method annotation for mapping HTTP PUT requests to specific handlers

  • @GetMapping method annotation for mapping HTTP GET requests to specific handlers

3) Get the request parameter value

  • @PathVariable: Get the data in the url

  • @RequestParam: Get the value of the request parameter

  • @RequestHeader binds the value of the Request header part to the parameters of the method

  • @CookieValue binds the cookie value in the Request header to the method parameters

4) Injection related to beans

  • @Service

  • @Controller

  • @Component

  • @Repository

  • @Scope scope annotation

  • @Entity entity class annotation

  • @Bean generates a bean method

  • @Autowired automatic import

5) Import configuration file

  • @PropertySource annotation

  • @ImportResource import xml configuration file

  • @Import imports additional configuration information

6) Transaction annotation
@Transactional(rollbackFor=Exception.class)
7) Global exception handling

  • @ControllerAdvice handles exceptions uniformly

  • @ExceptionHandler annotation declares exception handling method

2.6 Base class of exception class

Throwable (whose two main subclasses are Error and Exception)

2.7 What is the difference between SVN and Git?

SVN
SVN: a centralized version control system, based on a C/S architecture and heavily dependent on the server. It stores data in a central SVN repository; when the server is unavailable, version control can no longer be used.

GIT
Git is currently the most advanced distributed version control system in the world (one of very few). If any client of the system has a problem, all the code can be recovered from the other clients (even if the server is down).

The difference between SVN and GIT:

  • GIT is distributed, while SVN is centralized
  • GIT stores content as metadata, while SVN stores content as files.
  • Branches work differently in the two: SVN branches are easy to overlook, while Git can quickly switch between several branches in the same working directory and makes it easy to find branches that have not yet been merged.
  • GIT does not have a global version number, but SVN does
  • GIT's content integrity is better than SVN: GIT's content storage uses the SHA-1 hash algorithm. This ensures the integrity of code content and reduces disruption to the repository in the event of disk failures and network problems.

2.8 There are several ways to package Maven

pom, jar, and war packages

  • pom : used in a parent or aggregator project for version control of jar dependencies. An aggregator project must declare its packaging as pom.
    The aggregator project is only a tool that helps the other modules build and has no real content of its own; the actual project code is still written in the generated modules.
    Dependencies imported in the parent project are also shared with the child modules.

  • jar : the project's default packaging type; the project is packaged into a jar and used as a library that stores classes and utility classes used by other projects, which can reference it in their pom files.

  • war : packaged into a war and deployed on a server, e.g. a website or service. Users can access it directly through a browser, or other projects can call it through a published service.
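As a sketch, the packaging type is declared with the <packaging> element in pom.xml; the group, artifact, and module names below are made up for illustration:

```xml
<!-- aggregator / parent project: packaging must be pom -->
<project>
    <groupId>com.example</groupId>
    <artifactId>demo-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <modules>
        <module>demo-core</module> <!-- packaged as jar (the default) -->
        <module>demo-web</module>  <!-- declares <packaging>war</packaging> for server deployment -->
    </modules>
</project>
```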

2.9 How to implement multi-table query in MyBatis

1) Cascading attributes

Join query: use cascading property names to map the result set,
e.g. dept.id
e.g. dept.departmentName

Department table structure: tbl_dept(id, dept_name) (original image omitted)


     <resultMap type="com.atguigu.mybatis.bean.Employee" id="MyDifEmp">
          <id column="id" property="id"/>
          <result column="last_name" property="lastName"/>
          <result column="gender" property="gender"/>
          <result column="did" property="dept.id"/>
          <result column="dept_name" property="dept.departmentName"/>
     </resultMap>

	 <!-- public Employee getEmpAndDept(Integer id);-->
     <select id="getEmpAndDept" resultMap="MyDifEmp">
          SELECT e.id id, e.last_name last_name, e.gender gender, e.d_id d_id, d.id did, d.dept_name dept_name
          FROM tbl_employee e, tbl_dept d
          WHERE e.d_id=d.id AND e.id=#{id}
     </select>

2)association

  • Use association to define encapsulation rules for an associated single object
    	<resultMap type="com.atguigu.mybatis.bean.Employee" id="MyDifEmp2">
    		<id column="id" property="id"/>
    		<result column="last_name" property="lastName"/>
    		<result column="gender" property="gender"/>
    		<!-- association can specify the joined JavaBean object
    		property="dept": which property holds the joined object
    		javaType: the type of that property's object (must not be omitted)
    		-->
    		<!-- encapsulation rules of the association; the id below is the primary key of dept -->
    		<association property="dept" javaType="com.atguigu.mybatis.bean.Department">
    			<id column="did" property="id"/>
    			<result column="dept_name" property="departmentName"/>
    		</association>
    	</resultMap>
    
  • Use association for a step-by-step query:

    (1) First query the employee information by employee ID
    (2) Then query the department table for the department information using the d_id value from the employee query
    (3) Set the department on the employee

     <!--  id  last_name  email   gender    d_id   -->
    	 <resultMap type="com.atguigu.mybatis.bean.Employee" id="MyEmpByStep">
    	 	<id column="id" property="id"/>
    	 	<result column="last_name" property="lastName"/>
    	 	<result column="email" property="email"/>
    	 	<result column="gender" property="gender"/>
    	 	<!-- association定义关联对象的封装规则
    	 		select:表明当前属性是调用select指定的方法查出的结果
    	 		column:指定将哪一列的值传给这个方法
    	 		
    	 		流程:使用select指定的方法(传入column指定的这列参数的值)查出对象,并封装给property指定的属性
    	 	 -->
     		<association property="dept" 
    	 		select="com.atguigu.mybatis.dao.DepartmentMapper.getDeptById"
    	 		column="d_id">
     		</association>
    	 </resultMap>
    	  <!--  public Employee getEmpByIdStep(Integer id);-->
    	 <select id="getEmpByIdStep" resultMap="MyEmpByStep">
    	 	select * from tbl_employee where id=#{id}
    	 	<if test="_parameter!=null">
    	 		and 1=1
    	 	</if>
    	 </select>
    


2.10 Understanding Nginx

Reverse proxy, load balancing, and separation of static and dynamic content

2.11 Can multiple servers share port 80 in Nginx?

Yes, they can share port 80, with Nginx used for load balancing and reverse proxying. The detailed configuration is as follows:

  • Solution 1: Multiple different port services share port 80

    # forward to the admin side
    server {
        listen       80;
        server_name admin-xxxxx.xxx.xxx;
        location / {
            proxy_pass http://localhost:10003;
        }
    }
    # forward to the merchant side
    server {
        listen       80;
        server_name store-xxxxx.xxx.xxx;
        location / { 
            proxy_pass http://localhost:10002;
        }
    }
    
  • Solution 2: Multiple services share port 80

    // nginx.conf
    # nginx port 80 config (listening on the demo subdomain)
    server {
        listen  80;
        server_name     demo.test.com;
        location / {
            root   /home/www/demo;
            index  index.html index.htm;
        }
    }
      
    # nginx port 80 config (listening on the product subdomain)
    server {
        listen  80;
        server_name     product.test.com;
        location / {
            root   /home/www/product;
            index  index.html index.htm;
        }
    }
    

After the configuration is complete, save it, restart the nginx service, and test by accessing the domains.

2.11 Nginx load balancing

	# server 1
	server {
		listen 9001;
		server_name localhost;
		default_type text/html;
		
		location / {
			return 200 '<h1>server:9001</h1>';
		}
	}
	# server 2
	server {
		listen 9002;
		server_name localhost;
		default_type text/html;
		
		location / {
			return 200 '<h1>server:9002</h1>';
		}
	}
	# server 3
	server {
		listen 9003;
		server_name localhost;
		default_type text/html;
		
		location / {
			return 200 '<h1>server:9003</h1>';
		}
	}
	
	# proxy server
	# define the upstream server group
	upstream backend {
		server localhost:9001;
		server localhost:9002;
		server localhost:9003;
	}
	server {
		listen 8080;
		server_name localhost;
		
		location / {
			# backend is the name of the upstream server group
			proxy_pass http://backend/;
		}
	}

2.12 What is Redis? What are the advantages

Redis, whose full name is Remote Dictionary Server, is an open-source, log-structured key-value database written in ANSI C. It supports networking, can run in memory with optional persistence, and provides APIs in many languages.
Unlike MySQL, Redis stores its data in memory, so its reads and writes are very fast: it can handle more than 100,000 read/write operations per second. This is why Redis is widely used as a cache.
Redis is also often used for distributed locks.
In addition, Redis supports transactions, persistence, Lua scripting, LRU-driven eviction, and various cluster solutions.

2.13 Redis transaction mechanism

Redis implements its transaction mechanism through the MULTI, EXEC, DISCARD, and WATCH commands. A transaction executes multiple commands at once, and all commands in a transaction are serialized: during execution, the queued commands run serially and in order, and command requests submitted by other clients are never inserted into the transaction's command sequence.
In short, a Redis transaction is the sequential, one-shot, exclusive execution of a queue of commands.
The flow of a Redis transaction:

  • Start the transaction (MULTI)
  • Queue commands
  • Execute the transaction (EXEC)
  • Abort the transaction (DISCARD)
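The queueing behaviour described above can be modelled in a few lines of Java. This is only a toy illustration of MULTI/EXEC/DISCARD semantics (commands queue up and run serially on EXEC); it is not a Redis client, and all names here are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Toy model of Redis's MULTI / EXEC / DISCARD flow: commands are queued rather
// than executed immediately, and EXEC runs the whole queue serially, in order.
public class MiniTx {
    private final List<Supplier<String>> queue = new ArrayList<>();
    private boolean inMulti = false;

    public void multi() {             // start the transaction
        inMulti = true;
        queue.clear();
    }

    public void queueCmd(Supplier<String> command) { // queue a command
        if (inMulti) queue.add(command);
    }

    public void discard() {           // abort: drop all queued commands
        inMulti = false;
        queue.clear();
    }

    public List<String> exec() {      // execute queued commands serially, in order
        List<String> results = new ArrayList<>();
        for (Supplier<String> cmd : queue) {
            results.add(cmd.get());
        }
        inMulti = false;
        queue.clear();
        return results;
    }

    public static void main(String[] args) {
        MiniTx tx = new MiniTx();
        tx.multi();
        tx.queueCmd(() -> "SET k v -> OK");
        tx.queueCmd(() -> "GET k -> v");
        System.out.println(tx.exec()); // both commands run only now, in order
    }
}
```

Real Redis additionally uses WATCH for optimistic checks: if a watched key changes before EXEC, the whole transaction is aborted.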

2.14 CPU usage is too high: analyze and locate which line of code contains the bug

  • Step 1: find the process ID with the highest CPU usage (locate the process)
top

In the output, the %CPU column is the CPU usage, sorted in descending order by default, so first note the process ID (the PID column).
Having found the process, we next look for which thread inside it is using too much CPU.

  • Step 2: confirm it is the Java program
jps -l | grep <pid>

jps -l prints which Java program each pid corresponds to; grep is only there for filtering.

  • Step 3: locate the thread's tid.
    Locate the specific thread with the following command.
ps -mp <pid> -o THREAD,tid,time
  • Step 4: convert the tid to hexadecimal.
    The command below does the conversion on Linux; on Windows you can use the calculator.

    printf "%x\n" <tid obtained above>
    

You can also obtain the hexadecimal tid by converting the number yourself.

  • Finally, use jstack to locate the specific code.
    jstack prints the stack information; because it contains a lot of content, we need the hexadecimal thread ID to find the problem quickly.

    jstack <pid> | grep <hex tid> -A60
    

This command prints the stack of the given process, then filters it by the tid of the thread with the highest CPU usage; -A60 also prints the 60 lines following each match.

At this point you will see the error information, just like an exception printed to the console when running a program, and you can go to the offending line in the corresponding class. Generally, search by your project's package name first: if the frame is not inside your own project, the problem is unlikely to be in code you wrote; assume for the moment that third-party jars are fine, and come back to those frames only if the evidence points there.

2.15 Caching mechanism in MyBatis

A detailed MyBatis writeup: SSM study notes——MyBatis, which contains a thorough introduction.
First-level cache (local cache):
an SqlSession-level cache, which is always enabled.
It is a Map at the SqlSession level; data queried from the database during a session is placed in this local cache.
If the same data is needed again later, it is taken directly from the cache, with no need to query the database again.

Cases where the first-level cache misses:

1. The sqlSessions are different.
2. The sqlSession is the same, but the query conditions differ (the data is not yet in the first-level cache).
3. The sqlSession is the same, but an insert, update, or delete was executed between the two queries (it might have changed the current data).
4. The sqlSession is the same, but the first-level cache was manually cleared.
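The "Map at the SqlSession level" idea can be sketched in a few lines of Java. This is a toy model only, not the MyBatis API: a plain Map keyed by the query string, cleared by any write, with a counter showing how often we actually "go to the database".

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of MyBatis's first-level (SqlSession-scoped) cache.
public class SessionCacheDemo {
    private final Map<String, Object> localCache = new HashMap<>();
    int dbHits = 0; // counts how often we actually "go to the database"

    Object query(String sql) {
        // cache hit: return directly; cache miss: "query the database" and store it
        return localCache.computeIfAbsent(sql, k -> {
            dbHits++;
            return "row-for:" + k;
        });
    }

    void update(String sql) {
        localCache.clear(); // inserts/updates/deletes invalidate the first-level cache
    }

    public static void main(String[] args) {
        SessionCacheDemo session = new SessionCacheDemo();
        session.query("select * from emp where id = 1");
        session.query("select * from emp where id = 1"); // served from the cache
        System.out.println(session.dbHits);              // prints 1
        session.update("update emp set gender = 0");     // clears the cache
        session.query("select * from emp where id = 1");
        System.out.println(session.dbHits);              // prints 2
    }
}
```

Note how the model reproduces miss cases 2-4 above: a different query string, an intervening write, or a manual clear all force a fresh "database" lookup.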

2.16 The difference between #{ } and ${ } in MyBatis

Characteristics of the #{ } placeholder

  • To process #{ } placeholders, MyBatis uses the JDBC PreparedStatement object, which executes SQL statements more efficiently.

  • Using PreparedStatement avoids SQL injection and makes statement execution safer.

  • #{ } is usually used for column values, on the right-hand side of the equals sign in a SQL statement; the value at a #{ } position is handled according to its data type.

Characteristics of the ${ } placeholder

  • To process ${ } placeholders, MyBatis uses the JDBC Statement object, which executes SQL statements less efficiently than #{ }.

  • A ${ } value is spliced in by string concatenation, which carries a risk of SQL injection as well as other code-safety issues.

  • The data in a ${ } placeholder is inserted as-is, with no distinction between data types.

  • ${ } is often used for table names or column names; it is advisable to use ${ } only where data safety can be guaranteed.

The difference between the two:
1. # and $ have different semantics

  • # treats the incoming data as a string and adds quotes around it.
  • $ places the incoming data directly into the SQL statement, without adding quotes.

2. They are implemented differently
(1) $ amounts to string concatenation
(2) # amounts to variable-value substitution (parameter binding)

3. # and $ have different use cases
(1) In a SQL statement, to receive the value of a passed-in variable you must use #. Because # operates through the PreparedStatement interface, it prevents SQL injection and improves efficiency when the same statement is executed multiple times.

(2) Beware of SQL injection

$ is just simple string concatenation, so be especially careful about SQL injection issues. $ can be used for the non-variable parts of a statement; for example, $ is generally used to pass in database objects (such as table names).

For example:

select * from ${tableName}

When running the same query against different tables, ${ } can be used to pass the table name.

(3) If either # or $ would work in a SQL statement, prefer #.
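The difference can be made concrete in plain Java. This sketch only imitates what the two placeholder styles do to the SQL text; the table and column names are made up, and no real database or MyBatis code is involved.

```java
// ${}-style splices the raw input into the SQL text; #{}-style keeps the SQL
// fixed and binds the value later through a PreparedStatement placeholder.
public class SqlParamDemo {
    // ${}-style: raw string concatenation; the input becomes part of the SQL text
    static String dollarStyle(String input) {
        return "select * from t_user where name = '" + input + "'";
    }

    // #{}-style: the SQL text is fixed; the value is bound separately (e.g. setString)
    static String hashStyle() {
        return "select * from t_user where name = ?";
    }

    public static void main(String[] args) {
        String malicious = "x' or '1'='1";
        System.out.println(dollarStyle(malicious)); // the injected condition is now part of the SQL
        System.out.println(hashStyle());            // the SQL shape cannot be changed by the input
    }
}
```

With the concatenated version, the "name" value `x' or '1'='1` turns the WHERE clause into an always-true condition; with the placeholder version, the same input is just a literal value and the statement's structure cannot change.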

2.17 Framework for file upload and download

3 SQL related

3.1 Differences between delete, truncate and drop in Oracle

| | delete | truncate | drop |
|---|---|---|---|
| Effect | Deletes rows in the table (table structure is kept; can be rolled back) | Deletes the table and recreates an empty table with the original structure (an auto_increment counter is reset) | Deletes the table entirely |
| Language | DML (data manipulation language); within a transaction the deleted data can be rolled back | DDL (data definition language); deleted data cannot be rolled back | DDL (data definition language); deleted data cannot be rolled back |

1) Similar points:

  • Truncate, delete without where clause, and drop will delete the data in the table.
  • Drop and truncate are both DDL statements (data definition language), which will be automatically submitted after execution.

2) Differences:

  • truncate and delete only delete data without deleting the table's structure (definition).
    The drop statement also removes the constraints (constraints), triggers (trigger), and indexes (index) that depend on the table; stored procedures/functions that depend on the table are retained but become invalid.

  • Speed, generally speaking: drop > truncate > delete

  • The delete statement is data manipulation language (DML): inside a transaction it is not auto-committed and can be rolled back.
    truncate and drop are data definition language (DDL): the operation takes effect immediately and cannot be rolled back.

  • The delete statement does not affect the extents occupied by the table; the high-water mark stays where it was.
    The drop statement releases all the space occupied by the table.
    By default, the truncate statement releases space down to minextents extents, unless reuse storage is specified; truncate resets the high-water mark (back to the beginning).

  • Safety: be careful with drop and truncate, especially when there is no backup; otherwise it will be too late to cry.
    To delete some rows, use delete, and remember the where clause; the rollback segment must be large enough.
    To delete a table, of course use drop.
    To keep the table but delete all its rows, use truncate if no transaction is involved; if a transaction is involved, use delete.
    To defragment the table, use truncate with reuse storage, then re-import/insert the data.

  • For tables referenced by a FOREIGN KEY constraint you cannot use truncate; instead, use a delete statement without a WHERE clause.
    Because TRUNCATE TABLE is not logged, it cannot fire triggers.

  • TRUNCATE TABLE cannot be used on tables that participate in indexed views

3.2 The difference between views and tables in Oracle

  • A view is a compiled SQL statement; a table is not.

  • A view has no actual physical records; a table does.

  • The table is the content; the view is a window onto it.

  • A table uses physical space, while a view occupies none: a view is only a logical concept. A table can be modified directly at any time, while a view can only be changed by altering its creation statement.

  • The table is an inner schema and the view is an outer schema.

  • A view is a way of looking at a data table: it can query data composed of certain fields of the table and is just a stored SQL statement. From a security point of view, a view keeps users from accessing the data table directly, so they never learn the table structure.

  • Tables belong to the global schema and are real tables; views belong to the local schema and are virtual tables.

  • The creation and deletion of views only affects the view itself and does not affect the corresponding basic table.

3.3 Isolation levels of database transactions

| Isolation level | Dirty read | Non-repeatable read | Phantom read |
|---|---|---|---|
| READ UNCOMMITTED | possible | possible | possible |
| READ COMMITTED | prevented | possible | possible |
| REPEATABLE READ | prevented | prevented | possible |
| SERIALIZABLE | prevented | prevented | prevented |

  • Dirty read: reading data that other transactions have not yet committed. Uncommitted data may still be rolled back, meaning it may never end up in the database at all; reading data that may not ultimately exist is a dirty read.
  • Repeatable read: within one transaction, the data read at the start is consistent with the same batch of data read at any point before the transaction ends. This usually concerns data update (UPDATE) operations.
  • Non-repeatable read: within the same transaction, the same batch of data read at different times may differ, because it was affected by other transactions, e.g. another transaction modified this batch of data and committed. This also usually concerns UPDATE operations.
  • Phantom read: concerns data insertion (INSERT) operations. Suppose transaction A has changed the content of some rows but has not yet committed; transaction B then inserts rows identical to the records before A's change and commits before A commits. When A queries again, it looks as if its changes had no effect on some rows, when in fact transaction B just inserted them, which feels magical and hallucinatory to the user. This is called a phantom read.

3.4 Four major characteristics of transactions

  • Atomicity
    A transaction must either commit completely or fail and roll back completely; it cannot perform only part of its operations. This is the atomicity of a transaction.

  • Consistency
    Executing a transaction must not destroy the integrity and consistency of the database's data; the database must be in a consistent state both before and after a transaction executes. That is, a database transaction cannot break relational data integrity or business-logic consistency. For example, for a bank transfer transaction, whether it succeeds or fails, the total deposits of Tom and Jack in the ACCOUNTS table must still be 2,000 yuan after the transaction completes.

  • Isolation
    In a concurrent environment, concurrent transactions are isolated from each other, and the execution of one transaction must not be interfered with by other transactions. When different transactions operate on the same data concurrently, each transaction has its own complete data space: the operations and data used inside a transaction are isolated from other concurrent transactions, and concurrently executing transactions cannot interfere with one another.

  • Durability
    Once a transaction is committed, its changes to the corresponding data are saved in the database permanently. Even if a system crash or machine outage occurs, as long as the database can restart, it can be restored to the state in which the transaction completed successfully.
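The atomicity and consistency points above can be illustrated with a toy in-memory transfer, echoing the Tom-and-Jack example. This is only a sketch using a snapshot as a makeshift "undo log"; it is not a real database transaction and the account names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of atomicity/consistency: the transfer either applies both
// account updates or neither, so the invariant (total stays 2000) holds.
public class TransferDemo {
    static Map<String, Integer> accounts = new HashMap<>();

    static boolean transfer(String from, String to, int amount) {
        Map<String, Integer> snapshot = new HashMap<>(accounts); // acts as an "undo log"
        try {
            accounts.put(from, accounts.get(from) - amount);
            if (accounts.get(from) < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            accounts.put(to, accounts.get(to) + amount);
            return true;             // "commit": both updates stay
        } catch (RuntimeException e) {
            accounts = snapshot;     // "rollback": restore the pre-transaction state
            return false;
        }
    }

    public static void main(String[] args) {
        accounts.put("Tom", 1000);
        accounts.put("Jack", 1000);
        transfer("Tom", "Jack", 300);  // succeeds: 700 / 1300
        transfer("Tom", "Jack", 5000); // fails and rolls back: still 700 / 1300
        System.out.println(accounts.get("Tom") + accounts.get("Jack")); // prints 2000
    }
}
```

A real database achieves the same effect with undo/redo logs and, for durability, by forcing the log to disk at commit time.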

3.5 What causes an index to fail

  • If the query condition contains OR, the index may fail even if some of the conditions are indexed.

  • The index itself has become invalid

  • A LIKE query whose pattern starts with %

  • Violating the leftmost-prefix (leftmost matching) principle.
    For example, a table has three columns a, b, and c, and a composite index index(a, b, c) is created on them. The order of the index columns matters.

    -- When the index tree is built, a comes first, like the trunk of the tree.
    -- Without the leftmost column, the later indexed columns cannot be hit.
    select * from table where c = "3"; -- does not use the index
    
    select * from table where b = 2 and  c = "3"; -- does not use the index
    
    
  • If the column type is a string, the value must be quoted in the query condition; otherwise implicit type conversion prevents the index from being used.

  • Performing calculations on an indexed column causes the index to fail

  • If mysql estimates that a full table scan is faster than using an index, the index will not be used.

  • There is no query condition, or the query condition is not indexed

  • No leading column is used in query conditions

  • When the query would return a large portion of a big table (roughly more than 30% of its rows), MySQL tends to choose a full table scan instead of the index.
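
A quick way to check whether a query hits an index is EXPLAIN. A sketch against a hypothetical table t with the composite index(a, b, c) described above:

```sql
-- Queries that include the leftmost column a can use index(a, b, c):
EXPLAIN SELECT * FROM t WHERE a = 1;              -- can use the index on (a)
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;    -- can use the index on (a, b)
-- Skipping a falls back to a full scan:
EXPLAIN SELECT * FROM t WHERE b = 2 AND c = '3';  -- type = ALL, key = NULL
```

The key column of the EXPLAIN output shows which index, if any, was chosen.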

3.6 When an index is needed and when it is not

3.6.1. Situations where index creation is required

  • The primary key automatically creates a unique index
  • Fields that are frequently used as query conditions should be indexed
  • Fields associated with other tables in the query, foreign key relationships are indexed
  • When choosing between single-column and composite indexes, composite indexes are usually more cost-effective
  • Sort fields in queries — sorting through an index greatly improves sorting speed
  • Statistics or grouping fields in queries

3.6.2. When indexes are not needed

  • Tables with very few records
    Indexing a tiny table barely speeds up queries, yet still slows down table updates.

  • Tables that are frequently inserted into, deleted from, or updated
    When updating such a table, MySQL must maintain not only the data but also the index files. Likewise, columns whose values are highly repetitive and evenly distributed gain little from an index. Indexes should therefore only be created on columns that are frequently queried and frequently sorted.

  • Do not define redundant or duplicate indexes

  • It is not recommended to index columns with unordered values;

  • Delete indexes that are no longer used or rarely used;

  • It is best not to use indexes for tables with small data volumes;

  • Do not create indexes on columns with a large amount of duplicate data;

  • Avoid creating too many indexes on frequently updated tables;

  • Do not set indexes on fields that are not used in WHERE;

3.7 Answer the following questions based on the ER diagram

(ER diagram omitted; the three tables are described below.)
Student table S (student number Sno, student name Sname, gender Ssex, department Sdept)
Course table C (course number Cno, course name Cname, credits Ccredit)
Course-selection table SC (student number Sno, course number Cno, grade)

Questions

# 1. Query the basic information of students in the "CS" department
SELECT * FROM s WHERE  Sdept="CS";


# 2. Count the number of students in each department, in ascending order
SELECT Sdept,COUNT(*) AS num 
FROM s 
GROUP BY Sdept 
ORDER BY num;

# 3. Query the student numbers and names of students who took course "1" or "2"
SELECT DISTINCT s.Sno,Sname
FROM  s ,sc
where s.`Sno`=sc.`Sno` AND Cno IN ('1','2');

# 4. Query the student numbers, names and grades of students who took the course named "数据库" with a grade below 60
SELECT s.Sno,Sname,grade 
FROM s JOIN sc ON s.`Sno`=sc.`Sno` 
JOIN c ON sc.`Cno`=c.`Cno`
WHERE Cname ="数据库" AND grade<60;

# 5. Query the student numbers of students who took at least 3 courses
SELECT sno 
FROM sc 
GROUP BY sc.Sno 
HAVING COUNT(*)>=3;

# 6. Query the student numbers of students with at least one course grade above 80
SELECT sno
FROM sc 
GROUP BY sc.Sno 
HAVING MAX(grade)>80;

# 7. Query the student numbers of students whose course grades are all above 80
SELECT sno 
FROM sc 
GROUP BY sc.Sno 
HAVING MIN(grade)>80;

# 8. Query the student numbers of students whose average course grade is above 80
SELECT sno
FROM sc 
GROUP BY sc.Sno 
HAVING AVG(grade)>80;

Data preparation

Create database and set character set

CREATE DATABASE IF NOT EXISTS test DEFAULT CHARSET utf8;
use test;

# Create the student table: student number, name, age, gender, department
CREATE TABLE s
(
Sno VARCHAR(7) PRIMARY KEY,
Sname VARCHAR(10) NOT NULL,
Sage INT,
Ssex VARCHAR(2),
Sdept VARCHAR(20) DEFAULT '计算机系'
);

# Create the course table: course number, course name, prerequisite course number, credits
CREATE TABLE c
(
Cno VARCHAR(10) PRIMARY KEY,
Cname VARCHAR(20) NOT NULL,
Cpno VARCHAR(10),
Ccredit INT
);

# Create the course-selection table
CREATE TABLE sc
(
Sno VARCHAR(7),
Cno VARCHAR(10),
grade INT,
FOREIGN KEY (Sno) REFERENCES s(Sno),
FOREIGN KEY (Cno) REFERENCES c(Cno)
);

#  Insert data into student table S
INSERT INTO s
   (Sno,Sname,Sage,Ssex,Sdept)
VALUES
   ("10001","张三",20,'男','计算机'),
   ("10002","李梅",19,'女','计算机'),
   ("10003","王五",18,'男','CS'),
   ("10004","小明",21,'男','计算机'),
   ("10006","黎明",18,'男','艺术表演'),
   ("10008","杰克",21,'男','计算机'),
   ("10005","小红",22,'女','CS');

#  Insert data into course table C   
INSERT INTO c
   (Cno,Cname,Cpno,Ccredit)
VALUES
   ("1","离散数学",NULL,5),
   ("2","线性代数",'3',6),
   ("3","高等数学",NULL,4),
   ("4","数据结构",'3',6),
   ("5","操作系统",'1',4),
   ("6","数据库",'4',5);

#  Insert data into course-selection table SC  
INSERT INTO sc
  (Sno,Cno,grade)
VALUES
  ("10001","1",70),
  ("10001","6",56),
  ("10003","4",90),
  ("10003","5",83),
  ("10004","1",75),
  ("10004","3",90),
  ("10008","1",70),
  ("10008","5",70),
  ("10008","6",88),
  ("10002","1",85),
  ("10002","6",89);

3.8 Query the top five rows in each group

Question: query the top five grades in each class in the student table.
Data preparation:

DROP TABLE IF EXISTS `student`;
CREATE TABLE `student` (
  `id` int(40) NOT NULL AUTO_INCREMENT,
  `name` varchar(40) DEFAULT NULL,
  `age` int(20) DEFAULT NULL,
  `class_id` int(50) DEFAULT NULL,
  `class_name` varchar(40) DEFAULT NULL,
  `grad` int(30) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=29 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of student
-- ----------------------------
INSERT INTO `student` VALUES ('1', 'Tom', '19', '1', '计算机系', '80');
INSERT INTO `student` VALUES ('2', 'Jun', '20', '1', '计算机系', '90');
INSERT INTO `student` VALUES ('3', 'Tim', '23', '2', '数学系', '78');
INSERT INTO `student` VALUES ('4', 'Pom', '21', '1', '计算机系', '87');
INSERT INTO `student` VALUES ('5', 'wom', '23', '1', '计算机系', '70');
INSERT INTO `student` VALUES ('6', 'Jury', '23', '1', '计算机系', '78');
INSERT INTO `student` VALUES ('7', 'Tim', '22', '1', '计算机系', '74');
INSERT INTO `student` VALUES ('8', 'Tom', '20', '1', '计算机系', '75');
INSERT INTO `student` VALUES ('9', 'ERT', '13', '1', '计算机系', '76');
INSERT INTO `student` VALUES ('10', 'RYn', '18', '1', '计算机系', '71');
INSERT INTO `student` VALUES ('11', 'Qom', '20', '2', '数学系', '78');
INSERT INTO `student` VALUES ('12', 'Jury', '23', '2', '数学系', '98');
INSERT INTO `student` VALUES ('13', 'Jim', '22', '3', '中文系', '99');
INSERT INTO `student` VALUES ('14', 'Kom', '20', '3', '中文系', '77');
INSERT INTO `student` VALUES ('15', 'ORT', '13', '3', '中文系', '32');
INSERT INTO `student` VALUES ('16', 'TTYn', '18', '3', '中文系', '56');
INSERT INTO `student` VALUES ('17', 'SM', '18', '3', '外语系', '78');
INSERT INTO `student` VALUES ('18', 'Hid', '23', '3', '外语系', '98');
INSERT INTO `student` VALUES ('19', 'Wed', '22', '3', '外语系', '58');
INSERT INTO `student` VALUES ('20', 'Uim', '20', '3', '外语系', '65');
INSERT INTO `student` VALUES ('21', 'Dom', '13', '3', '外语系', '32');
INSERT INTO `student` VALUES ('22', 'Kfg', '18', '3', '外语系', '56');
INSERT INTO `student` VALUES ('23', 'SM', '18', '2', '数学系', '96');
INSERT INTO `student` VALUES ('24', 'Hid', '23', '2', '数学系', '95');
INSERT INTO `student` VALUES ('25', 'Wed', '22', '2', '数学系', '94');
INSERT INTO `student` VALUES ('26', 'Uim', '20', '2', '数学系', '93');
INSERT INTO `student` VALUES ('27', 'Dom', '13', '2', '数学系', '92');
INSERT INTO `student` VALUES ('28', 'Kfg', '18', '2', '数学系', '91');

Query:

-- For each row a, count the classmates with a strictly higher grade;
-- keep the row if fewer than 5 exist (i.e. a is within the top 5, ties included).
select a.* 
from student a 
	where 5 > (
	select count(*) 
	from student b
	where b.class_id = a.class_id 
	and b.grad > a.grad 
) 
order by a.class_id, a.grad DESC;
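
On MySQL 8.0+, the same top-5-per-class result can be written with a window function; note that ROW_NUMBER returns exactly five rows per class, whereas the correlated-count query keeps ties:

```sql
SELECT id, name, class_id, class_name, grad
FROM (
    -- number the rows within each class, highest grade first
    SELECT s.*, ROW_NUMBER() OVER (PARTITION BY class_id ORDER BY grad DESC) AS rn
    FROM student s
) t
WHERE rn <= 5
ORDER BY class_id, grad DESC;
```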


3.9 Optimistic locking and pessimistic locking

Pessimistic locking:

A pessimistic lock always assumes the worst case: every time you fetch the data, you assume someone else will modify it, so you lock it on every access. Anyone else who wants the data then blocks until they can acquire the lock (the shared resource is used by one thread at a time; other threads block and the resource is handed over afterwards).
Traditional relational databases use many such lock mechanisms — row locks, table locks, read locks, write locks — all of which lock before the operation. Exclusive locks such as Java's synchronized and ReentrantLock are implementations of the pessimistic locking idea.
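
A minimal in-memory sketch of the "lock first, then operate" idea, using synchronized (the Account class here is illustrative, not from any library):

```java
// Illustrative pessimistic locking: the intrinsic lock is acquired before
// every access, so concurrent withdrawals cannot interleave.
public class Account {
    private int balance;

    public Account(int balance) { this.balance = balance; }

    public synchronized void withdraw(int amount) {
        if (balance >= amount) {
            balance -= amount;   // only one thread can be in here at a time
        }
    }

    public synchronized int balance() { return balance; }

    public static void main(String[] args) {
        Account a = new Account(100);
        a.withdraw(30);
        System.out.println(a.balance()); // prints 70
    }
}
```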

Optimistic locking:

An optimistic lock always assumes the best case: every time you fetch the data, you assume no one else will modify it, so no lock is taken. When updating, you check whether anyone else updated the data in the meantime, typically via a version-number mechanism or the CAS (compare-and-swap) algorithm.
Optimistic locking suits read-heavy applications and can improve throughput. Mechanisms like the write_condition provided by some databases are forms of optimistic locking. The atomic variable classes under Java's java.util.concurrent.atomic package are implemented with CAS, an implementation of optimistic locking.
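
As a sketch, the CAS retry loop behind Java's atomic classes looks like this (the stock-decrement scenario is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative optimistic update: read the current value, compute the new one,
// and write it back only if nobody changed it in between; retry on conflict.
public class CasStock {
    static int decrement(AtomicInteger stock) {
        while (true) {
            int current = stock.get();                 // optimistic read
            int next = current - 1;
            if (stock.compareAndSet(current, next)) {  // succeeds only if still 'current'
                return next;
            }
            // a lost race simply loops and retries — no blocking
        }
    }

    public static void main(String[] args) {
        AtomicInteger stock = new AtomicInteger(10);
        System.out.println(decrement(stock)); // prints 9
    }
}
```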

Mybatis-Plus implements optimistic locking
1. Add a version field to the table.
2. Annotate the matching field of the entity class with @Version (entity screenshot omitted).
3. Add the optimistic-lock plugin configuration:

<!-- configure the optimistic lock plugin -->
<bean id="optimisticLockerInnerInterceptor"
      class="com.baomidou.mybatisplus.extension.plugins.inner.OptimisticLockerInnerInterceptor">
</bean>

4. Spring Boot implementation:

@Configuration
public class MybatisPlusConfig {

	@Bean
	public MybatisPlusInterceptor mybatisPlusInterceptor() {
		MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
		// pagination plugin
		interceptor.addInnerInterceptor(new PaginationInnerInterceptor());
		// guard against accidental full-table update/delete
		interceptor.addInnerInterceptor(new BlockAttackInnerInterceptor());
		// optimistic lock plugin
		interceptor.addInnerInterceptor(new OptimisticLockerInnerInterceptor());
		return interceptor;
	}
}

3.10 Stored procedures

Stored procedure: a collection of SQL statements that has been compiled in advance and stored in the database; it is code encapsulation and reuse at the database's SQL level.
Stored procedure parameters:

  • IN parameters accept data passed in by the caller;
  • OUT parameters return data to the caller;
  • INOUT parameters can both accept data from the caller and return data to the caller.

Advantages

  • Stored procedures simplify complex operations by encapsulating processing in easy-to-use units.

  • Simplified change management. If a table name, column name, or piece of business logic changes, only the stored procedure's code needs to change; its callers do not have to change theirs.

  • Usually stored procedures are helpful in improving application performance. When the created stored procedure is compiled, it is stored in the database.
    However, MySQL implements stored procedures slightly differently.
    MySQL stored procedures are compiled on demand. After compiling the stored procedure, MySQL places it in the cache.
    MySQL maintains its own cache of stored procedures for each connection. If the application uses the stored procedure multiple times in a single connection, use the compiled version, otherwise the stored procedure works like a query.

  • Stored procedures help reduce traffic between the application and the database server.
    Because the application does not have to send multiple lengthy SQL statements, it only needs to send the name and parameters of the stored procedure.

  • Stored procedures are reusable and transparent to any application. Stored procedures expose the database interface to all applications so that developers do not have to develop functionality that is already supported in stored procedures.

  • Stored programs are safe. The database administrator can grant appropriate permissions to applications that access stored procedures in the database, without providing any permissions to the underlying database tables.

Disadvantages

  • Large memory footprint and increased CPU utilization
    If many stored procedures are used, the memory usage of every connection using them increases significantly.
    Moreover, overusing logical operations inside stored procedures drives up CPU usage, because MySQL was designed for efficient querying rather than for logical computation.

  • The structure of stored procedures makes it difficult to develop stored procedures with complex business logic.

  • Stored procedures are difficult to debug. Only a few database management systems allow debugging of stored procedures, and MySQL does not provide this capability.

  • Developing and maintaining stored procedures is not easy.
    Developing and maintaining stored procedures often requires a specialized skill set that not all application developers possess. This can cause problems during application development and maintenance phases.

  • High dependence on the specific database; poor portability.

Define a stored procedure with parameters:

First, define a student table (table screenshot omitted).
Now we want to count how many rows in the student table have sex = '男' (male).

DELIMITER $$

CREATE
    PROCEDURE `demo`.`demo2`(IN s_sex CHAR(1),OUT s_count INT)
	-- stored procedure body
	BEGIN
		-- assign the query result to the variable via INTO
		SELECT COUNT(*) INTO s_count FROM student WHERE sex= s_sex;
		SELECT s_count;
		
	END$$
DELIMITER ;

Call this stored procedure

-- @s_count holds the output parameter
CALL demo2 ('男',@s_count);

For details, please see : Stored procedures in MySQL (details)

3.11 The difference between transactions and locks

Locks:
Pessimistic locking assumes that, while you are modifying the data, other transactions will also try to modify it; optimistic locking assumes that no other transaction will modify the data in the short term. The "lock" we usually talk about is the pessimistic one: the data is put into a locked state (implemented by the database) while it is being processed.
Transactions:
A transaction is the unit of concurrency control — a user-defined sequence of operations that satisfies the ACID properties (atomicity, consistency, isolation and durability).
Locks are a mechanism for implementing isolation: a transaction's isolation level is realized through the lock mechanism, and locks come in different granularities. There are generally four transaction isolation levels:

  • Read uncommitted
  • Read committed
  • Repeatable read
  • Serializable

Important: General transactions use pessimistic locking (exclusive).
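
As a sketch, the isolation level can be inspected and changed per session (the variable name assumes MySQL 8.0; versions before 5.7.20 use tx_isolation):

```sql
SELECT @@transaction_isolation;   -- InnoDB defaults to REPEATABLE-READ
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```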

3.12 Commonly used database engines

Commonly used engines are MyISAM and InnoDB.
InnoDB: supports transactions, foreign keys and row-level locks; full-text indexes were not supported until MySQL 5.6.
MyISAM: does not support transactions, foreign keys or row-level locks; supports full-text indexes.

3.13 Viewing historical execution times in MySQL

You can use the tables of the performance_schema system database:

USE performance_schema;

Then, enter the following command to view the historical execution time of all executed statements:

SELECT * FROM events_statements_history;

The history tables do not store wall-clock timestamps; each statement's execution time is recorded in the TIMER_WAIT column, in picoseconds. For example, to list recent statements ordered by execution time:

SELECT EVENT_ID, SQL_TEXT, TIMER_WAIT/1000000000000 AS seconds
FROM events_statements_history
ORDER BY TIMER_WAIT DESC;

3.14 MySQL query syntax

The clauses are written in the order below; the circled numbers indicate the logical execution order:

select   query list            ⑦
from     table 1 [alias]       ①
[join type] join table 2       ②
on       join condition        ③
where    filter condition      ④
group by grouping list         ⑤
having   post-grouping filter  ⑥
order by sort list             ⑧
limit    start index, count    ⑨
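
A sketch using every clause in the written order (the emp and dept tables and their columns are hypothetical):

```sql
SELECT d.dname, AVG(e.salary) AS avg_sal   -- ⑦
FROM emp e                                 -- ①
JOIN dept d                                -- ②
  ON e.dept_id = d.id                      -- ③
WHERE e.age > 25                           -- ④
GROUP BY d.dname                           -- ⑤
HAVING avg_sal > 5000                      -- ⑥ (MySQL allows select aliases here)
ORDER BY avg_sal DESC                      -- ⑧
LIMIT 0, 5;                                -- ⑨
```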

3.15 What is ORM? Have you ever used related frameworks?

The Object Relational Mapping ( Object Relational Mapping , for short ORM) pattern is a technology designed to solve the mismatch between object-oriented and relational databases. ORMThe framework is a bridge connecting the database. As long as the mapping relationship between the persistence class and the table is provided, ORMthe framework can refer to the information in the mapping file at runtime to persist the object into the database.
ORMFramework: A framework designed to solve the mismatch between surface objects and relational databases.

  • HibernateFully automatic requires writing hqlstatements
  • iBATISSemi-automatically write your own sqlstatements, highly maneuverable and compact
  • mybatis
  • eclipseLink
  • JFinal

Origin blog.csdn.net/CSDN_Admin0/article/details/131719225