Java Interview Questions - Advanced (2022.4 latest summary)

Java Interview Questions - Advanced

1. Basics

1.1 Basic data types and wrapper classes

1. Basic data types and wrapper classes

Basic type | Size (bytes) | Default value | Wrapper class
byte       | 1            | (byte)0      | Byte
short      | 2            | (short)0     | Short
int        | 4            | 0            | Integer
long       | 8            | 0L           | Long
float      | 4            | 0.0f         | Float
double     | 8            | 0.0d         | Double
boolean    | 1            | false        | Boolean
char       | 2            | '\u0000'     | Character

2. Are int and Integer the same byte size?

An int is always 4 bytes. The int value held inside an Integer is also 4 bytes, but the Integer object itself additionally carries object-header overhead on the heap.

1.2 What problems may arise when converting a double to BigDecimal? How to deal with them?

Loss of precision.
Do not pass a double directly to the BigDecimal constructor: some values will lose precision, because computers store doubles in binary and certain decimal fractions cannot be represented exactly.

Solution: there are two common ways to convert a double to a BigDecimal safely:

  • First convert the double to a String, then use the BigDecimal(String) constructor; no precision problem arises;
  • Call BigDecimal.valueOf() directly. Internally it also converts to a String first (via Double.toString) and then uses the String constructor.
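A minimal sketch of the difference (the printed values are what the JDK actually produces):

import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // The double constructor takes the exact binary value -- the precision problem becomes visible
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // Recommended: go through String, or use valueOf (which calls Double.toString internally)
        System.out.println(new BigDecimal("0.1"));   // 0.1
        System.out.println(BigDecimal.valueOf(0.1)); // 0.1
    }
}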

1.3 What is the difference between equals and ==?

==

== compares the (heap) memory address stored in the variable's (stack) memory and is used to determine whether two references point to the same object. It is a pointer comparison in the true sense.

  • What is compared is whether the operands on both ends of the operator are the same object.
  • The operands on both sides must be of compatible types (parent and child classes are allowed), or the code will not compile.
  • For reference types the comparison is by address; for primitive numeric types it is by value, with numeric promotion applied first, for example:
    int a = 10, long b = 10L and double c = 10.0 all compare equal (true), because the values are promoted and compared numerically, not by address.

equals

equals is used to compare whether the contents of two objects are equal. Since all classes inherit from java.lang.Object, it is applicable to every object. If a class does not override the method, the call falls through to Object, whose equals implementation simply returns the result of ==.
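A small sketch of the difference, using String (which overrides equals) and Object (which does not):

public class EqualsDemo {
    public static void main(String[] args) {
        String a = new String("hello");
        String b = new String("hello");

        System.out.println(a == b);      // false: two distinct objects on the heap
        System.out.println(a.equals(b)); // true: String overrides equals to compare contents

        Object x = new Object();
        Object y = new Object();
        System.out.println(x.equals(y)); // false: Object.equals falls back to ==
    }
}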

1.4 How many ways does Java create objects?

  • new creates a new object
  • through the reflection mechanism
  • Using the clone mechanism
  • through the serialization mechanism

1.5 Reflection mechanism in Java

1. Definition

The reflection mechanism is that at runtime, for any class, you can know all the properties and methods of this class; for any object, you can call any of its methods. This function of dynamically obtaining class information and dynamically calling object methods is called the reflection mechanism of the Java language.

2. Implementation of reflection

There are 4 methods to get the Class object:

  • Class.forName("fully qualified class name");
  • ClassName.class
  • objectName.getClass()
  • For primitive types, WrapperClass.TYPE yields the Class object of the corresponding primitive (e.g. Integer.TYPE == int.class)
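A quick sketch of all four ways (the classes used are arbitrary examples):

public class ClassObjectDemo {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c1 = Class.forName("java.lang.String"); // by fully qualified name
        Class<?> c2 = String.class;                      // class literal
        Class<?> c3 = "hello".getClass();                // from an instance
        Class<?> c4 = Integer.TYPE;                      // Class object of the primitive int

        System.out.println(c1 == c2 && c2 == c3); // true: one Class object per loaded class
        System.out.println(c4);                   // int
    }
}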

3. Get object information

  • Class : Represents classes and interfaces in a running Java application
  • Field : Provides attribute information about classes and interfaces, and dynamic access to it.
  • Constructor : Provides information about a single constructor of a class and its access rights
  • Method : Provides information about a method in a class or interface

4. Advantages and disadvantages of reflection mechanism

advantage

  • Ability to dynamically obtain instances of classes at runtime to improve flexibility;
  • combined with dynamic compilation

shortcoming

  • Reflection has low performance: class metadata must be resolved and access checks performed at runtime in order to operate on objects in memory.
  • Relatively unsafe and breaks encapsulation (because private methods and properties can be obtained through reflection).

Low performance solution:

1. Turn off the JDK access checks with setAccessible(true) to speed up reflective calls;
2. When creating instances of the same class many times, cache the Class/Constructor objects, which is much faster;
3. Use the ReflectASM library, which speeds up reflection through bytecode generation.

1.6 Use of the clone method in Java

1. Object cloning
To use the clone() method, the object must implement the Cloneable interface and override the clone method

2. Shallow copy and deep copy

  • Shallow copy: all field values of the copy are the same as the original object's, and all reference fields still point to the same objects as the original.

  • Deep copy: on the basis of a shallow copy, all fields that refer to other objects are cloned as well, so they point to newly copied objects.

Directly use the clone method of the object to perform a shallow copy;

To implement a deep copy, besides calling Object's clone method to get a new object, the reference fields in the class must also be cloned. This requires that the referenced classes also implement the Cloneable interface and override the clone method.
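A minimal sketch of a deep copy; Person and Address are hypothetical types chosen for illustration:

class Address implements Cloneable {
    String city;
    Address(String city) { this.city = city; }
    @Override
    protected Address clone() throws CloneNotSupportedException {
        return (Address) super.clone(); // only immutable fields here, so a shallow copy suffices
    }
}

class Person implements Cloneable {
    String name;
    Address address;
    Person(String name, Address address) { this.name = name; this.address = address; }
    @Override
    protected Person clone() throws CloneNotSupportedException {
        Person copy = (Person) super.clone(); // shallow copy of all fields
        copy.address = address.clone();       // deep copy: clone the referenced object too
        return copy;
    }
}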

1.7 Usage of serialization in Java

1. Serialization
A class only needs to implement the java.io.Serializable interface when it is defined, and Java can serialize it automatically. Note that fields marked transient and static fields are not serialized.

2. Serializing an object
The ObjectOutputStream class is used to serialize an object; its writeObject() method can serialize a Java object into a file (.ser).

3. Deserializing an object
The ObjectInputStream class is used to deserialize an object; its readObject() method can deserialize a .ser file back into a Java object.
The try/catch block around readObject() needs to catch ClassNotFoundException: for the JVM to deserialize an object, it must be able to find the class's bytecode.
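A minimal round trip, assuming an illustrative User class and file name "user.ser":

import java.io.*;

class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    transient String password; // transient fields are skipped during serialization
    User(String name, String password) { this.name = name; this.password = password; }
}

public class SerializationDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("user.ser"))) {
            out.writeObject(new User("alice", "secret"));
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("user.ser"))) {
            User u = (User) in.readObject(); // may throw ClassNotFoundException
            System.out.println(u.name + ", password=" + u.password); // alice, password=null
        }
    }
}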

1.8 usage of final keyword

  • A final class cannot be inherited and has no subclasses. Methods in a final class are implicitly final.
  • A final method cannot be overridden by subclasses, but it is inherited.
  • A final member variable denotes a constant: it can be assigned only once, and its value cannot change afterwards.
  • final cannot be used on constructors.
  • For a final reference variable, the reference itself cannot be re-pointed, but for container types (such as ArrayList and HashMap) the objects stored inside the container can still be modified.

Use final to modify variables, pay attention to:

  • A final variable must be assigned an initial value (at declaration, in an initializer/static block, or in the constructor);
  • A final modified variable can only be assigned once;

1.9 usage of static keyword

1. Usage scenarios of the static keyword

  • Static variables and static methods belong to the class's static resources and are shared by all instances of the class.
  • Static blocks are mostly used for one-time initialization work.
  • Static inner classes.
  • Static imports let you use a resource name directly without qualifying it with the class name.

2. The initialization order of static variables, member variables, and constructors

First initialize the parent class static --> then initialize the static of the subclass --> then initialize other member variables of the parent class --> parent class construction method --> other member variables of the subclass --> subclass construction method.

Example: What is the output of this code?

public class Test {

    Person person = new Person("Test");

    static {
        System.out.println("test static");
    }

    public Test() {
        System.out.println("test constructor");
    }

    public static void main(String[] args) {
        new MyClass();
    }
}

class Person {

    static {
        System.out.println("person static");
    }

    public Person(String str) {
        System.out.println("person " + str);
    }
}

class MyClass extends Test {

    Person person = new Person("MyClass");

    static {
        System.out.println("myclass static");
    }

    public MyClass() {
        System.out.println("myclass constructor");
    }
}
Output:

test static
myclass static
person static
person Test
test constructor
person MyClass
myclass constructor

Process analysis:

1. Find the main method entry; the main method is the program entry, but before executing it, the Test class must first be loaded.
2. When loading the Test class, its static block is found and executed first, printing test static.
3. Then new MyClass() is executed. Before executing this code, the MyClass class must be loaded, and since MyClass extends Test, Test must be loaded first; Test has already been loaded.
4. While loading the MyClass class, its static block is found and executed first, printing myclass static.
5. Then the MyClass constructor is called to create the object. Before the object is created, the member variables of the parent class Test must be initialized: the code Person person = new Person("Test") runs and finds that the Person class is not yet loaded.
6. The Person class is loaded; its static block is found and executed first, printing person static.
7. Then the Person constructor executes, printing person Test.
8. Then the parent class Test's constructor is called, printing test constructor; this completes the initialization of the parent class Test.
9. Then the member variables of MyClass are initialized, executing the Person constructor and printing person MyClass.
10. Finally, the MyClass constructor is called, printing myclass constructor; this completes the initialization of MyClass.

3. Static access restrictions

Non-static member methods and non-static member variables cannot be accessed in static methods, but static member methods and static member variables can be accessed in non-static member methods.

4. static garbage collection

Static methods and variables belong to the class, not to any instance. They exist in memory from the time the JVM loads the class and are not reclaimed by the GC while the class stays loaded, so heavy use of static state increases the memory footprint. Objects used only inside non-static methods become eligible for GC after the method finishes, which reduces memory pressure.

5. The misunderstanding of the static keyword

  • The static keyword cannot change the access rights of variables and methods .
  • Although static member variables are independent of objects, it does not mean that they cannot be accessed through objects. All static methods and static variables can be accessed through objects (as long as the access rights are sufficient).
  • static is not allowed to modify local variables.

1.10 What is the difference between an abstract class and an interface?

Characteristics of abstract classes

  • Abstract classes cannot be instantiated, that is, objects cannot be created with the new keyword; they can only be inherited;
  • A class containing an abstract method must be an abstract class, but an abstract class does not necessarily contain abstract methods;
  • The modifier of an abstract method in an abstract class can only be public or protected; the default is public;
  • An abstract method has only a declaration and no method body;
  • If a subclass implements all the abstract methods of its parent (abstract) class, the subclass need not be abstract; otherwise it must itself be declared abstract;
  • Abstract classes can contain fields, methods, and constructors, but the constructors cannot be used for instantiation; their main purpose is to be called by subclasses.

Interface Features

  • Interfaces can contain variables and methods; variables are implicitly public static final, and methods are implicitly public abstract (before JDK 1.8);

  • The interface supports multiple inheritance, that is, an interface can extend multiple interfaces, which indirectly solves the problem of single inheritance of classes in Java;

  • New features have been added to interfaces in JDK1.8:

    • Default methods: JDK 1.8 allows non-abstract method implementations to be added to interfaces, but they must be marked with the default keyword. Implementing classes need not override default methods, which are invoked on instances of implementing classes. If a class implements multiple interfaces that contain the same default method, the class must override that method.

    • Static methods: JDK 1.8 allows a method in an interface to be marked with the static keyword and given an implementation, called an interface static method. Interface static methods can only be called through the interface (InterfaceName.staticMethodName()).
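Both JDK 1.8 features in one small sketch (Greeter and EnglishGreeter are illustrative names):

interface Greeter {
    String name(); // abstract method

    // default method: implementing classes inherit it and may override it
    default String greet() { return "Hello, " + name(); }

    // static method: callable only as Greeter.defaultName()
    static String defaultName() { return "world"; }
}

class EnglishGreeter implements Greeter {
    public String name() { return Greeter.defaultName(); }
}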

The difference between abstract class and interface

Difference                  | Abstract class                                   | Interface
Member variables            | Variables or constants                           | Constants only
Constructors                | Yes                                              | No
Member methods              | Abstract or concrete                             | Abstract only (default and static methods since JDK 1.8)
Relationship with classes   | Inheritance (extends), single inheritance        | Implementation (implements), single or multiple implementation
Relationship with interfaces| -                                                | Inheritance (extends), single or multiple inheritance
Design concept              | Inheritance is an "is-a" relationship; the abstract class defines the common functionality of the inheritance hierarchy | Implementation is a "like-a" relationship; the interface defines the extension functionality of the hierarchy

1.11 What is the difference between List, Set, and Map?

List (list)

The elements of a List are stored linearly and may contain duplicate objects. List has two main implementation classes:

ArrayList: a variable-length array that supports random access to elements; inserting into and deleting from an ArrayList is slow. In JDK 8, ArrayList growth is implemented in the grow() method with newCapacity = oldCapacity + (oldCapacity >> 1) (that is, 1.5x expansion), followed by Arrays.copyOf() to copy the original array.

LinkedList: uses a linked-list data structure; insertion and deletion are fast, but random access is slow.

Set (collection)

Objects in a Set are not kept in insertion order (HashSet stores them by hash code), and duplicates are not allowed. Set has two main implementation classes:

HashSet: accesses the objects in the set by a hash algorithm, so access is relatively fast. When the number of elements exceeds capacity * loadFactor (default load factor 0.75), the backing HashMap roughly doubles its capacity (newCapacity = oldCapacity << 1).

TreeSet: implements the SortedSet interface and keeps the objects in the collection sorted.

Map (mapping)

A Map is a collection that maps key objects to value objects. Keys cannot be duplicated; each element contains one key object and one value object. Map mainly has the following implementation classes:

HashMap: HashMap is implemented based on a hash table. The cost of inserting and querying <K, V> is fixed, and the performance of the container can be adjusted by setting the capacity and load factor through the constructor.

LinkedHashMap: Similar to HashMap, but when iterating through it, the order of getting <K, V> is its insertion order, or the least recently used (LRU) order.

TreeMap: TreeMap is implemented based on red-black tree. When viewing <K,V>, they are sorted. TreeMap is the only Map with a subMap() method that returns a subtree.

1.12 Map traversal method

1. Using entrySet in a for loop (the most common and most used)

for (Map.Entry<String, String> entry : map.entrySet()) {
    String mapKey = entry.getKey();
    String mapValue = entry.getValue();
    System.out.println(mapKey + ":" + mapValue);
}

2. Use a for loop to traverse the keys or the values, which is applicable when you only need the keys or only the values of the Map. Performance is slightly better than entrySet.

// Iterate over the key set (keySet)
for (String key : map.keySet()) {
    System.out.println(key);
}
// Iterate over the value collection (values)
for (String value : map.values()) {
    System.out.println(value);
}

3. Use iterator (Iterator) to traverse

Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<String, String> entry = it.next();
    String key = entry.getKey();
    String value = entry.getValue();
    System.out.println(key + ":" + value);
}

4. Traversing through keys to find values, this method is relatively inefficient

for (String key : map.keySet()) {
    String value = map.get(key);
    System.out.println(key + ":" + value);
}

1.13 What is the difference between HashMap and HashTable?

                               | HashMap                                            | Hashtable
Parent class                   | AbstractMap                                        | Dictionary
Thread safety                  | Not thread-safe                                    | Thread-safe
Efficiency                     | High                                               | Low
null support                   | One null key allowed; multiple null values allowed | Neither keys nor values may be null
Initial capacity and expansion | Default capacity 16; expansion doubles it (2n)     | Default capacity 11; expansion yields 2n+1
Hash computation               | (h = key.hashCode()) ^ (h >>> 16)                  | key.hashCode()

1.14 Why is HashMap not thread-safe?

  • The same key during put causes the value of one of the threads to be overwritten;
  • Multiple threads expand capacity at the same time, resulting in data loss;
  • When multi-threaded expansion causes the Node linked list to form a ring structure, resulting in an infinite loop

1.15 Mechanism of HashMap

1. Storage structure

In JDK 1.7, it consists of "array + linked list". The array is the main body of HashMap; the linked list exists mainly to resolve hash conflicts.

In JDK 1.8, it consists of "array + linked list + red-black tree". When a linked list gets too long it seriously hurts HashMap performance: red-black tree lookup is O(log n) while a linked list is O(n). A linked list is converted to a red-black tree when certain conditions are met:

  • When a bucket's list length exceeds 8 and the table length has reached 64, the list turns into a red-black tree.
  • Before converting, if the current array length is less than 64, the array is expanded first instead of treeifying, to reduce search time.

2. Can I use a binary search tree instead of a red-black tree?

Yes. However, a binary search tree can degenerate into a linear structure in special cases (the same shape as the original linked list, making the tree deep), and lookups become very slow when the amount of data is large.

3. What is the default load factor? Why 0.75 and not some other value?

The initialization length of table is length (the default value is 16), Load factor is the load factor (the default value is 0.75), and threshold is the maximum value of key-value pairs that HashMap can hold. threshold = length * Load factor. That is to say, after the length of the array is defined, the larger the load factor, the more key-value pairs it can accommodate .

The default loadFactor is 0.75, a balanced trade-off between space and time efficiency. A higher value lowers space overhead but increases lookup cost. Generally do not modify it, except when time or space constraints are unusual:

  • If memory is plentiful and time efficiency matters most, you can lower the load factor.
  • Conversely, if memory is tight and time efficiency is less important, you can raise the load factor, which may even exceed 1.

4. How is the storage index of the key in HashMap calculated?

  • First get the hashcode from the key: h = key.hashCode()
  • Then compute the hash by XORing the upper 16 bits of the hashcode into the lower 16 bits: h ^ (h >>> 16)
  • Finally take the modulus with the table length via hash & (length - 1) to get the storage index (see the sketch below).
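A minimal sketch reproducing the JDK 8 computation (class and method names are illustrative):

public class HashIndexDemo {
    // Mirrors HashMap.hash() in JDK 8 plus the index-masking step
    static int indexFor(Object key, int length) { // length must be a power of two
        int h = key.hashCode();
        int hash = h ^ (h >>> 16);  // spread the high 16 bits into the low 16 bits
        return hash & (length - 1); // equivalent to hash % length when length is a power of two
    }

    public static void main(String[] args) {
        System.out.println(indexFor("key", 16)); // bucket index in a table of length 16
    }
}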

5. What is the put method flow of HashMap?

  1. First compute the hash from the key and find the array index where the element will be stored;
  2. If the array is empty, call resize() to initialize it;
  3. If there is no hash collision, place the entry directly at the corresponding array index;
  4. If there is a collision and the key already exists, overwrite the value;
  5. If there is a collision, the key does not exist, and the node is a red-black tree node, hang the node on the tree;
  6. If after the collision the bucket is a linked list, check whether its length exceeds 8: if it exceeds 8 and the array capacity is less than 64, expand the array; if it exceeds 8 and the array capacity is at least 64, convert the list into a red-black tree and insert the key-value pair; if the list is no longer than 8, simply insert the key-value pair.

6. How does HashMap expand its capacity?
After the number of entries exceeds capacity * loadFactor, the HashMap expands. Arrays in Java cannot grow automatically; the approach is to replace the array with one twice the original size and move the existing entries into the new array.

In JDK 1.7, head insertion is used and the hash-based index must be recalculated.
In JDK 1.8, tail insertion is used and the hash need not be recomputed: after expansion each element either stays at its original index or moves to original index + oldCap (the old table length), as sketched below.
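A small sketch of the JDK 8 relocation rule: (hash & oldCap) == 0 means the entry stays put, otherwise it moves to old index + oldCap (the key "example" is arbitrary):

public class ResizeDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int h = "example".hashCode();
        int hash = h ^ (h >>> 16);
        int oldIndex = hash & (oldCap - 1);
        int newIndex = (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
        System.out.println(oldIndex + " -> " + newIndex);
    }
}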

7. How does the HashMap linked list turn into a red-black tree?

① The treeifyBin() method decides whether to treeify. It first checks the array length to decide between expanding and converting to a red-black tree.

  • It computes from the hash which slot of the table array holds the current linked list, then changes the data structure from the singly linked Node list into a doubly linked TreeNode list;
  • If the head node hd of the TreeNode doubly linked list is not null, it calls the treeify() method to treeify the TreeNode list;
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // If the array length is less than 64, expand first instead of treeifying
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    // Only treeify if the target bucket actually contains a node
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        // Step 1: walk the linked list, converting each Node into a TreeNode
        // hd points at the head node, tl at the tail node
        TreeNode<K,V> hd = null, tl = null;
        do {
            // Convert the list Node into a red-black-tree TreeNode
            TreeNode<K,V> p = replacementTreeNode(e, null);
            // With hd as the head, link the TreeNodes with prev/next into a new TreeNode list
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        // Step 2: if the head node hd is not null, treeify the TreeNode doubly linked list
        if ((tab[index] = hd) != null)
            // Perform the actual list-to-red-black-tree conversion
            hd.treeify(tab);
    }
}

② The treeify() method actually converts the linked list into a red-black tree, roughly in three steps:

  • 1. Traverse the TreeNode doubly linked list, determine whether each node x to insert belongs on the left or right of its parent, and insert it into the red-black tree;
  • 2. After a node is inserted the tree structure changes, and the balance of the red-black tree must be maintained through recoloring and rotation operations;
  • 3. Because the red-black tree was rebalanced, the root node may have changed, so the latest root is moved to the head of the doubly linked list and stored back into the table array.
final void treeify(Node<K,V>[] tab) {
    TreeNode<K,V> root = null;
    // Initially x is the head of the TreeNode doubly linked list
    for (TreeNode<K,V> x = this, next; x != null; x = next) {
        next = (TreeNode<K,V>)x.next;
        x.left = x.right = null;
        // Establish the root of the tree
        if (root == null) {
            x.parent = null;
            x.red = false;
            root = x;
        }
        else {
            // Part 1
            K k = x.key;
            int h = x.hash;
            Class<?> kc = null;
            // p is the candidate parent node
            for (TreeNode<K,V> p = root;;) {
                // dir tells whether x goes to the left or the right of the parent
                int dir, ph; // ph is the parent node's hash value
                K pk = p.key; // pk is the parent node's key
                // x belongs on the parent's left
                if ((ph = p.hash) > h)
                    dir = -1;
                // x belongs on the parent's right
                else if (ph < h)
                    dir = 1;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0)
                    dir = tieBreakOrder(k, pk);

                // Part 2
                TreeNode<K,V> xp = p; // xp is x's future parent node
                // If p's left/right child is non-null, set p = p.left/p.right and keep looping
                // until p.left/p.right is null
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    // Make xp (the old p) the parent of the node x being inserted
                    x.parent = xp;
                    // dir decides insertion into xp's left subtree (<= 0) or right subtree (> 0)
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    // After inserting into the red-black tree, rebalance it
                    root = balanceInsertion(root, x);
                    break;
                }
            }
        }
    }
    // Store the root node back into the table array
    moveRootToFront(tab, root);
}

③ After a node is inserted into the red-black tree, the balanceInsertion() method is called to rebalance the tree.

Rebalancing splits into two parts: cases that return directly, and cases that recolor and rotate before returning.

Part 1: return directly:
1. If the node x being inserted has no parent, color x black and return it as the root;
2. If x's parent is black, or x has no grandparent, return the root directly.

Part 2: rotate and recolor according to whether x is the left or right child of its parent xp, and whether xp is the left or right child of its parent xpp:

  1. If x's parent xp is the left child of the grandparent xpp:
    0) If xpp's right child xppr exists and is red, just swap the colors of the grandparent and its children and continue from xpp.
    1) If x is the right child of its parent xp, make xp the current node and rotate it to the left;
    2) Then color the parent xp black and the grandparent xpp red, and rotate right around the xpp node.
  2. If x's parent xp is the right child of the grandparent xpp, the logic mirrors the above, with left rotations becoming right rotations and vice versa:
    0) If xpp's left child xppl exists and is red, swap the colors of the grandparent and its children and continue from xpp.
    1) If x is the left child of its parent xp, make xp the current node and rotate it to the right;
    2) Then color the parent xp black and the grandparent xpp red, and rotate left around the xpp node.
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                            TreeNode<K,V> x) {
    // Color the node x being inserted red
    x.red = true;
    for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
        // 1. If x has no parent, color it black and return it as the root
        if ((xp = x.parent) == null) {
            x.red = false;
            return x;
        }
        // 2. If x's parent is black, or x has no grandparent, return root directly
        else if (!xp.red || (xpp = xp.parent) == null)
            return root;
        // 3. If x's parent xp is the left child of the grandparent
        if (xp == (xppl = xpp.left)) {
            // 1) Recolor only:
            //    if xp and xppr are both red, color xppr and xp black and xpp red
            if ((xppr = xpp.right) != null && xppr.red) {
                xppr.red = false;
                xp.red = false;
                xpp.red = true;
                x = xpp;
            }
            // 2) Rotate + recolor:
            else {
                // (1) If x is the right child of its parent xp, make xp the
                //     current node and rotate it to the left
                if (x == xp.right) {
                    root = rotateLeft(root, x = xp);
                    xpp = (xp = x.parent) == null ? null : xp.parent;
                }
                // (2) If x's parent xp is not null
                if (xp != null) {
                    // Color xp black
                    xp.red = false;
                    // If x's grandparent xpp is not null
                    if (xpp != null) {
                        // Color xpp red
                        xpp.red = true;
                        // Rotate right around the grandparent xpp
                        root = rotateRight(root, xpp);
                    }
                }
            }
        }
        // 4. x's parent xp is the right child of the grandparent
        else {
            // Mirror image of case 3: left and right are simply swapped
            if (xppl != null && xppl.red) {
                xppl.red = false;
                xp.red = false;
                xpp.red = true;
                x = xpp;
            }
            else {
                if (x == xp.left) {
                    root = rotateRight(root, x = xp);
                    xpp = (xp = x.parent) == null ? null : xp.parent;
                }
                if (xp != null) {
                    xp.red = false;
                    if (xpp != null) {
                        xpp.red = true;
                        root = rotateLeft(root, xpp);
                    }
                }
            }
        }
    }
}

Source code analysis of HashMap linked list to red-black tree, refer to: https://blog.csdn.net/Saintmm/article/details/121582015

1.16 Exceptions in Java

Java's Throwable hierarchy is divided into three types: checked exceptions (CheckedException), runtime exceptions (RuntimeException), and errors (Error).

1. Runtime exceptions: RuntimeException and its subclasses are called runtime exceptions.

  • ArithmeticException: raised when the divisor is zero
  • IndexOutOfBoundsException: raised when an array index is out of bounds
  • ConcurrentModificationException: raised by concurrent modification of java.util collection classes
  • NullPointerException: null pointer exception
  • ClassCastException: class cast exception

2. Checked exceptions: the Exception class and its subclasses, excluding runtime exceptions, are checked exceptions.
The Java compiler checks them: such exceptions must either be declared with throws or caught with try-catch, otherwise the code does not compile.

  • CloneNotSupportedException: thrown when cloning an object that does not implement the Cloneable interface
  • FileNotFoundException: raised when reading a file that does not exist
  • SQLException: raised when JDBC interacts with a data source
  • IOException: thrown when accessing information through streams, files, and directories

3. Errors: the Error class and its subclasses.
These are problems the program cannot handle, indicating trouble in the JVM itself while the code is running.

  • Java virtual machine running errors (VirtualMachineError)
  • OutOfMemoryError: occurs when the JVM no longer has the memory it needs to continue
  • Class definition errors (NoClassDefFoundError)

2. JVM articles

2.1 JVM memory model

Thread-private: VM stack, native method stack, program counter
Thread-shared: heap, method area

1. VM stack
When a thread executes a method, a stack frame is created to store the local variable table, operand stack, dynamic linking, method exit, and other information. The frame is pushed when the method is called and popped when the method returns.
2. Native method stack
Similar to the VM stack: Java methods execute on the VM stack, while Native methods use the native method stack.
3. Program counter
Stores the bytecode position the current thread is executing. Each thread has its own independent counter, and it only serves Java methods; while a Native method executes, the program counter is empty (undefined).
4. Heap
Shared by all threads and used to store object instances. When no space is available in the heap, an OOM error is thrown. The JVM manages objects generationally according to their life cycle; garbage collection is performed by the garbage collector.
5. Method area
Stores data such as class information loaded by the virtual machine, constants, static variables, and code optimized by the just-in-time compiler. The permanent generation of JDK 1.7 and the metaspace of JDK 1.8 are both implementations of the method area.

2.2 Class loading process

The class loading process is divided into loading, linking, and initialization; linking includes verification, preparation, and resolution.
Loading: find the bytecode file by the class's fully qualified name and use it to create a Class object.
Verification: make sure the Class file meets the requirements of the current virtual machine and will not endanger the virtual machine's safety.
Preparation: allocate memory for static class variables and set their initial zero values (0 or null). This does not cover static variables with the final modifier, because final constants are assigned at compile time.
Resolution: the process of replacing symbolic references in the constant pool with direct references.
Initialization: mainly executes static blocks and assigns static variables. The parent class is initialized first, then the current class. A class is initialized only when it is actively used.

Loading mechanism: parent-delegation model
In the parent-delegation model, when a loader loads a class it first delegates the request upward to its parent class loader, all the way to the top-level bootstrap class loader. If the parent loader can complete the loading, it returns successfully; if not, the child loader tries to load the class itself.

  • Avoid duplicate loading of classes
  • Prevent Java's core API from being tampered with

2.3 How to judge whether the object can be recycled?

  • Reference counting : Each object has a reference counting attribute. When a reference is added, the count is incremented by 1, and when the reference is released, the count is decremented by 1. When the count is 0, it can be recycled. This method is simple, but it cannot solve the problem of circular references between objects.
  • Reachability Analysis : Starting from GC Roots to search downwards, the path traveled by the search is called the reference chain. When an object does not have any reference chain connected to GC Roots, it proves that the object is unavailable and unreachable.

2.4 The garbage collector used by jdk1.8 by default

Use java -XX:+PrintCommandLineFlags -version to check

-XX:InitialHeapSize=132500864 // initial heap size
-XX:MaxHeapSize=2120013824    // maximum heap size
-XX:+PrintCommandLineFlags    // print the XX options set manually or chosen automatically by the JVM before the program runs; it appears here because we added it to the command
-XX:+UseCompressedClassPointers // compressed class pointers, on by default
-XX:+UseCompressedOops  // compressed object pointers, on by default
-XX:-UseLargePagesIndividualAllocation
-XX:+UseParallelGC // the Parallel garbage collector is used by default
java version "1.8.0_221" // JDK version
Java(TM) SE Runtime Environment (build 1.8.0_221-b11) // JRE
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode) // HotSpot VM, Server mode, mixed compilation

The ParallelGC garbage collector is used by default in JDK1.8 , including the combination of Parallel Scavenge (new generation) and Parallel Old (old generation) collectors.

2.5 GC algorithm

  • Mark-sweep algorithm
  • Copying algorithm
  • Mark-compact algorithm
  • Generational collection algorithm

1. Basics: Mark-Sweep Algorithm

Basic idea:

  • First mark all objects that need to be reclaimed;
  • After marking, collect all the marked objects uniformly.

Drawbacks:

  • Efficiency: neither the marking nor the sweeping process is efficient.
  • Space fragmentation: after sweeping, a large number of discontiguous memory fragments remain; later, when a larger object needs memory, enough contiguous space may not be found, triggering another GC ahead of time.

2. Solving the efficiency problem: the copying algorithm

The GC used in the young generation (Young) is the copying algorithm.

-XX:MaxTenuringThreshold // sets how many GCs an object survives in the young generation before promotion; the default is 15

Basic idea:

  • Divide the available memory into two equal blocks and use only one at a time;
  • When one block is used up, copy the surviving objects to the other block, then clean the used block entirely.

Drawbacks:

  • The usable memory is halved, which suits the young generation, where only a few objects survive each GC.

3. Solving the space fragmentation problem: the mark-compact algorithm

Basic idea:

  • First mark all objects that need to be reclaimed;
  • After marking, move all surviving objects toward one end, then directly clean up the memory beyond the boundary.

Drawbacks:

  • Moving objects is costly, so the algorithm suits the old generation.

4. Evolution: generational collection algorithm

  • Young generation: only a few objects survive each GC - copying algorithm.
  • Old generation: objects have a high survival rate after GC - mark-compact algorithm.

Reference article: https://www.toobug.cn/post/4990.html

2.6 What are the JDK monitoring commands?

  • jps , JVM Process Status Tool, displays all HotSpot virtual machine processes in the specified system.
  • jstat , JVM statistics Monitoring is a command used to monitor the status information of the virtual machine when it is running. It can display the running data such as class loading, memory, garbage collection, and JIT compilation in the virtual machine process.
  • jmap , the JVM Memory Map command is used to generate a heap dump file, which can be loaded and analyzed using the jvisualvm tool .
  • jhat , the JVM Heap Analysis Tool command is used in conjunction with jmap to analyze the dump generated by jmap. jhat has a built-in miniature HTTP/HTML server. After generating the dump analysis results, you can view them in the browser
  • jstack , used to generate thread snapshots of the java virtual machine at the current moment.
  • jinfo , JVM Configuration info This command is used to view and adjust the running parameters of the virtual machine in real time.

2.7 How to optimize JVM performance?

Set the heap memory size

-Xmx: Maximum heap memory limit.

Set the young generation size. The young generation should not be too small, otherwise large numbers of objects will flood into the old generation.

-XX:NewSize: size of the young generation
-XX:NewRatio: ratio of the old generation to the young generation
-XX:SurvivorRatio: ratio of the Eden space to a survivor space

Configure the garbage collector (a combined example is sketched below)

  • Young generation: -XX:+UseParNewGC
  • Old generation: -XX:+UseConcMarkSweepGC
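As an illustration only (the sizes and app.jar are hypothetical; ParNew + CMS was a common pre-JDK-9 pairing), a startup line combining these options might look like:

java -Xms1g -Xmx1g -XX:NewRatio=2 -XX:SurvivorRatio=8 \
     -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -jar app.jar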

3. Multithreading & Concurrency

3.1 What are the states of threads?

Threads usually have five states: created, ready, running, blocked, and dead.


  • Created (New) state. The thread object has been created but its start method has not yet been called.
  • Ready (Runnable) state. When the thread object's start method is called, the thread enters the ready state, but the thread scheduler has not yet selected it as the current thread. A thread also returns to the ready state after running, or after coming back from waiting or sleeping.
  • Running state. The thread scheduler selects a ready thread as the current thread; the thread starts executing the code in its run method.
  • Blocked state. A running thread is suspended, usually to wait for some event (for example, a resource becoming ready) before it can continue. The sleep, suspend, and wait methods can all cause thread blocking.
  • Dead state. When a thread's run method ends or its stop method is called, the thread dies. A dead thread cannot be made ready again with the start method.

3.2 How many ways are there to implement multithreading in Java?

  1. Extend the Thread class and override the run method.
  2. Implement the Runnable interface, override the run method, and pass the instance as the target of a Thread constructor.
  3. Implement the Callable interface and create the thread through a FutureTask wrapper.
  4. Create threads through a thread pool.
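All four ways in one minimal sketch:

import java.util.concurrent.*;

public class ThreadCreationDemo {
    public static void main(String[] args) throws Exception {
        // 1. Subclass Thread and override run()
        new Thread() {
            @Override public void run() { System.out.println("via Thread subclass"); }
        }.start();

        // 2. Pass a Runnable as the Thread's target
        new Thread(() -> System.out.println("via Runnable")).start();

        // 3. Wrap a Callable in a FutureTask
        FutureTask<String> task = new FutureTask<>(() -> "via Callable");
        new Thread(task).start();
        System.out.println(task.get()); // blocks until the result is ready

        // 4. Submit to a thread pool
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.execute(() -> System.out.println("via thread pool"));
        pool.shutdown();
    }
}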

3.3 Executors thread pool framework?

Java provides four thread pools through Executors, namely:

  • newCachedThreadPool creates a cacheable thread pool. If the pool grows beyond what is needed, idle threads are reclaimed flexibly; if no idle thread is available, a new one is created.
  • newFixedThreadPool creates a fixed-length thread pool , which can control the maximum number of concurrent threads, and the excess threads will wait in the queue.
  • newScheduledThreadPool creates a scheduled thread pool that supports scheduled and periodic task execution.
  • newSingleThreadExecutor creates a single-threaded thread pool , which uses a unique worker thread to execute tasks, ensuring that all tasks are executed in the specified order (FIFO, LIFO, priority).

3.4 Workflow of thread pool?


  • When the number of threads is less than corePoolSize, create a new thread to run the task.
  • When the number of threads is at least corePoolSize and the workQueue is not full, put the task into the workQueue.
  • When the number of threads is at least corePoolSize and the workQueue is full, create a new thread to run the task, as long as the total number of threads stays below maximumPoolSize.
  • When the total number of threads equals maximumPoolSize and the workQueue is full, execute the handler's rejectedExecution method, i.e. the rejection policy.

3.5 What is the difference between thread pool execute() and submit()?

  • The execute() method submits tasks that do not need a return value, so there is no way to tell whether the task executed successfully;
  • The submit() method submits tasks that need a return value. The thread pool returns an object of type Future, which can tell whether the task executed successfully, and the return value can be obtained through the Future's get() method. get() blocks the current thread until the task completes, while get(long timeout, TimeUnit unit) blocks for at most the given time and then returns, possibly before the task has finished. Both are sketched below.
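A minimal sketch of both submission styles:

import java.util.concurrent.*;

public class SubmitVsExecuteDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, no way to observe the result
        pool.execute(() -> System.out.println("executed"));

        // submit(): returns a Future; get() blocks until the task completes
        Future<Integer> future = pool.submit(() -> 1 + 1);
        System.out.println(future.get());        // 2
        // Timed variant: waits at most 1 second, then throws TimeoutException
        // future.get(1, TimeUnit.SECONDS);

        pool.shutdown();
    }
}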

3.6 What thread-safe collections are there? How is it achieved?

  • Vector: synchronized is added to its methods;
  • Hashtable: synchronized is added to its methods;
  • CopyOnWriteArrayList: implements mutation by copying the underlying array; its state is just one volatile array field plus one ReentrantLock;
  • ConcurrentHashMap: Node arrays + CAS + synchronized to guarantee concurrent safety;

3.7 What types of locks are there? What are the characteristics?

  • Reentrant lock . It means that the same thread can acquire the same lock multiple times, which can avoid deadlock to a certain extent. For example ReentrantLock and synchronized.
  • Interruptible locks . Refers to whether the thread can respond to interrupts during the process of trying to acquire the lock. Synchronized is an uninterruptible lock, while ReentrantLock is an interruptible lock.
  • Fair lock/unfair lock . A fair lock means that when multiple threads try to acquire the same lock at the same time, the order in which the locks are acquired is in the order in which the threads arrive, while an unfair lock allows threads to "jump in the queue". The advantage of unfair locks is that the throughput is greater than that of fair locks . Synchronized is an unfair lock, and the default implementation of ReentrantLock is an unfair lock, but it can also be set as a fair lock.
  • Exclusive lock/shared lock . An exclusive lock means that the lock can only be held by one thread at a time. A shared lock means that the lock can be held by multiple threads. Synchronized and ReentrantLock are exclusive locks. ReadWriteLock's read lock is a shared lock, and its write lock is an exclusive lock. Exclusive locks and shared locks are also implemented through AQS.
  • Optimistic lock/pessimistic lock . Pessimistic locking is to use various locks for writing operations. Optimistic locking is lock-free programming, and the CAS algorithm is often used. A typical example is the atomic class, which implements the update of atomic operations through CAS spin.
  • spin lock . It means that the thread trying to acquire the lock will not be blocked immediately, but will try to acquire the lock in a cyclic manner. The advantage of this is to reduce the consumption of thread context switching. The disadvantage is that the loop will consume CPU.

3.8 What is the difference between CAS and synchronized? Application scenario?

  • CAS(CompareAndSwap) . The CAS operation is simply compare and exchange . A CAS operation consists of three operands - the memory value to be read or written (V), the value to be compared (A), and the new value to be written (B). If and only if the value of V is equal to A, CAS atomically updates the value of V with the new value B, otherwise no operation is performed (comparison and replacement is an atomic operation). In general, it is a spin operation , that is, continuous retrying.
  • synchronized . The underlying implementation of Synchronized mainly relies on the Lock-Free queue. The basic idea is to block after spinning , and continue to compete for locks after the competition is switched. This slightly sacrifices fairness, but obtains high throughput . In the case of fewer thread conflicts, performance similar to CAS can be obtained; and in the case of severe thread conflicts, the performance is much higher than that of CAS.

CAS suits read-heavy scenarios with few conflicts; synchronized suits write-heavy scenarios with many conflicts. A small CAS sketch follows.
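CAS in action via the atomic classes mentioned above:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // compareAndSet(expected, new): succeeds only if the current value equals 'expected'
        System.out.println(counter.compareAndSet(0, 10)); // true, value is now 10
        System.out.println(counter.compareAndSet(0, 20)); // false, value unchanged

        // incrementAndGet() retries the CAS in a loop (spin) until it wins
        System.out.println(counter.incrementAndGet());    // 11
    }
}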

3.9 What are the differences and advantages between the Lock interface and synchronized?

Category         | synchronized                                   | Lock
Level            | A Java keyword, implemented at the JVM level   | A class (java.util.concurrent.locks.Lock)
Lock release     | 1. Released automatically when the thread that acquired the lock finishes the synchronized code 2. Released by the JVM when the thread throws an exception | Must be released in finally, otherwise thread deadlock is easy to cause
Lock acquisition | If thread A holds the lock and blocks, a waiting thread B waits forever | A thread can try to acquire the lock and need not wait indefinitely
Lock state       | Cannot be queried                              | Can be queried
Lock type        | Reentrant, uninterruptible, unfair             | Reentrant, interruptible, fair or unfair
Performance      | Suited to a small amount of synchronization    | Suited to heavy synchronization

The advantages of Lock are:

  • Locks can be made fair
  • Threads can respond to interrupts while waiting for a lock
  • A thread can try to acquire the lock, and either return immediately or wait only for a period of time when the lock cannot be acquired
  • Locks can be acquired and released in different scopes and in different orders (a small sketch follows)
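A sketch of these advantages with ReentrantLock (the class and timeout are illustrative choices):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair lock

    public void doWork() throws InterruptedException {
        // Try for at most 1 second instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("locked? " + lock.isLocked()); // lock state can be queried
            } finally {
                lock.unlock(); // always release in finally
            }
        } else {
            System.out.println("could not acquire the lock, doing something else");
        }
    }
}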

3.10 What is the usage of synchronized keyword? Implementation principle?

  • On an instance method: locks the current object instance; the lock on the instance must be obtained before entering the synchronized code.
  • On a static method: locks the current class; the lock on the Class object must be obtained before entering the synchronized code.
  • On a code block: locks the specified object; the lock on that object must be obtained before entering the synchronized block.

Summary: synchronized on a static method or in a synchronized (SomeClass.class) block locks the Class object; synchronized on an instance method locks the object instance. Avoid synchronized (someString), because the string constant pool caches and shares String instances in the JVM. The three forms are sketched below.
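The three forms side by side:

public class SyncDemo {
    // 1. Instance method: locks the current object instance (this)
    public synchronized void instanceMethod() { }

    // 2. Static method: locks the Class object (SyncDemo.class)
    public static synchronized void staticMethod() { }

    // 3. Block: locks the explicitly named object
    private final Object monitor = new Object();
    public void blockMethod() {
        synchronized (monitor) {
            // critical section
        }
    }
}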

Implementation principle JVM level

1. A synchronized statement block is implemented with the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end.
When the monitorenter instruction executes, the thread tries to acquire the monitor (the monitor object lives in the object header of every Java object, and synchronized acquires the lock this way, which is why any Java object can be used as a lock). If the monitor's counter is 0, acquisition succeeds and the counter is set to 1; after the corresponding monitorexit executes, the counter is set back to 0, indicating the lock is released. If acquiring the object lock fails, the current thread blocks and waits until another thread releases the lock.

2. A synchronized method uses the ACC_SYNCHRONIZED flag, which marks the method as a synchronized method. The JVM uses the ACC_SYNCHRONIZED access flag to recognize that a method is declared synchronized and performs the corresponding synchronized call.

3.11 Singleton mode of double checking lock? (DCL singleton)

public class Singleton {

    private volatile static Singleton uniqueInstance;

    private Singleton() {
    }

    public static Singleton getUniqueInstance() {
        // First check whether the instance already exists; enter the locked code only if it doesn't
        if (uniqueInstance == null) {
            // Lock on the Class object
            synchronized (Singleton.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new Singleton();
                }
            }
        }
        return uniqueInstance;
    }
}

Note: uniqueInstance must be declared volatile. The statement uniqueInstance = new Singleton(); actually executes in three steps:

1. Allocate memory for uniqueInstance
2. Initialize uniqueInstance
3. Point uniqueInstance at the allocated memory address

Because the JVM may reorder instructions, the actual order can become 1 -> 3 -> 2. In a multi-threaded environment, such reordering can let one thread obtain an instance that has not yet been initialized. volatile forbids this reordering in the JVM, so the code also runs correctly under multiple threads.

3.12 What is the implementation principle of ConcurrentHashMap?

Implementation of JDK1.7

In JDK 1.7, ConcurrentHashMap's data structure consists of a Segment array and multiple HashEntry arrays.
The point of the Segment array is to split one large table into several small tables that are locked individually (lock striping). Each Segment element stores a HashEntry array + linked lists.

1. Put operation
When inserting data, ConcurrentHashMap hashes twice to locate the storage position. Segment inherits ReentrantLock, so it also acts as a lock. A put first hashes the key to locate the Segment; if that Segment has not been initialized, it is set up via a CAS operation. A second hash then finds the position of the corresponding HashEntry. To insert at the located HashEntry position (the end of a linked list), the thread tries to acquire the lock with the inherited ReentrantLock tryLock() method: if it succeeds, it inserts directly at that position; if another thread already holds the Segment's lock, the current thread keeps calling tryLock() to spin for the lock.

2. Get operation
A get first hashes once to locate the Segment, then hashes again to locate the target HashEntry, and finally traverses the linked list under that HashEntry.

3. Size operation
While size is being computed, other threads may still be inserting data concurrently, so the computed size can differ from the actual size. JDK 1.7 uses two schemes to solve this:

  • First try to compute the size of the ConcurrentHashMap without locking, up to three times, comparing the results of consecutive calculations. If two results agree, it is assumed that no elements were added meanwhile and the result is accurate.
  • If the first scheme fails, lock every Segment and then compute the size of the ConcurrentHashMap.

Implementation of JDK1.8

JDK 1.8 abandons the Segment concept and uses the Node array + linked list + red-black tree data structure directly. Concurrency control uses synchronized and CAS. The whole thing looks like an optimized, thread-safe HashMap.

That is: optimistic CAS is used when there is no contention, and synchronized handles the conflicting cases.

1. Node
Node is the basic storage unit of ConcurrentHashMap, inherited from HashMap's Entry, and used to store data. It forms a linked list, but it only allows lookups; the stored value may not be modified through it.

  • volatile: both val and next change during resizing, so they are declared volatile to maintain visibility and forbid reordering.
  • final: the setValue() method does not allow updating the value.

2. TreeNode
TreeNode inherits from Node, but the data structure is replaced with a binary tree, the red-black tree storage structure, used to store data in a red-black tree. When the number of nodes in a bucket's linked list exceeds 8 and the table length exceeds 64, the list is converted into a red-black tree. The ultimate purpose of the conversion is to solve the reduced read/write efficiency caused by too many elements and heavy hash collisions in the map.

  • HashMap does not necessarily convert a list into a red-black tree as soon as it has more than 8 elements; it considers expansion first, and converts only after expansion has reached the default limit.
  • HashMap's red-black tree is not necessarily converted back to a linked list at fewer than 6 elements: when remove deletes elements, the decision is made by whether the red-black tree's root and its child nodes are empty; after a resize expansion, if the number of tree nodes is <= 6 the tree is turned back into a linked list.

3. TreeBin
TreeBin is a container that wraps TreeNodes; it provides the conditions and lock control for converting to and operating on red-black trees.

4. Put operation
put spins in an unconditional loop over the current table until the insertion succeeds; the flow can be summarized in six steps:

  1. If the table is not initialized, call initTable() first to initialize it
  2. If there is no hash conflict, insert directly with CAS
  3. If a resize is still in progress, help expand the capacity first
  4. If there is a hash conflict, lock the bucket to guarantee thread safety. There are two cases: either traverse to the tail and insert as a linked list node, or insert according to the red-black tree structure
  5. If the list length exceeds the threshold 8, convert the list to a red-black tree, then break and re-enter the loop
  6. If the addition succeeded, call addCount() to count the size and check whether a resize is needed

5. get operation

  1. Compute the hash, locate the index in the table, and return immediately if the first node matches
  2. If a resize is encountered, call the find method of the ForwardingNode that marks the bucket being moved, and return the node if it matches
  3. Otherwise traverse down the chain (or tree) and return the match, or null if nothing matches
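A minimal usage sketch of the thread safety described above: merge() performs the read-modify-write as one atomic step (internally via the CAS/synchronized machinery), so concurrent increments are never lost. The class and key names are illustrative only.

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // Two threads increment the same key 1000 times each; merge() is atomic,
        // so no update is lost.
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(counts.get("hits")); // always prints 2000
    }
}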

3.13 What does the volatile keyword do? Implementation principle? Usage scenarios?

  • A variable modified with the volatile keyword is guaranteed to be visible across threads: every read of a volatile variable returns the latest written value.
  • volatile also forbids the JVM from reordering instructions around the variable, which of course reduces execution efficiency to some degree.

Note: volatile cannot guarantee atomicity. At most, single reads/writes of the volatile variable are atomic; it is powerless for compound operations such as volatile++.

Implementation principle: the lock prefix instruction.

The lock prefix instruction effectively acts as a memory barrier, which provides the following guarantees:

  1. During reordering, instructions after the barrier cannot be moved to before it

  2. The CPU's cache is written back to main memory

  3. That write-back invalidates the corresponding cache lines on other CPUs and cores, which makes the newly written value visible to other threads

Usage scenarios: status-flag marking, and the double-checked locking (DCL) singleton pattern, sketched below.
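A minimal DCL singleton sketch. Without volatile, instruction reordering could publish the reference before the constructor finishes, letting another thread observe a half-built object; the class name is illustrative only.

public class Singleton {
    // volatile forbids reordering of "allocate memory / run constructor /
    // publish reference", so no thread can see a partially built instance.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, lock-free
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}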

4. Spring

4.1 Spring's IOC and AOP mechanism

IOC (Inversion of Control), also called dependency injection, hands the objects to be managed over to the container by way of the factory pattern. You only need to configure the corresponding bean in the Spring configuration file and set the related properties, and the Spring container can instantiate and manage the objects. When the container starts, Spring initializes all the beans declared in the configuration file, and later hands the initialized beans to the classes that need to call them.

AOP (Aspect-Oriented Programming) is implemented using the proxy pattern. AOP can be seen as a supplement to and refinement of OOP. OOP introduces concepts such as encapsulation, inheritance, and polymorphism to build an object hierarchy that models collections of common behavior, but when we need to introduce shared behavior across scattered, unrelated objects, OOP is powerless. In other words, OOP is good at defining top-to-bottom relationships but not left-to-right ones. Logging is the classic example: logging code tends to spread horizontally across all object hierarchies and has nothing to do with the core functionality of the objects it lands in. In OOP design this leads to heavy code duplication and hurts module reuse. AOP encapsulates such cross-cutting logic (security, logging, transactions, and so on) into an aspect and injects it into the target object (the concrete business logic).
AOP is implemented with two main classes of techniques: dynamic proxying, which intercepts a message and decorates it in order to replace the behavior of the original object; and static weaving, which introduces dedicated syntax for writing aspects so the compiler can weave the aspect code in at compile time.

Spring AOP mainly uses two kinds of dynamic proxy, JDK dynamic proxies and CGLIB dynamic proxies:

① JDK dynamic proxies can only proxy interfaces, not classes. The core is the InvocationHandler interface and the Proxy class: the InvocationHandler reflectively calls the target's code inside its invoke() method, dynamically weaving the cross-cutting logic together with the business logic; Proxy then uses the InvocationHandler to dynamically create an instance that implements the given interfaces, producing the proxy object for the target class. A minimal sketch follows.
② If the proxied class implements no interface, Spring AOP falls back to CGLIB to dynamically proxy the target class. CGLIB (Code Generation Library) is a code-generation library that can dynamically generate a subclass of a given class at runtime and override specific methods with enhancement code to implement AOP. Because CGLIB proxies by inheritance, a class marked final cannot be dynamically proxied with CGLIB.
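A minimal JDK-dynamic-proxy sketch of the interception just described; the interface and class names are illustrative only.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface UserService {
    void save(String name);
}

class UserServiceImpl implements UserService {
    public void save(String name) { System.out.println("saving " + name); }
}

public class JdkProxyDemo {
    public static void main(String[] args) {
        UserService target = new UserServiceImpl();

        // The cross-cutting logic lives in invoke(); the target is called reflectively.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, methodArgs);
            System.out.println("after " + method.getName());
            return result;
        };

        UserService proxy = (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[]{UserService.class},
                handler);

        proxy.save("alice"); // prints: before save / saving alice / after save
    }
}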

4.2 What is the difference between the Autowired and Resource annotations in Spring?

similarities and differences @Autowired @Resource
common ground Both can be placed on fields or setter methods; when placed on a field, no setter is needed.
annotation source provided by Spring provided by J2EE (JSR-250)
injection method injects by type (byType); combine with @Qualifier to wire by name (byName) injects by name (byName) by default; the name and type attributes can specify the injection mode
optional wiring provides the required attribute (default true); set required=false to avoid an exception when nothing can be injected has no optional-wiring feature and throws an exception when nothing can be wired
constructor injection can be placed on a constructor cannot be placed on a constructor

4.3 How many ways are there for dependency injection? Advantages and disadvantages?

1. Constructor injection
The dependency is passed in through the constructor's parameters and injected when the object is initialized (see the sketch at the end of this question).

Advantages:

  • The object is ready to use as soon as initialization completes.

Disadvantages:

  • When many dependencies must be injected, the constructor's parameter list becomes very long;
  • Not flexible. If there are several injection combinations, each needing only a few of the dependencies, you have to provide many overloaded constructors, which is cumbersome.

2. Setter method injection
The IoC service provider injects the dependency into the dependent class by calling the setter methods exposed for its member variables.

Advantages:

  • Flexible: you can selectively inject only the objects you need.

Disadvantages:

  • The dependent object cannot be used right after its own initialization, because its dependencies have not been injected yet.

3. Interface injection
The dependent class must implement a designated interface and one of that interface's functions; that function performs the injection, and its parameter is the object to inject.

Advantages:

  • In interface injection, the names of the interface and the function do not matter; all that matters is that the function's parameter has the type to be injected.

Disadvantages:

  • Too intrusive; not recommended.
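A minimal sketch contrasting the first two styles, assuming Spring's annotations; OrderRepository and the service names are hypothetical.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

interface OrderRepository { }  // hypothetical collaborator

@Service
class OrderService {
    private final OrderRepository repository;  // can be final

    // Constructor injection: fully usable the moment construction completes.
    public OrderService(OrderRepository repository) {
        this.repository = repository;
    }
}

@Service
class ReportService {
    private OrderRepository repository;

    // Setter injection: optional and re-configurable, but the bean is not
    // usable until the setter has been called.
    @Autowired
    public void setRepository(OrderRepository repository) {
        this.repository = repository;
    }
}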

4.4 What are Spring's commonly used annotations?

@Component, @Controller, @Service, @Repository
@Bean: declares and registers an instance object
@Scope: sets a bean's scope
@Import: imports other components into the container
@Autowired: dependency injection
@Value: property injection
@PostConstruct: initialization method (annotation provided by Java)
@PreDestroy: destruction method (annotation provided by Java)
@Configuration: marks the current class as a configuration class
@ComponentScan: scans for components
@Aspect: declares an aspect
@After: runs after the target method executes (on a method)
@Before: runs before the target method executes (on a method)
@Around: runs before and after the target method executes (on a method)
@Pointcut: declares a pointcut
@Enable***: these annotations mainly switch on support for xxx

4.5 How Spring MVC works

  1. The user sends a request to the front controller, DispatcherServlet
  2. DispatcherServlet receives the request and calls the HandlerMapping
  3. The handler mapping finds the concrete Handler (looked up via XML configuration or annotations) and returns the handler object, together with any handler interceptors, to DispatcherServlet
  4. DispatcherServlet calls the HandlerAdapter
  5. The HandlerAdapter adapts the call and invokes the concrete Controller (also called the back-end controller)
  6. The Controller finishes executing and returns a ModelAndView
  7. The HandlerAdapter hands the controller's ModelAndView result back to DispatcherServlet
  8. DispatcherServlet passes the ModelAndView to the ViewResolver
  9. The ViewResolver resolves it and returns a concrete View
  10. DispatcherServlet renders the View (filling the model data into the view)
  11. DispatcherServlet responds to the user (a minimal controller sketch follows)
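A minimal controller sketch for the flow above, assuming a view template named hello exists; all names are illustrative only.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class HelloController {

    // DispatcherServlet -> HandlerMapping finds this method ->
    // HandlerAdapter invokes it -> the ModelAndView goes to the ViewResolver.
    @GetMapping("/hello")
    public ModelAndView hello() {
        ModelAndView mav = new ModelAndView("hello");   // logical view name
        mav.addObject("message", "Hello, Spring MVC");  // model data
        return mav;
    }
}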

4.6 What are the commonly used Spring MVC annotations?

@Controller: marks a controller-layer class
@Service: marks a business-logic-layer class
@Repository: marks a DAO implementation class
@Component: placed before a class definition; the class is recognized by the Spring container and turned into a bean
@RequestMapping: maps request URLs; usable on classes and methods. On a class it means all request-handling methods in that class use the address as their parent path
@RequestParam: fetches the value of an incoming request parameter
@PathVariable: binds a path-variable value
@RequestBody: receives the JSON body of an HTTP request and converts the JSON into a Java object
@ResponseBody: converts a controller method's return value into a JSON object in the response to the user
@CookieValue: fetches a cookie value from the request
@ModelAttribute: saves parameters into the model
@SessionAttributes: also stores a copy of the model data in the session scope

4.7 Which design patterns are used in the Spring framework?

  • Factory pattern: Spring creates objects through BeanFactory and ApplicationContext
  • Singleton pattern: beans are singletons by default
  • Strategy pattern: e.g. the Resource implementations, which implement different loading strategies for different resource types
  • Proxy pattern: Spring's AOP uses JDK dynamic proxies and CGLIB bytecode generation
  • Template method: shared code lives in a parent class and the varying code in subclasses, removing duplication; e.g. RestTemplate, JmsTemplate, JpaTemplate
  • Adapter pattern: Spring AOP's advice uses adapters, and Spring MVC uses adapters to adapt Controllers
  • Observer pattern: the Spring event-driven model is a classic application of the observer pattern
  • Bridge pattern: dynamically switching between data sources according to the client's needs, e.g. a project connecting to several databases

4.8 Using Spring's @ControllerAdvice annotation

@ControllerAdvice / @RestControllerAdvice provide three capabilities
1. Global exception handling (see the sketch at the end of this question)

  • @ExceptionHandler designates the exception type a method handles
  • Several methods can be defined, each handling a different exception

2. Global data binding

  • @ModelAttribute marks a method whose return value is a piece of global data; its name attribute specifies the data's key
  • Any Controller in the project can fetch the data directly from the Model object
  • model.asMap() turns the data into a map
  • Fetch the data by the given key; if no key was specified, the default name is map

3. Global data preprocessing

For example, adding a prefix to the properties of entity objects
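A minimal global exception handler sketch using @RestControllerAdvice; the exception choices and messages are illustrative only.

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class GlobalExceptionHandler {

    // Handles IllegalArgumentException thrown by any controller.
    @ExceptionHandler(IllegalArgumentException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public String handleIllegalArgument(IllegalArgumentException ex) {
        return "bad request: " + ex.getMessage();
    }

    // A separate method handles a different exception type.
    @ExceptionHandler(Exception.class)
    @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
    public String handleOther(Exception ex) {
        return "server error: " + ex.getMessage();
    }
}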

5. Spring Boot

5.1 Spring Boot pros and cons

Spring Boot simplifies Spring application development: convention over configuration, cutting away the ceremony

Advantages

  • Standalone operation: Spring Boot embeds servlet containers such as Tomcat and Jetty, so there is no longer any need to package a war and deploy it into a container; a single executable jar containing all dependencies runs on its own.
  • Simplified configuration: starters such as spring-boot-starter-web pull in the other components automatically, reducing the Maven configuration.
  • Auto-configuration: all configuration work can be completed without XML files, powered by conditional annotations, one of the core features of Spring 4.x.
  • Application monitoring: Spring Boot Actuator provides a series of endpoints for monitoring services and applications and running health checks.

Disadvantages

  • Versions iterate quickly, and some modules change a lot.
  • Because you write little configuration yourself, errors can be hard to pinpoint.

5.2 Spring Boot's core annotations

The core annotation on the startup class is @SpringBootApplication, which mainly combines the following 3 annotations:

@SpringBootConfiguration: combines the @Configuration annotation, providing configuration-class functionality.
@EnableAutoConfiguration: switches on auto-configuration; individual auto-configurations can also be excluded, e.g. turning off data-source
auto-configuration: @SpringBootApplication(exclude = { DataSourceAutoConfiguration.class }).
@ComponentScan: Spring component scanning.

5.3 Spring Boot's core configuration files

Spring Boot's core configuration files are the application file and the bootstrap file.

  • The application file is mainly used for the automatic configuration of the Spring Boot project.
  • The bootstrap file is used in the following scenarios:

1. With a Spring Cloud Config center, the properties for connecting to the config center go in the bootstrap file so the external configuration can be loaded;
2. Fixed properties that must not be overridden;
3. Encryption/decryption scenarios.

Differences between the yml and properties formats

difference .yml .properties
syntax keys are split with ":"; indentation uses two spaces, never TAB keys are split with "."
assignment values are assigned with ": ", and the colon must be followed by a space values are assigned with "="
data types supports key-value pairs, arrays ("- " marks an array element), and objects supports key-value pairs only
load order entries are loaded in order load order is not guaranteed
loading custom yml files cannot be loaded with the @PropertySource annotation @PropertySource works

Profile configuration: choosing the configuration file

  • application.properties
  • application-dev.properties
spring.profiles.active=dev

Logging configuration

Recommended: Slf4j + logback

  • Console output: logging.level + package name
logging.level.com.example.mapper=debug
  • Log file output
logging.file.name=D:\logs\demo.log
logging.file.path=D:\logs

5.4 How do you run specific code when Spring Boot starts?

  • Implement the ApplicationRunner interface
  • Implement the CommandLineRunner interface
  • Implement the ServletContextListener interface (a CommandLineRunner sketch follows this list)
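A minimal CommandLineRunner sketch; the class name and printed message are illustrative only.

import org.springframework.boot.CommandLineRunner;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

@Component
@Order(1) // controls ordering when several runners exist
public class StartupRunner implements CommandLineRunner {

    // Called once the ApplicationContext has fully started,
    // with the raw command-line arguments.
    @Override
    public void run(String... args) {
        System.out.println("application started, args=" + String.join(",", args));
    }
}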

5.5 How should Starters in Spring Boot be understood?

Starters can be understood as launchers: each contains a set of dependencies that can be integrated into an application, giving you one-stop integration of Spring and other technologies without hunting around for sample code and dependency lists. If you want Spring JPA for database access, just add the spring-boot-starter-data-jpa starter dependency and it is ready to use.

5.6 What does spring-boot-starter-parent do?

spring-boot-starter-parent mainly provides the following defaults:

  1. The Java version defaults to 1.8
  2. The encoding defaults to UTF-8
  3. Unified Maven dependency version management
  4. Default resource filtering and plugin management

5.7 How does Spring Boot auto-configuration work?

The annotations @EnableAutoConfiguration, @Configuration, and the @ConditionalOnXxx conditional annotations are the heart of auto-configuration: a candidate must first be a configuration class, and it is then applied automatically only when the conditions on the classpath are satisfied.

6. Spring Cloud

6.1 What are Spring Cloud's core components?

Service registration and discovery - Netflix Eureka

Eureka consists of two components: the Eureka server and the Eureka client. The Eureka server acts as the service registry. The Eureka client is a Java client that simplifies interaction with the server, acts as a round-robin load balancer, and provides failover support for services.

A detailed introduction to Eureka can be found in the article: SpringCloud 服务注册与发现-Eureka

Client-side load balancing - Netflix Ribbon

Ribbon mainly provides client-side software load-balancing algorithms. The Ribbon client component offers a complete set of configuration options, such as connection timeouts, retries, and retry algorithms. Ribbon ships with pluggable, customizable load-balancing components.

Circuit breaker - Netflix Hystrix

A circuit breaker prevents an application from repeatedly attempting an operation that is likely to fail, allowing it to continue without waiting for the fault to be fixed or wasting CPU cycles while it determines that the fault is long-lasting. The circuit breaker pattern also lets the application detect whether the fault has been resolved; if the problem appears to be corrected, the application can try the operation again.

A detailed introduction to Hystrix can be found in the article: SpringCloud 熔断器-Hystrix

Service gateway - Spring Cloud Gateway

Gateway is the gateway of the Spring Cloud ecosystem, intended to replace Zuul. To improve gateway performance, Gateway is built on the WebFlux framework, whose underlying layer uses the high-performance Netty communication framework. It aims to give microservice architectures a simple, effective, unified way to manage API routing.

Declarative service invocation - Spring Cloud OpenFeign

OpenFeign is the declarative service-invocation and load-balancing component officially promoted by Spring. It integrates Ribbon, using Ribbon to maintain the list of available services and to implement client-side load balancing. It has all of Feign's features and adds support for Spring MVC annotations on top of Feign.

Distributed configuration center - Spring Cloud Config

Spring Cloud Config provides server-side and client-side support for externalized configuration in distributed systems, making unified distributed configuration management easy. It is split into Config Server and Config Client: the Config Server reads the configuration files and exposes an HTTP API, and the Config Client calls the Config Server's API to read configuration.

Spring Cloud Config by itself is static; it must be paired with Spring Cloud Bus for dynamic configuration updates.

Message bus - Spring Cloud Bus

Spring Cloud Bus connects the services of a microservice architecture with a lightweight message broker and can broadcast state changes (such as configuration-center changes) or other management instructions

Used together with Spring Cloud Config, Spring Cloud Bus enables dynamic configuration refresh.

Spring Cloud Bus currently supports two message brokers: RabbitMQ and Kafka

6.2 Spring Cloud pros and cons

Advantages:

  • Low coupling: modules do not affect one another's development.
  • Simple configuration, mostly achievable with annotations, without many configuration files.
  • Microservices are cross-platform and can be developed in any language.
  • Each microservice can have its own independent database, or share a common one.
  • You only need to focus on your own back-end code, expose the interface, and communicate with other services through the components.

Disadvantages:

  • Deployment is more troublesome.
  • Data management is harder, because each microservice may use its own database.
  • System integration testing is more troublesome.
  • Performance monitoring is more troublesome (ideally, build a dashboard-style monitoring system).

6.3 How do microservices communicate with each other independently?

  • Synchronous calls, i.e. the familiar service registration and discovery, accessing another service directly via remote procedure call.

    • RestTemplate
    • OpenFeign

    Advantages: simple and common; with no middleware broker, the system is simpler
    Disadvantages:

    • only supports the request/response pattern;
    • reduces availability, because both client and server must be up for the whole duration of the request
  • Asynchronous calls

    • Spring Cloud Bus
    • MQ message middleware

    Advantages:

    • decouples client and server, giving looser coupling
    • improves availability, because the message broker buffers messages until the consumer can consume them
    • supports many communication styles, such as notification, request/async response, publish/subscribe, publish/async response

    Disadvantages: the message broker adds extra complexity

6.4 Explain how RPC works

RPC, Remote Procedure Call, means invoking a service on a remote machine. Roughly four steps:

  1. Establish communication. A module handling network connections is needed, responsible for connection setup, management, and message transmission.
  2. Service addressing. A Registry is needed to register service addresses.
  3. Network transmission. An encoding/decoding module is needed: the network carries bytes, so the objects we use must be serialized and deserialized.
  4. Service invocation. The server exposes the service interfaces it offers; the client calls a proxy implementation of the interface, which gathers the data, encodes it, sends it to the server, and waits for the result.

RPC architecture components

  • Client: the service caller (service consumer)

  • Client Stub: stores the server's address information, packs the client's request parameters into a network message, and sends it to the server over the network

  • Server Stub: receives the request message sent by the client, unpacks it, and then calls the local service to process it

  • Server: the real provider of the service

The concrete RPC call flow

1. The service consumer (client) invokes the service it needs as if it were a local call;
2. The client stub receives the call and serializes (assembles) the method, parameters, and other information into a message body that can be transmitted over the network;
3. The client stub finds the remote service's address and sends the message to the server over the network;
4. The server stub receives the message and decodes it (deserialization);
5. The server stub calls the local service according to the decoded result;
6. The local service executes the concrete business logic and returns the result to the server stub;
7. The server stub packs the result back into a message (serialization) and sends it over the network to the consumer;
8. The client stub receives the message;
9. The client stub decodes the received message (deserialization);
10. The service consumer (client) obtains the final result.

The goal of an RPC framework is to encapsulate steps 2 through 10, that is, the calling and encoding/decoding, so that calling a remote service feels just like calling a local one.

7. MyBatis

7.1 What is the difference between #{} and ${}?

  • #{} is precompiled processing: MyBatis replaces each #{} in the SQL with a ? placeholder and assigns the value using PreparedStatement's set methods;
  • ${} is plain string substitution: ${} is simply replaced with the variable's literal value.

Using #{} effectively prevents SQL injection and improves system security; a mapper sketch follows.
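A minimal annotation-style mapper sketch contrasting the two; User is a hypothetical entity with id and name properties, and the table and column names are illustrative only.

import java.util.List;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

public interface UserMapper {

    // #{} becomes a ? placeholder in the PreparedStatement: safe against SQL injection.
    @Select("SELECT id, name FROM user WHERE id = #{id}")
    User findById(@Param("id") Long id);

    // ${} is spliced into the SQL text as-is: acceptable only for trusted
    // identifiers such as a column name, never for user input.
    @Select("SELECT id, name FROM user ORDER BY ${column}")
    List<User> findAllOrderedBy(@Param("column") String column);
}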

7.2 What if an entity's property names differ from the table's column names?

  1. Define column aliases in the query SQL so the aliases match the entity's property names.
  2. Use a resultMap tag to map the column names to the entity's property names.

7.3 How do mapper interfaces work? Can interface methods be overloaded?

Mapper interfaces work via JDK dynamic proxies: at runtime MyBatis uses a JDK dynamic proxy to generate a proxy object for the Mapper interface. The proxy intercepts the interface's method calls, uniquely locates a MappedStatement by the fully qualified class name plus the method name, has the executor run the SQL it represents, and then returns the SQL's result.

Methods in a Mapper interface cannot be overloaded, because statements are stored and looked up by the fully-qualified-name + method-name strategy.

7.4 How does MyBatis paginate? How do pagination plugins work?

MyBatis paginates with the RowBounds object, which performs in-memory pagination over the ResultSet rather than physical pagination.
You can also write physical-pagination parameters directly in the SQL, or use a pagination plugin for physical pagination.

The basic principle of a pagination plugin is to use MyBatis's plugin interface to implement a custom plugin that intercepts the SQL about to execute inside the plugin's interception method, then rewrites the SQL, appending the physical pagination statement and parameters appropriate to the configured dialect.

7.5 MyBatis's first- and second-level caches

  • First-level cache: a PerpetualCache HashMap-based local cache whose scope is the Session; when the Session is flushed or closed, all of its cached entries are cleared. The first-level cache is on by default.
  • The second-level cache uses the same mechanism, also defaulting to PerpetualCache HashMap storage, except that its scope is the Mapper (namespace) and its storage source can be customized, e.g. Ehcache. It is off by default; to enable it, the classes stored in the second-level cache must implement the Serializable interface (so object state can be saved), and it is configured in the mapper's XML file;
  • As for cache invalidation: after any C/U/D operation in a given scope (Session for the first-level cache, namespace for the second-level cache), all select caches in that scope are by default cleared and refreshed; if the second-level cache is enabled, whether to refresh is decided by configuration.

8. MySQL

8.1 What are the three normal forms?

  • First normal form: columns are atomic and cannot be split further
  • Second normal form: every row is uniquely distinguishable, enforced by the primary key constraint
  • Third normal form: a table's non-key attributes must not depend on non-key attributes of other tables, enforced by foreign key constraints

The three normal forms build on one another: the second assumes the first, and the third assumes the first and second.

8.2 MySQL engines: the differences between InnoDB and MyISAM

  1. InnoDB supports transactions, MyISAM does not. In InnoDB every SQL statement is wrapped in a transaction and auto-committed by default, which hurts speed; it is best to group multiple statements between begin and commit as a single transaction;
  2. InnoDB supports foreign keys, MyISAM does not. Converting an InnoDB table containing foreign keys to MyISAM fails;
  3. InnoDB uses a clustered index: the data file is bound to the index, a primary key is mandatory, and lookups through the primary key are very efficient. Secondary-index lookups, however, need two steps: first find the primary key, then look the row up by primary key. The primary key should therefore not be too large, because a large primary key inflates every other index. MyISAM uses non-clustered indexes: the data file is separate, and indexes store pointers into it; the primary and secondary indexes are independent.
  4. InnoDB does not store the table's exact row count; select count(*) from table requires a full scan. MyISAM keeps the row count in a variable, so the same statement just reads that variable and is very fast;
  5. Historically InnoDB did not support full-text indexes (support was added in MySQL 5.6) while MyISAM has always supported them, and MyISAM is generally faster at queries;

8.3 Database pagination

1. MySQL: the limit keyword

With two arguments, they are the start offset of the query (counting from 0) and the number of rows; with one argument, it is the number of rows

2. Oracle: the hidden rownum column

  • the query must select the rownum column;
  • used directly, rownum only works with < and <=, not > or >=;
  • to use > or >=, wrap the query in a subquery and give the rownum column an alias

3. SQL Server: top

8.4 Database transactions

Multiple SQL statements that either all succeed or all fail.

1. Transaction properties

Atomicity: the database operations making up a transaction are an indivisible atomic unit; only when all of them succeed is the transaction committed. If any operation fails, everything already executed must be undone, returning the database to its initial state.
Consistency: after a transaction completes, the database is in a state consistent with its business rules, i.e. the data is not corrupted. For example, if A transfers 100 yuan to B, the total of A's and B's balances is unchanged whether or not the operation succeeds.
Isolation: under concurrent data operations, each transaction has its own data space and their operations do not interfere with one another
Durability: once a transaction commits successfully, all of its operations must be persisted to the database.

2. Syntax and flow of MySQL transactions

The InnoDB storage engine implements transactions mainly through the UNDO log and REDO log; the MyISAM storage engine does not support transactions.

  • UNDO log: copies the data as it was before the transaction executed, used to roll the data back when the transaction hits an exception.
  • REDO log: records every data-updating operation performed during the transaction; when the transaction commits, its content is flushed to disk.

Starting a transaction
BEGIN; or START TRANSACTION; explicitly marks the starting point of a transaction.

Committing a transaction
COMMIT; commits the transaction, writing all of the transaction's database updates to the physical database on disk; the transaction ends normally. Once this command has run, the transaction can no longer be rolled back.

Rolling back (undoing) a transaction
ROLLBACK; aborts the transaction: when some fault occurs mid-transaction and it cannot continue, the system undoes all of the transaction's completed database operations and rolls back to the state at the transaction's start. This statement also ends the transaction.

3. Ways transactions can interfere with each other

  • Dirty read: one transaction reads data that another transaction has not committed.
  • Non-repeatable read: within one transaction, two identical queries return different data, because another transaction modified the data in between.
  • Phantom read: while a user reads a range of rows, another transaction inserts new rows into that range; when the user reads the range again, new "phantom" rows appear.
  • Lost update: two transactions read the same record; A modifies it first, then B modifies it too (unaware of A's change); when B commits, it overwrites A's result.

4. Transaction isolation levels
Database transaction isolation levels (TRANSACTION ISOLATION LEVEL) exist to avoid the interference above as far as possible.

isolation level dirty read lost update non-repeatable read phantom read concurrency model update conflict detection
Read Uncommitted yes yes yes yes pessimistic no
Read Committed no yes yes yes pessimistic no
Repeatable Read no no no yes pessimistic no
Serializable no no no no pessimistic no

Transaction isolation is implemented with locks, blocking to shield the effects above; the higher the level, the more locks are taken and the lower the throughput

  • Read Uncommitted: reads take no locks and perform no checks; uncommitted data may be read.
  • Read Committed: only committed data is read, waiting for other transactions to release their exclusive locks; the shared lock taken for a read is released as soon as the read completes. This is the default isolation level of most databases.
  • Repeatable Read: like read committed, except shared locks are held until the transaction ends. This is MySQL's default isolation level.
  • Serializable: similar to repeatable read, but the locks cover not only the rows queried but the whole queried range, preventing new rows from being inserted into that range.

8.5 MySQL indexes

An index is a structure that sorts the values of one or more columns of a table; building an index helps fetch information quickly.

1. MySQL has 4 index types

  • Primary key index (PRIMARY): no duplicate values, no NULLs; a table can have only one primary key index.

    ALTER TABLE table_name ADD PRIMARY KEY (column_name)
    
  • Unique index (UNIQUE): no duplicate values, NULL allowed; a table may create unique indexes on several columns.

    -- create a unique index
    ALTER TABLE table_name ADD UNIQUE (column_name); 
    -- create a unique composite index
    ALTER TABLE table_name ADD UNIQUE (column1,column2); 
    
  • Ordinary index (INDEX)

    -- create an index with the CREATE INDEX statement
    CREATE INDEX indexName ON table_name (column_name)
    -- add an index by modifying the table structure
    ALTER TABLE table_name ADD INDEX indexName (column_name)
    -- create a composite index
    ALTER TABLE table_name ADD INDEX indexName (column1,column2,column3);
    
  • Full-text index (FULLTEXT)

    -- create a full-text index
    ALTER TABLE table_name ADD FULLTEXT (column_name);
    

An index cannot be modified once created; to change it, drop it and create it again.

-- drop an index
DROP INDEX indexName ON table_name;

2. Index pros and cons

  • Indexes speed up database retrieval
  • A unique index guarantees the uniqueness of each row
  • Indexes let the query optimizer do its work, improving system performance
  • Indexes consume physical and data space
  • Indexes slow down maintenance tasks such as inserts, deletes, and updates

3. Index design principles

  • Good candidate columns are those appearing in where clauses or named in join clauses;
  • Low-cardinality columns index poorly; there is no need to index them;
  • Use short indexes: for long string columns, specify a prefix length, which saves a lot of index space;
  • Do not over-index. Indexes need extra disk space and lower write performance; when the table's contents change, its indexes are updated or even rebuilt, and the more indexed columns there are, the longer that takes. Keep only the indexes your queries need.

8.6 Index optimization and SQL optimization

1. Checking index usage

Use SHOW STATUS LIKE 'Handler_read%'; to inspect index usage:
Handler_read_key: this value is high when the indexes are doing their job.
Handler_read_rnd_next: the number of requests to read the next row from the data file; a high value means a lot of table scanning is happening and index utilization is poor.

2. Index optimization rules

  1. If MySQL estimates that using the index would be slower than a full table scan, it does not use the index.
  2. Leading-wildcard fuzzy queries cannot hit an index; optimize them into non-leading-wildcard fuzzy queries.
  3. Implicit data-type conversion prevents index hits; in particular, when the column type is a string, always put the character constant in quotes.
  4. With a composite index, a query whose conditions do not include the leftmost index column (violating the leftmost-prefix rule) cannot hit the composite index. Note that the leftmost-prefix rule is not about the order of the conditions, but about whether the leftmost index column appears among them.
  5. union, in, and or can all hit indexes; prefer in. CPU cost of the queries: or > in > union.
  6. For conditions separated by or, if a column before the or is indexed but a column after it is not, none of the involved indexes is used.
  7. Negative conditions cannot use an index; optimize them into in queries. Negative conditions include: !=, <>, not in, not exists, not like, etc.
  8. Range conditions can hit an index. Range conditions include: <, <=, >, >=, between, etc.
  9. Columns wrapped in database-side computations do not hit indexes.
  10. Indexed columns should not allow null, even though IS NULL can hit an index.

3. explain: analyzing a statement's execution plan

The explain command gets the execution plan of a select statement; from it we can learn the table read order, the data access operation types, which indexes could be used, which indexes were actually used, references between tables, how many rows the optimizer examines per table, and more.
Focus on type, rows, filtered, and Extra.

type, from top to bottom, is increasingly efficient

  • ALL full table scan
  • index full index scan
  • range index range scan, commonly used with <, <=, >=, between, in, etc.
  • ref non-unique index scan, or unique-index prefix scan, returning rows matching a single value; common in joins
  • eq_ref like ref, except a unique index is used; joins through the primary key
  • const/system a single matching row, whose other columns the optimizer treats as constants, e.g. primary key or unique index lookups
  • null MySQL returns the result without accessing any table or index

Extra

  • Using filesort: MySQL needs an extra pass to figure out how to retrieve the rows in sorted order. It walks all rows according to the join type, saving the sort key and a row pointer for every row matching the WHERE clause, then sorts the keys and retrieves the rows in sorted order.
  • Using temporary: a temporary table is used to hold intermediate results; performance is particularly bad and this needs priority optimization
  • Using index: the select is served by a covering index, avoiding access to the table's data rows; efficient. If Using where appears at the same time, it means the index alone cannot be used to locate the qualifying data.
  • Using index condition: ICP (index condition pushdown), added in MySQL 5.6. Using index condition means ICP is active: data is filtered in the storage-engine layer rather than the server layer, using the data already present in the index to reduce the rows fetched back from the table.

4. SQL optimization rules

  1. Do not use select * in queries
  2. Minimize subqueries; replace them with joins (left join, right join, inner join)
  3. Reduce the use of IN and NOT IN; replace them with exists, not exists, or joins
  4. Replace or queries with union or union all where possible (when duplicates are impossible or need not be removed, union all is better)
  5. Avoid != and <> operators in the where clause, or the engine abandons the index and performs a full table scan.
  6. Avoid null checks on columns in the where clause, or the engine abandons the index and performs a full table scan.

5. Identify the problem and take the corresponding measures

  • Optimize the indexes
  • Optimize the SQL statements: rewrite the SQL, segment IN queries, segment time-range queries, filter based on the previous batch
  • Switch to another implementation: ES, a data warehouse, etc.
  • Defragment the data

8.7 How do you optimize very large tables?

  1. Limit the scope of queries: forbid query statements without any data-range condition.
  2. Read/write splitting: the classic database split, where the primary handles writes and replicas handle reads;
  3. Vertical partitioning: splitting by column, turning one table with many columns into several tables.
    Pros: smaller rows mean fewer blocks read per query and less I/O. Vertical partitioning also simplifies the table structure, making it easier to maintain.
    Cons: primary keys become redundant and must be managed; joins appear (which can be moved to the application layer to solve); and vertical partitioning makes transactions more complex;
  4. Horizontal partitioning: splitting by row, keeping the table structure unchanged and storing the rows in shards by some strategy. Each shard of data goes to a different table or database, achieving distribution. Horizontal splitting is best done across databases.
    Pros: supports very large data volumes.
    Cons: sharded transactions are hard to solve, cross-node joins perform poorly, and the logic is complex.

Two common database sharding approaches, as a supplement:

  • Client-side proxy: the sharding logic lives in the application, packaged in a jar, implemented by modifying or wrapping the JDBC layer. Dangdang's Sharding-JDBC and Alibaba's TDDL are two common implementations.
  • Middleware proxy: a proxy layer is added between the application and the data, and the sharding logic is maintained centrally in the middleware service. Mycat, 360's Atlas, NetEase's DDB, and so on are implementations of this architecture.

8.8 The differences between drop, delete, and truncate

drop, delete, and truncate in SQL all mean deletion, but with some differences

  • delete and truncate delete only the table's data, not its structure;
  • delete is DML; the operation is placed in the rollback segment and takes effect only after the transaction commits;
  • truncate and drop are DDL; they take effect immediately and cannot be rolled back;
  • speed, in general: drop > truncate > delete

8.9 Inner join, left outer join, right outer join

  • Inner Join: matches the related records of the two tables.
  • Left Outer Join: besides matching the related records of the two tables, it also returns the remaining records of the left table, with NULL for the right table's unmatched columns.
  • Right Outer Join: besides matching the related records of the two tables, it also returns the remaining records of the right table, with NULL for the left table's unmatched columns.
  • Which table counts as left or right depends on where the table name appears relative to the Outer Join keyword.

9. Redis

9.1 Why is Redis so fast?

  • Entirely memory-based: the vast majority of requests are pure memory operations, which are very fast;
  • Simple data structures with simple operations; Redis's data structures are purpose-designed;
  • Single-threaded, avoiding unnecessary context switches and race conditions, with no lock/unlock operations;
  • Multiplexed, non-blocking I/O;
  • A different underlying model: the low-level implementation and the application protocol used to talk to clients are its own. Redis builds its own VM mechanism directly, because generic system calls would waste time moving and requesting data;

9.2 Redis data types and usage scenarios

type storable values operations scenarios
STRING strings, integers, floats operate on the whole string or part of it; increment or decrement integers and floats simple key-value caching
LIST lists push or pop elements at both ends; trim to keep only a range of elements list-shaped data structures
SET unordered sets add, fetch, remove single elements; test membership; compute intersection, union, difference; fetch random members intersection, union, and difference operations
HASH unordered hash of key-value pairs add, fetch, remove single pairs; fetch all pairs; test whether a key exists structured data, e.g. an object
ZSET ordered sets add, fetch, remove elements; fetch elements by score range or member; rank of a key deduplicated but sortable data, e.g. the top-ranked users

Application scenarios

  • Counters
  • Caching, putting hot data in memory
  • Session caching: Redis can centrally store the session information of multiple application servers
  • Distributed locks: the SETNX command; the RedLock distributed lock
  • Set intersection, union, and difference operations
  • ZSET can implement leaderboards and similar features

9.3 Redis persistence

Redis provides two persistence mechanisms, RDB (the default) and AOF:

RDB (Redis DataBase) snapshots
At set intervals, the in-memory data is saved to disk as a snapshot, producing the dump.rdb data file. The snapshot period is defined by the save parameter in the configuration file.
Advantages
1. There is only one file, dump.rdb, which makes persistence convenient.
2. Good disaster recovery: a single file can be copied to a safe disk.
3. Maximal performance: a separate child process handles the persisting, the main process performs no IO at all, preserving redis's high performance.
4. When the data set is relatively large, startup is more efficient than with AOF.
Disadvantages
1. Low data safety. RDB persists at intervals, so if redis fails between persists, data is lost. This approach better suits cases where the data requirements are loose

AOF (Append Only File) persistence
Every write command Redis executes is recorded in a separate log file; when Redis restarts, it replays the persisted log file to restore the data.
When both mechanisms are enabled, Redis prefers AOF for data recovery.
Advantages
1. Data safety: AOF persistence offers the appendfsync option, including always, which logs every command operation to the aof file as it happens.
2. Files are written in append mode, so even if the server dies mid-write, the redis-check-aof tool can fix the consistency problem.
3. The rewrite mechanism: before an aof file is rewritten (oversized files get their commands merged and rewritten), specific commands can be deleted from it (such as an accidental flushall)
Disadvantages
1. AOF files are larger than RDB files and recover more slowly.
2. When the data set is large, startup is less efficient than with rdb.

RDB vs AOF

  • AOF is updated more frequently than RDB; if both are configured, AOF is used first to restore the data.
  • AOF is safer and also bigger than RDB
  • RDB performs better than AOF

9.4 Redis key expiry and expired-key deletion strategies

  • EXPIRE: sets a key's time-to-live in seconds.
  • PERSIST: removes a key's expiry time, so the key never expires.

Deletion strategies for expired keys

  • Timed expiry: every key with an expiry time needs its own timer, and the key is purged the moment it expires. This strategy purges expired data immediately and is very memory-friendly, but it consumes a lot of CPU handling expiries, hurting the cache's response time and throughput.
  • Lazy expiry: only when a key is accessed is it checked for expiry and purged if expired. This strategy saves the most CPU but is very memory-unfriendly: in the extreme case, many expired keys are never accessed again, so they are never purged and keep occupying memory.
  • Periodic expiry: at set intervals, scan a certain number of keys in the expires dictionaries of a certain number of databases and purge the expired ones among them. This strategy is a compromise between the first two.

Redis uses both the lazy and the periodic expiry strategies together.

9.5 Redis cache anomalies and their remedies

Cache avalanche
A cache avalanche is the large-scale expiry of cached entries at the same moment, so the subsequent requests all fall through to the database, which collapses under the huge volume of requests in a short time.
Remedies
1. Randomize the expiry times of cached data, preventing masses of entries from expiring at the same moment.
2. When concurrency is not extreme, the most common remedy is locking and queueing.
3. Give every cached entry a corresponding cache flag recording whether the cache is stale; when the flag says stale, refresh the data cache.

Cache penetration
Cache penetration is traffic for data that exists in neither the cache nor the database, so all the requests fall through to the database, which collapses under the volume of requests in a short time.
Remedies
1. Add validation at the API layer, such as authentication checks and basic id validation, rejecting id<=0 outright;
2. When the data cannot be found in the cache or in the database, still put the key in the cache with a null value and a short expiry time. Clean up the null-value keys periodically so memory cannot be maliciously filled up.

Cache breakdown
Cache breakdown concerns data that is missing from the cache but present in the database (typically because the cached entry expired): with extremely many concurrent users, they all miss the cache at once and all go to the database at once, spiking its load. Unlike a cache avalanche, breakdown is concurrent queries for the same entry; an avalanche is many different entries expiring at once, so many lookups fail and fall through to the database.
Remedies
1. Make hot data never expire.
2. Add a mutex lock.

Cache degradation
When traffic surges, a service misbehaves (slow responses or no response), or non-core services threaten the performance of core flows, the service must still be kept available, even if degraded. The system can degrade automatically based on key metrics, or a switch can be configured for manual degradation.
The end goal of degradation is to keep core services available, even lossily. And some services cannot be degraded (adding to cart, checkout).
Before degrading, map out the system to see what can be sacrificed to protect what matters, sorting out what must be defended at all costs and what can be degraded. For example, a plan referencing log-level conventions:
1. Normal: some services occasionally time out because of network jitter or an ongoing deployment; they can be degraded automatically;
2. Warning: a service's success rate fluctuates for a while (e.g. between 95% and 100%); it can be degraded automatically or manually, with an alert sent;
3. Error: for instance, availability drops below 90%, the database connection pool is exhausted, or traffic suddenly spikes past the maximum threshold the system can bear; degrade automatically or manually as the situation requires;
4. Severe error: for instance, the data goes wrong for some special reason; urgent manual degradation is needed.
The purpose of cache degradation is to prevent a Redis failure from dragging the database into an avalanche. So for unimportant cached data a service-degradation strategy can be adopted; a common practice is that when Redis has a problem, requests return a default value to the user directly instead of querying the database.

Cache update
The cache service (Redis) and the data service (the underlying database) are independent, heterogeneous systems; updating the cache and updating the data cannot be made atomic across both sides, so under concurrent reads/writes, or when the second step fails, all kinds of inconsistencies appear. Keeping the double write consistent under concurrent updates is a key topic of cache systems.
There are four cache-update design patterns
1. Cache aside: read: check the cache first; if it misses, query the database and load the result into the cache; update: update the database first, then invalidate the cache; or invalidate the cache first, then update the database;
2. Read through: the cache is updated during reads; when the cache misses, where Cache Aside makes the caller load the data into the cache, Read Through has the cache service load it itself;
3. Write through: happens on data updates. If the update misses the cache, update the database directly and return; if it hits the cache, update the cache and let the cache update the database itself;
4. Write behind caching: commonly called write back; updates touch only the cache, not the database; the cache asynchronously flushes batched updates to the database on a schedule;
Cache aside
To avoid several concurrent requests updating the same cache entry and producing dirty data, the cache is not updated directly but invalidated
1. Update the database first, then invalidate the cache: under concurrency, delayed invalidation is recommended (give the cache a 1s expiry after the write request completes). A read request does not refresh the entry if it is still present in redis (other write requests have not finished yet); only when the entry is absent from redis (other write requests have invalidated it) does a read request refresh the data in redis. The data cached by the read request can also be given an expiry time, guarding against the inconsistency caused by a failure in the second step.
2. Invalidate the cache first, then update the database: under concurrency, delayed invalidation is recommended (give the cache a 1s expiry before the write request starts). The write request sets a 1s delayed expiry when invalidating the cache and then goes on to update the database; meanwhile other read requests can still read the cached data, and once the database update completes the cached entry has expired, so subsequent reads load the database's latest data into the cache, keeping the cache and database consistent. Under this scheme a failure in the second step causes no inconsistency: for example, if the cache was set to expire after 1s and the database update then fails, even though the cache expires, subsequent reads simply reload the pre-update data into the cache.
Recommended: invalidate the cache first, then update the database, using delayed invalidation to refresh the cache (a cache-aside sketch follows).
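A minimal cache-aside sketch in Java, assuming Spring Data Redis's StringRedisTemplate; ProductDao and all names are hypothetical stand-ins for the real data layer.

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class ProductCache {

    private final StringRedisTemplate redis;
    private final ProductDao dao; // hypothetical database access object

    public ProductCache(StringRedisTemplate redis, ProductDao dao) {
        this.redis = redis;
        this.dao = dao;
    }

    // Read path: cache first, fall back to the database, then populate the cache.
    public String getProduct(String id) {
        String key = "product:" + id;
        String cached = redis.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        String fromDb = dao.load(id);
        if (fromDb != null) {
            redis.opsForValue().set(key, fromDb, 10, TimeUnit.MINUTES);
        }
        return fromDb;
    }

    // Write path: delayed invalidation first (short expiry on the cached entry),
    // then the database update, as recommended above.
    public void updateProduct(String id, String value) {
        redis.expire("product:" + id, 1, TimeUnit.SECONDS);
        dao.update(id, value);
    }
}

interface ProductDao {
    String load(String id);
    void update(String id, String value);
}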

9.6 Redis transactions

1. The concept of Redis transactions
A Redis transaction is in essence a collection of commands grouped by MULTI, EXEC, WATCH, and so on. A transaction supports executing several commands at once, and all commands in a transaction are serialized. While the transaction executes, the queued commands run serially in order, and command requests submitted by other clients are never inserted into the transaction's command sequence.

In short: a redis transaction is the one-shot, ordered, exclusive execution of a series of commands in a queue.

2. The three phases of a Redis transaction

  • transaction start: MULTI
  • commands enqueued
  • transaction execution: EXEC

While a transaction is in progress, if the server receives any request other than EXEC, DISCARD, WATCH, or MULTI, it places the request in the queue.

3. Redis transaction commands
Redis transactions are implemented through four primitives: MULTI, EXEC, DISCARD, and WATCH.
Redis serializes all the commands of a transaction and executes them in order.
Redis does not support rollback; when a transaction fails, Redis does not roll back but keeps executing the remaining commands
If a command in a transaction has an error when it is enqueued, none of the commands execute
If a runtime error occurs inside a transaction, the correct commands still execute

  • WATCH: an optimistic lock that gives Redis transactions check-and-set (CAS) behavior. It can monitor one or more keys; once any of them is modified (or deleted), the subsequent transaction does not execute. Monitoring lasts until the EXEC command.
  • MULTI: opens a transaction; it always returns OK. After MULTI the client can keep sending any number of commands to the server; they are not executed immediately but placed in a queue, and only when EXEC is called are all the queued commands executed.
  • EXEC: executes all the commands in the transaction block, returning their replies in the order the commands ran. When the operation was interrupted, it returns the null value nil.
  • DISCARD: the client can flush the transaction queue and abandon the transaction, and the client leaves the transaction state
  • UNWATCH: cancels monitoring of all keys

4. Redis transaction properties

  • They have the consistency and isolation of ACID
  • When the server runs in AOF persistence mode with the appendfsync option set to always, transactions also have durability
  • Redis transactions do not guarantee atomicity and have no rollback, although individual Redis commands execute atomically.

9.7 Redis clusters

Redis cluster schemes come in three forms: master-slave replication (master-slave), sentinel mode (sentinel), and Cluster.

Master-slave replication (master-slave)
One master with several slaves: the master handles writes and replicates the data to the other slave nodes; the slave nodes handle reads, and all read requests go to them. This also makes horizontal scaling easy, supporting read-heavy concurrency.

1. How master-slave replication works

  1. When a slave starts and establishes the master-slave relationship, it sends the master a SYNC command for a full sync with the master (on first connection)
  2. On receiving the command, the master starts saving a snapshot in the background (the RDB persistence process) and buffers the write commands received meanwhile
  3. When the snapshot finishes, the master sends the snapshot file and all the buffered write commands to the slave
  4. The slave writes them to its local disk first, then loads them from local disk into memory
  5. After that, every time the master receives a write command it forwards the command to the slave, keeping the data consistent

2. Advantages of master-slave replication

  • Master-slave replication is mainly for horizontal scaling and read/write splitting; added slave nodes raise read throughput.
  • redis replicates data to slave nodes asynchronously; starting with redis 2.8, slaves periodically acknowledge how much data they have replicated;
  • A master node can be configured with several slave nodes, and slave nodes can also connect to other slave nodes;
  • While a slave node replicates, the master node's normal work is not blocked;
  • While a slave node replicates, its own queries are not blocked either; it serves them from the old data set. But when replication completes, it must delete the old data set and load the new one, and it briefly pauses external service during that window;

3. Disadvantages of master-slave replication

  • Replication and synchronization of the slaves' data are all handled by the master node, which can put too much pressure on the master;
  • No automatic fault tolerance or recovery: a master or slave going down fails part of the reads and writes and requires manual intervention;
  • If the master goes down with some data not yet synced to the slaves, switching the IP over also introduces data inconsistency, lowering the system's availability;
  • If several slaves disconnect and need restarting, try not to restart them in the same window: every slave that starts sends a sync request for a full sync with the master, which can spike the master's IO and bring it down;
  • Redis struggles to support online scaling; expanding a cluster that has reached capacity gets very complicated;

Sentinel mode
Sentinel mode is a special mode: Redis provides the sentinel commands, and a sentinel is an independently running process. The principle is that the sentinel sends commands and waits for the Redis servers' responses, thereby monitoring any number of running Redis instances.

1. How sentinel mode works

  • Each Sentinel process sends a PING command once per second to every Master, Slave, and fellow Sentinel in the whole cluster.
  • If an instance has gone longer than the value of the down-after-milliseconds option since its last valid PING reply, the Sentinel process marks it subjectively down (SDOWN)
  • If a Master is marked subjectively down (SDOWN), every Sentinel watching that Master must confirm, once per second, that the Master has indeed entered the subjectively down state
  • When a sufficient number of Sentinels (at least the value given in the configuration file) confirm within the given time range that the Master has entered the subjectively down state (SDOWN), the Master is marked objectively down (ODOWN)
  • Under normal conditions, each Sentinel process sends an INFO command to every Master and Slave in the cluster once every 10 seconds.
  • When a Master is marked objectively down (ODOWN) by the Sentinel processes, the frequency at which the Sentinels send INFO to all the Slaves of the downed Master rises from once every 10 seconds to once per second.
  • If not enough Sentinel processes agree that the Master is down, the Master's objectively-down status is lifted. If the Master answers the Sentinel processes' PING commands with valid replies again, its subjectively-down status is lifted.

2. What sentinels do

The sentinel is a very important component of the redis cluster architecture. Sentinels provide high availability for the redis cluster; they are themselves distributed, running as a sentinel cluster whose members cooperate. Main functions:

  • Cluster monitoring: watching whether the redis master and slave processes are working normally.
  • Message notification: if a redis instance fails, the sentinel sends a message as an alert to the administrator.
  • Failover: if the master node dies, a slave is automatically switched to master.
  • Configuration center: after a failover happens, notify the other slave nodes and clients of the new master address via publish/subscribe and update the configuration so they switch hosts;

3. Advantages of sentinel mode

  • Sentinel mode is based on master-slave mode, so it has all the advantages of master-slave.
  • Master and slave can switch automatically, making the system more robust and more available (think of it as an automated version of master-slave replication).

4. Disadvantages of sentinel mode

  • Redis still struggles to support online scaling; expanding a cluster that has reached capacity is very complicated.

Cluster mode (official Redis)

Redis Cluster is a server-side sharding technique, officially available since version 3.0.

Redis's sentinel mode can basically achieve high availability and read/write splitting, but in that mode every Redis server stores the same data, which wastes a lot of memory. So redis 3.0 added Cluster mode, implementing distributed storage for Redis: each Redis node stores different content.

1. Data sharding in Cluster
Redis Cluster does not use consistent hashing; it introduces the concept of hash slots. The cluster has 16384 (2^14) hash slots; each key is put through a CRC16 checksum and taken modulo 16384 to decide which slot it lands in. Each node of the cluster is responsible for a portion of the hash slots

This structure makes adding and removing nodes easy. Moving hash slots from one node to another does not stop service, so adding or removing nodes, or changing how many hash slots a node holds, never makes the cluster unavailable.

On every Redis node there are two things: one is the slot, whose value range is 0-16383; the other is cluster, which can be understood as a cluster-management plugin. When a key we are storing or fetching arrives, Redis computes a result with the CRC16 algorithm and takes the remainder modulo 16384, so every key maps to a hash slot numbered between 0 and 16383; using that value, Redis finds the node owning the corresponding slot and automatically routes the access to that node.

2. Advantages of Cluster

  • Decentralized architecture, dynamic scaling, transparent to the business
  • Has Sentinel's monitoring and automatic failover capability
  • All redis nodes are interconnected (the PING-PONG mechanism) and use a binary protocol internally (the gossip protocol, for efficient data exchange between nodes), optimizing transfer speed and bandwidth.
  • Clients do not need to connect to all the cluster's nodes; connecting to any one available node is enough
  • High performance: clients connect directly to the redis service, eliminating proxy overhead

3. Disadvantages of Cluster

  • Operations are complex, and data migration requires manual intervention
  • Only database 0 can be used
  • Batch operations (pipeline operations) are not supported
  • The distribution logic and the storage module are coupled, etc.

9.8 Redis distributed locks

1. The SETNX command

Redis is single-process and single-threaded; it turns concurrent access into serial access using a queue, and multiple clients' connections to Redis do not compete with one another, so the SETNX command can implement a distributed lock. SETNX is short for "SET if Not eXists".

Return value: 1 if the set succeeds, 0 if it fails.

The SETNX flow and its caveats are as follows (a sketch follows this list)

  • Acquire the lock with SETNX: a return of 0 (the key, and hence the lock, already exists) means acquisition failed; otherwise it succeeded;
  • To prevent a crash after acquiring the lock from leaving every other thread/process stuck in a deadlock where SETNX always returns 0, give the key a "reasonable" expiry time;
  • Release the lock by deleting the lock data with the DEL command;
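A minimal lock sketch, assuming Spring Data Redis 2.1+ (setIfAbsent with a timeout corresponds to SET key value NX EX/PX, setting the value and the expiry atomically); the class and key names are illustrative only.

import java.time.Duration;
import java.util.Collections;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class RedisLock {

    private final StringRedisTemplate redis;

    public RedisLock(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // SET key token NX EX ttl: acquire the lock and set the expiry in one atomic step.
    public boolean tryLock(String key, String token, Duration ttl) {
        Boolean ok = redis.opsForValue().setIfAbsent(key, token, ttl);
        return Boolean.TRUE.equals(ok);
    }

    // Release with a Lua script so "check owner" and "DEL" happen atomically;
    // otherwise we might delete a lock that expired and was re-acquired elsewhere.
    public boolean unlock(String key, String token) {
        String lua = "if redis.call('get', KEYS[1]) == ARGV[1] " +
                     "then return redis.call('del', KEYS[1]) else return 0 end";
        Long result = redis.execute(new DefaultRedisScript<>(lua, Long.class),
                Collections.singletonList(key), token);
        return Long.valueOf(1L).equals(result);
    }
}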

2. RedLock

The Redis official site proposes an authoritative way of implementing distributed locks on Redis called Redlock, which is safer than the original single-node approach. It can guarantee the following properties:

  • Safety: mutual exclusion; at any moment only one client can hold the lock
  • Deadlock freedom: eventually clients can always acquire the lock, and no deadlock arises, even if the client originally locking a resource crashes or a network partition occurs
  • Fault tolerance: as long as most of the Redis nodes are alive, the service works normally

10. RabbitMQ

10.1 Pros and cons of MQ?

Advantages:

  • Asynchronous processing: multiple applications process the same queued message, handling it concurrently across applications; compared with serial processing, this cuts processing time;
  • Application decoupling: multiple applications process the same message through the message queue, so a failed interface call cannot fail the whole process;
  • Throttling and peak shaving: prevents excessive traffic from bringing the application system down;

Disadvantages:

  • Lower system availability: with MQ introduced, an MQ crash takes the whole system down.
  • Higher system complexity: new problems appear, such as duplicate consumption, message loss, and message ordering.
  • Data consistency problems

10.2 ActiveMQ vs RabbitMQ vs RocketMQ vs Kafka

feature ActiveMQ RabbitMQ RocketMQ Kafka
single-node throughput 10k-level, an order of magnitude below RocketMQ and Kafka same as ActiveMQ 100k-level, built for high throughput 100k-level, high throughput, typically paired with big-data systems for real-time computation, log collection, and similar scenarios
effect of topic count on throughput - - topics can reach the hundreds or thousands with only a small throughput drop; a big RocketMQ advantage, supporting many topics on the same machines throughput falls sharply as topics grow from tens to hundreds, so on the same machines Kafka should keep topic counts low; supporting very many topics needs more machine resources
latency ms-level microsecond-level, a hallmark of RabbitMQ with the lowest latency ms-level within ms
availability high, based on a master-slave architecture same as ActiveMQ very high, distributed architecture very high: distributed, multiple replicas of each piece of data, so a few machines going down loses no data and causes no unavailability
message reliability a small probability of losing data basically no loss zero loss achievable with tuned parameter configuration same as RocketMQ
feature support extremely complete features in the MQ domain built on erlang, so strong concurrency, excellent performance, and very low latency fairly complete MQ features, distributed, with good scalability relatively simple features, mainly supporting plain MQ; used at massive scale for real-time computation and log collection in the big-data domain

10.3 What are the important roles in RabbitMQ?

  • Producer: the creator of messages, responsible for creating and pushing data to the message server;
  • Consumer: the receiver of messages, which processes the data and acknowledges the messages;
  • Broker: RabbitMQ itself, playing the "courier" role; it does not produce messages itself, it only plays the "courier".

10.4 What are the important components of RabbitMQ?

  • ConnectionFactory: the manager for establishing connections between the application and Rabbit, used in application code.
  • Channel: the channel used to push messages.
  • Exchange: receives and distributes messages.
  • Queue: stores the producer's messages.
  • RoutingKey: used to assign the producer's data to the exchange.
  • BindingKey: used to bind the exchange's messages to a queue.

10.5 How are RabbitMQ messages sent?

A client must first connect to the RabbitMQ server to publish and consume messages. Client and rabbit server create a TCP connection; once TCP is open and authentication passes (authentication is the username and password you send to the rabbit server), your client and RabbitMQ create an AMQP channel. A channel is a virtual connection built on the "real" TCP connection; AMQP commands are all sent over channels, each channel has a unique id, and both publishing messages and subscribing to queues are done through this channel.

10.6 How does RabbitMQ keep messages stable (avoid message loss)?

1. Guarantee that a message sent by the producer reaches MQ

RabbitMQ provides a transaction mechanism and publisher confirms; since the transaction mechanism costs too much performance, it is generally not used. With publisher confirms, once the message reaches the MQ side, MQ sends back a confirmation that it was received.

2. Guarantee that MQ routes a received message to the corresponding Exchange
A message may fail to find its Exchange, or fail to find a Queue. Both cases can be handled with RabbitMQ's mandatory parameter, which sets the policy for failed message delivery; there are two policies: delete automatically, or return the message to the client.

3. Guarantee the message's durability once the Exchange routes it into a queue
Persist messages so that MQ can recover them after a restart. Besides message persistence, do queue persistence and Exchange persistence as well. If the Exchange and queue are created with persistence enabled, the messages sent are persistent by default. If the server goes down or the disk is damaged, all the measures above are useless; mirrored queues and geo-redundant deployment must be introduced to withstand such force majeure.

4. Guarantee correct consumption once the consumer receives the message
Consumer message acknowledgments

10.7 How does RabbitMQ guarantee message ordering?

How RabbitMQ guarantees ordering
RabbitMQ's problem arises when different messages are all sent to the same queue and multiple consumers consume that one queue. To solve it, create several queues in RabbitMQ and have each consumer consume exactly one fixed queue; the messages within a single queue are guaranteed to be ordered

How Kafka guarantees ordering
For Kafka, the messages within one partition of a topic are definitely ordered; the eventual disorder comes from the consumer side using multiple threads to process messages concurrently for throughput. An in-memory queue can be added before the thread processing, with each thread responsible for exactly one in-memory queue's messages

How RocketMQ guarantees ordering
For RocketMQ, each Topic can specify several MessageQueues, and when we write messages they are distributed evenly across the different MessageQueues. To solve RocketMQ's disorder problem, we just need to make the messages of a Topic that must stay ordered go into the same MessageQueue. The messages within one MessageQueue are definitely ordered, and one MessageQueue's messages are handed to only one Consumer for processing, so that Consumer's consumption is necessarily ordered.

10.8 How does RabbitMQ guarantee message idempotency (avoid duplicate consumption)?

Message idempotency: even if a message is received multiple times, it is not consumed more than once

  • The producer does not send duplicate messages to MQ

Internally, MQ can generate a globally unique message id for each message; when MQ receives a message, it first uses that id to judge whether the message was sent in duplicate, and then decides whether to accept it.

  • The consumer does not consume duplicates

Even if MQ redelivers a message, the consumer must check, after receiving it, whether it has already been consumed, and simply discard it if so.
1. When data taken from MQ is stored into a database, a unique constraint can be created from the data.
2. Or the data taken can be put directly into a Redis set

11. Elasticsearch basics

11.1 Basic concepts in ES

  • index: an index is like a database in mysql; it is where the data lives, containing a collection of documents with similar structure.
  • document: like a row in mysql, except that each document in ES can have different fields. The document is the smallest data unit in es; one document can be regarded as one record
  • Field: the Field is Elasticsearch's smallest unit; one document contains multiple fields
  • shard: a single machine cannot store huge volumes of data, so es can split one index's data into multiple shards stored across multiple servers. With shards you can scale horizontally, store more data, and spread search, analysis, and other operations across several servers, raising throughput and performance.
  • replica: any server can fail or go down at any time, and a shard could be lost with it, so each shard can have multiple replicas. Replicas also raise the throughput and performance of search operations. Primary shards are set once at index creation and cannot be changed (default 5); replica shards can be changed at any time (default 1). By default each index has 10 shards: 5 primary shards and 5 replica shards; the minimal highly available setup is 2 servers.

11.2 What is the inverted index in ES?

Traditional retrieval goes through each document one by one to find the positions of the keyword.
An inverted index uses a tokenization strategy to build a mapping table from terms to documents, also called a postings table; this dictionary + mapping table is the inverted index. The dictionary stores the terms, and the postings table stores the positions in the documents where each term appears. The lookup time complexity is O(1)

The inverted index's underlying implementation is based on the FST (Finite State Transducer) data structure.

  • Small space footprint: by reusing the shared prefixes and suffixes of the words in the dictionary, storage is compressed;
  • Fast lookups: O(len(str)) query time complexity.

11.3 ES cluster deployment

For ES cluster deployment, see the article elasticsearch环境集群部署, which covers it in detail.

11.4 Index and document operations in ES

For basic ES operations, see the article elasticsearch入门基本操作, which covers them in detail.

11.5 What is the difference between the text and keyword types?

The difference is mainly tokenization: the keyword type is not tokenized; the inverted index is built directly from the string's content, so keyword fields can only be found by exact value. The text type is tokenized when stored into Elasticsearch, and the inverted index is built from the tokenized content

11.6 What is the difference between query and filter?

  • query: the query operation not only filters, it also computes a relevance score used to determine ranking;
  • filter: the query operation only judges whether the conditions are satisfied; it computes no score and does not care about result ordering; moreover, filter query results can be cached, improving performance.

12. Linux

12.1 Absolute path, current path, root directory, home directory

  • Absolute path: e.g. /etc/init.d
  • Current directory and parent directory: ./ ../
  • Home directory: ~/
  • Change directory: cd
  • Change to the home directory: cd ~ or cd $HOME
  • Change to the previous directory: cd -
  • Show the current path: pwd

12.2 Process commands

  • List processes: ps aux

a: show the processes of all users on the current terminal, including other users' processes.
u: output process information in a user-oriented format.
x: show the current user's processes on all terminals.

  • List processes: ps -elf

-e: show information on every process in the system.
-l: use the long format.
-f: use the full format.

The ps command pairs well with the grep pipe: ps -elf | grep java

  • View the files opened by processes: lsof (this command must be installed)
  • List the files opened on a given port: lsof -i:port

Killing processes

  • Kill a single process: kill -9 PID
  • Kill all of a given user's processes: kill -9 `lsof -t -u tt`

lsof -u tt lists all the files opened by user tt; adding the -t option limits the output to the PID column

12.3 Directory/file commands

Common Linux commands for handling directories:

  • ls (list files): list directories and file names
  • cd (change directory): switch directories
  • pwd (print work directory): show the current directory
  • mkdir (make directory): create a new directory
  • rmdir (remove directory): delete an empty directory
  • cp (copy file): copy a file or directory
  • rm (remove): delete a file or directory
  • mv (move file): move files and directories, or rename them

The ls (list files) command lists the files and subdirectories contained in the working directory.

ls [-alrtAFR] [name...]

-a show all files and directories (hidden files starting with . are listed too)
-l besides the name, list the file type, permissions, owner, size, and other details (often aliased as ll)
-r list files in reverse order (the default is alphabetical)
-t list files ordered by modification time
-A like -a, but without listing "." (the current directory) and ".." (the parent directory)
-F append a symbol to each listed name: e.g. "*" for executables, "/" for directories
-R if a directory contains files, list them all recursively in turn

mkdir (make directory): create a new directory

mkdir [-mp] directory-name

-m: configure the directory's permissions
-p: recursively create multi-level directories

rmdir: delete empty directories

rmdir [-p] directory-name

-p: starting from that directory, recursively delete the multi-level empty directories

cp: copy files or directories

cp [options] source1 source2 .... destination

-a: equivalent to -pdr; for p, d, and r see below; (common)
-d: if the source is a link file, copy the link attribute rather than the file itself;
-f: force; if the destination file exists and cannot be opened, remove it and try again;
-i: if the destination file already exists, ask before overwriting (common)
-l: create a hard link instead of copying the file itself;
-p: copy the file together with its attributes instead of using the defaults (common for backups);
-r: recursive copying, for copying directories; (common)
-s: copy as a symbolic link, i.e. a "shortcut" file;
-u: copy only if the destination is older than the source (update)

scp: copies files and directories between Linux machines

scp [options] [[user@]host1]file_source [[user@]host1]file_target

rm: delete files or directories

rm [-fir] file-or-directory

-f: force delete; ignore nonexistent files and show no warnings;
-i: interactive mode; ask the user before deleting
-r: recursive delete

mv: move files and directories, or rename them

mv [-fiu] source destination

-u: only move when the destination already exists and the source is newer (update)

Linux provides the following commands for viewing file contents:

  • cat shows the file content starting from the first line
  • tac shows it starting from the last line; notice tac is cat written backwards!
  • nl shows the content with line numbers along the way!
  • more shows the file content one page at a time
  • less is like more, but better: it can page backwards too!
  • head shows only the first lines
  • tail shows only the last lines

cat: show the file content from the first line

cat [options] file

-A: equivalent to the combined -vET options; shows special characters instead of just blanks;
-b: number the lines, but only non-blank lines; blank lines get no number!
-E: show the $ at each line ending;
-n: number the lines, blank lines included, unlike -b;
-T: show [tab] characters as ^I;
-v: show other otherwise invisible special characters

more: show the file content one page at a time

more file

  • Space: go down one page;
  • Enter: go down one line;
  • /string: search downward for "string" within the displayed content;
  • :f: immediately show the file name and the current line number;
  • q: leave more immediately, showing no more of the file.
  • b: page backwards; this only works for files, not for pipes.

less: show the file content one page at a time, with backward paging and searching

less file

  • Space: scroll down one page;
  • [pagedown]: scroll down one page;
  • [pageup]: scroll up one page;
  • /string: search downward for "string";
  • ?string: search upward for "string";
  • n: repeat the previous search (for / or ?)
  • N: repeat the previous search in the opposite direction (for / or ?)
  • q: leave the less program;

head: view the first lines of a file

head [-n number] file

-n: followed by a number, the number of lines to show

tail: view the last lines of a file, commonly used to inspect logs

tail [-fn number] file

-f: keep reading as the file grows
-n: followed by a number, the number of lines to show

12.4 File-search commands

Linux commands for finding files:

  • find: can find any file you want;
  • locate: cannot find files changed very recently;
  • whereis: searches only binaries (and their documentation)
  • which: searches only binaries
  • grep

find [directory] [conditions] [actions]

1. Search by name; the * and ? wildcards may be used

  • -name case-sensitive
  • -iname case-insensitive
find /etc -name init

2. Search by file size: -size

find / -size +204800

This command finds files larger than 100MB under the root directory. Its unit is the data block, and one data block is 0.5KB, so 100MB is 204800 data blocks, hence 204800.
Replace the "+" with "-" to find files smaller than 100MB; omit the sign to find files of exactly 100MB.

3. Search by owner: -user
4. Search by group: -group

5. Search by time

find /etc -cmin -5

This command finds files and directories under /etc whose attributes were changed within the last 5 minutes.

  • -amin access time (a - access)
  • -cmin attribute change (c - change)
  • -mmin content modification (m - modify)

6. Search by file type: -type

find /etc -type f

This command finds all regular files under the etc directory.

  • -type f, regular file
  • -type d, directory
  • -type l, symbolic link

7. The connectors -a and -o

If there are several search conditions, they can be joined with connectors: -a means and, so all the conditions joined by "-a" must hold at once, and -o means or, so any one of the conditions joined by "-o" is enough.

locate file

The find command searches by walking the disk, so it is relatively slow; the locate command looks your file up in a file database on the Linux system, so it is much faster than find.

# find the init file, case-sensitively; to ignore case, add -i
locate init

locate's drawbacks

  • Searching for a file with locate immediately after creating it finds nothing, because the file has not yet been added to the file database. Run updatedb to refresh the database manually
  • locate does not search for files under the tmp directory

The which command

which finds command files; it can locate where a command lives.

which ls

The whereis command

whereis also finds command files; it can locate where a command lives, plus where its help documentation lives.

whereis ls

grep string [directory/file]

grep searches file contents

grep multiuser /etc/inittab

Ignore case: -i

grep -i multiuser /etc/inittab

Exclude a given string: -v

If you want to read a configuration file's contents without the comments, you can exclude the lines containing "#" when searching. But some comments are not on their own lines; they are written after configuration statements, so simply excluding every line containing "#" would also exclude configuration statements, causing collateral damage. In other words, we should only exclude the lines starting with "#": -v ^#

grep -v ^# /etc/inittab

12.5 Network commands

  • ping: check whether the network is connected
  • netstat: check the network connection status of the machine's ports
  • ifconfig: view the ip address
  • hostname: show the host name
  • ssh: log in to another system

12.6 Compression/decompression commands

1. .tar

unpack: tar -xvf FileName.tar
pack: tar -cvf FileName.tar DirName
(note: tar packs, it does not compress)

2. .gz

decompress (1): gunzip FileName.gz
decompress (2): gzip -d FileName.gz
compress: gzip FileName

3. .tar.gz and .tgz

decompress: tar -zxvf FileName.tar.gz
compress: tar -zcvf FileName.tar.gz DirName

4. .zip

decompress: unzip FileName.zip
compress: zip FileName.zip DirName

5. .rar

decompress: rar x FileName.rar
compress: rar a FileName.rar DirName

12.7 Permission commands

  • chgrp: change the group that files and directories belong to
chgrp [-R] group file-or-directory
  • chown: change the owner and group of files and directories
chown [-R] owner file-or-directory
chown [-R] owner:group file-or-directory
  • chmod: change the permissions of a file or directory
chmod [-R] mode file-name

12.8 Environment-variable commands

  • Set an environment variable: export
  • Delete an environment variable: unset
  • Set a read-only variable: readonly, which cannot be removed with unset
  • View all environment variables: env
  • View one environment variable, e.g. home: echo $HOME (or env | grep HOME)

Linux variables fall into two classes:

  • Permanent: requires editing configuration files; the variable takes effect permanently.
  • Temporary: declared with the export command; the variable is lost when the shell closes.

Three ways to set variables

  • Add the variable in the /etc/profile file (takes effect for all users of the system (permanent))
vi /etc/profile
export CLASSPATH=./JAVA_HOME/lib;$JAVA_HOME/jre/lib

Note: for the change to take effect immediately after editing the file, run source /etc/profile; otherwise it only takes effect the next time this user logs in.

  • Add the variable to the .bash_profile file in the user's home directory (takes effect for a single user (permanent))
vi /home/zhangsan/.bash_profile
export CLASSPATH=./JAVA_HOME/lib;$JAVA_HOME/jre/lib

Note: for the change to take effect immediately after editing the file, run source /home/zhangsan/.bash_profile; otherwise it only takes effect the next time this user logs in.

  • Run the export command directly to define the variable (valid only for the current shell (BASH) (temporary))

On the shell command line, define the variable directly with export NAME=value. The variable is valid only in the current shell (BASH) and its child shells (BASH); when the shell closes, the variable is gone, and a newly opened shell will not have it; to use it again, it must be redefined.

12.9 System-service commands

  • The chkconfig command checks and sets the system's various services

chkconfig [param][service] or chkconfig [service][on/off/reset]

--add adds the given system service so the chkconfig command can manage it, and at the same time adds the related data to the system startup description files.
--del removes the given system service from chkconfig management, and at the same time deletes the related data from the system startup description files.
--list lists all the commands chkconfig knows about.

  • The service command goes to the /etc/init.d directory to find the corresponding service; it can start, stop, and restart system services, and it can also show the current status of all system services

service <system-service> status/start/stop/restart

  • systemctl manages system services: starting, stopping, restarting, disabling, and viewing them. This command integrates most of the functionality of the service, chkconfig, setup, and init commands in one tool.

systemctl status/start/stop/restart <system-service>.service

12.10 Software install/uninstall commands

yum finds, installs, and deletes one, a group, or even all software packages

yum [options] [command] [package ...]

options: optional; the options include -y (answer "yes" to every prompt during installation), -q (do not show the installation process), and so on.
command: the operation to perform.
package: the name of the package to install.

  • List all updatable packages: yum check-update
  • Update all software: yum update
  • Install just the given package: yum -y install <package_name>
  • Update just the given package: yum update <package_name>
  • List all installable packages: yum list
  • Remove a package: yum remove <package_name>
  • Search for a package: yum search <keyword>

rpm finds, installs, and deletes one, a group, or even all software packages

rpm [options] [package ...]

-a Query all packages.
-e Deletes the specified package.
-h List flags when package is installed.
-i Display information about the package.
-l Displays a file listing for the package.
-p Query the specified RPM package file.
-q Use the query mode. When encountering any problems, the rpm command will first ask the user.
-v Displays the progress of the command execution.
--nodeps  Interdependence of package files is not verified.

  • rpm -qa | grep <package_name>: view the installed version of a package
  • rpm -qi <package_name>: view the details of an installed package
  • rpm -ql <package_name>: view a package's installation directory
  • rpm -ivh <package_name>: install software
  • rpm -evh <package_name>: uninstall software

12.11 Create soft link (shortcut) command

  • Soft link: ln -s target link_name
  • Hard link: ln target link_name

12.12 Other common commands

  • history: View the list of used commands
  • df -h: Check the disk space usage of the file system.
  • su: switch user
  • sudo: Execute commands as a system administrator
