2023 Edition: Java Interview Questions Collection

A Java interview compendium

Compiled with care and deliberately condensed, suited to rapid interview prep.
I hope it helps junior and mid-level Java developers with their interviews;
after all, the job market is rough these days.
Questions and additions are welcome in the comments; I'm open to feedback and will fix mistakes promptly.
If it helps you, remember to like and bookmark.
If a friend is job hunting, remember to forward it to them!

Table of Contents

Java basics

The difference between JDK, JRE, JVM

JDK: the Java Development Kit (development tools plus the JRE); JRE: the Java Runtime Environment (core class libraries plus the JVM); JVM: the Java Virtual Machine, which executes bytecode. So the JDK contains the JRE, which contains the JVM.

Object orientation

Object orientation and process orientation are two different angles from which to approach a problem.

Process-oriented programming focuses on the steps, while object-oriented programming focuses on the participants (objects) that carry out those steps.

Process-oriented code is more direct and efficient, while object-oriented code is easier to reuse, extend, and maintain.

  • Encapsulation: the point of encapsulation is to clearly mark out which member functions and data items may be used externally. Callers need not know, and cannot modify, the internal implementation details. (private fields with public get and set methods)
  • Inheritance: a subclass inherits the methods of its base class and can change and/or extend them. Subclasses can directly use the common methods and properties of the parent class without redefining them, and only need to add their own specialized ones.
  • Polymorphism: a call to the same method executes different logic depending on the actual class of the object. (A parent-class reference points to a subclass object; the same method call runs different logic because different subclass objects implement it differently.)

The difference between == and equals

  • == compares the values on the stack: for primitive types that is the value itself, for reference types it is the address.
  • equals() is a method on the top-level parent class Object. Without overriding, it falls back to ==; it is usually overridden, and then compares content according to the overriding class's rules.
  • In the JDK source, String and Integer override equals, so for them the comparison is whether the objects' contents are equal, as the demo below shows.
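A small demo of these rules (the Integer result assumes the default integer cache of -128 to 127, which is the standard behavior):

public class EqualsDemo {
    public static void main(String[] args) {
        String a = new String("hi");
        String b = new String("hi");
        System.out.println(a == b);      // false: two different objects, different addresses
        System.out.println(a.equals(b)); // true: String overrides equals to compare content

        Integer x = 128, y = 128;        // outside the cache, autoboxing creates new objects
        System.out.println(x == y);      // false
        System.out.println(x.equals(y)); // true
    }
}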

hashCode and equals

  • If two objects have different hashCodes, they are definitely not the same object.
  • If two objects have the same hashCode, they are not necessarily the same object; they may still be two distinct objects (equals is not necessarily true).
  • If two objects are equal (equals returns true), their hashCodes must be the same. A sketch of keeping the two methods consistent follows.
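A minimal sketch of honoring that contract (the Point class is illustrative):

import java.util.Objects;

public class Point {
    private final int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // equal objects must hash to the same value
    }
}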

final

  • A final class cannot be inherited; a final method cannot be overridden by subclasses; a final variable cannot be reassigned (for reference types, the reference itself cannot be re-pointed, although the referenced object's contents may still change).

  • If final modifies a class variable (static), the initial value can only be assigned at declaration or in a static initializer block.

  • If final modifies an instance (member) variable, the initial value can be assigned at declaration, in a non-static initializer block, or in the constructor.

  • The system does not initialize local variables; the programmer must initialize them explicitly. A final local variable can therefore either be given a value at its definition (after which the code below cannot reassign it), or be declared without a value and then assigned exactly once later in the code (demonstrated in the sketch after this list).

  • Local inner classes and anonymous inner classes can only access final (or effectively final) local variables. (An inner class defined inside a method is not destroyed when the method finishes executing; when the outer method ends, its local variables are destroyed, yet the inner class object may still exist. That is a contradiction: the inner class object would be accessing a variable that no longer exists. To resolve it, the local variable is copied into the inner class as a member variable, so that after the local variable dies, the inner class can still access it; what is actually accessed is the "copy" of the local variable. This effectively extends the lifetime of the (final) local variable.)
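A compact demo of the final rules above:

public class FinalDemo {
    static final int CLASS_CONSTANT = 1;   // class variable: assigned at declaration (or in a static block)
    final int instanceConstant;            // member variable: may also be assigned in the constructor

    FinalDemo() { this.instanceConstant = 2; }

    void demo() {
        final int local;      // local final: no default value, must be assigned exactly once
        local = 3;
        // local = 4;         // compile error: cannot reassign

        final int[] ref = {1};
        ref[0] = 99;          // allowed: the reference is final, the contents are not
        // ref = new int[2];  // compile error: cannot re-point the reference
    }
}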

final, finally, finalize

  • final: used to declare variables (the value cannot be reassigned), methods (cannot be overridden), and classes (a final class cannot be inherited).
  • finally: used in exception handling; its block is always executed.
  • finalize(): a method of the Object class, invoked by the garbage collector before reclaiming an object.

String, StringBuffer, StringBuilder

  • String is declared final and is immutable; the underlying storage is a char array (a byte array since Java 9). Every operation produces a new String object.

  • Both StringBuffer and StringBuilder operate on the original object.

  • StringBuffer is thread-safe, but StringBuilder is not thread-safe.

  • StringBuffer's methods are marked synchronized.

    Performance: StringBuilder > StringBuffer > String.

Scenario: use the latter two when the string content needs to change frequently.

Prefer StringBuilder, and use StringBuffer when the variable is shared across multiple threads. A small sketch of the practical difference follows.
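A small sketch of the practical difference:

public class ConcatDemo {
    public static void main(String[] args) {
        String s = "";
        for (int i = 0; i < 3; i++) {
            s += i;           // each += allocates a brand-new String
        }

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            sb.append(i);     // mutates the same underlying buffer
        }
        System.out.println(s.equals(sb.toString())); // true: both are "012"
    }
}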

The difference between overloading and overriding

  • Overloading: occurs within the same class; the method name must be the same while the parameter types, count, or order differ; the return type and access modifiers may differ. It is resolved at compile time.

  • Overriding: occurs between parent and child classes; the method name and parameter list must be identical; the return type must be the same as or a subtype of the parent's; the thrown exceptions must be the same as or narrower than the parent's; and the access modifier must be the same as or wider than the parent's. If the parent method's access modifier is private, the subclass cannot override that method. Both are sketched below.
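A minimal sketch showing both side by side (the Animal/Dog classes are illustrative):

class Animal {
    protected Object speak(String prefix) { return prefix + ": ..."; }
}

class Dog extends Animal {
    // Override: same name and parameter list; return type narrowed (covariant)
    // and access widened from protected to public, both of which are legal.
    @Override
    public String speak(String prefix) { return prefix + ": woof"; }

    // Overload: same name, different parameter list; resolved at compile time.
    public String speak(String prefix, int times) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < times; i++) sb.append(prefix).append(": woof ");
        return sb.toString();
    }
}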

The difference between interface and abstract class

  • An abstract class can have ordinary member methods, while an interface can only have public abstract methods (Java 8 added default and static methods).
  • Member variables in an abstract class can be of any kind, while fields in an interface are implicitly public static final.
  • A class can extend only one abstract class, but can implement multiple interfaces.

The design purpose of the interface is to constrain the behavior of the class; the design purpose of the abstract class is code reuse.

Access modifiers

  • private: accessible only within the declaring class; even a subclass cannot access the private properties or methods it would otherwise inherit from the parent
  • default (package access): accessible only to classes in the same package
  • protected (inherited access): accessible to classes in the same package and to subclasses in other packages
  • public: accessible from anywhere

Static

The static keyword is used for static variables and static methods.

  • Static variables (class variables): shared by all objects of the class.
  • Static methods: cannot access the class's non-static member variables or non-static member methods.

Commonly used String APIs

  • length(): returns the length of the string
  • substring(): extracts a substring
  • equals(): compares contents
  • charAt(): returns the character at the specified index
  • toCharArray(): converts the string to a character array
  • trim(): removes leading and trailing whitespace
  • split(): splits the string into an array of strings
  • getBytes(): converts the string to a byte array

Object class API

  • getClass(): returns the runtime class of the object
  • hashCode(): returns the object's hash value
  • equals(): compares objects
  • clone(): copies the object
  • toString(): returns a string representation of the object
  • notify(): wakes up a single waiting thread
  • notifyAll(): wakes up all waiting threads
  • wait(): makes the current thread wait
  • finalize(): called by the garbage collector before reclaiming the object

Common date and time APIs

  • Date
// Create a Date object representing the current system date and time
Date d = new Date();

// Get the time as a millisecond value: total milliseconds from 1970-01-01 00:00:00 to now
long time = d.getTime();  // long time = System.currentTimeMillis();

time += (60 * 60 + 123) * 1000;
// Convert a millisecond value into a Date object
Date d2 = new Date(time);

// Same logic as above, just a different style
Date d3 = new Date();
d3.setTime(time); // set this Date object to the moment represented by time
  • SimpleDateFormat
// The Date object to be formatted
Date d = new Date();

// Start formatting: create a SimpleDateFormat object
// Note: the constructor argument is the target pattern of the formatted time and must be declared!
SimpleDateFormat sdf = new SimpleDateFormat("yyyy年MM月dd日 HH:mm:ss EEE a");

// Format the Date object into a string
String result = sdf.format(d);

// Formatting a millisecond value ----------
long time = d.getTime() + 60 * 1000;
sdf.format(time);

// SimpleDateFormat can also parse a string into a Date object
String timeStr = "2022年05月27日 12:12:12";

SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy年MM月dd日 HH:mm:ss");
Date d2 = sdf2.parse(timeStr); // parse (declares ParseException)
  • Calendar
// Get a Calendar object for the current moment
Calendar rightNow = Calendar.getInstance();

// Read calendar fields: public int get(int field) returns one field of the date
int year = rightNow.get(Calendar.YEAR);

int mm = rightNow.get(Calendar.MONTH);

int days = rightNow.get(Calendar.DAY_OF_YEAR);

// public void add(int field, int amount): add to / subtract from a given field
// e.g. what is the date 64 days from now?
rightNow.add(Calendar.DAY_OF_YEAR, 64);

// Get the current time as a millisecond value
long time = rightNow.getTimeInMillis();
System.out.println(new SimpleDateFormat("yyyy/MM/dd HH:mm:ss").format(time));

  • LocalDate
// Get the local date object
LocalDate nowDate = LocalDate.now();

int year = nowDate.getYear();

int month = nowDate.getMonthValue();

int day = nowDate.getDayOfMonth();

// Day of the year
int dayOfYear = nowDate.getDayOfYear();

// Day of the week
System.out.println(nowDate.getDayOfWeek());
System.out.println(nowDate.getDayOfWeek().getValue());

// Month
System.out.println(nowDate.getMonth());
System.out.println(nowDate.getMonth().getValue());

// Build a date directly from year, month, day
LocalDate bt = LocalDate.of(2025, 5, 20);

// Same as above, but with the month given as an enum
System.out.println(LocalDate.of(2025, Month.MAY, 20));
  • LocalTime
// Get the local time object
LocalTime nowTime = LocalTime.now();

int hour = nowTime.getHour();     // hour

int minute = nowTime.getMinute(); // minute

int second = nowTime.getSecond(); // second

int nano = nowTime.getNano();     // nanosecond

LocalTime time = LocalTime.of(8, 30);
System.out.println(time);                    // hour and minute
System.out.println(LocalTime.of(8, 20, 30)); // hour, minute, second
  • LocalDateTime
// Date and time together
LocalDateTime nowDateTime = LocalDateTime.now();

System.out.println("Today is: " + nowDateTime);
System.out.println(nowDateTime.getYear());       // year
System.out.println(nowDateTime.getMonthValue()); // month
System.out.println(nowDateTime.getDayOfMonth()); // day
System.out.println(nowDateTime.getHour());       // hour
System.out.println(nowDateTime.getMinute());     // minute
System.out.println(nowDateTime.getSecond());     // second

// Day of the year
System.out.println(nowDateTime.getDayOfYear());
// Day of the week
System.out.println(nowDateTime.getDayOfWeek());            // enum
System.out.println(nowDateTime.getDayOfWeek().getValue()); // numeric value
// Month
System.out.println(nowDateTime.getMonth());            // enum
System.out.println(nowDateTime.getMonth().getValue()); // numeric value

// Convert to a date
LocalDate ld = nowDateTime.toLocalDate();
// Convert to a time
LocalTime lt = nowDateTime.toLocalTime();
  • DateTimeFormatter
LocalDateTime ldt = LocalDateTime.now();

// Formatter
DateTimeFormatter dtf = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

String ldtStr1 = dtf.format(ldt);

// Parsing
DateTimeFormatter dtf1 = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
// Parse a string into a LocalDateTime object
LocalDateTime ldt1 = LocalDateTime.parse("2022-11-11 11:11:11", dtf1);
System.out.println(ldt1.getDayOfYear());

Bubble Sort

for (int i = 0; i < arr.length - 1; i++) {

    // flag: stays true if no swap happens during this pass
    boolean flag = true;

    for (int j = 0; j < arr.length - 1 - i; j++) {

        if (arr[j] > arr[j + 1]) {
            int temp = arr[j + 1];
            arr[j + 1] = arr[j];
            arr[j] = temp;
            flag = false;
        }
    }

    // once a full pass makes no swaps, the array is sorted
    if (flag) {
        break;
    }
}

Collections

The difference between List and Set

  • List: ordered; elements are stored in insertion order; duplicates are allowed, and multiple null elements are allowed. Elements can be traversed one by one with an Iterator, or fetched by index with get(int index).

  • Set: unordered; duplicates are not allowed, and at most one null element is allowed. Elements can only be obtained through the Iterator interface, traversing them one by one.

List

Approach: introduce the characteristics of List, briefly describe the underlying implementations of ArrayList and LinkedList, explain the differences between them, and finally note that neither is thread-safe, which leads into the copy-on-write idea.

  • List is an ordered, repeatable collection. Its implementation classes include ArrayList, LinkedList, and Vector.

  • ArrayList is backed by a dynamic array: its length is not fixed and grows as data is added. If no length is specified when an ArrayList is instantiated, the default capacity is 10. Elements are appended in order from front to back.

    When an ArrayList is created with the no-argument constructor ArrayList(), the underlying array has no defined length yet. On the first call to add(E e) the underlying array is initialized to length 10; on later calls to add(E e), if expansion is required, grow(int minCapacity) is invoked and the capacity becomes 1.5 times the original.

    Because an array's length is fixed, storing data beyond that length requires creating a new array and copying the old array's data into it. If data is inserted anywhere but the end, elements must also be shifted, so the efficiency of insertion and deletion is mediocre.

    However, since each element occupies the same amount of memory and the elements are laid out consecutively, any element in the array can be accessed quickly by its index, so query efficiency is very high.

  • LinkedList is backed by a doubly linked list. Each node holds references to the previous and next nodes plus the data itself. A doubly linked list is not laid out consecutively and can occupy non-contiguous memory.

    To insert a new element, only the previous element's reference and the next element's reference at the insertion point need to be modified.

    Deletion likewise only modifies two references; once nothing points to the element, it becomes garbage and is reclaimed. This is efficient. When querying, however, the search must start from the first element and walk forward until the required data is found, so query efficiency is relatively low.

What is the difference between ArrayList and LinkedList?

  • ArrayList is backed by an array; LinkedList is backed by a linked list.

  • ArrayList is suited to random access; LinkedList is suited to insertion and deletion.

  • Both implement the List interface, but LinkedList also implements the Deque interface and can be used as a double-ended queue.

  • ArrayList looks up by index quickly; LinkedList must traverse the list for an index lookup, though accessing the first or last element is very fast.

  • Adding to an ArrayList may require expansion, and inserting at a given position requires shifting array elements. Adding to a LinkedList never requires expansion, but inserting at a given position requires traversing to find that position.

  • ArrayList implements the RandomAccess marker interface; LinkedList does not. Classes that implement RandomAccess can be traversed efficiently with an ordinary for loop; classes that do not should be traversed with foreach or an iterator. So use for loops with ArrayList and iterators with LinkedList.

  • Both ArrayList and LinkedList are thread-unsafe. (An add may be completed in two steps: 1. store the element at items[size]; 2. increment size. This is where thread-safety issues arise.) To solve this problem, CopyOnWriteArrayList, which uses copy-on-write, can be used.

How do you remove duplicate elements from an ArrayList?

  • Use a Set: since a Set cannot contain duplicates, add the data to a Set and then convert it back to a List.
  • Use the Stream API's distinct() operation to deduplicate, then collect the results into a new List, as shown below.
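Minimal sketches of both approaches (LinkedHashSet is used here to also preserve the original order, an illustrative choice):

import java.util.*;
import java.util.stream.Collectors;

public class DedupDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 2, 3, 3));

        // 1) Round-trip through a Set
        List<Integer> viaSet = new ArrayList<>(new LinkedHashSet<>(list));

        // 2) Stream.distinct()
        List<Integer> viaStream = list.stream().distinct().collect(Collectors.toList());

        System.out.println(viaSet);    // [1, 2, 3]
        System.out.println(viaStream); // [1, 2, 3]
    }
}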

There are many null values in an ArrayList; how do you delete them?

  • Option 1: list.stream().filter(Objects::nonNull).collect(Collectors.toList());
  • Option 2: list.removeIf(Objects::isNull);

Set

Unordered; elements cannot repeat.

  • HashSet: the internal data structure is a hash table (not thread-safe, efficient); elements are unordered and unique (uniqueness is guaranteed by the stored type overriding the hashCode and equals methods), and null elements can be stored.
  • TreeSet: the internal data structure is a binary tree (a red-black tree); elements are unique and ordered (not thread-safe). TreeSet calls the elements' compareTo(Object obj) method to compare them: if the comparison returns 0 the elements are considered equal, and the elements are kept in ascending order.

Map

A collection of key-value pairs. Keys and values can be of any reference type; keys cannot repeat, and the value corresponding to a given key can be retrieved by that key.

What is the difference between HashMap and Hashtable?

  • HashMap's methods have no synchronized modifier and are not thread-safe, while Hashtable is thread-safe.
  • Because of that thread safety, Hashtable is less efficient than HashMap.
  • HashMap allows null as a key or value; Hashtable does not.
  • The default initial array size of HashMap is 16 versus 11 for Hashtable; on expansion the former doubles (2n) while the latter doubles plus one (2n+1).
  • HashMap re-mixes the hash value, while Hashtable uses the object's hashCode directly.

What changed in HashMap from JDK 1.7 to JDK 1.8?

  • In 1.7 the underlying structure is array + linked list; in 1.8 it is array + linked list + red-black tree. The red-black tree was added to improve the overall efficiency of HashMap insertion and lookup.
  • In 1.7 linked-list insertion uses head insertion; in 1.8 it uses tail insertion. In 1.8, inserting a key-value pair requires counting the elements in the bucket's list (to decide on treeification), so the list must be traversed anyway and tail insertion comes for free. (Head insertion could also produce linked-list cycles during concurrent resizing.)
  • The hash algorithm in 1.7 is more elaborate, with multiple right shifts and XOR operations; 1.8 simplifies it. The complex hash algorithm's purpose was to improve hash distribution and hence HashMap's overall efficiency; with the red-black tree added in 1.8, the hash algorithm can be simplified appropriately to save CPU resources.

Talk about HashMap's put method

Let me first walk through the general flow of HashMap's put method:

  • Based on the key, compute the array index via the hash algorithm and an AND operation (sketched after this list)
  • If the element at that array index is empty, wrap the key and value into an entry object (an Entry object in JDK 1.7, a Node object in JDK 1.8) and place it at that position
  • If the element at that array index is not empty, handle it case by case:
    • In JDK 1.7, first judge whether expansion is needed: expand if so; otherwise create an Entry object and add it to the linked list at the current position using head insertion
    • In JDK 1.8, first judge the type of the Node at the current position: is it a red-black tree Node or a linked-list Node?
      • If it is a red-black tree Node, wrap the key and value into a tree node and add it to the red-black tree. During this process, check whether the key already exists in the tree; if so, update its value.
      • If the Node at this position is a linked-list node, wrap the key and value into a list Node and insert it at the end of the list via tail insertion. Since tail insertion requires traversing the list anyway, the traversal also checks whether the key already exists; if so, its value is updated. After the traversal, the new Node is appended to the list, and the list's node count is checked: if it is greater than or equal to 8 (and the table length is at least 64), the list is converted into a red-black tree.
      • After the key and value have been wrapped as a Node and inserted into the list or tree, judge whether expansion is needed: expand if so; otherwise the put method ends.
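A sketch of that first step: JDK 8 spreads the high bits of hashCode() and masks with (capacity - 1), mirroring the hash() and indexing logic in the JDK 8 source:

public class BucketIndex {
    static int bucketIndex(Object key, int capacity) {
        int h = (key == null) ? 0 : key.hashCode();
        int hash = h ^ (h >>> 16);    // spread: mix the high 16 bits into the low bits
        return (capacity - 1) & hash; // same as hash % capacity when capacity is a power of two
    }
}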

HashMap's expansion mechanism principle

Version 1.7

  • Create a new array
  • Traverse every element of the linked list at each position of the old array
  • Take each element's key and recompute its index in the new array based on the new array's length
  • Add the element to the new array
  • After all elements have been transferred, assign the new array to the HashMap object's table field

Version 1.8

  • Create a new array
  • Traverse the linked list or red-black tree at each position of the old array
  • If it is a linked list, recompute each element's index directly and add it to the new array (see the sketch after this list)
  • If it is a red-black tree, traverse the tree, computing for each of its elements the corresponding index in the new array
    • Count the number of elements landing at each index
    • If the number of elements at a position exceeds the untreeify threshold (6 in the JDK source), build a new red-black tree and put its root node at that position of the new array
    • Otherwise, build a linked list and put the list's head node at that position of the new array
  • After all elements have been transferred, assign the new array to the HashMap object's table field
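One detail of the 1.8 recomputation: because the capacity doubles, each entry either keeps its old index or moves exactly oldCapacity slots up, decided by a single bit of the hash. A sketch of that check:

public class ResizeIndex {
    static int newIndex(int hash, int oldIndex, int oldCapacity) {
        // the doubled capacity contributes one extra index bit;
        // if that bit of the hash is 0, the index is unchanged
        return ((hash & oldCapacity) == 0) ? oldIndex : oldIndex + oldCapacity;
    }
}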

What problems occur if the map is modified during iteration?

The fail-fast mechanism is used: the map records its modification count in a modCount field, and every modification of the HashMap increments it. When an iterator is created, it copies this value into expectedModCount. During iteration, if modCount and expectedModCount are found to differ, it means another party has modified the Map, and a ConcurrentModificationException is thrown immediately, as the demo below shows.
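A minimal demonstration (any structural modification during the loop trips the check):

import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        for (String key : map.keySet()) {   // foreach uses the map's iterator internally
            map.remove("a");                // structural modification: modCount changes
        }                                   // -> ConcurrentModificationException on the next step
        // The safe alternative is Iterator.remove(), which updates expectedModCount.
    }
}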

Why is HashMap thread-unsafe?

  • Under multi-threading, if a put pushes the number of elements past the capacity threshold, an expansion (rehash) is triggered, which re-hashes the original array's contents into the new, larger array. When multiple threads perform puts and resize at the same time, elements with colliding hashes that share one bucket's linked list can (with JDK 1.7's head insertion) end up forming a cycle, after which a get spins in an infinite loop.

How to deal with it:

  • Hashtable itself is thread-safe, because all of its operations are marked with the synchronized keyword, but it is too inefficient. So under multi-threading it is recommended to use ConcurrentHashMap instead. ConcurrentHashMap also differs between 1.7 and 1.8. Note that ConcurrentHashMap does not allow null keys or values; passing null throws a NullPointerException.
  • 1.7 uses an array + Segment locks + HashEntry linked lists. Locking is implemented with ReentrantLock + CAS + the Unsafe class, and its resize supports expanding multiple segments simultaneously. (Implementation: a large ConcurrentHashMap is effectively split into 16 small hash tables, each Segment with its own table[]. A put first computes which Segment the entry belongs to, then its position inside that Segment, using head insertion. The default number of segments is 16 and the initial capacity of each HashEntry array is 2.)
  • 1.8 uses array + linked list + red-black tree, stores the data directly in a Node array, drops the Segment design, and uses a CAS + synchronized locking scheme to guarantee concurrent updates and support concurrent expansion. This amounts to locking each array slot individually.

HashMap load factor

The default load factor (loadFactor) is 0.75f: the map expands once the number of entries exceeds capacity × load factor (for example, 16 × 0.75 = 12).

ConcurrentHashMap

1.7

In JDK 7, ConcurrentHashMap achieves thread safety with a "segment lock" mechanism. The data structure can be viewed as "Segment array + HashEntry array + linked list": a ConcurrentHashMap instance contains an array of Segment instances, each Segment contains several buckets, and each bucket is a linked list of HashEntry objects.

Because the Segment class extends ReentrantLock, it can act as a lock. By dividing the ConcurrentHashMap into segments, different locks control modifications to different parts of the hash table, allowing multiple concurrent write operations. By default 16 threads can write concurrently, and any number of threads can read.

1.8

In JDK 8 and above, the underlying data structure of ConcurrentHashMap is still "array + linked list + red-black tree", but for thread safety the segment-lock idea of JDK 7 is abandoned in favor of synchronized + CAS. ConcurrentHashMap makes heavy use of the Unsafe.compareAndSwapXXX methods, which implement lock-free value updates with a CAS algorithm and greatly reduce the performance cost of locking. The basic idea of CAS is to keep comparing whether the variable's current value in memory equals the value you expect: if they are equal, accept the modification; otherwise reject it.

The differences

  • Data structure: JDK 7 uses Segment array + HashEntry array + linked list, while JDK 8 uses Node array + linked list + red-black tree. When a list grows longer than 8, it is converted into a red-black tree, reducing the time complexity of lookups from O(n) to O(log n) and improving efficiency.
  • Lock implementation: JDK 7's lock is the Segment, based on ReentrantLock, each covering multiple HashEntry buckets. JDK 8 reduces the lock granularity to a single table slot, locking each bucket individually and further lowering the probability of contention, and replaces ReentrantLock with synchronized: at this fine granularity synchronized is no worse than ReentrantLock, while ReentrantLock's advantage under coarse-grained locking, flexible control of each boundary via Condition objects, is irrelevant at fine granularity.
  • Counting the number of elements: JDK 7 first tries twice to sum the sizes of all segments without locking; if the modification count changes during the attempts, it locks all segments and counts again. In JDK 8 the size bookkeeping is already done in the expansion and addCount() methods, so size() simply returns the element count.

CopyOnWriteArrayList

  • CopyOnWriteArrayList is also backed by an array internally. When an element is added, a new array is copied; the write is performed on the new array while reads continue on the original array.
  • Writes are performed under a lock to prevent data loss from concurrent writing.
  • After the write completes, the internal reference is switched to the new array.
  • CopyOnWriteArrayList allows reads to proceed during writes, which greatly improves read performance, so it suits read-heavy, write-light workloads; since reads may briefly see stale data, it is not suitable for scenarios with strict real-time requirements. A simplified sketch of the idea follows.
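A simplified sketch of the copy-on-write idea (illustrative only, not the JDK source, though the real class uses a lock and Arrays.copyOf in the same spirit):

import java.util.Arrays;

class CowList<E> {
    private volatile Object[] array = new Object[0];
    private final Object lock = new Object();

    public void add(E e) {
        synchronized (lock) {                  // writes are serialized
            Object[] newArr = Arrays.copyOf(array, array.length + 1);
            newArr[newArr.length - 1] = e;     // the write happens on the copy
            array = newArr;                    // publish the new array
        }
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) array[index];               // lock-free read of the current snapshot
    }
}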

Exceptions

Exception Hierarchy in Java

  • All exceptions in Java derive from the top-level parent class Throwable.
  • Throwable has two subclasses: Exception and Error.
  • Error represents very serious errors, such as java.lang.StackOverflowError and java.lang.OutOfMemoryError. When these occur they usually cannot be resolved by the program itself; the problem may lie at the virtual machine, disk, or operating-system level, so catching Errors in code is generally not recommended: catching them achieves little, because the program may not be able to keep running at all.
  • Exception represents conditions that the program itself can handle when they occur, such as NullPointerException and IllegalAccessException; we can catch these exceptions and handle them specially.
  • Subclasses of Exception are usually divided into two categories: RuntimeException and non-RuntimeException.
    • RuntimeException represents runtime exceptions, thrown while the code is running. These are unchecked exceptions: the program may choose whether or not to catch and handle them. They are generally caused by program logic errors, and the program should avoid them as far as possible, e.g. NullPointerException, IndexOutOfBoundsException, ClassCastException.
    • Non-RuntimeException means checked exceptions: they must be handled, or the program will not compile. Examples are IOException, SQLException, ClassNotFoundException, FileNotFoundException, and user-defined exceptions.

In Java's exception handling mechanism, when should an exception be thrown and when should it be caught?

An exception is essentially a notification. Throwing an exception tells the calling method upstream that a problem occurred; the caller must then decide whether it can handle the exception itself or should pass it further up.

In practice, exception handling should follow business needs. For example, we can throw an exception when validating illegal parameters, and define a unified exception handler to process and respond to exceptions in one place. However, when a transaction is open inside a method, we must not swallow exceptions there: catching them would prevent the transaction manager from seeing the exception and performing a rollback.

Threads

How to create threads

  • Extend the Thread class
  • Implement the Runnable interface
  • Implement the Callable interface (see the sketch after this list)
    • Create an implementation class of the Callable interface and implement the call() method; call() serves as the thread body, can return a value, and can throw exceptions.
    • Create an instance of the Callable implementation class and wrap it with the FutureTask class; the FutureTask object encapsulates the return value of the Callable object's call() method.
    • Create and start a new thread using the FutureTask object as the target of the Thread object.
    • Call the FutureTask object's get() method to obtain the return value after the child thread finishes.
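A minimal sketch of those Callable + FutureTask steps:

import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> task = () -> 1 + 1;           // call() returns a value and may throw
        FutureTask<Integer> futureTask = new FutureTask<>(task);
        new Thread(futureTask).start();                 // FutureTask is also a Runnable
        System.out.println(futureTask.get());           // blocks until the result is ready: 2
    }
}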

Thread states

Threads are often described as having five states: created, ready, running, blocked, and dead.

The thread states actually defined in the JDK source (java.lang.Thread.State):

public enum State {
    NEW,            // newly created, not yet started
    RUNNABLE,       // runnable (ready or running)
    BLOCKED,        // blocked waiting for a monitor lock
    WAITING,        // waiting indefinitely
    TIMED_WAITING,  // waiting with a time limit
    TERMINATED;     // finished
}

What is the difference between sleep() and wait()?

  • sleep(): a static method of the Thread class. It puts the calling thread to sleep, yielding execution to other threads; when the sleep time ends, the thread returns to the ready state and competes with other threads for CPU time. Because sleep() is a static method, it does not operate on any object's monitor lock: when sleep() is called inside a synchronized block, the thread sleeps but the object's monitor lock is not released, so other threads still cannot access the object.
  • wait(): a method of the Object class. When a thread calls wait(), it enters the wait set associated with that object and at the same time releases the object's monitor lock so other threads can access the object. Waiting threads are woken up with notify() or notifyAll().

What are the ways to create a thread pool?

  • newFixedThreadPool(int nThreads): creates a fixed-size thread pool.
  • newCachedThreadPool(): creates a cacheable thread pool.
  • newSingleThreadExecutor(): a single-threaded Executor.
  • newScheduledThreadPool(int corePoolSize): creates a fixed-size thread pool that runs tasks with a delay or periodically, similar to Timer.
  • These Executors factory methods are not recommended in practice, because their default limits (maximum thread count or blocking-queue length) are Integer.MAX_VALUE, effectively unbounded, which makes threads hard to manage; a custom thread pool is recommended instead.

Custom thread pool

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
  • int corePoolSize: the number of core threads in the pool
  • int maximumPoolSize: the maximum number of threads in the pool
  • long keepAliveTime: the idle timeout for non-core threads in the pool
  • TimeUnit unit: the time unit of keepAliveTime
  • BlockingQueue workQueue: the blocking queue holding waiting tasks
  • ThreadFactory threadFactory: the thread factory; usually the default is used
  • RejectedExecutionHandler handler: the rejection policy
    • AbortPolicy (default): throws RejectedExecutionException directly, preventing the system from proceeding normally.
    • CallerRunsPolicy: "caller runs". This policy neither discards tasks nor throws exceptions; it pushes tasks back to the calling thread, thereby slowing the inflow of new tasks.
    • DiscardOldestPolicy: discards the task that has waited longest in the queue.
    • DiscardPolicy: silently discards the task that cannot be processed. (A construction sketch follows.)
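A sketch of building such a pool with the constructor above; the sizes and the bounded queue are illustrative choices, not recommendations:

import java.util.concurrent.*;

public class PoolConfig {
    public static ExecutorService newPool() {
        return new ThreadPoolExecutor(
                5,                                    // corePoolSize
                10,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(100),        // bounded queue instead of an unbounded default
                Executors.defaultThreadFactory(),     // thread factory
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy
        );
    }
}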

Thread pool execution process

(Figure: thread pool execution flow)

  • After the thread pool is created, it contains zero threads (lazy initialization).
  • When execute() is called to submit a task, the pool decides as follows:
  • If fewer than corePoolSize threads are running, create a thread immediately to run this task;
  • If at least corePoolSize threads are running, put the task into the blocking queue;
  • If the blocking queue is full and fewer than maximumPoolSize threads are running, create a non-core thread to run the task immediately;
  • If the queue is full and at least maximumPoolSize threads are running, the pool applies the saturation (rejection) policy.
  • When a thread finishes a task, it fetches the next task from the queue and executes it.
  • When a thread has been idle for longer than keepAliveTime, the pool checks: if more than corePoolSize threads are currently running, this thread is stopped. So once all tasks are done, the pool eventually shrinks back to corePoolSize threads.

What is the difference between the submit() and execute() methods in the thread pool?

  • They accept different parameters. execute() only accepts Runnable, while submit() accepts both Callable and Runnable. A Callable task can return an execution result; a Runnable task cannot.
  • submit() has a return value, execute() does not. execute() simply starts the task; the caller cares about neither the result nor possible exceptions. submit() also starts the task but returns a Future object representing the asynchronous execution, through which the result can be obtained.
  • submit() is convenient for exception handling. After execute() starts a task, the caller cannot observe exceptions thrown during execution; the Future object (asynchronous execution instance) returned by submit() captures exceptions raised during asynchronous execution.

How to use threads in Spring

  • The first way: add @EnableAsync to a configuration class to enable asynchronous calls, then annotate the target method with @Async so that it executes asynchronously

  • The second way:

    • Thread pool configuration
    @Configuration
    // enable asynchronous calls
    @EnableAsync
    public class AysncConfig {

        // Configure a thread pool; different async methods can be handed to specific pools
        @Bean("myThreadPool")
        public ExecutorService myThreadPool() {

            // ThreadPoolTaskExecutor is the thread pool class Spring provides
            ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();

            // core pool size
            threadPoolTaskExecutor.setCorePoolSize(10);
            // maximum pool size
            threadPoolTaskExecutor.setMaxPoolSize(50);
            // blocking queue capacity
            threadPoolTaskExecutor.setQueueCapacity(50);
            // idle timeout (seconds)
            threadPoolTaskExecutor.setKeepAliveSeconds(30);
            // thread name prefix
            threadPoolTaskExecutor.setThreadNamePrefix("lz-thread-");
            // rejection policy
            threadPoolTaskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());

            // initialize
            threadPoolTaskExecutor.initialize();

            return threadPoolTaskExecutor.getThreadPoolExecutor();
        }
    }

    • Usage
    @Async("myThreadPool")
    public CompletableFuture<String> getString1(Integer msg) {

        // method body
        return CompletableFuture.completedFuture(String.valueOf(msg));
    }
    

The principle and usage scenarios of ThreadLocal

Each Thread object contains a member variable threadLocals of type ThreadLocalMap, which stores all the ThreadLocal objects used by this thread and their corresponding values.

ThreadLocalMap is made up of Entry objects. Entry extends WeakReference<ThreadLocal<?>>; an Entry consists of a ThreadLocal object and an Object value. So the key of an Entry is the ThreadLocal object, held as a weak reference: once no strong reference to the key remains, the key is reclaimed by the garbage collector.

When the set method executes, ThreadLocal first obtains the current thread object and then the current thread's ThreadLocalMap object, then stores the value into that ThreadLocalMap using the current ThreadLocal object as the key.

The get method works similarly: ThreadLocal first obtains the current thread object and then the current thread's ThreadLocalMap object, then uses the current ThreadLocal object as the key to look up the corresponding value. Since each thread has its own private ThreadLocalMap container, and these containers are independent of each other, there is no thread-safety problem and no synchronization mechanism is needed for mutual exclusion.

Usage scenarios:

  • When passing objects across layers, ThreadLocal avoids passing them repeatedly and breaks the coupling between layers.
  • Data isolation between threads.
  • Transaction operations: storing per-thread transaction information.
  • Database connections and Session management.

(The Spring framework binds a JDBC Connection to the current thread when a transaction begins; throughout the transaction, database operations use the connection bound to that thread, isolating the transaction. Spring uses ThreadLocal to achieve this isolation.)

Causes of ThreadLocal memory leaks and how to avoid them

static class Entry extends WeakReference<ThreadLocal<?>> {
    /** The value associated with this ThreadLocal. */
    Object value;

    Entry(ThreadLocal<?> k, Object v) {
        super(k);
        value = v;
    }
}

An Entry in ThreadLocalMap uses the ThreadLocal as its key and saves the value as its value. It extends WeakReference; note super(k) in the first line of the constructor, which means the ThreadLocal object is held as a weak reference.

Because the ThreadLocal object is a weak reference, once no external strong reference points to it, it is reclaimed by GC, leaving the Entry's key null. At that point the value can never be accessed again and should also be reclaimed, but because the Entry object still holds a strong reference to the value, the value cannot be collected. This is the "memory leak": the value becomes an object that can never be accessed yet can never be reclaimed.

The Entry object belongs to the ThreadLocalMap, and the ThreadLocalMap belongs to the Thread. If the thread itself is short-lived and destroyed quickly, the "memory leak" resolves itself immediately: once the thread is destroyed, the value is reclaimed with it. The problem is that threads are precious computing resources and are rarely created and destroyed frequently; they are usually reused through thread pools, which greatly extends thread lifetimes and magnifies the impact of the "memory leak".

Correct use of ThreadLocal

  • Call remove() after each use of the ThreadLocal to clear the data. (Recommended; see the sketch below.)
  • Declare the ThreadLocal variable private static, so that a strong reference to the ThreadLocal always exists; this also guarantees that the Entry's value can still be reached through the ThreadLocal at any time and then cleared.
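A minimal sketch of that pattern (the class and variable names are illustrative):

public class ContextHolder {
    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    public static void handleRequest(String userId) {
        USER.set(userId);
        try {
            // business code on this thread can call USER.get() anywhere
        } finally {
            USER.remove(); // prevents the leak described above; vital with thread pools
        }
    }
}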

What is the difference between synchronized and Lock?

  • First, synchronized is a built-in Java keyword implemented at the JVM level, while Lock is a Java class;
  • synchronized cannot tell you whether the lock was acquired, while Lock can;
  • synchronized releases the lock automatically (a: a thread releases it after the synchronized block finishes; b: it is released if the thread throws an exception), whereas Lock must be released manually in a finally block via unlock(), otherwise deadlock is easy to cause;
  • While waiting for a Lock, a thread can be interrupted out of the wait, whereas a thread waiting on synchronized can only wait for the lock's release and cannot respond to interruption. (With synchronized, if thread 1 holds the lock and blocks, thread 2 waits forever; with Lock, a thread that cannot obtain the lock does not necessarily have to keep waiting and can give up.)
  • synchronized locks are reentrant, non-interruptible, and unfair; Lock locks are reentrant, interruptible, and can be configured as fair or unfair;
  • Lock suits synchronization over large amounts of code (Condition objects allow fine-grained thread scheduling), while synchronized suits small amounts of synchronized code. The discipline Lock requires is sketched below.
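A sketch of the manual-release discipline that Lock requires, including the lock-state check that synchronized cannot express:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Lock lock = new ReentrantLock(); // pass true for a fair lock
    private int count;

    void increment() {
        if (lock.tryLock()) {      // unlike synchronized, we can test whether the lock is free
            try {
                count++;
            } finally {
                lock.unlock();     // must release manually, or other threads block forever
            }
        }
    }
}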

Volatile

volatile is a lightweight synchronization mechanism provided by Java. In essence, volatile tells the JVM that the variable's value in a register (working memory) may be stale and must be re-read from main memory. By contrast, synchronized locks the variable: only the current thread may access it, and other threads are blocked.

volatile guarantees visibility and ordering, but not atomicity.

Tell me about the principle of atomic?

The basic property of the classes in the Atomic package is that in a multi-threaded environment, when multiple threads operate on a single variable (primitive or reference) at the same time, the operation is exclusive: when several threads update the variable's value simultaneously, only one succeeds, and an unsuccessful thread keeps retrying like a spin lock until it succeeds.

The core methods of the Atomic classes call native methods in the Unsafe class, which wraps a large amount of C-level functionality, including direct memory access and atomic operation intrinsics. The Atomic classes spin around these atomic operations, as sketched below.
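A minimal sketch of that spin-retry idea with AtomicInteger (equivalent in spirit to what getAndIncrement() does internally):

import java.util.concurrent.atomic.AtomicInteger;

class SpinIncrement {
    private final AtomicInteger value = new AtomicInteger(0);

    int increment() {
        int current;
        do {
            current = value.get();                            // read the expected value
        } while (!value.compareAndSet(current, current + 1)); // CAS; retry on failure
        return current + 1;
    }
}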

Locks

Optimistic lock/pessimistic lock

Optimistic locking: every time data is read, assume no one else will modify it, so do not lock; at update time, check whether anyone else updated the data in the meantime. Suitable for read-heavy workloads; improves throughput.

Pessimistic locking: every time data is read, assume someone else will modify it, so lock every time; anyone else wanting the data blocks until they can obtain the lock.

Pessimistic locks suit write-heavy scenarios; optimistic locks suit read-heavy scenarios (not locking improves performance).

In Java, pessimistic locking means using the various locks; optimistic locking means lock-free programming, using the CAS algorithm.

CAS

CAS (Compare And Swap): when multiple threads try to update the same variable with CAS simultaneously, only one of them succeeds in updating the value while the others fail. A failed thread is not suspended; it is simply told it lost this race and may try again.

A CAS operation involves three operands: the memory location to read and write (V), the expected original value to compare against (A), and the new value to write (B). If the value at memory location V matches the expected original value A, the processor atomically updates the location to the new value B; otherwise the processor does nothing.

Exclusive lock/shared lock

An exclusive lock can be held by only one thread at a time.

A shared lock can be held by multiple threads at once.

Java's ReentrantLock is an exclusive lock. For ReadWriteLock, another Lock implementation, the read lock is a shared lock while the write lock is an exclusive lock; the shared read lock makes concurrent reading very efficient, while read-write, write-read, and write-write are all mutually exclusive. Exclusive and shared locks are both implemented on top of AQS, which realizes one or the other through different method implementations. Synchronized, of course, is an exclusive lock.

read-write lock

ReentrantReadWriteLock's read lock is a shared lock, and its write lock is an exclusive lock.

The exclusive/shared locks above are the broad concepts; mutex locks and read-write locks are the concrete implementations.

The concrete implementation of a mutex lock in Java is ReentrantLock.

The concrete implementation of a read-write lock in Java is ReadWriteLock.

reentrant lock

Considered per thread: once a thread acquires an object's lock, that same thread can acquire the lock on the object again, while other threads cannot. The significance of reentrancy is that it prevents a thread from deadlocking against itself.

Implementation principle:

Associate each lock with a request counter and the thread that owns it. When the count is 0, the lock is unoccupied. When a thread requests an unoccupied lock, the JVM records the lock's owner and sets the request counter to 1.

If the same thread requests the lock again, the counter is incremented;

each time the owning thread exits a synchronized block, the counter is decremented;

when the counter reaches 0, the lock is released.

fair lock/unfair lock

Fair lock: the thread that has waited longest for the lock gets it next.

Unfair lock: threads do not acquire the lock in the order they requested it; a later requester may obtain the lock before an earlier one, which can cause priority inversion or starvation.

When threads contend for an unfair lock, they first try to grab it directly and only join the end of the wait queue if that fails. If the lock happens to be free at that moment, the thread takes it without blocking; that is how a late requester can acquire the lock first. The advantage of unfair locks is reduced thread wake-up overhead and higher overall throughput, because a thread may get the lock directly without blocking and the CPU need not wake every waiting thread. The drawback is that threads in the wait queue may starve, or wait a long time before acquiring the lock.

Biased lock/Lightweight lock/Heavyweight lock

These three refer to states of a lock and apply to synchronized. HotSpot made synchronized efficient by introducing a lock-upgrade mechanism (biased locking arrived in Java 6). The state of the lock is recorded in fields of the object header (the mark word).

A biased lock means that once a piece of synchronized code has been accessed by one thread, that thread acquires the lock automatically from then on, reducing the cost of acquiring it.

A lightweight lock means that when the lock is a biased lock and a second thread accesses it, the biased lock is upgraded to a lightweight lock: the other thread tries to acquire it by spinning instead of blocking, improving performance.

A heavyweight lock means that while the lock is a lightweight lock, a spinning thread cannot spin forever: when it still has not acquired the lock after a certain number of spins, it enters blocking and the lock inflates into a heavyweight lock. A heavyweight lock blocks the threads that request it, reducing performance.

spin lock

In Java, a spin lock means that a thread trying to acquire a lock is not blocked immediately; instead it retries in a loop. The benefit is avoiding the cost of thread context switches; the drawback is that the looping consumes CPU.

JVM

(Figure: JVM memory structure)

Class loaders

The JDK ships with three class loaders: the bootstrap ClassLoader, ExtClassLoader, and AppClassLoader.

A custom class loader is created by extending ClassLoader.

Parent delegation mechanism

Delegate upward, load downward: a class loader first delegates a load request to its parent, and only attempts the load itself if the parents cannot.

Benefits of the parent delegation model:

  • Mainly security: it prevents core Java classes, such as String, from being dynamically replaced by user-written classes.
  • It also avoids repeated loading of classes, because the JVM does not distinguish classes by name alone: the same class file loaded by two different ClassLoaders constitutes two different classes.

The process of class loading

(Figure: the life cycle of a class)

The figure above shows the life cycle of a class.

Class loading consists of: loading, linking, and initialization.

  • Loading: using the fully qualified class name, locate the class file on disk and read it into memory as a stream of bytes; once transferred, carve out an area of memory to store its Class information (from disk into memory).
  • Linking
    • Verification: ensures the information in the class file's byte stream meets the requirements of the current virtual machine, guaranteeing the correctness of the loaded class and not endangering the virtual machine itself. Examples: file format verification, bytecode verification.
    • Preparation: allocates memory for the class variables (variables modified by static) and sets their default initial values. static final constants are not included here, because final constants are assigned at compile time and are explicitly initialized during preparation. Instance variables are not allocated here: class variables are allocated in the method area, while instance variables are allocated in the Java heap together with the object.
    • Resolution: the process of converting symbolic references in the constant pool into direct references.
  • Initialization: the initialization phase executes the class initializer method <clinit>(). This method is not written by the programmer: the javac compiler automatically collects the assignments to all class variables and the statements in static code blocks and combines them. Whereas the preparation phase gave class variables their system-default initial values, the initialization phase assigns class variables and other resources according to the plan the programmer expressed in the program.

runtime data area

  • Stack: For each thread, a separate runtime stack will be created. For each method call, an entry is made in stack memory, called a stack frame. All local variables will be created in stack memory. The stack area is thread-safe because it is not a shared resource. (values ​​of primitive data types and addresses of reference data types)
    • The stack frame consists of local variable table, operand stack, dynamic linked list, and return address.
  • PC register: Each thread has a separate PC register (program counter), which is used to save the address offset of the currently executing instruction. Once an instruction is executed, the PC register will be updated by the next instruction.
  • Local method stack: The local method stack saves the local method information, the same as the stack.
  • Method area: All class-level data will be stored here, including static variables. Each jvm has only one method area, which belongs to thread shared resources. In the method area, information of each class (including class name, method information, and field information), static variables, constants, and code compiled by the compiler are stored. In the Class file, in addition to the descriptive information such as the fields, methods, and interfaces of the class, there is also a constant pool, which is used to store literal and symbolic references generated during compilation.
    • The constant pool is mainly used to store two types of constants: Literal and Symbolic References. Literals are equivalent to the concept of constants at the Java language level, such as text strings, constant values ​​declared as final, etc. Symbolic References belong to the concept of compilation principles, including the following three types of constants: fully qualified names of classes and interfaces, field names and descriptors, method names and descriptors.
    • The method area's implementation changed across versions: the permanent generation (PermGen) in 1.7 and earlier, and Metaspace (backed by native memory) since 1.8.
  • Heap: All objects and their corresponding instance variables and arrays will be stored here. Each jvm also has only one heap area, which belongs to thread shared resources.

How does GC judge that an object can be recycled

  • Reference counting method: each object has a reference-count attribute; a new reference increments it by 1, a released reference decrements it by 1, and when the count reaches 0 the object can be recycled. (Not used by the JVM, partly because it cannot handle circular references; see the sketch below.)
  • Reachability analysis method: start from the GC Roots and search downwards; the path traveled is called the reference chain. When an object has no reference chain connecting it to the GC Roots, the object is unreachable and the virtual machine judges it recyclable. (√)
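
A sketch of why plain reference counting fails: two objects that reference each other never reach a count of 0, while reachability analysis reclaims them:

public class CycleDemo {
    static class Node { Node ref; }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.ref = b;            // a and b reference each other:
        b.ref = a;            // a pure reference counter would never drop to 0
        a = null;
        b = null;
        // No path from any GC Root reaches the pair now, so reachability
        // analysis can reclaim both objects on the next collection.
    }
}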

The objects of GC Roots are:

  • Objects referenced in the virtual machine stack (local variable table in the stack frame)
  • Objects referenced by class static properties in the method area
  • Objects referenced by constants in the method area
  • Objects referenced by JNI (generally referred to as Native methods) in the local method stack

Tell me about jvm tuning tools?

JDK ships with many monitoring tools, all located in the JDK bin directory; the most commonly used are the two visual monitoring tools jconsole and jvisualvm.

  • jconsole: used to monitor memory, threads and classes in the JVM;
  • jvisualvm: The all-round analysis tool that comes with JDK, which can analyze: memory snapshot, thread snapshot, program deadlock, monitoring memory changes, gc changes, etc.

How to troubleshoot JVM problems in your project

For systems that are still functioning:

  • You can use jmap to view the usage of each area in the JVM
  • You can use jstack to view the running status of threads, such as which threads are blocked and whether there is a deadlock
  • You can use the jstat command to check the status of garbage collection, especially fullgc. If you find that fullgc is frequent, you need to tune it
  • Analyze through the results of each command, or tools such as jvisualvm
  • First, reason about why fullgc is frequent. If fullgc occurs frequently but there is no memory overflow, fullgc is actually reclaiming many objects, which means those objects should ideally have been reclaimed during younggc instead of being promoted to the old generation. In that case, consider whether these short-lived objects are relatively large, so that the young generation cannot fit them and they enter the old generation directly; try increasing the size of the young generation. If fullgc decreases after the change, the tuning was effective.
  • At the same time, you can also find the thread that occupies the most CPU, locate the specific method, optimize the execution of this method, and see if you can avoid the creation of certain objects, thereby saving memory

For systems where OOM has occurred:

  • Generally, in the production system, it is set to generate the current dump file when OOM occurs in the system (-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/base)
  • We can use tools such as jvisualvm to analyze the dump file
  • Find the abnormal instance object and abnormal thread (high CPU usage) according to the dump file, and locate the specific code
  • Then perform detailed analysis and debugging

What garbage collection algorithms does the JVM have?

  • Mark-and-sweep algorithm: two phases. The mark phase marks the garbage memory, and the sweep phase reclaims it directly. The algorithm is simple, but it has a serious problem: it produces a lot of memory fragmentation.
  • Copying algorithm: created to solve the fragmentation problem of mark-and-sweep. It divides memory into two equal halves and uses only one half at a time. During garbage collection, all surviving objects in the current half are copied to the other half, and the current half can then be cleared in one go. There is no fragmentation, but half the space is wasted.
  • Mark-compact algorithm: proposed to fix the copying algorithm's drawback. Its marking phase is the same as mark-and-sweep, but after marking it does not sweep immediately; instead it moves all surviving objects toward one end and then clears all the memory beyond that boundary.

What garbage collectors does the JVM have?

  • New generation collectors: Serial, ParNew, Parallel Scavenge
  • Old generation collectors: CMS (Concurrent Mark Sweep), Serial Old, Parallel Old
  • Whole heap collector: G1

What is tri-color marking?

Tri-color marking: a logical abstraction that classifies each heap object into one of three colors:

  • Black: the object itself and all its member variables have been marked.
  • Gray: the object itself has been marked, but its member variables have not all been marked yet.
  • White: the object itself has not been marked.

What are the commonly used jvm tuning parameters?

  • -Xms2g: the initial heap size is 2g;
  • -Xmx2g: the maximum heap size is 2g;
  • -XX:NewRatio=4: sets the ratio of young generation to old generation memory to 1:4;
  • -XX:SurvivorRatio=8: sets the ratio of Eden to the Survivor spaces in the young generation to 8:2;
  • -XX:+UseParNewGC: specifies the ParNew + Serial Old garbage collector combination;
  • -XX:+UseParallelOldGC: specifies the Parallel Scavenge + Parallel Old garbage collector combination;
  • -XX:+UseConcMarkSweepGC: specifies the CMS + Serial Old garbage collector combination;
  • -XX:+PrintGC: enable printing gc information;
  • -XX:+PrintGCDetails: Print gc details.

MySQL

SQL practice

http://www.cnblogs.com/qixuejia/p/3637735.html

Paging query

limit: with a small offset, querying directly with LIMIT is fine, but as the data volume grows and the LIMIT offset increases, query efficiency drops. This can be solved with subquery paging or JOIN (deferred-join) paging, as sketched below.
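
A sketch of deferred-join paging over JDBC; the orders table, its id primary key, and the Connection source are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PageQuery {
    // Deferred join: the subquery pages over the (small) primary-key index only,
    // then joins back to fetch the full rows for just one page.
    private static final String SQL =
            "SELECT t.* FROM orders t "
          + "JOIN (SELECT id FROM orders ORDER BY id LIMIT ?, ?) p ON t.id = p.id";

    public static void page(Connection conn, long offset, int pageSize) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setLong(1, offset);   // large offsets stay cheap: only the index is scanned
            ps.setInt(2, pageSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // map each row as needed
                }
            }
        }
    }
}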

common functions

  • Character functions: concat, concatenate strings; substr, take a substring; upper, uppercase; lower, lowercase; replace, replace; length, byte length; trim, strip leading and trailing spaces; lpad, left-pad; instr, index of the first occurrence of a substring;
  • Mathematical functions: ceil, round up; round, round half up; mod, modulo; floor, round down; truncate, truncate; rand, random decimal between 0 and 1;
  • Date functions: now, current date + time; year, the year; month, the month; day, the day; date_format, format a date as a string; curdate, current date; str_to_date, parse a string into a date; curtime, current time; hour, the hour; minute, the minute; second, the second; datediff, number of days between two dates; monthname, the English month name;
  • Other functions: version, version of the current database server; database, the currently selected database; user, the current user; password(str), the password-hash form of the string; md5(str), the md5 hash of the string;
  • Aggregate functions: avg(), average; count(), number of rows in the group; max(), maximum; min(), minimum; sum(), sum;

index

​ Index: An ordered data structure that helps MySQL efficiently retrieve data.

  • Primary key index: After setting the primary key, the database will automatically create an index, and InnoDB is a clustered index.

  • Single-valued index: That is, an index contains only a single column, and a table can have multiple single-valued indexes.

  • Unique index: The value of the index column must be unique, but null values ​​are allowed.

  • Composite index: That is, an index contains multiple columns.

  • Full Text Index (Full Text): Supports full-text search of values ​​on the columns defining the index, allowing duplicate and null values ​​to be inserted in these index columns.

    special:

  • Clustered index: The data storage and index are put together, and the child nodes of the index structure store row data.

    • In innodb, the default primary key is a clustered index, and the index not created on the clustered index is called an auxiliary index (except for the primary key, the rest are auxiliary indexes)
  • Non-clustered index: store the data and the index separately, and the leaf nodes of the index structure point to the corresponding position of the data.

B+ tree

​ Non-leaf nodes of a B+ tree store only keys, not data; the data sits in the leaf nodes, kept in sorted order and connected by a doubly linked list. A three-level B+ tree can hold roughly 20 million rows.

Principles of Index Design

Faster queries and smaller footprint

  • Columns suitable for indexing are the columns that appear in the where clause, or the columns specified in the join clause
  • For a table with a small cardinality, the index effect is poor, so there is no need to build an index on this column
  • Use a short index. If you index a long string column, you should specify a prefix length, which can save a lot of index space. If the search term exceeds the index prefix length, use the index to exclude unmatched rows. The rest of the lines are then checked for possible matches.
  • Don't over index. Indexes require additional disk space and slow down the performance of write operations. When modifying table content, the index will be updated or even restructured. The more index columns, the longer this time will be. So only keep the required indexes to facilitate the query.
  • Data columns defined with foreign keys must be indexed.
  • Frequently updated fields are not suitable for creating indexes
  • Columns that cannot effectively distinguish the data are not suitable as index columns (e.g. gender: at most three values (male, female, unknown), so selectivity is too low)
  • Extend the index as much as possible, do not create a new index. For example, if the index of a already exists in the table, and now you want to add the index of (a, b), then you only need to modify the original index.
  • Do not create indexes for columns defined as text, image, and bit data types.

What is index covering

​ Index covering means that when a SQL statement executes, it can use an index for the fast lookup and all the fields the SQL selects are contained in that index. The query then needs no back-to-table lookup after walking the index: the required fields already exist in the index's leaf nodes and can be returned directly as the result.

What is the leftmost prefix principle

​ For a SQL statement to use an index, it must supply the leftmost field of the index, i.e. the first column. For example, with a composite index on the three fields a, b, and c, a query must include a condition on field a for the composite index to be usable. The reason is that when the composite index on a, b, and c is built, the underlying B+ tree is sorted by a, then b, then c, from left to right; to search that B+ tree quickly, queries must follow the same rule.

index failure

  • For a multi-column index, the filter condition must use the index, which must be satisfied in the order in which the index is created. Once a field is skipped, the fields behind the index cannot be used.
  • Computations, functions, type conversions (automatic or manual) invalidate indexes
  • The column index on the right side of the range condition is invalid
  • Not equal ( != or <> ) index invalidation
  • is null can use the index, is not null cannot use the index
  • like The index starting with the wildcard % is invalid
  • If there are non-indexed columns before and after OR, the index will fail
  • select *

Explain

Columns of the EXPLAIN output:

  • id: if the ids are equal, rows execute from top to bottom; if they differ, a larger id has higher priority.
  • select_type: the type of query, used to distinguish ordinary queries, union queries, subqueries and other complex queries.
  • table: the table name.
  • partitions: matching partition information.
  • type: the single-table access method; an important basis for judging whether the query is efficient. From best to worst: system > const > eq_ref > ref > range > index > ALL.
  • possible_keys: indexes that might be used.
  • key: the index actually used.
  • key_len: the length of the index actually used.
  • ref: when index columns are matched by equality, the object matched against the index column.
  • rows: estimated number of records that need to be read.
  • filtered: percentage of records remaining after the table is filtered by the search conditions.
  • Extra: additional information, such as Using filesort, Using index, Using where.

SQL optimization ideas

  • Enable slow log for slow sql screening
  • Optimization of the Join statement
    • Always drive large result sets with small result sets
    • Ensure that the Join condition field on the driven table in the Join statement has been indexed
  • ORDER BY keyword optimization
    • For the ORDER BY clause, try to use the Index method for sorting, and avoid using the FileSort method for sorting
    • Complete the sorting operation on the index column as much as possible, and create an index according to the best left prefix
  • GROUP BY keyword optimization
    • The essence of GROUP BY is to sort first and then group, and create an index according to the best left prefix
    • The performance of WHERE is higher than that of HAVING. If you can write the conditions limited by WHERE, don’t go to HAVING.
  • Query the execution plan to see if the index index is invalid
    • full value match
    • The best left prefix rule, which means that the query starts from the leftmost front column of the index and does not skip columns in the index
    • Do not do any operation (calculation, function, (automatic or manual) type conversion) on the index column, it will cause the index to fail and switch to full table scan
    • The storage engine cannot use columns to the right of the range condition in the index
    • Try to use the covering index (query that only accesses the index (the index column or query column is consistent)), reduce select *
    • When MySQL uses not equal (!= or <>), the inability to use the index will cause a full table scan
    • is null, is not null can not use the index
    • like starts with a wildcard (%abc...), the MySQL index will become invalid, and it will become a full table scan operation
    • Therefore, you can use like abc%, or use a covering index to solve the problem of index failure like '%string%'
    • If the string does not add single quotes, the index will fail
    • Use less or, when using it to connect, it will cause the index to fail

The difference between MyISAM and InnoDB

  • MyISAM:
    • Transactions are not supported, but each query is atomic;
    • Supports table-level locks, that is, each operation locks the entire table;
    • The total number of rows in the storage table;
    • A MYISAM table has three files: index file, table structure file, and data file;
    • With a non-clustered index, the data field of the index file stores a pointer to the data file. The auxiliary index is basically the same as the main index, but the uniqueness of the auxiliary index is not guaranteed.
  • InnoDb:
    • Supports ACID transactions and supports four isolation levels for transactions;
    • Supports row-level locks and foreign key constraints: so it can support write concurrency;
    • does not store the total number of rows;
    • An InnoDB table may be stored in the shared tablespace (table size not limited by the operating system; one table may span several files) or in an independent tablespace per table (file-per-table; limited by the operating system's file size, typically 2G);
    • The primary key index is a clustered index (the index's data field stores the row itself), and the data field of a secondary index stores the primary key value. Finding data through a secondary index therefore means first finding the primary key value via the secondary index, then looking the row up in the clustered index. It is best to use an auto-incrementing primary key, to avoid the large file reorganizations needed to maintain the B+ tree structure on insert.

Transactions

​ A transaction is the basic unit of concurrency control: a set of operations that either all execute or none execute.

​ Maintain the consistency of database data, and ensure data consistency at the end of each transaction.

Four characteristics of transactions

​ ACID

  • Atomicity: All operations in a transaction are either completed or not completed, and will not end in a middle link. If an error occurs during the execution of the transaction, it will be rolled back (Rollback) to the state before the transaction started, as if the transaction had never been executed.
    • Implementation principle undo log rollback log. Undo log is the key to achieve atomicity. When a transaction modifies the database, InnoDB will generate a corresponding undo log; if the transaction execution fails or rollback is called, causing the transaction to be rolled back, the information in the undo log can be used to save the data Roll back to the way it was before the modification. Undo log is a logical log, which records information related to sql execution. When a rollback occurs, InnoDB will do the opposite of the previous work according to the contents of the undo log.
  • Consistency: Transactions transform the database from one state to another, ensuring that transactions are executed consistently. Atomicity, isolation, and persistence are all to ensure the consistency of the database state. (For example: A transfers money to B, it is impossible for A to deduct the money, but B does not receive it)
  • Isolation: The ability of the database to allow multiple concurrent transactions to read, write and modify its data at the same time. Isolation can prevent data inconsistency caused by cross-execution when multiple transactions are executed concurrently.
  • Durability: Once a transaction is committed, the data changes in the database are permanent, even if the database system encounters a failure, the operation of committing the transaction will not be lost.
    • To improve performance, InnoDB provides a buffer pool (BufferPool). Reads go to the buffer pool first; on a miss, the page is read from disk and placed into the pool. Writes go to the buffer pool first, and the pool is periodically synchronized to disk (a page written in the pool but not yet persisted is a dirty page, and writing dirty pages back is called flushing). Data could be lost if MySQL crashed before flushing, so the redo log is used to guarantee that data is not lost: it records only the parts that need to be written, greatly reducing wasted IO. The redo log is a physical log, implemented by the InnoDB storage engine, whose content is based on disk pages; it is synced to disk when the transaction commits, ensuring that a MySQL crash does not break persistence, whereas the time at which dirty pages are flushed is not fixed.

Spring

what is spring

​ Spring is a lightweight open-source framework built around IoC (inversion of control) and AOP (aspect-oriented programming); it improves development efficiency and maintainability.

​ Spring can help us decouple and help manage dependencies between objects.

​ AOP in Spring helps us improve code reusability and scalability. (Security, Transactions, Permissions, etc.)

​ Spring does not need to care about the creation of objects, it only needs to configure the configuration file, which simplifies development.

Spring also provides simple and powerful declarative transaction management.

IOC

​ IOC: Inversion of Control, which refers to the transfer of control of objects to the Spring framework, and Spring is responsible for controlling the life cycle of objects (such as creation, destruction) and dependencies between objects.

The most intuitive way to put it: previously, the timing and initiative of creating an object were controlled by yourself. If one object used another, you had to create the dependency actively with the new keyword and destroy it after use (Connection objects and the like), so objects were always coupled to other interfaces or classes. IOC instead uses a dedicated container to create objects: all classes are registered in the Spring container, and when you need an object you no longer new it yourself. You simply tell the Spring container, and at the right moment while the system runs, Spring hands you the object you want. In other words, the life cycle of a referenced object is no longer controlled by whoever references it; in IOC all objects are controlled by the Spring container, which creates, finds, and injects dependent objects, while referencing objects merely passively accept them. That is why this is called inversion of control.

DI (Dependency Injection)

​ Simply put, when Spring creates an object, it assigns values ​​to its properties, which is called dependency injection.

One of the key points of IOC is to dynamically provide an object with other objects it needs when the program is running. This is achieved through DI (Dependency Injection), that is, the application relies on the IOC container to inject dynamically at runtime. External dependencies required by the object. Spring's DI is specifically injected through reflection. Reflection allows the program to dynamically generate objects, execute object methods, and change object properties when it is running.

​ Example: When a Java instance needs another Java instance, the traditional method is to create an instance of the callee (new) by the caller, but after using the Spring framework, the instance of the callee is no longer created by the caller. Instead, it is created by the Spring container, which is called inversion of control.

When the Spring container creates an instance of the callee, it will automatically inject the object instance needed by the caller into the caller, so that the caller obtains the callee instance through the Spring container, which is called dependency injection.

injection method

  • Setter method injection
  • constructor injection
  • Automatic injection (autowire)
    • byName
    • byType
  • Annotation automatic injection
    • @Autowired automatically injects based on type
    • @Resource automatically injects based on name
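
A minimal sketch of constructor, setter, and annotation injection (OrderDao and PriceService are hypothetical beans):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

interface OrderDao {}
interface PriceService {}

@Service
public class OrderService {

    private final OrderDao orderDao;
    private PriceService priceService;

    // Constructor injection: Spring resolves OrderDao by type and passes it in.
    public OrderService(OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    // Setter injection: called during property population, after construction.
    @Autowired
    public void setPriceService(PriceService priceService) {
        this.priceService = priceService;
    }
}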

AOP

​ OOP is object-oriented and allows developers to define vertical relationships, but it is not suitable for defining horizontal relationships, which will lead to a lot of code duplication, which is not conducive to the reuse of various modules.

​AOP, commonly known as aspect-oriented, as a supplement to object-oriented, is used to extract and encapsulate public behaviors and logic that have nothing to do with business but affect multiple objects into a reusable module. The module is named "Aspect", which reduces the duplication of code in the system, reduces the coupling degree between modules, and improves the maintainability of the system. It can be used for authority authentication, logging, and transaction processing.

​ IOC keeps the components that cooperate with each other loosely coupled, while AOP programming allows you to separate the functions throughout the various layers of the application to form reusable functional components.

dynamic proxy

The key to AOP implementation lies in the proxy mode. The so-called dynamic proxy means that the AOP framework will not modify the bytecode, but will temporarily generate an AOP object for the method in memory each time it runs. This AOP object contains all of the target object. method, and enhanced processing at a specific cut point, and calls back the method of the original object.

There are two main ways of dynamic proxy in Spring AOP, JDK dynamic proxy and CGLIB dynamic proxy:

  • JDK dynamic proxy only provides interface proxy, does not support class proxy, and requires the proxy class to implement the interface. The core of the JDK dynamic proxy is the InvocationHandler interface and the Proxy class. When obtaining the proxy object, use the Proxy class to dynamically create the proxy class of the target class (that is, the final real proxy class, which inherits from Proxy and implements the interface we defined) , when the proxy object calls the method of the real object, InvocationHandler calls the code in the target class through the reflection of the invoke() method, and dynamically weaves the cross-cutting logic and business together;
  • InvocationHandler's invoke(Object proxy, Method method, Object[] args): proxy is the generated proxy object; method is the method being invoked on the proxied target instance; args are that method's arguments, used when the method is invoked reflectively.
  • If the proxy class does not implement the interface, then Spring AOP will choose to use CGLIB to dynamically proxy the target class. CGLIB (Code Generation Library) is a class library for code generation, which can dynamically generate a subclass object of a specified class at runtime, and override specific methods and add enhanced codes to achieve AOP. CGLIB is a dynamic proxy through inheritance, so if a class is marked as final, it cannot use CGLIB as a dynamic proxy.
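
A minimal, runnable sketch of the JDK dynamic proxy mechanism described above (UserService and the logging logic are illustrative):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface UserService { void save(); }

class UserServiceImpl implements UserService {
    public void save() { System.out.println("saving..."); }
}

public class JdkProxyDemo {
    public static void main(String[] args) {
        UserService target = new UserServiceImpl();
        // The cross-cutting logic lives in the InvocationHandler.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, methodArgs); // call the real object
            System.out.println("after " + method.getName());
            return result;
        };
        UserService proxy = (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[]{UserService.class},
                handler);
        proxy.save(); // prints before/saving.../after
    }
}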

The difference between static proxy and dynamic proxy

​ The difference lies in when the AOP proxy object is generated. Relatively speaking, AspectJ's static proxy performs better: it weaves the aspects into the Java bytecode at compile time, so at runtime the object is already the enhanced one. However, AspectJ requires a special compiler, while Spring AOP does not.

The concept of several nouns in AOP

  • Join point: refers to the method executed during the running of the program. In Spring AOP, a join point always represents a method execution.

  • Aspect: The extracted public modules. Aspect aspect can be regarded as a combination of Pointcut pointcut and Advice notification, and an aspect aspect can be composed of multiple pointcut points and advice. In Spring AOP, aspects can be implemented on classes using the @AspectJ annotation.

  • Pointcut: Pointcut is used to define which Join points to intercept.

    • The cut point is divided into execution mode and annotation mode. The execution method can use path expressions to specify which methods to intercept, such as specifying to intercept add and search. The annotation method can specify the code modified by which annotations to intercept.
  • Advice: the action to execute at a join point, i.e. the enhancement logic, such as permission checking or logging. There are several advice types: Around, Before, After, AfterReturning, AfterThrowing.

  • Target object (Target): The object that contains the connection point, also known as the object that is notified (Advice). Since Spring AOP is implemented through a dynamic proxy, this object will always be a proxy object.

  • Weaving: the process of applying the advice at the target object's methods (the join points) through dynamic proxies.

  • Introduction: adding extra methods or fields to the advised class. Spring allows new interfaces (and their implementations) to be introduced to any proxied object; for example, an introduction can make a bean implement an IsModified interface to simplify a caching mechanism.

  • Proxy: the proxy object formed after the advice has been woven into the target object.

Types of Advice

  • Before advice: runs before the join point.
  • After advice: runs when the join point exits, whether it returns normally or throws.
  • Around advice: surrounds a join point; the most powerful advice type. Around advice can perform custom behavior before and after the method invocation, and it can choose whether to proceed to the join point or to cut execution short by returning its own value or throwing an exception.
  • AfterReturning advice: runs after the join point completes normally (not executed if it throws).
  • AfterThrowing advice: runs when the method exits by throwing an exception.
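
A minimal sketch of an around advice, assuming AspectJ annotation support is enabled (the package in the pointcut expression is hypothetical):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TimingAspect {

    @Pointcut("execution(* com.example.service..*(..))") // hypothetical package
    public void serviceMethods() {}

    @Around("serviceMethods()")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed();   // continue to the join point
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.currentTimeMillis() - start) + "ms");
        }
    }
}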

Execution order of Advice

  • No exception: around (before part) -> before advice -> target method -> after advice -> around (after part) -> afterReturning advice
  • With an exception: around (before part) -> before advice -> target method -> after advice -> around (after part) -> afterThrowing advice -> java.lang.RuntimeException: the exception propagates

BeanFactory and ApplicationContext

​ BeanFactory and ApplicationContext are Spring's two core interfaces; both can serve as the Spring container.

​ BeanFactory is the lowest-level interface in Spring and the core of IoC: it defines the basic IoC features, covering bean definition, loading, instantiation, dependency injection and life-cycle management. ApplicationContext, a sub-interface of BeanFactory, provides everything BeanFactory does plus more complete framework features: resource access (e.g. ClassPathXmlApplicationContext) and event publication with listener registration. It creates all singleton beans eagerly when the container starts, so configuration errors in Spring surface at startup, which helps check that dependent properties are injected. Because all singleton beans are pre-loaded at startup, ApplicationContext is fast at runtime: the beans already exist.

Bean life cycle

​ The process from the creation of an object to its destruction. A Spring bean goes through many steps from start to finish, but overall they fall into six phases: bean definition, instantiation, property assignment, initialization, use, and destruction.

​ First the bean object is instantiated, then its properties are set (dependency injection).

​ Next the Aware interfaces are processed:

​ Implementing BeanNameAware lets the bean obtain its name in the Spring container.

​ Implementing BeanFactoryAware lets the bean call container services through its BeanFactory.

​ Implementing ApplicationContextAware lets the bean call Spring container services through the current ApplicationContext.

​ Then any BeanPostProcessor implementations run (before- and after-initialization callbacks).

​ If an init-method attribute is configured, the initialization method is invoked automatically.

​ Finally, cleanup and destruction: the destroy() method is called and the life cycle ends.

Bean scopes

  • singleton: Default scope, singleton bean, only one instance of bean in each container. (thread unsafe)
  • prototype: Create an instance for each bean request.
  • request: An instance is created for each request request. After the request is completed, the bean will be invalidated and reclaimed by the garbage collector.
  • session: Similar to the request scope, the same session session shares one instance, and different sessions use different instances.
  • global-session: Global scope, all sessions share one instance. If you want to declare a storage variable shared by all sessions, then this global variable needs to be stored in global-session.

Are beans in the Spring framework thread-safe? If it's not thread safe, how do you deal with it?

​ The Spring container itself does not provide a Bean thread safety policy, so it can be said that the Bean itself in the Spring container does not have thread safety features, but the specific situation should be discussed in conjunction with the scope of the Bean.

  • For prototype-scoped beans, a new object is created each time, that is, there is no bean sharing between threads, so there will be no thread safety issues.

  • For singleton-scoped beans, all threads share a singleton instance of the bean, so there are thread safety issues. But if the singleton bean is a stateless bean, that is, the operations in the thread will not perform operations other than queries on the members of the bean, then the singleton bean is thread-safe. For example, Controller class, Service class, Dao, etc. Most of these beans are stateless and only focus on the method itself.

    For stateful beans (such as Model and View), you need to ensure thread safety by yourself. The most obvious solution is to change the scope of stateful beans from "singleton" to "prototype".

    ThreadLocal can also be used to solve thread safety issues, providing each thread with an independent variable copy, and different threads only operate the copy variables of their own threads.
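
A small sketch of the ThreadLocal approach (SimpleDateFormat is a classic example of a stateful, non-thread-safe object):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // SimpleDateFormat is not thread-safe; give each thread its own copy.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date); // each thread touches only its own copy
    }
}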

Spring autowiring

  • Autowiring of xml configuration

    • no: The default method is not to perform automatic assembly, and the bean is assembled by manually setting the ref attribute.
    • byName: Automatically assemble by bean name, if the property of a bean is the same as the name of another bean, it will be automatically assembled.
    • byType: Autowire by the data type of the parameter.
    • constructor: The constructor is used for assembly, and the parameters of the constructor are assembled by byType.
    • autodetect: Automatic detection, if there is a construction method, it will be automatically assembled by construct, otherwise it will be automatically assembled by byType.
  • Autowiring of annotations

    • Use the @Autowired and @Resource annotations to autowire specified beans. Before using @Autowired, it must be enabled in the Spring configuration: when the Spring IoC container starts, it automatically loads an AutowiredAnnotationBeanPostProcessor post-processor, and when the container scans @Autowired, @Resource or @Inject, it finds the required bean in the IoC container and wires up the object's properties. When using @Autowired, the container first queries beans of the corresponding type:

      If the query result is exactly one, assemble the bean to the data specified by @Autowired;

      If there is more than one query result, @Autowired will search by name;

      If the result of the above lookup is empty, an exception will be thrown. As a workaround, use required=false.

Difference between @Autowired and @Resource

  • @Autowired is injected according to type assembly by default. By default, it requires that dependent objects must exist (you can set its required attribute to false).
  • @Resource is assembled and injected by name by default, and only when no bean matching the name is found will it be assembled and injected by type.

Types of Spring transactions

  • Programmatic transaction management uses TransactionTemplate.
  • Declarative transaction management is built on top of AOP. Its essence is to intercept the method before and after the method through the AOP function, and weave the function of transaction processing into the intercepted method, that is, start a transaction before the start of the target method, and submit or roll back the transaction according to the execution status after the target method is executed. .

The biggest advantage of declarative transactions is that there is no need to mix transaction management codes in business logic codes. You only need to declare relevant transaction rules in the configuration file or use @Transactional annotations to apply transaction rules to business logic. , to reduce the pollution of business code. The only downside is that the finest granularity can only be applied to the method level, and it cannot be applied to the code block level like programmatic transactions.

Spring's transaction propagation mechanism

The propagation mechanism of spring transactions refers to how spring handles the behavior of these transactions when multiple transactions exist at the same time. The transaction propagation mechanism is actually implemented using a simple ThreadLocal, so if the called method is called in a new thread, the transaction propagation will actually fail.

  • propagation_required: (default propagation behavior) If there is no current transaction, create a new transaction; if there is a current transaction, join the transaction.
  • propagation_requires_new: Regardless of whether there is a current transaction, a new transaction is created for execution.
  • propagation_supports: If there is a transaction currently, join the transaction; if there is no transaction currently, execute it as a non-transaction.
  • propagation_not_supported: Perform operations in a non-transactional manner. If there is a current transaction, suspend the current transaction.
  • propagation_nested: If there is currently a transaction, it will be executed in a nested transaction; if there is no current transaction, it will be executed according to the REQUIRED attribute.
  • propagation_mandatory: If there is a transaction currently, join the transaction; if there is no transaction currently, an exception is thrown.
  • propagation_never: Execute in a non-transactional manner, and throw an exception if there is a current transaction.

@Transactional

  • Parameter one: propagation (REQUIRED, SUPPORTS, MANDATORY, REQUIRES_NEW, NOT_SUPPORTED, NEVER, NESTED)
  • Parameter two: transaction timeout setting: timeout
  • Parameter three: transaction isolation level: isolation
  • Parameter four: the readOnly attribute sets whether the current transaction is read-only; true means read-only, false means read-write
  • Parameter five: rollbackFor can name several exception classes: @Transactional(rollbackFor={RuntimeException.class, Exception.class})
  • Parameter six: rollbackForClassName @Transactional(rollbackForClassName="RuntimeException")
  • Parameter seven: noRollbackForClassName @Transactional(noRollbackForClassName="RuntimeException")
  • Parameter eight: noRollbackFor @Transactional(noRollbackFor=RuntimeException.class)
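
A sketch putting several of these parameters together; TransferService and AccountDao are hypothetical:

import java.math.BigDecimal;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

interface AccountDao {
    void debit(long id, BigDecimal amount);
    void credit(long id, BigDecimal amount);
}

@Service
public class TransferService {

    private final AccountDao accountDao;

    public TransferService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    @Transactional(propagation = Propagation.REQUIRED,
                   isolation = Isolation.READ_COMMITTED,
                   timeout = 5,
                   rollbackFor = Exception.class)
    public void transfer(long fromId, long toId, BigDecimal amount) {
        accountDao.debit(fromId, amount);  // both writes commit or roll back together
        accountDao.credit(toId, amount);
    }
}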

Transaction does not take effect

  • Access-modifier problems (the method must be public; the dynamic proxy requires it)
  • The method is declared final (the dynamic proxy cannot override it)
  • Internal method calls, i.e. self-invocation: the dynamic proxy generates a proxy class, so a transactional method called directly from within the same class bypasses the proxy. One fix is to inject the bean into itself and call the method through the injected object; see the sketch after this list.
  • The bean is not managed by Spring
  • The call crosses threads (multi-threaded invocation)
  • The table's storage engine does not support transactions
  • Transactions were never enabled
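
A sketch of the self-invocation pitfall (class and methods are illustrative):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    public void placeOrder() {
        // Self-invocation: this call goes through "this", not through the Spring
        // proxy, so the @Transactional on saveOrder() is silently ignored.
        this.saveOrder();
    }

    @Transactional
    public void saveOrder() {
        // database writes intended to be transactional
    }
}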

transaction is not rolled back

  • Wrong propagation attribute
  • The exception was swallowed (caught and not rethrown)
  • A non-RuntimeException was thrown manually (by default Spring transactions only roll back on RuntimeException and Error; ordinary checked Exceptions do not trigger a rollback. For example, a checked exception such as FileNotFoundException will not roll the transaction back, so custom exceptions should still extend RuntimeException)
  • The thrown exception does not match the one configured for rollback

What design patterns are used in the Spring framework?

  • Factory mode: Spring uses the factory mode to create objects through BeanFactory and ApplicationContext
  • Singleton mode: Bean defaults to singleton mode
  • Proxy mode: Spring's AOP function uses JDK's dynamic proxy and CGLIB bytecode generation technology
  • Adapter mode: The enhancement or advice (Advice) of Spring AOP uses the adapter mode, and the adapter mode is also used in Spring MVC to adapt the Controller

SpringMVC

MVC: is a design pattern

M: the Model layer, mainly used for data encapsulation

V: the View layer, used to display data

C: the Controller layer, used for logic and control operations

Advantages: It is conducive to the division of labor in development, to the reuse of components, to decoupling, to develop in parallel in the system, and to improve the efficiency of development.

mvc execution process

  • The user sends a request to the front controller DispatcherServlet;
  • After the DispatcherServlet receives the request, it calls the HandlerMapping processor mapper to request the Handler;
  • The processor mapper finds the specific processor Handler according to the request url, generates the processor object and the processor interceptor (if any), and returns them to the DispatcherServlet;
  • DispatcherServlet calls the HandlerAdapter processor adapter to request the execution of the Handler;
  • HandlerAdapter is adapted to call specific processors to process business logic;
  • Handler execution is completed and returns to ModelAndView;
  • HandlerAdapter returns the Handler execution result ModelAndView to DispatcherServlet;
  • DispatcherServlet passes ModelAndView to ViewResolver for resolution;
  • ViewResolver returns a specific View after parsing;
  • DispatcherServlet renders the View (that is, fills the model data into the view)
  • DispatcherServlet responds to the user.

How to solve the problem of get and post garbled characters

  • Solve the garbled characters of the post request: we can configure a CharacterEncodingFilter filter in web.xml. Set to utf-8.
  • Solve the garbled characters of the get request: There are two methods. There are two solutions to the garbled characters in the Chinese parameters of the get request:
    • Modify the tomcat configuration file to add the code consistent with the project code.
    • The other is to re-encode the parameter: String userName = new String(request.getParameter("userName").getBytes("ISO8859-1"), "utf-8");

global exception handling

  • @ControllerAdvice identifies a class as a global exception handling class.
  • @ExceptionHandler identifies a method as a global exception handler that completes the exception handling logic, as sketched below.
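
A minimal sketch of a global exception handler (the response shape is illustrative):

import java.util.HashMap;
import java.util.Map;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;

@ControllerAdvice
public class GlobalExceptionHandler {

    // Catches any exception thrown out of a controller and turns it
    // into a uniform JSON body instead of an error page.
    @ExceptionHandler(Exception.class)
    @ResponseBody
    public Map<String, Object> handle(Exception e) {
        Map<String, Object> body = new HashMap<>();
        body.put("code", 500);
        body.put("message", e.getMessage());
        return body;
    }
}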

custom interceptor

​ The custom Interceptor class must implement Spring's HandlerInterceptor interface.

​ Three methods are defined in the HandlerInterceptor interface, and we use these three methods to intercept and process user requests.

  • preHandle: runs before the Controller method processes the request, executed forward in the order the interceptors are defined.

  • postHandle: runs after the Controller method has processed the request, executed in reverse interceptor order. It is only called if all preHandle methods returned true.

  • afterCompletion: runs after view rendering, in reverse interceptor order. It is called for interceptors whose preHandle returned true.

    After writing the interceptor, we also need an MVC configuration class: implement WebMvcConfigurer, override addInterceptors, register the custom interceptor, and set the intercepted and excluded paths, as sketched below.
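
A minimal sketch, assuming a servlet-stack Spring Boot 2.x setup (javax imports); the Authorization check and paths are hypothetical:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

class AuthInterceptor implements HandlerInterceptor {
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) {
        // Hypothetical check: only let requests with an Authorization header through.
        return request.getHeader("Authorization") != null;
    }
}

@Configuration
class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new AuthInterceptor())
                .addPathPatterns("/**")          // paths to intercept
                .excludePathPatterns("/login");  // paths to skip
    }
}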

MyBatis

What is Mybatis?

Mybatis is a semi-ORM (object-relational mapping) framework. It wraps JDBC internally, handling tedious steps such as loading the driver, creating connections, and creating statements, so developers only need to focus on writing the SQL itself; SQL execution performance can be strictly controlled, and flexibility is high.

working principle

  • Read the MyBatis configuration file: mybatis-config.xml is the global configuration file of MyBatis, which configures information such as the operating environment of MyBatis, such as database connection information.
  • Load the mapping file: The mapping file is the SQL mapping file, which configures the SQL statements for operating the database and needs to be loaded in the MyBatis configuration file mybatis-config.xml. The mybatis-config.xml file can load multiple mapping files, and each file corresponds to a table in the database.
  • Construct a session factory: Construct a session factory SqlSessionFactory through MyBatis environment and other configuration information.
  • Create a session object: the SqlSession object is created by the session factory. It contains all methods for executing SQL statements; it is an interface that can send SQL for execution and return results, and it can also obtain mappers.
  • Executor: at the bottom, MyBatis defines an Executor interface to operate the database. It dynamically generates the SQL statements to execute based on the parameters passed through SqlSession, and it is responsible for maintaining the query cache.
  • MappedStatement object: There is a parameter of MappedStatement type in the execution method of the Executor interface, which encapsulates the mapping information and is used to store the id, parameters and other information of the SQL statement to be mapped.
  • Input parameter mapping: the input parameter can be a collection type such as Map or List, a basic data type, or a POJO. The input parameter mapping process is similar to JDBC setting parameters on a PreparedStatement object.
  • Output result mapping: The output result type can be collection types such as Map and List, or basic data types and POJO types. The output result mapping process is similar to the analysis process of JDBC for the result set.

Mybatis programming steps

  • Create a SqlSessionFactory
  • Create SqlSession through SqlSessionFactory
  • Perform database operations through sqlsession
  • Call seesion.commit() to commit the transaction
  • Call session.close() to close the session
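
A minimal sketch of these steps, assuming a mybatis-config.xml on the classpath that registers the annotation-based mapper shown:

import java.io.InputStream;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

interface UserMapper {
    @Insert("insert into user(name) values(#{name})")
    int insertUser(String name);
}

public class MybatisDemo {
    public static void main(String[] args) throws Exception {
        try (InputStream config = Resources.getResourceAsStream("mybatis-config.xml")) {
            SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(config);
            try (SqlSession session = factory.openSession()) {
                UserMapper mapper = session.getMapper(UserMapper.class);
                mapper.insertUser("alice");
                session.commit();  // commit the transaction
            }                       // close() happens via try-with-resources
        }
    }
}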

The difference between #{} and ${}:

​ #{} is precompiled processing; ${} is string substitution.

​ When MyBatis processes #{}, it replaces #{} in the SQL with a ? placeholder and then uses the PreparedStatement set methods to assign the value;

​ When MyBatis processes ${}, it splices the variable's value directly into the SQL text.

​ Using #{} prevents SQL injection and improves system security.
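
A sketch using annotation-based mappers; User is a hypothetical POJO:

import java.util.List;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

interface UserQueryMapper {
    // #{name} becomes a ? placeholder in a PreparedStatement: safe against SQL injection.
    @Select("SELECT * FROM user WHERE name = #{name}")
    User findByName(@Param("name") String name);

    // ${column} is spliced into the SQL text as-is: only use it for trusted
    // identifiers (e.g. a column name), never for user input.
    @Select("SELECT * FROM user ORDER BY ${column}")
    List<User> sortBy(@Param("column") String column);
}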

mapper interface call specification

  • The Mapper interface method name is consistent with the id of each SQL defined in mapper.xml.
  • The input type of the Mapper interface method is the same as the parameterType type of each sql defined in mapper.xml.
  • The output type of the Mapper interface method is the same as the resultType type of each sql defined in mapper.xml.
  • The namespace in the Mapper.xml file is the fully qualified class name of the mapper interface.

Level 1 cache and Level 2 cache

  • Level 1 cache: the scope is Session, which is enabled by default
  • Level 2 cache: mapper-level. The first time a mapper's SQL queries user information, the result is stored in that mapper's level-2 cache region. When the same SQL in a mapper file of the same namespace queries user information again, the result is fetched from that level-2 cache.

Returns the primary key ID during the Insert operation

<insert id="methodName" parameterType="entity.class.path" keyProperty="uuid" useGeneratedKeys="true"></insert>

​ keyProperty indicates which property of the object to save the returned id to;

​ useGeneratedKeys indicates that the primary key id is in automatic growth mode;

The insert method always returns an int value, the number of rows inserted. With an auto-increment strategy, the automatically generated key value is set onto the passed-in parameter object after the insert method executes, as in the sketch below.
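
A usage sketch, assuming the mapper above and a hypothetical User POJO with a uuid property:

User user = new User();
user.setName("alice");
int rows = userMapper.insertUser(user);  // returns the affected row count (1)
Long id = user.getUuid();                // filled in by MyBatis because keyProperty="uuid"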

one to one

<association></association> 

one-to-many

<collection></collection>

Tags

​ Besides the common select | insert | update | delete tags, there are 9 dynamic-SQL tags: trim | where | set | foreach | if | choose | when | otherwise | bind. There is also the <sql> tag for defining reusable SQL fragments, the <include> tag for referencing them, and the <selectKey> tag for primary-key generation strategies on databases that do not support auto-increment.

springboot

​ Spring Boot is a sub-project of the Spring open-source organization and a one-stop solution for the Spring stack. It mainly reduces the difficulty of using Spring, removes heavyweight configuration, and provides all kinds of starters so developers can get productive quickly.

  • Rapid development, rapid integration, simplified configuration, embedded service container

core annotation

​ The annotation on the startup class is @SpringBootApplication, the core annotation of Spring Boot. It is mainly a combination of the following three annotations:

  • @SpringBootConfiguration: The @Configuration annotation is combined to realize the function of the configuration file.
  • @EnableAutoConfiguration: Turn on the automatic configuration function.
  • @ComponentScan: Spring component scan.

What is the automatic configuration principle of SpringBoot

​ The core annotation @SpringBootApplication marks the main configuration class on the Spring Boot startup class; through it, the @EnableAutoConfiguration annotation turns on auto-configuration when the main configuration class starts.
​ With @EnableAutoConfiguration enabled, Spring Boot will:

  • Load the candidate auto-configuration classes from the META-INF/spring.factories configuration file
  • Remove duplicates and drop the classes listed in the exclude and excludeName attributes
  • Filter and return the auto-configuration classes whose conditions (@Conditional) are satisfied

cross domain

​ Use @CrossOrigin directly,

​ or declare a @Configuration class

​ that implements WebMvcConfigurer and overrides the addCorsMappings method:

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**")           // every endpoint in the project supports CORS
                .allowedOrigins("*")         // any origin may access; specific origins can be configured instead
                .allowCredentials(true)      // whether requests may carry credentials
                .allowedMethods("*")         // "GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS"
                .allowedHeaders("*")
                .maxAge(3600);               // how long (seconds) the preflight result may be cached
    }
}

annotation

  • springboot
    • @SpringBootApplication (@SpringBootConfiguration, @EnableAutoConfiguration, @ComponentScan)
    • @SpringBootConfiguration: alternative to @Configuration
  • spring
    • @Component: refers to various components (@Controller, @Service, @Repository can all be called @Component)
    • @Configuration: Declare the current class as a configuration class
    • @Bean: Declare the bean
    • @Scope: Set the scope of the bean
    • @Import: Import additional configuration files
    • @EnableTransactionManagement: Enable annotation transaction support
    • @Transactional: open transaction
    • @Autowired: injected bean, type
    • @Resource: injected bean, name
    • @Primary: Declare the default bean
    • @PostConstruct: runs the annotated method for initialization after all the bean's properties have been injected
    • @Lazy: Make the bean lazy load, cancel the bean pre-initialization
    • @Value: with ${}, looks up a parameter in external configuration and assigns its value
  • aop
    • @Aspect: declare an aspect
    • @After: Executed after the method is executed (on the method)
    • @Before: Executed before the method is executed (on the method)
    • @Around: Executed before and after the method is executed (on the method)
    • @PointCut: Declare a point cut
  • mvc
    • @Controller
    • @ResponseBody: return json data
    • @RestController: This annotation is a combined annotation, which is equivalent to the combination of @Controller and @ResponseBody
    • @RequestMapping: Used to map web requests, including access paths and parameters.
    • @RequestBody: The request parameter is json data
    • @PathVariable: used to receive path parameters
    • @ControllerAdvice:
      • Global exception handling (commonly used)
      • global data binding
      • Global Data Preprocessing
    • @ExceptionHandler: Used to handle exception handling in the global controller

Using transactions in Spring Boot

​ First enable transactions with the @EnableTransactionManagement annotation, then add the @Transactional annotation to the Service method.

Multi-environment configuration

​ Profile configuration: spring.profiles.active

Multiple application configuration files can be kept in a project; depending on the scenario, one is activated via an application-{profile} configuration file.

common starter

  • spring-boot-starter core starter
  • spring-boot-starter-web
  • spring-boot-starter-test
  • spring-boot-starter-jdbc
  • spring-boot-starter-amqp
  • spring-boot-starter-data-redis
  • spring-boot-starter-data-elasticsearch
  • spring-boot-starter-data-mongodb
  • spring-boot-starter-freemarker
  • spring-boot-starter-mail
  • spring-boot-starter-aop

Linux

Command: effect

  • pwd: print the path of the current directory
  • ls: list the files in a directory
  • cd: change directory
  • mkdir: create a directory; -p creates parent directories as needed (cascading create)
  • rmdir: remove an (empty) directory
  • touch: create a file
  • rm: delete command; -f forces, -r deletes recursively
  • echo: output command; can print variables and string values
  • > and >>: output redirection into a file; > overwrites (the original content is replaced), >> appends
  • cat: print the entire contents of a file
  • more: view file contents page by page
  • tail: view a file from its end; -n <n> (a positive integer) shows the last n lines, -f follows the end of the file dynamically
  • cp [options] source target: copy command
  • mv: move command; moves files and can also rename them
  • free: show system memory
  • chmod: set permissions of a file or directory with three digits: the first is the owner's permissions, the second the group's, the third other users'

Git

(Figure omitted.)

Redis

​ Remote Dictionary Server: an open-source, high-performance, memory-based key-value database written in C. It offers a variety of key-value data types to meet the storage needs of different scenarios.

Application Scenario

  • Cache (data query, short connection, news content, commodity content, etc.) (most used)
  • Session separation in distributed cluster architecture
  • Task queue (second kill, snap up, 12306, etc.)
  • App Leaderboard
  • Website Access Statistics
  • Data expiration processing (accurate to milliseconds) timeliness information control, such as verification code control, voting control, etc.
  • distributed lock
  • Recommended by mutual friends

Data types

  • String
    • incr key: auto-increment. Database primary key self-increment, website access statistics
    • setex key seconds value: Set the key value with expiration time. verification code
    • Commonly used for caching: key naming (table name: primary key name: primary key value: field name), value (json string of object)
  • Hash
    • It is often used as a cache. Compared with String, it is more convenient to change object properties and saves the serialization step.
  • List
    • lpush, rpop: can realize simple message queue
    • lrange key start stop: Likes in the circle of friends, and the friends who like them are displayed in order (stored in an orderly manner)
  • Set
    • sinterstore: intersection. You may know/recommended by common friends
    • UV (the number of times the website is visited by different users) statistics, cookie statistics, set deduplication
    • IP (the total number of times the website was visited by different IP addresses) ditto
  • Sorted_set
    • leaderboard
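
A leaderboard sketch using the Jedis client (an assumed dependency; the key and members are illustrative):

import redis.clients.jedis.Jedis;

public class LeaderboardDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.zadd("game:rank", 3500, "alice");  // score, member
            jedis.zadd("game:rank", 4200, "bob");
            jedis.zadd("game:rank", 2800, "carol");
            // Top 10, highest score first.
            jedis.zrevrange("game:rank", 0, 9).forEach(System.out::println);
        }
    }
}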

Persistence

RDB

​ The RDB (Redis DataBase) persistence method can store snapshots of your data at specified time intervals. By default, Redis saves database snapshots in a binary file named dump.rdb.

How it works

​ When Redis needs to save the dump.rdb file, the server performs the following operations:

  • Redis calls fork(), so that it has both a parent and a child process.
  • The child process writes the data set to a temporary RDB file.
  • When the child has finished writing the new RDB file, Redis replaces the old RDB file with it and deletes the old file.

trigger mechanism

  • save: Synchronous operation, blocking all client requests.
  • bgsave: Perform an asynchronous operation to save a snapshot of all data in the form of an RDB file.
  • save configuration: You can set Redis through the configuration file, so that it will automatically save the data set when the condition that the data set has at least M changes within N seconds is met. Same as bgsave, both are asynchronous operations.

Advantages and disadvantages

​ Advantages: compact binary files that save space; data sets from different points in time can be kept; recovery is fast, suitable for disaster recovery.

​ Disadvantages: forking consumes performance, the timing cannot be controlled precisely, and data since the last snapshot can be lost.

AOF

​ AOF (Append Only File), after AOF is turned on, whenever Redis executes a command to change the data set (such as SET), this command will be appended to the end of the AOF file. In this way, when Redis restarts, the program can achieve the purpose of rebuilding the data set by re-executing the commands in the AOF file.

AOF strategy

  • always: fsync every time a new command is appended to the AOF file: very slow, very safe.
  • everysec: fsync once per second: fast enough (similar to RDB persistence), and at most 1 second of data is lost on failure. (The default, balancing speed and safety.)
  • no: never fsync explicitly: hand the data to the operating system, which decides when to flush. The fastest and least safe option.

AOF rewrite

​ Rebuild the AOF file without interrupting service to clients. When the bgrewriteaof command is executed, Redis generates a new AOF file containing the minimum set of commands needed to rebuild the current dataset. Since Redis 2.4, AOF rewriting can also be triggered automatically through configuration.

​ Automatic rewriting is controlled by the following parameters:
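A sketch of the two redis.conf directives that drive the automatic trigger (the values shown are the shipped defaults):

# rewrite when the AOF file has grown by 100% since the last rewrite
auto-aof-rewrite-percentage 100
# but never rewrite automatically while the file is smaller than 64mb
auto-aof-rewrite-min-size 64mb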

Advantages and disadvantages

​ Advantages: makes Redis more durable; with the default strategy at most 1 second of data is lost on failure; the file is an append-only log and highly readable.

​ Disadvantage: Depending on the fsync strategy used, AOF may be slower than RDB.

how to choose

​ If you care about your data very much, but can still tolerate data loss within a few minutes, then you can just use RDB persistence.

​ Many users only use AOF persistence, but this method is not recommended: because regular generation of RDB snapshots (snapshot) is very convenient for database backup, and the speed of RDB recovery data sets is also faster than AOF recovery.

​ Because loading data from an AOF file on restart is far slower than loading an RDB file, Redis 4.0 introduced a hybrid persistence mode. The configuration item is aof-use-rdb-preamble (yes to enable). When the AOF file is generated or rewritten, the RDB snapshot is written at the front and subsequent AOF commands are appended after it; on each startup the RDB part is loaded first, then the AOF part.

redis transaction

​ A Redis transaction ensures that a series of commands is executed as one batch: the commands are queued and then executed sequentially and exclusively, without being interleaved with or interrupted by other clients' commands.

  • Open transaction: multi

  • Execution transaction: exec

  • Cancel transaction: discard

    Notice:

  • If any command in the transaction has a syntax error, none of the commands in the transaction are executed, including the ones that are syntactically correct.

  • If a command fails at execution time (valid syntax, but the operation errors), the commands that can run correctly are still executed and only the failing ones are not. The data already changed by executed commands is not rolled back automatically; the programmer has to implement the rollback in code.

lock (monitor)

​ Multiple clients may operate on the same data at the same time, and once the data has been modified by someone else, the current operation should no longer proceed. So lock the data before operating on it, and abort the current operation as soon as a change is detected.

  • watch key1 [key2...]: put a watch on the keys. If any watched key changes before exec runs, the transaction is aborted.
  • unwatch: stop watching all keys.
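A minimal Jedis sketch of multi/exec with a watch lock (key names are illustrative; in Jedis 2.x/3.x, exec() returns null when a watched key changed before the transaction ran):

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class RedisTxDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.watch("stock:1001");          // watch the key before the transaction
            Transaction tx = jedis.multi();     // open the transaction
            tx.decr("stock:1001");              // commands are queued, not executed yet
            tx.lpush("orders", "order-1");
            List<Object> results = tx.exec();   // null if the watched key was modified
            if (results == null) {
                System.out.println("aborted: stock:1001 was changed by another client");
            }
        }
    }
}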

delete policy

​ A strategy for deleting expired data.

  • Timed deletion: create a timer for each key that has an expiration time; when the time arrives, the timer task deletes the key immediately.
  • Periodic deletion: every 100ms, randomly sample some keys that have an expiration time, check whether they have expired, and delete the expired ones.
  • Lazy deletion: an expired key that periodic deletion missed stays in memory; only when the key is accessed does Redis check its expiry and delete it.

Elimination strategy

​ When the maximum memory usage of Redis is reached, it is necessary to clean up the data stored in Redis, release the memory, and keep the memory usage of Redis below the capacity limit. (also called eviction algorithm)

  • Evict among data with an expiry set (the dataset server.db[i].expires)
    • volatile-lru: evict the least recently used keys
    • volatile-lfu: evict the least frequently used keys
    • volatile-ttl: evict the keys closest to expiring
    • volatile-random: evict random keys
  • Evict among the full dataset (all data in server.db[i].dict)
    • allkeys-lru: evict the least recently used keys
    • allkeys-lfu: evict the least frequently used keys
    • allkeys-random: evict random keys
  • No eviction
    • noeviction: refuse to evict anything (the default policy since Redis 4.0); further writes then fail with an OOM (Out Of Memory) error

high availability

master-slave mode

​ In the master-slave mode, Redis deploys multiple machines, with a master node responsible for read and write operations, and a slave node responsible for read operations only. The data of the slave node comes from the master node, and the realization principle is the master-slave replication mechanism.

​ Master-slave replication includes full replication and incremental replication. Generally, when a slave connects to the master for the first time (or believes it is the first connection), full replication is used; otherwise incremental replication is used.

full copy

  • The slave sends the sync command to the master.
  • After receiving the SYNC command, the master executes the bgsave command to generate a full RDB file.
  • While generating the RDB snapshot, the master records all new write commands in a buffer.
  • After bgsave finishes, the master sends the RDB snapshot file to the slaves.
  • After receiving the RDB snapshot file, the slave loads and parses it.
  • The master keeps recording write commands produced during the synchronization in the buffer.
  • After the snapshot has been sent, the master starts sending the buffered write commands to the slave.
  • The slave accepts these requests and executes the write commands from the master's buffer.

Since Redis 2.8, psync has been used instead of sync, because the sync command consumes a lot of system resources and psync is more efficient.

Incremental replication

After the slave is fully synchronized with the master, incremental replication is triggered whenever the data on the master is updated again.
​ When data changes on the master node, the replicationFeedSlaves() function is triggered: every command executed on the master is propagated to the slaves through replicationFeedSlaves(). Before running, the function checks whether the executed command actually updates data; only if it does, and the set of slave nodes is not empty, does the function run. Its job is to send the command the user executed to all slave nodes so they can execute it too.

sentinel mode

​ Sentinel mode is a Sentinel system composed of one or more Sentinel instances. It monitors all Redis master and slave nodes, and when a monitored master goes offline, it automatically promotes one of that master's slaves to be the new master. A single sentinel process watching the Redis nodes is itself a single point of failure, so multiple sentinels are usually used to monitor the Redis nodes, and the sentinels also monitor each other.

Simply put, the sentinel mode has three functions:

  • Send commands and wait for the Redis servers (both masters and slaves) to respond, thereby monitoring their running status;
  • When Sentinel detects that the master node is down, it automatically switches a slave to master and then notifies the other slaves via publish-subscribe to update their configuration files and follow the new master;
  • Sentinels also monitor each other to achieve high availability.

The working mode of Sentinel is as follows:

  • Each Sentinel sends a PING command once a second to the Masters, Slaves and other Sentinel instances it knows about.
  • If an instance takes longer than the configured maximum timeout since its last valid reply to PING, that Sentinel marks the instance as subjectively down.
  • If a Master is marked as subjectively down, every Sentinel monitoring it starts confirming, once per second, whether the Master really has entered that state.
  • When enough Sentinels (at least the quorum specified in the configuration file) confirm within the specified time range that the Master is indeed subjectively down, the Master is marked as objectively down.
  • Under normal circumstances, each Sentinel sends an INFO command to all Masters and Slaves it knows about once every 10 seconds.
  • When a Master is marked objectively down, the frequency with which Sentinel sends INFO to all of that Master's Slaves changes from once every 10 seconds to once per second.
  • If not enough Sentinels agree that the Master is down, its objectively-down status is removed; if the Master starts returning valid replies to the Sentinels' PING commands again, its subjectively-down status is removed.

When a Master stops working normally, the sentinels start an automatic failover:

  • One of the failed Master's Slaves is promoted to be the new Master, and the failed Master's other Slaves are reconfigured to replicate the new Master.
  • When a client tries to connect to the failed Master, the cluster returns the address of the new Master, so the cluster can transparently replace the failed Master with the current one.
  • After the Master and Slave switch roles, the Master's redis.conf, the Slave's redis.conf and sentinel.conf are all updated accordingly: the old Master's redis.conf gains a slaveof line, and the monitoring target in sentinel.conf is switched over.

Cluster mode

​ Sentinel mode is built on top of master-slave mode: it provides read/write separation and automatic failover, so system availability is higher. However, every node stores the same data, which wastes memory and makes online scaling difficult. Cluster mode solves this problem by implementing distributed storage in Redis: the data is sharded, i.e. each Redis node stores different content, which solves online expansion. It also provides replication and failover.
​ The distributed algorithm used for the sharding is the hash slot algorithm.

​ The slot algorithm divides the whole database into 16384 slots, and every key-value pair entering Redis is hashed on its key and assigned to one of them. The hash mapping is simple: compute a 16-bit value with the CRC16 algorithm, then take it modulo 16384. Every key belongs to exactly one of the 16384 slots, and the slots are distributed across the nodes of the cluster, each node handling a subset of them.

cache penetration

​ The requested data does not exist in the database at all (for example, abnormal URL probing), so neither the cache nor the database can answer it, yet every such request hits the DB. Large numbers of meaningless queries land on the database, increasing its load and, in severe cases, bringing it down.

​ Solution:

  • When a query returns nothing (null), cache the null result too, with a short TTL such as 30-60 seconds and at most 5 minutes. (A stop-gap.)
  • Maintain a whitelist/blacklist of valid ids (using a set, a bitmap, or a Bloom filter).
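A hedged sketch of the Bloom-filter option using Guava (class and key names are illustrative): the filter is preloaded with every legal id, so a request whose id cannot possibly be in the database is rejected before it touches the cache or the DB.

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class PenetrationGuard {
    // expect ~1,000,000 product ids with a 1% false-positive rate
    private final BloomFilter<String> filter =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    public void preload(Iterable<String> allProductIds) {
        for (String id : allProductIds) {
            filter.put(id);   // load every legal id at startup / on write
        }
    }

    public boolean mayExist(String productId) {
        // false: the id is definitely not in the database, reject without querying
        // true: probably exists (small false-positive rate), go on to cache/DB
        return filter.mightContain(productId);
    }
}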

cache breakdown

​ In a high-concurrency system, a large number of requests query the same key at the moment that key happens to expire, so they all fall through to the DB at once. The sudden spike in database queries may, in severe cases, bring the database down.

​ Solution:

  • Extend the expiration time of hot data (e.g. the main product of a promotion) in advance.
  • Monitor hot keys in real time and adjust their expiration times on the fly.
  • Run a scheduled task that refreshes the TTL of hot data before peak hours so it never expires during them.
  • Distributed locks. Essentially, the high-concurrency requests are threads querying the database for the same data at the same time. Use a mutex so that only the first request that misses the cache queries the database, while the other threads wait; once the first thread has queried the data and written it to the cache, the later threads find the cache populated and read it directly (a last resort).

cache avalanche

​ When a large-scale cache failure happens at some moment (many keys expire within a short period, or the cache server goes down), a flood of requests hits the DB directly; the DB cannot handle it and crashes.

​ Solution:

  • Request throttling / service degradation.
  • Deploy Redis as a master-slave cluster, with a persistence strategy set up for quick recovery.
  • Stagger expiration times by business design: give keys of the same type a fixed base TTL plus a random offset, and give super-hot data a key that never expires.
  • Choose an appropriate eviction policy (LRU/LFU).
  • Locking reads of a key (not recommended).

How MySQL and Redis ensure double write consistency

  • Delayed double delete
    • When a write comes in, delete the cache first (trusting the database more), update the database, sleep for a while (e.g. 1s), then delete the cache again. After the first deletion, a concurrent read may have read the old data and written the stale value back into the cache; the second, delayed deletion removes that dirty data. (The sleep time = the time a read takes to run its business logic + a few hundred milliseconds, so that the write's second deletion lands after any read that could have repopulated the cache.)
  • Cache-deletion retry mechanism
    • With delayed double delete, the second cache deletion may fail because of an exception, leaving the data inconsistent. This can be optimized: if a deletion fails, retry it a few times until the cache is successfully deleted. Hence a retry mechanism:
    • the write request updates the database;
    • the cache deletion fails for some reason;
    • the key that failed to be deleted is put on a message queue;
    • a consumer reads the message from the queue and obtains the key to delete;
    • the cache deletion is retried.
  • Read the binlog and delete the cache asynchronously
    • The retry mechanism works, but it intrudes heavily into business code. It can be optimized further: evict keys asynchronously based on the database's binlog.
    • Taking MySQL as an example: use Alibaba's canal to collect the binlog and send it to an MQ queue, then confirm and process each update message through the ACK mechanism and delete the corresponding cache entry, guaranteeing cache consistency.

Why Redis changed to multithreading after 6.0

Before Redis 6.0, when Redis processed client requests, including reading sockets, parsing, executing, and writing sockets, etc., it was processed by a serial main thread, which was the so-called "single thread".

​ After 6.0, Redis's use of multithreading does not mean it abandoned the single-threaded model: commands are still executed by a single thread, while multiple I/O threads handle socket reading/writing and protocol parsing. The reason is that Redis's performance bottleneck is network I/O rather than CPU, so multithreading the I/O improves read/write efficiency and hence overall performance.
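The corresponding redis.conf switches, as a sketch (both are off by default in 6.0+):

# use 4 I/O threads for writing responses to sockets
io-threads 4
# also parse/read requests on the I/O threads
io-threads-do-reads yes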

distributed lock

  • Locking: Use setnx to lock, when the command returns 1, it means that the lock has been successfully acquired
  • Unlock: After the thread that gets the lock finishes executing the task, use the del command to release the lock so that other threads can continue to execute the setnx command to acquire the lock

Existing problem: suppose the thread that acquired the lock crashes while executing its task and never reaches the del command. The lock is then never released, the threads competing for it can never proceed, and the result is a deadlock.

Solution: Set the lock timeout

  • Set the lock timeout time: the key of setnx must set a timeout time to ensure that even if it is not explicitly released, the lock will be automatically released after a certain period of time. You can use the expire command to set the lock timeout period

There is a problem: setnx and expire are not atomic operations. Suppose a thread executes setnx and successfully acquires the lock, but the server crashes before expire is executed. The lock then has no expiration time and becomes a deadlock, and no other thread can ever acquire it.

Solution: The set command of redis supports setting the expiration time of the key while acquiring the lock

  • Use the set command to acquire the lock and set its expiration time in one step. Command format: set <lock.key> <lock.value> nx ex <seconds>

There is a problem:

① Suppose thread A successfully obtains the lock, and the set timeout period is 30 seconds. If some reason causes thread A to execute very slowly and has not finished executing after 30 seconds, the lock will be automatically released when the lock expires, and thread B will get the lock.

② Subsequently, thread A finishes executing the task, and then executes the del instruction to release the lock. But at this time, thread B has not finished executing, and what thread A actually deletes is the lock added by thread B.

solution:

Before del releases the lock, check that the lock you are about to delete is the one you added: when locking, use the current thread's id as the value, and before deleting verify that the value stored under the key is your own id. This, however, introduces a new problem: the get, the comparison and the del are separate operations, not atomic. To make them atomic, use a Lua script (if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end).

// acquire the lock: NX = set only if the key is absent, EX = expire after 100 seconds
// (Jedis 2.x signature; returns "OK" on success and null otherwise)
if ("OK".equals(jedis.set(key, uni_request_id, "NX", "EX", 100))) {
    try {
        // do something: business logic
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // only release the lock if it was added by the current thread
        if (uni_request_id.equals(jedis.get(key))) {
            jedis.del(key); // release the lock
        }
    }
}

The method above is better once the Lua script is added, and in general this implementation is usable. There is still the problem that the lock may expire and be released before the business logic finishes (in practice this is rarely an issue if the business processing time is estimated properly).

  • Lock renewal (redisson): if the thread holding the lock has not finished its task within the lock's expiration time, the lock is released while still in use, and several threads may end up holding it at once. To prevent this, the lock can be "renewed". The Redisson library for Java has a "watchdog" mechanism that implements this for us.
    • After redisson acquires a lock, it maintains a watchdog thread that keeps extending the lifetime of the lock key whenever the lock is about to expire but has not yet been released.
    • A thread that acquires the lock successfully executes the Lua script and saves the data to Redis, with the watchdog (a daemon-like thread) monitoring and renewing the lock throughout.
    • A thread that fails to acquire the lock keeps retrying in a loop; after it succeeds, the same flow applies.
    • The watchdog has some cost for overall performance. Note that if an explicit expiration (lease) time is passed when locking with redisson, the watchdog mechanism is not used and the lock simply expires.
    • The watchdog checks at 1/3 of the configured timeout: if the thread has not finished its task, the lock's validity period is extended. The watchdog timeout defaults to 30 seconds and can be changed via the lockWatchdogTimeout parameter.

Key points of redisson distributed locks:

  1. When no expiration time is set on the key, Redisson maintains a watchdog after locking succeeds; the watchdog monitors the lock at a fixed interval and automatically renews it when it is about to expire but has not yet been released, so the lock cannot expire before it is unlocked.
  2. Locking and unlocking are made atomic through Lua scripts.
  3. By recording the id of the client that acquired the lock, each new lock attempt can check whether the current client already holds it, which implements a reentrant lock.
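A minimal Redisson sketch of the above (addresses and key names are illustrative):

import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockDemo {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("lock:order:1001");
        // no lease time given: the watchdog renews the lock every lockWatchdogTimeout/3
        lock.lock();
        try {
            // business logic protected by the lock
        } finally {
            lock.unlock(); // only the owning thread can unlock (reentrant, client id checked)
        }

        // passing an explicit lease time (30s here) bypasses the watchdog: the lock just expires
        if (lock.tryLock(3, 30, TimeUnit.SECONDS)) {
            try { /* ... */ } finally { lock.unlock(); }
        }
        redisson.shutdown();
    }
}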

RabbitMQ

​ MQ stands for Message Queue, a container that stores messages while they are in transit. It is the classic producer/consumer model. Typical business scenarios: decoupling, asynchrony, and traffic peak clipping.

​ RabbitMQ features: based on the AMQP protocol, with the JMS standard also supported via a plugin; high concurrency, high performance, high availability and strong community support; clients for many languages.

​ In business modules, RabbitMQ can be used for decoupling, asynchronous communication, rate limiting under high concurrency, timeout handling, delayed data processing, and so on.

core components


  • Broker: the message broker (middleware); simply put, the RabbitMQ server.
  • vhost: virtual host. Used in multi-tenant scenarios for scope control. When creating a connection, you specify the virtual host and the corresponding user name and password.
  • Connection: a connection. Producers and consumers alike must establish a Connection with the Broker; it is a TCP connection. A producer or consumer holds only one Connection to the Broker, i.e. one TCP connection, and Connections are usually long-lived.
  • Channel: a channel. One TCP connection carries multiple channels, which share the TCP connection and reduce the overhead of creating and destroying TCP connections.
  • Exchange: the exchange that routes messages to queues.
    • A fanout exchange works like a broadcast, delivering messages to every bound queue without selection.
    • In direct mode, the exchange and the queue are bound with a key (the Binding key). The exchange delivers a message to the queue only when the message's Routing key equals the Binding key.
    • Topic mode routes to queues by pattern matching (wildcard matching).
    • Headers exchanges are uncommon; they map messages to queues through message headers.
    • The default Exchange has the empty string as its name. A message sent without naming an exchange goes to the default exchange, which is implicitly bound to every queue with a binding key equal to the queue name, so no explicit Binding is performed.
  • Queue: the message queue, where messages are stored. It has the following attributes:
    • Name
    • Durable: whether the queue survives a broker restart
    • Exclusive: used by only one connection, and deleted when that connection closes
    • Auto-delete: deleted when the last consumer unsubscribes
    • Arguments: other attribute parameters of the queue
  • Routing key: the routing rule, an attribute of the message header. When a producer sends a message to an exchange, the message header carries a key; this routing key determines how the message is routed.
  • Binding: best understood as a verb; it binds an exchange and a queue according to a routing rule.
  • Binding key: when binding an Exchange and a Queue, a binding key is usually specified. When the producer's routing key matches the binding key, the message is routed to the corresponding Queue.

What are the modes of RabbitMQ?

​ Simple queue mode, work queue mode, subscription mode (fanout), routing mode (direct), topic mode (topic), RPC mode, release confirmation mode.

Simple application (take routing mode as an example)

  • Publisher
public static final String DIRECT_EXCHANGE_NAME = "direct.exchange";
public static final String DIRECT_ROUTING_KEY_NAME = "direct.routing.key";

@ApiOperation("Send a message in routing (direct) mode")
@GetMapping("/sendDirectMessage/{msg}")
public String sendDirectMessage(@PathVariable String msg) {
    // send the message to the exchange
    // parameters: exchange, routing key, message body
    rabbitTemplate.convertAndSend(DirectConfig.DIRECT_EXCHANGE_NAME, DirectConfig.DIRECT_ROUTING_KEY_NAME, msg);
    return "success";
}
  • Subscriber
/**
 * Steps:
 *      1. configure the exchange
 *      2. configure the queue
 *      3. bind the queue to the exchange
 */
@Configuration
public class DirectConfig {

    public static final String DIRECT_EXCHANGE_NAME = "direct.exchange";
    public static final String DIRECT_QUEUE_NAME = "direct.queue";
    public static final String DIRECT_ROUTING_KEY_NAME = "direct.routing.key";


    // configure the exchange
    @Bean
    DirectExchange directExchange() {
        // parameters: exchange name, durable, auto-delete, arguments
        //return new DirectExchange(DIRECT_EXCHANGE_NAME, true, false, null);
        return new DirectExchange(DIRECT_EXCHANGE_NAME);
    }

    // configure the queue
    @Bean
    Queue directQueue() {
        // parameters: queue name, durable, exclusive, auto-delete, arguments
        //return new Queue(DIRECT_QUEUE_NAME, true, false, false, null);
        return new Queue(DIRECT_QUEUE_NAME);
    }

    // bind the queue to the exchange
    @Bean
    Binding directBinding() {
        // parameters: queue name, destination type (default), exchange name, routing key name, arguments
        //return new Binding(DIRECT_QUEUE_NAME, Binding.DestinationType.QUEUE, DIRECT_EXCHANGE_NAME, DIRECT_ROUTING_KEY_NAME, null);
        return BindingBuilder.bind(directQueue()).to(directExchange()).with(DIRECT_ROUTING_KEY_NAME);
    }

}
@Component
// can also be applied to a method
@RabbitListener(queues = DirectConfig.DIRECT_QUEUE_NAME)
public class DirectReceive {

    private static final Logger log = LoggerFactory.getLogger(DirectReceive.class);

    @RabbitHandler
    public void receive(String msg) {
        log.info("received message: {}", msg);
    }

}

reliable message delivery

​ Configuration file:

spring:
  rabbitmq:
    # enable the publisher return callback
    publisher-returns: true
    # enable the exchange-arrival confirm callback (correlated)
    publisher-confirm-type: correlated

​ Enable the MQ callbacks ConfirmCallback and ReturnCallback.

​ ConfirmCallback fires whether or not the message reached the exchange: ack is true if it arrived, false if it did not.

​ ReturnCallback does not fire when the exchange-to-queue delivery succeeds, only when it fails. (This callback mostly shows up during development.)

  • Before sending, the message (including the exchange and routing key) is persisted to the database with status -1 and retry count 0.
  • A custom RabbitTemplate is used to configure ConfirmCallback and ReturnCallback.
  • If ack is true in ConfirmCallback, the status is changed to 1, meaning the message was sent successfully.
  • ReturnCallback catches messages that reached the exchange but not the queue. This usually happens in development when the routing key is misconfigured; such messages are added to the database with status 4 so that developers/operations staff can inspect them.
  • A scheduled task queries the database for messages with status -1 and resends them, incrementing the retry count each time; after more than 5 attempts the message is no longer retried (operations staff handle it).
  • The above flow, plus manual operations as a backstop, guarantees reliable message delivery.
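A sketch of the custom RabbitTemplate mentioned above (Spring AMQP 2.3+ signatures; the status-update comments stand in for your own persistence code):

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitCallbackConfig {

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setMandatory(true); // return unroutable messages instead of silently dropping them

        // fires whether or not the message reached the exchange
        template.setConfirmCallback((correlationData, ack, cause) -> {
            if (ack) {
                // reached the exchange: mark the message row as status 1
            } else {
                // not acked: leave status -1 so the scheduled task resends it
            }
        });

        // fires only when the exchange could NOT route the message to any queue
        template.setReturnsCallback(returned ->
                // record status 4, keeping returned.getReplyText() for troubleshooting
                System.err.println("returned: " + returned.getReplyText()));

        return template;
    }
}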

Consumer message retry

​ By default, if an exception occurs while consuming a message, the message is re-queued and consumed again. If the exception cannot be resolved in time, the message is redelivered endlessly and eats server resources. To control this, enable the consumer-side message-retry mechanism.

spring:
  rabbitmq:
    listener:
      simple:
        retry:
          # enable consumer-side retry control
          enabled: true
          # maximum delivery attempts
          max-attempts: 3
          # interval between attempts (ms)
          initial-interval: 3000

Manual consumer acknowledgement

​ Depending on the business scenario, consumer-side messages can also be acknowledged manually in code to handle failures.

spring:
  rabbitmq:
    listener:
      simple:
        # enable manual ack
        acknowledge-mode: manual
@Component
public class DirectReliableReceive {

    private static final Logger log = LoggerFactory.getLogger(DirectReliableReceive.class);

    @RabbitListener(queues = DirectReliableConfig.DIRECT_RELIABLE_QUEUE_NAME)
    public void receive(Message message, Channel channel) {
        try {
            log.info("received message: {}", new String(message.getBody()));
            int i = 1 / 0; // throw deliberately to demonstrate the failure path
            // positive ack; parameters: delivery tag, whether to ack multiple messages
            channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
        } catch (Exception e) {
            e.printStackTrace();
            try {
                // negative ack; parameters: delivery tag, multiple, requeue
                channel.basicNack(message.getMessageProperties().getDeliveryTag(), false, false);
                // reject; parameters: delivery tag, requeue
                //channel.basicReject(message.getMessageProperties().getDeliveryTag(), true);
            } catch (IOException ioException) {
                ioException.printStackTrace();
            }
        }
    }

}

dead letter queue

​ Dead letter queue: DLX (dead-letter-exchange). With a DLX, when a message becomes a dead letter in a queue, it can be re-published to another Exchange, and that Exchange is the DLX.

​ Dead letter source:

  • The message was rejected (basic.reject or basic.nack) with requeue=false
  • The message's TTL expired
  • The queue reached its maximum length (the queue is full; no more data can be added to mq)

​ Dead letter processing

  • Discard, if it is not very important, you can choose to discard
  • Record the dead letter into the warehouse, and then do subsequent business analysis or processing
  • Through the dead letter queue, it is processed by the application responsible for listening to the dead letter

​ Because a message becomes a dead letter when its TTL expires, dead letters can implement delayed tasks (e.g. cancelling unpaid orders): send the message with a TTL and do not consume it normally; when it turns into a dead letter, query the order's status in the database and update it according to the business rules.

​ In RabbitMQ you can also install the delayed-message plugin and use delay queues directly.

Message idempotent processing

​ With reliable delivery, a message that was actually sent successfully but whose confirmation never arrived can be resent: when the scheduled task sees its status still at -1, it sends it again, so a normal message is delivered twice. This raises the idempotence problem.

​ To solve the idempotence problem, a unique message identifier is stored in the Message object when the message is sent.

​ When consuming, first extract the identifier and check whether it already exists in redis: if not, store it in redis and proceed with the business logic; if it does, the message has already been consumed and the consumer returns directly.
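A minimal sketch of that check (Jedis 3.x API; the key prefix and TTL are illustrative):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class IdempotentConsumer {
    // returns true only for the first delivery of this message id
    public boolean tryConsume(Jedis jedis, String msgId) {
        // NX: only set if absent; EX: keep the "seen" marker for one day
        String ok = jedis.set("mq:consumed:" + msgId, "1", SetParams.setParams().nx().ex(86400));
        return "OK".equals(ok); // false: already consumed, ack and return directly
    }
}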

Consumer-side current limiting (peak clipping)

spring:
  rabbitmq:
    listener:
      simple:
        # fair dispatch: each consumer prefetches at most one message at a time
        prefetch: 1
        # enable manual ack
        acknowledge-mode: manual
        # at most one concurrent consumer
        max-concurrency: 1

Nginx

​ Nginx is a lightweight, high-performance web server and reverse proxy. It implements very efficient reverse proxying and load balancing: it can handle 20,000-30,000 concurrent connections, and officially up to 50,000.

How does Nginx handle requests?

​ After receiving a request, nginx first matches a server block using the listen and server_name directives, and then matches a location inside that server block; the location is what maps to the actual address.

server {
                                            # a server block: one independent virtual-host site
    listen       80;                        # the port served, default 80
    server_name  localhost;                 # the domain/host name served
    location / {
                                            # the first location block
        root   html;                        # site root, relative to the Nginx installation directory
        index  index.html index.htm;        # default index files, separated by spaces
    }
}

What are forward proxy and reverse proxy?

Forward proxy: a server between the client and the origin server. To fetch content from the origin server, the client sends a request to the proxy naming the target (the origin server); the proxy forwards the request to the origin server and returns the content to the client.

​ The forward proxy in one sentence: the proxy stands in for the client. Users are aware of it.

​ Reverse proxy: a proxy server that accepts connection requests from the network, forwards them to servers on the internal network, and returns the servers' results to the clients requesting the connection. Externally, the proxy server behaves as if it were the server itself.

​ The reverse proxy in one sentence: the proxy stands in for the server. Users are unaware of it.

Nginx application scenarios?

  • HTTP server. Nginx can provide HTTP services on its own and works well as a static web server.
  • Virtual hosts. Multiple websites can be hosted virtually on one server, e.g. virtual hosts used for personal websites.
  • Reverse proxy and load balancing. When traffic grows beyond what a single server can handle and the site runs on a cluster of servers, nginx can act as a reverse proxy in front of the cluster, spreading the load evenly so that no server falls over from high load while another sits idle.
  • Security management can also be configured in nginx, e.g. using Nginx as an API gateway that intercepts each interface service.

How to configure Nginx virtual host

# when a client visits www.xxxx.com on port 80, proxy straight to the real server at 127.0.0.1:8080
server {
    listen       80;
    server_name  www.xxxx.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        index  index.html index.htm;
    }
}

Nginx load balancing algorithm

  • round robin
  • weighted round robin
  • ip_hash
  • fair (plugin): dispatches according to response time.
  • url_hash (third-party plugin)
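A sketch of how these are configured with an upstream block (addresses are illustrative):

# weighted round robin: 8080 receives twice as many requests as 8081
upstream backend {
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:8081 weight=1;
    # ip_hash;        # uncomment: pin each client IP to one server
    # fair;           # third-party module: dispatch by response time
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}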

Four functions of nginx

  • Forward proxy: configure a proxy server on the client (browser) and access the Internet through it.
  • Reverse proxy: the client only sends requests to the reverse proxy; the proxy picks a target server, fetches the data and returns it to the client. Externally, the reverse proxy and the target servers appear as one server; only the proxy address is exposed and the real server IPs are hidden.
  • Load balancing: when a single server cannot cope, add servers and distribute the requests across them instead of concentrating them on one machine. Spreading the load over multiple servers is what we call load balancing.
  • Static/dynamic separation: to speed up the site, serve dynamic pages and static pages from different servers, which accelerates processing and reduces the pressure on the original single server.

SpringCloud

​ Spring Cloud is a service-governance toolkit built on Spring Boot, used to manage and coordinate services in a microservice architecture. It is an ordered collection of frameworks that uses Spring Boot's development convenience to simplify distributed-system infrastructure: service registration and discovery, the configuration center, load balancing, circuit breakers, data monitoring and so on can all be launched and deployed in Spring Boot's one-click style. By repackaging everything in the Spring Boot style, it hides the complex configuration and implementation principles, leaving developers a distributed-system toolkit that is easy to understand, deploy and maintain. With Spring Cloud, implementing a microservice architecture becomes easier.

Understanding of Microservices

​ Microservices are similar to, and a refinement of, the SOA architecture. A key point the microservice architecture emphasizes is that the business must be fully componentized and service-oriented: the original monolithic business system is split into multiple small applications that are developed, designed and run independently, and that interact and integrate through services. Each service achieves: single responsibility, service orientation, exposing a REST API, and service autonomy.

Eureka

​ eureka is a service registry (optionally a cluster) that exposes its address to the outside. After a service provider starts, it registers its own information (ip, address, service name, etc.) with eureka. A consumer pulls a copy of the service list maintained by the registry from eureka and uses the information on the list to call services. Providers periodically send heartbeats to eureka to renew their leases.

Annotations:
    // marks this service as a Eureka server
    @EnableEurekaServer
    // marks this service as a Eureka client
    @EnableEurekaClient
    // marks this service as a discovery client, replacing the annotation above
    @EnableDiscoveryClient

Self-protection mechanism: By default, if Eureka Server does not receive the heartbeat of a microservice instance within a certain period of time (90 seconds by default), Eureka Server will remove the instance. However, when a network partition failure occurs, the microservice and Eureka Server cannot communicate normally, and the microservice itself is running normally. At this time, the microservice should not be removed, so a self-protection mechanism is introduced.

​ The self-protection mode is just a security protection measure against abnormal network fluctuations. Using the self-protection mode can make the Eureka cluster run more robustly and stably.

​ The working mechanism of self-protection: if Eureka Server receives fewer heartbeat renewals per minute than the expected threshold (85% of the expected renewals by default), it enters self-protection mode and stops evicting instances, and it exits the mode once the renewal rate recovers.

Ribbon

​ Ribbon is a load balancing component of SpringCloud.

​ We configure the load-balancing algorithm by service name and can then send requests straight to the corresponding instance. Under the hood, the LoadBalancerInterceptor class intercepts the request, obtains the service list from Eureka by service id, then uses the load-balancing algorithm to pick a real instance address and substitutes it for the service name.

load balancing strategy

​ IRule: the parent interface of all load-balancing rules. Its core method is choose, which selects a service instance.

  • com.netflix.loadbalancer.RoundRobinRule (round robin): iterate over the started services in turn
  • com.netflix.loadbalancer.RandomRule (random): randomly choose a server from the list
  • com.netflix.loadbalancer.RetryRule (retry): first pick a service following RoundRobinRule; if that fails, retry within the specified time to find an available service
  • BestAvailableRule (best available): filter out faulty servers first, then choose the one with the fewest concurrent requests (for nacos this is NacosRule, com.alibaba.cloud.nacos.ribbon.NacosRule)
  • WeightedResponseTimeRule (weighted response time): an extension of RoundRobinRule; the faster an instance responds, the greater its weight and the more likely it is to be chosen
  • AvailabilityFilteringRule (availability filtering): filter out instances that are faulty or whose concurrency exceeds the threshold, then choose an instance with lower concurrency
  • ZoneAvoidanceRule (zone avoidance): the default rule; picks servers by combining the performance of the server's zone with the server's availability

Configuring the load-balancing strategy in the configuration file

# use the service provider's instance name here
# per-service configuration is keyed by service name
nacos-producer:
  ribbon:
    # the load-balancing rule Ribbon uses (nacos's weight-based rule)
    NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule # configure the rule via the configuration file
    # maximum retries on the same server (excluding the first attempt)
    MaxAutoRetries: 1
    # maximum number of other servers to retry (excluding the first server)
    MaxAutoRetriesNextServer: 1
    # whether all operations may be retried for this client
    OkToRetryOnAllOperations: true
    # interval for refreshing the server list from the source
    ServerListRefreshInterval: 2000
    # connect timeout used by Apache HttpClient
    ConnectTimeout: 3000
    # read timeout used by Apache HttpClient
    ReadTimeout: 3000
# another service name
nacos-producer1:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule # configure the rule via the configuration file

Eager-load configuration

# eager-load configuration; the default is lazy loading, so we enable eager loading here.
# with many services, lazy loading can cause a brief pile-up on the first request,
# which may lead to production incidents.
ribbon:
  eager-load:
    enabled: true
    clients: nacos-producer # the service names to load eagerly

OpenFeign

​ Spring Cloud OpenFeign enhances Feign; it is a declarative, templated HTTP client used for remote service calls.

​ OpenFeign hides the REST plumbing and makes remote calls look like SpringMVC controllers: you no longer concatenate URLs or parameters yourself; OpenFeign does all of that.

  • Application class
//enable the feign client
@EnableFeignClients
  • feign interface
    • It is an interface: Feign generates the implementation class for us through dynamic proxying, much like mybatis's mapper
    • @FeignClient declares a Feign client and names the target service through the value attribute
    • The interface methods are defined entirely with SpringMVC annotations; Feign generates the URL from them and fetches the result
//the interface proxies the nacos-producer service; path is the prefix of the target service's endpoints.
@FeignClient(value = "nacos-producer", path = "/product")
public interface ProductFeignService {

}

​ OpenFeign already integrates the Ribbon dependency and auto-configuration, so there is no need to add an extra dependency or inject a RestTemplate.

Hystrix

​ Hystrix is a fault-tolerance framework open-sourced by Netflix and a weapon against avalanches, providing service degradation, circuit breaking, dependency isolation and monitoring (Hystrix Dashboard).

  • Circuit breaking: trips automatically when the failure rate reaches a threshold (e.g. failures caused by network faults or timeouts). Tripping means the service is skipped outright and fallback data is returned.
  • Degradation: triggered by timeouts, insufficient resources (threads or semaphores), runtime exceptions and so on; a degraded endpoint can return fallback data.

Standalone usage

  • Application class
// enable circuit-breaker support
@EnableCircuitBreaker

Fallback (degradation)

@RestController
@RequestMapping("h1")
@DefaultProperties(defaultFallback = "classFallBack")
public class HystrixController01 {

    @RequestMapping("/method01/{id}")
    // applied to a method: enables degradation/circuit breaking for it
    @HystrixCommand(fallbackMethod = "methodFallBack")
    //@HystrixCommand
    public String method(@PathVariable String id) {
        try {
            TimeUnit.SECONDS.sleep(5); // triggers a "sleep interrupted" exception on timeout
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "hello world!";
    }

    public String methodFallBack(String id) {
        return "System error, please contact customer service.";
    }

    /**
     * com.netflix.hystrix.contrib.javanica.exception.FallbackDefinitionException:
     * fallback method wasn't found: classFallBack([])
     * Note: the class-level fallback method must take no parameters,
     * otherwise the error above is thrown.
     * @return
     */
    public String classFallBack(/*String id*/) {
        return "System error, please contact customer service.";
    }

}

Thread isolation

​ The service-avalanche effect is related to requests piling up in one shared thread pool: all requests are handled by the same pool, so under high concurrency, if all requests hit one interface, other services may be left with no threads to accept requests. That is the service-avalanche effect.

@RestController
@RequestMapping("h2")
public class HystrixController02 {

    @GetMapping("/method01/{id}")
    @HystrixCommand(fallbackMethod = "methodFallBack",
            // compare the thread and semaphore isolation strategies
            // execution.isolation.strategy defaults to thread
            commandProperties = {
                    //@HystrixProperty(name = HystrixPropertiesManager.EXECUTION_ISOLATION_STRATEGY, value = "THREAD")}
            @HystrixProperty(name = HystrixPropertiesManager.EXECUTION_ISOLATION_STRATEGY, value = "SEMAPHORE")}
    )
    public String method01(@PathVariable String id) {
        //xxxx
    }

    @GetMapping("/method02/{id}")
    @HystrixCommand(fallbackMethod = "methodFallBack",
            commandProperties = {
                    // do not interrupt the thread on timeout; default is true
                    @HystrixProperty(name = HystrixPropertiesManager.EXECUTION_ISOLATION_THREAD_INTERRUPT_ON_TIMEOUT, value = "false")}
    )
    public String method02(@PathVariable String id) {
        //xxxx
    }

    @GetMapping("/method03/{id}")
    @HystrixCommand(fallbackMethod = "methodFallBack",
            commandProperties = {
                    // semaphore isolation with at most 1 concurrent request (this idea can implement rate limiting)
                    @HystrixProperty(name = HystrixPropertiesManager.EXECUTION_ISOLATION_STRATEGY, value = "SEMAPHORE"),
                    @HystrixProperty(name = HystrixPropertiesManager.EXECUTION_ISOLATION_SEMAPHORE_MAX_CONCURRENT_REQUESTS, value = "1")}
    )
    public String method03(@PathVariable String id) {
        //xxxx
    }

    @GetMapping("/method04/{id}")
    @HystrixCommand(fallbackMethod = "methodFallBack",
            commandProperties = {
                    // enable/disable the timeout; default is true
                    @HystrixProperty(name = HystrixPropertiesManager.EXECUTION_TIMEOUT_ENABLED, value = "false")}
    )
    public String method04(@PathVariable String id) {
        //xxxx
    }

    @GetMapping("/method05/{id}")
    @HystrixCommand(fallbackMethod = "methodFallBack",
            commandProperties = {
                    // timeout in milliseconds; default: 1000
                    @HystrixProperty(name = HystrixPropertiesManager.EXECUTION_ISOLATION_THREAD_TIMEOUT_IN_MILLISECONDS, value = "6000")}
    )
    public String method05(@PathVariable("id") String id) {
        //xxxx
    }

    public String methodFallBack(String id) {
        return "System error, please contact customer service.";
    }

}

Service circuit breaking

​ Under high concurrency, a service's threads can block, gradually leaving all threads blocked and the server in an avalanche. In that case, trip the breaker for the whole service instead of waiting for each request to time out.

  • Open: when failures reach a threshold within a period and repeated probes show no sign of recovery, the breaker opens fully and subsequent requests no longer reach the service
  • Half-open: when there are signs of recovery within a short time, the breaker lets part of the traffic through to the service; if calls succeed normally, the breaker closes
  • Closed: the service stays in its normal state and can be called normally
/**
 * circuitBreaker.enabled: true opens circuit breaking; enabled by default
 * circuitBreaker.requestVolumeThreshold: trip after this many failures within the configured time window; default 20
 * circuitBreaker.sleepWindowInMilliseconds: how long after tripping to start probing for recovery; default 5s
 * circuitBreaker.errorThresholdPercentage: error-percentage threshold that trips the breaker; default 50%
 */
@GetMapping("/method06/{id}")
@HystrixCommand(fallbackMethod = "methodFallBack",
        commandProperties = {
                @HystrixProperty(name = "circuitBreaker.enabled", value = "true"),
                @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "10"),
                @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "10000"),
                @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "60")}
)
public String method06(@PathVariable String id) {
    //xxxx
}

Integrating with Feign

  • Application class
@EnableFeignClients
//@SpringBootApplication
//@EnableCircuitBreaker // enable the circuit breaker
//@EnableDiscoveryClient
@SpringCloudApplication  // a composite of the three annotations above
  • Configuration
feign:
  hystrix:
    # whether feign enables circuit breaking
    enabled: true
  • A plain Fallback class also works, but compared with it, a FallbackFactory can obtain the exception that triggered the breaker. FallbackFactory is recommended.

  • FallbackFactory

    • feign interface
    /**
     * fallbackFactory names a fallback factory; unlike a plain fallback class,
     * the factory can obtain the exception that tripped the breaker.
     * ProductFeignFallBack must implement the FallbackFactory interface.
     * value names the target service.
     */
    @FeignClient(value = "rest-template-user", fallbackFactory = ProductFeignFallBack.class)
    @Component
    public interface Product02Feign {

        @GetMapping("/product/getString")
        public String getString();

    }

    • fallback factory implementation
    @Component
    @Slf4j
    /**
     * Must implement FallbackFactory, and its type parameter must be the feign interface it backs.
     */
    public class ProductFeignFallBack implements FallbackFactory<Product02Feign> {

        @Override
        public Product02Feign create(Throwable throwable) {
            return new Product02Feign() {

                @Override
                public String getString() {
                    log.error("fallback reason: {}", throwable.getMessage());
                    return "I made a mistake, I know.";
                }
            };
        }
    }

Gateway

​ Spring Cloud Gateway is an API gateway built on the Spring ecosystem, designed to provide a simple and effective way to route to APIs. An API gateway is the unified entry point of a system: it encapsulates the internal structure of the application and provides a unified service to clients, and cross-cutting logic unrelated to the business itself, such as authentication, authorization, monitoring and routing/forwarding, can be implemented there.

​ The gateway is the portal of all services, separating client requests from server applications. After passing through the gateway, a client request is forwarded according to the defined routes and predicates: a route is the address the request should be forwarded to, and the predicates are the conditions the request must satisfy; only a request matching both the route and the predicates is forwarded. Filters, finally, enhance the request.

Core concepts

  • Route: a route consists of an id, a destination url, a group of predicates and a group of filters. If the route's predicates are true, the request url matches the configured route.
  • Predicate: predicate functions let developers match any information in the HTTP request, such as headers and parameters.
  • Filter: filters modify the request and the response. There are two kinds: Gateway Filter and Global Filter.

Workflow

  • The Gateway client sends a request to the Gateway server
  • The request is first picked up by HttpWebHandlerAdapter and assembled into a gateway context
  • The gateway context is passed to DispatcherHandler, which dispatches the request to RoutePredicateHandlerMapping
  • RoutePredicateHandlerMapping looks up the route and uses the route's predicates to decide whether it is usable
  • If the predicates pass, FilteringWebHandler creates the filter chain and invokes it
  • The request passes through the PreFilter chain, the microservice, then the PostFilter chain, and finally the response is returned

Application

spring:
  cloud:
    gateway:
      # default global filters, applied to all routes
      default-filters:
        # response-header filter: set the response header
        # X-Response-Default-MyName to the value lz
        # for multiple parameters, write another line with different parameters
        - AddResponseHeader=X-Response-Default-MyName, lz
      # routes
      routes:
        - id: feign-consummer-route
          uri: lb://feign-consummer
          # predicates
          predicates:
            - Path=/**
          # filters
          filters:
            - PrefixPath=/feign

Custom global filter

@Component
public class AuthMyFilter implements GlobalFilter, Ordered {

    /**
     * @description  the filter method
     * @param exchange the request/response exchange between client and server
     * @param chain the filter chain
     * @return the next filter, or the failure response when the check does not pass
     */
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // run the auth check
        // get the request parameter
        String token = exchange.getRequest().getHeaders().getFirst("token");

        if (StringUtils.isEmpty(token)) {
            ServerHttpResponse response = exchange.getResponse();
            response.setStatusCode(HttpStatus.INTERNAL_SERVER_ERROR);
            String msg = "token is null!";

            DataBuffer wrap = response.bufferFactory().wrap(msg.getBytes());
            return response.writeWith(Mono.just(wrap));
        }

        // call the next filter (filters are function callbacks)
        return chain.filter(exchange);
    }

    /**
     * @description filter priority: the smaller the number, the higher the priority; may be negative
     */
    @Override
    public int getOrder() {
        return 0;
    }
}

CORS configuration

spring:
  cloud:
    gateway:
      globalcors:
        corsConfigurations:
          # request paths allowed to be cross-origin
          '[/**]':
            # allowed origins
            allowedOrigins: "*"
            # allowed methods
            allowedMethods:
              - GET
              - POST
            # whether cookies may be carried
            allowCredentials: true
            # request headers allowed
            allowedHeaders:
              - Content-Type
            # response headers that may be exposed
            exposedHeaders:
              - Content-Type

Config

​ The configuration center.

​ In a distributed system with very many services, configuration files are scattered across the microservice projects. Config supports keeping configuration files in a remote Git repository (GitHub, Gitee) for centralized management.

​ Bus is a message bus that can monitor microservices and also let applications talk to each other. Spring Cloud Bus supports RabbitMQ and Kafka as message brokers, and it is used to watch configuration files and push change notifications.

Nacos

​ Nacos is dedicated to service discovery, configuration and management for microservices.

​ Compared with eureka, nacos is not only a service registry: it also integrates configuration management, so it can be used directly as a configuration center as well.

Basic concepts

  • **Service registration:** a Nacos Client registers its service with the Nacos Server through REST requests, providing its own metadata such as ip address and port. After receiving the registration request, the Nacos Server stores the metadata in a two-level in-memory Map.
  • **Service heartbeat:** after registering, the Nacos Client keeps a periodic heartbeat to tell the Nacos Server that the service remains available, preventing it from being removed. By default a heartbeat is sent every 5s.
  • **Service synchronization:** Nacos Server cluster members synchronize service instances with each other to keep service information consistent.
  • **Service discovery:** when calling a provider, a consumer (Nacos Client) sends a REST request to the Nacos Server to fetch the registration list and caches it locally; it also starts a local scheduled task that periodically pulls the latest registry information into the local cache.
  • **Service health check:** the Nacos Server runs a scheduled task that checks the health of registered instances. An instance with no heartbeat for more than 15s has its healthy flag set to false (so discovery no longer returns it); an instance with no heartbeat for more than 30s is removed outright (and re-registers if its heartbeats resume).
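A minimal discovery configuration sketch (spring-cloud-alibaba; the service name and address are illustrative):

spring:
  application:
    name: nacos-producer          # the service name registered with Nacos
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848   # Nacos Server address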

Sentinel

​ A distributed fault-tolerance component. Sentinel takes traffic as the entry point and protects service stability along several dimensions: flow control, circuit breaking/degradation, and system load protection. Compared with hystrix it is richer in features and has a console with visual configuration. Sentinel is a flow-control component for distributed service architectures, helping developers keep microservices stable through rate limiting, traffic shaping, circuit breaking/degradation, system load protection, hot-spot protection and more.

​ To use it, first define the resources (instrumentation points). Once the resources exist, flow-control rules of all kinds can be defined flexibly at any time.

Flow control via annotations

  • Implemented with the @SentinelResource annotation
  • @SentinelResource marks a resource for rate limiting and degradation.
    • blockHandler: the method to enter when a BlockException occurs inside the resource (catches the exceptions Sentinel defines)
    • fallback: the method to enter when a Throwable occurs inside the resource
    • exceptionsToIgnore: exceptions the fallback may ignore

Configurable parameters:

  • Flow-control rules
    • Threshold type
      • QPS
      • Thread count
    • Flow-control mode
      • Direct
      • Associated
      • Chain
    • Flow-control effect
      • Fail fast
      • Warm Up (for traffic surges)
      • Queue and wait
  • Degradation rules
    • Slow-call ratio
    • Exception ratio
    • Exception count
  • Hot-spot parameter rate limiting
  • Authorization rules (blacklists and whitelists can be set)

Integrating with Feign

  • yml
# Feign support
feign:
  sentinel:
    enabled: true # enable feign's sentinel support
  • feign
@FeignClient(value = "nacos-producer", path = "/product", fallback = ProductFeignFallback.class)
public interface ProductFeignService {
	//xxxxx
}
  • fallback
@Component
public class ProductFeignFallback implements ProductFeignService {
   //xxxx
}

Seata

​ Seata is an open-source distributed-transaction solution dedicated to providing high-performance, easy-to-use distributed transactions. Seata offers the AT, TCC, SAGA and XA transaction modes, a one-stop distributed solution.

CAP theorem

  • Consistency: any node of the distributed system a user visits must return the same data.
  • Availability: any healthy node a user visits must respond, rather than time out or refuse.
  • Partition tolerance: network failures or other causes can disconnect some nodes of the distributed system from the others, forming independent partitions.

​ In a distributed system, the network between nodes can never be 100% healthy; failures will happen, yet the system must still serve requests. Partition tolerance is therefore unavoidable.

​ If consistency is to be guaranteed during a partition, the system must wait for the network to recover and the data to synchronize before serving again: it blocks and is unavailable.

​ If availability is to be guaranteed, it cannot wait for the network to recover, so the nodes will return inconsistent data.

​ In other words, given that P will occur, only one of A and C can be achieved.

The biggest problem of distributed transactions is the consistency of the sub-transactions, so following the CAP theorem and the BASE theory there are two approaches:

  • AP mode: each sub-transaction executes and commits independently; inconsistent results are allowed and then repaired by compensating measures, achieving eventual consistency.
  • CP mode: sub-transactions wait for each other, committing together and rolling back together, achieving strong consistency; while the transactions wait, the system is only weakly available.

BASE theory

The BASE theory is an approach to the CAP trade-off, with three ideas:

  • Basically Available: when a failure occurs, the distributed system is allowed to lose some availability, i.e. it guarantees core availability.
  • **Soft State:** intermediate states, such as temporary inconsistencies, are allowed for a limited time.
  • Eventually Consistent: strong consistency cannot be guaranteed, but once the soft state ends, the data eventually becomes consistent.

The three roles in Seata

  • **TC (Transaction Coordinator):** maintains the state of global and branch transactions and drives the global commit or rollback. This is the Seata server, which exercises global control.

  • **TM (Transaction Manager):** defines the scope of a global transaction: begins, commits or rolls back the global transaction.

  • **RM (Resource Manager):** manages the resources of branch (local) transactions, talks to the TC to register branch transactions and report their status, and drives branch commit or rollback.

    The TC is deployed separately as the server; TM and RM are clients embedded in the application.

The four transaction modes supported by Seata

  • AT: essentially an upgraded 2PC; in AT mode the user only cares about the "business SQL" and the business is not intruded upon.
  • TCC: the same idea as explained above.
  • XA: likewise, but it requires the database itself to support the mode.
  • Saga: for long transactions; each participant implements both the forward operation and a compensating operation.

The design of AT mode

​ The core of AT mode is zero intrusion into the business; it is an improved two-phase commit.

​ In short:

  • Phase one: Seata intercepts the "business SQL", parses its semantics and finds the business data it will update. Before the update, it saves the data as a "before image"; it then executes the business SQL, saves the updated data as an "after image", and finally generates a row lock. All of this happens inside one database transaction, which makes phase one atomic, and is committed to the database.
  • Phase two: on commit, since the business SQL was already committed to the database in phase one, Seata only needs to delete the snapshot data and the row lock saved in phase one. On rollback, the "before image" is used to restore the business data, but first dirty writes must be checked for by comparing the current data in the database with the "after image": if the two are identical there is no dirty write and the data can be restored; if not, there is a dirty write and manual handling is required.
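A minimal AT-mode usage sketch (assuming the seata client is on the classpath and each business database has the undo_log table; the class, methods and parameters are illustrative):

import io.seata.spring.annotation.GlobalTransactional;

public class OrderService {

    // the TM opens the global transaction; each update below becomes a branch registered with the TC
    @GlobalTransactional(rollbackFor = Exception.class)
    public void placeOrder(Long productId, int count) {
        createOrder(productId, count); // branch 1: local DB insert (before/after image + row lock)
        deductStock(productId, count); // branch 2: remote call into another service (hypothetical)
        // any exception thrown here rolls every branch back through the undo logs
    }

    private void createOrder(Long productId, int count) { /* mapper insert, placeholder */ }

    private void deductStock(Long productId, int count) { /* Feign call, placeholder */ }
}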

Dubbo

​ Dubbo is a high-performance, lightweight open-source RPC framework providing efficient service-governance features such as automatic service registration and discovery, and it integrates seamlessly with Spring.

What are Dubbo's core components?

Node roles:

  • Provider: the provider that exposes services.
  • Consumer: the consumer that calls remote services.
  • Registry: the registry for service registration and discovery.
  • Monitor: the monitoring center that counts service calls and call times.

Call relationships:

  • The service container starts, loads and runs the service provider.
  • On startup, the provider registers the services it offers with the registry.
  • On startup, the consumer subscribes to the services it needs at the registry.
  • The registry returns the provider address list to the consumer; when it changes, the registry pushes the updated data to the consumer over a long-lived connection.
  • The consumer picks one provider from the address list using soft load balancing and calls it; if the call fails, it picks another.
  • Consumers and providers accumulate call counts and call times in memory and send the statistics to the monitoring center once a minute.

If Dubbo's registry cluster goes down, can publishers and subscribers still communicate?

​ Yes. When Dubbo starts, the consumer pulls the registered providers' addresses, interfaces and other data from Zookeeper and caches them locally. Each call then uses the locally stored addresses.

What load-balancing strategies do Dubbo clusters provide?

  • Random LoadBalance: pick a provider at random; convenient for adjusting provider weights dynamically. The more calls there are, the more even the distribution.

  • RoundRobin LoadBalance: pick providers in turn; evenly distributed, but requests can pile up on slow providers.

  • LeastActive LoadBalance: least-active-call strategy, so that slow providers receive fewer requests.

  • ConsistentHash LoadBalance: consistent hashing, so requests with the same parameters always go to the same provider; when one machine goes down, its load is spread over the other providers via virtual nodes, avoiding drastic swings.

    The default is Random.

Which cluster fault-tolerance schemes does Dubbo offer?

  • Failover Cluster: on failure, automatically retry on another server. Usually used for read operations, but retries add latency.
  • Failfast Cluster: issue the call only once and report the error immediately on failure. Usually used for non-idempotent writes, such as inserting records.
  • Failsafe Cluster: ignore exceptions silently. Usually used for operations such as writing audit logs.
  • Failback Cluster: record failed requests in the background and re-send them periodically. Usually used for message notification.
  • Forking Cluster: call several servers in parallel and return as soon as one succeeds. Usually used for reads with tight real-time requirements, at the cost of extra resources. The maximum parallelism can be set with forks="2".
  • Broadcast Cluster: call every provider one by one; if any one fails, the whole call fails. Usually used to tell all providers to refresh local state such as caches or logs.

The default fault-tolerance scheme is Failover Cluster (see the configuration sketch after the next question).

How can timeouts be configured in Dubbo?

​ Dubbo timeouts can be set in two places:

​ On the provider side. Dubbo's user documentation recommends configuring as much as possible on the provider side, because the provider knows the characteristics of its own service better than the consumer does.

​ On the consumer side. If the consumer also sets a timeout, the consumer-side value wins, i.e. it has the higher priority, because the caller can control timeouts more flexibly. Note that when the consumer times out, the server-side thread is not stopped; a warning is produced instead.

​ When a call fails, Dubbo retries twice by default.
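
A hedged configuration sketch showing how timeout, retries, load balancing and cluster fault tolerance are typically declared with the same annotations (attribute names as in Dubbo 2.7+/3.x; DemoService is the hypothetical interface from the earlier sketch):

import org.apache.dubbo.config.annotation.DubboReference;
import org.apache.dubbo.config.annotation.DubboService;

// Provider side: a default timeout for this service (the recommended place to configure it)
@DubboService(timeout = 3000)
class DemoServiceImpl implements DemoService {
    public String sayHello(String name) { return "hello, " + name; }
}

// Consumer side: these values override the provider-side defaults
class DemoConsumer {
    @DubboReference(
            timeout = 1000,              // consumer-side timeout wins over the provider's
            retries = 2,                 // the default: retry twice after the first failure
            loadbalance = "leastactive", // pick the provider with the fewest active calls
            cluster = "failover")        // fault tolerance: retry another provider on failure
    private DemoService demoService;
}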

Zookeeper

​ ZooKeeper is an open-source distributed coordination service. It provides consistency guarantees for distributed applications, on top of which applications can build features such as data publish/subscribe, load balancing, naming, distributed coordination/notification, cluster management, master election, distributed locks and distributed queues.

​ It acts as the cluster's manager: it watches the state of every node and takes the next reasonable step based on the feedback the nodes submit. In the end it exposes a simple, easy-to-use interface on top of an efficient, stable system.

Roles in a Zookeeper cluster

  • leader: handles all transactional requests (writes) and may also serve reads; a cluster has exactly one leader.
  • follower: serves reads only, and is a candidate for leader: if the leader goes down, the followers take part in electing the new leader, and one of them may become it.
  • observer: serves reads only and does not take part in elections.

Four types of znode

​ ZooKeeper is a tree-structured directory service whose data model closely resembles a Unix file-system directory tree: a hierarchical structure.

​ Every node in the tree is called a ZNode, and each node keeps its own data and node metadata.

​ A node can have child nodes and can also store a small amount of data (1MB by default).

​ Nodes fall into four categories (mirrored in the code sketch after this list):

  • PERSISTENT: the node still exists after the client disconnects from zookeeper.
  • EPHEMERAL (-e): the node is deleted once the client that created it disconnects from zookeeper.
  • PERSISTENT_SEQUENTIAL (-s): the node survives the client's disconnect, and Zookeeper appends a monotonically increasing sequence number to its name.
  • EPHEMERAL_SEQUENTIAL (-es): the node is deleted once the client disconnects, and Zookeeper appends a sequence number to its name.
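
The four categories correspond one-to-one to the CreateMode enum of the official ZooKeeper Java client. A minimal sketch; the connection string and paths are placeholders:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });

        byte[] data = "payload".getBytes();

        // survives client disconnect
        zk.create("/app", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // deleted when this session ends
        zk.create("/app/lease", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // survives disconnect; the name becomes e.g. /app/task0000000001
        zk.create("/app/task", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        // deleted when the session ends; the name gets a sequence number
        zk.create("/app/lock-", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        zk.close();
    }
}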

Distributed locks

​ With zookeeper's consistent file-system semantics, locking becomes straightforward. Lock services fall into two classes: exclusive locks and locks that control ordering.

​ Two zookeeper features can be used to implement distributed locks. First:

Node uniqueness

​ We can build a distributed lock on node uniqueness. When several applications race for the lock, each simply tries to create a /Lock node under an agreed parent node. Because znodes are unique, only one client succeeds in creating /Lock; every client that failed to create it has lost the race.

​ This works, but it has a problem. Suppose many clients are waiting for the lock; the natural way to wait is to use the watcher mechanism to listen for the deletion of the /lock node. Once the node is deleted, the previous holder has released the lock, and all the waiting clients (say B, C and D) receive the delete event at the same time and race for the lock again. This produces the thundering-herd effect.

​ What is the "thundering-herd effect"? Simply put: when many clients are waiting for the lock and the holder releases the node, every waiting client is woken up. Zookeeper then sends a burst of child-change events to all of them in a short time, yet in the end only one client actually gets the lock. At larger cluster scales this puts considerable pressure on the zookeeper servers.

Sequential nodes

​ To avoid the thundering-herd effect, we can implement the distributed lock with Zookeeper's sequential nodes instead.

​ Every client registers an ephemeral sequential node under the agreed parent node; the earlier the node is created, the smaller its sequence number, and the child with the smallest number is deemed to hold the lock. If your node is not the smallest child, you have not acquired the lock yet. The difference from the single-node scheme above is that each client watches only the node immediately smaller than its own: when that node is deleted, the client receives a watcher event and checks again whether its own node is now the smallest child. If it is, the client holds the lock; otherwise it repeats the process. Because every client watches exactly one node, there is no thundering-herd effect. A rough sketch follows.
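
A sketch of this lock recipe with the plain ZooKeeper client, assuming the parent node /locks already exists; session handling and error recovery are omitted:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class SequentialLock {
    private final ZooKeeper zk;
    private String myNode;  // e.g. /locks/lock-0000000007

    public SequentialLock(ZooKeeper zk) { this.zk = zk; }

    public void lock() throws Exception {
        // 1. register an ephemeral sequential node under the lock root
        myNode = zk.create("/locks/lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        while (true) {
            List<String> children = zk.getChildren("/locks", false);
            Collections.sort(children);

            String myName = myNode.substring("/locks/".length());
            int myIndex = children.indexOf(myName);
            if (myIndex == 0) {
                return;  // 2. smallest child -> we hold the lock
            }

            // 3. watch only the node immediately before ours (no thundering herd)
            String prev = "/locks/" + children.get(myIndex - 1);
            CountDownLatch latch = new CountDownLatch(1);
            if (zk.exists(prev, event -> latch.countDown()) != null) {
                latch.await();  // woken when the previous node is deleted
            }
            // loop and re-check: our node may now be the smallest
        }
    }

    public void unlock() throws Exception {
        zk.delete(myNode, -1);  // releasing wakes exactly one waiter
    }
}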

Docker

​ Docker is an open-source application container engine.

​ Docker lets developers package an application together with its dependencies into a lightweight, portable container that can then run on any popular Linux machine; it can also be used for virtualization.

Common commands

  • docker images  list local Docker images
  • docker search tomcat  search Docker Hub for tomcat images
  • docker pull tomcat[:version]  download a specific version of an image from Docker Hub
  • docker rmi -f [image id]  delete an image
  • docker run --name mynginx -d nginx:latest  create and start a container from an image
  • docker ps  list running containers
  • docker ps -a  list all containers
  • docker stop [container id|name]  stop a container
  • docker start [container id|name]  start a container
  • docker rm [container id|name]  delete a container
  • docker cp [container id|name]:/path/in/container [host path]  copy a file from a container to the host
  • docker logs -f -t --tail 10 [container id|name]  print a container's logs

Common instructions in a Dockerfile

​ A Dockerfile is a text file containing all the commands needed to build a Docker image; Docker builds the image automatically by executing the Dockerfile's instructions in order. docker build runs those command-line instructions sequentially as an automated build. A sample Dockerfile follows the instruction list.

  • FROM  the base image the build starts from
  • LABEL  metadata labels for the image
  • RUN  commands to execute at this step of the image build
  • CMD  the command the container runs once it has been created; only one CMD instruction in a Dockerfile takes effect
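
A minimal illustrative Dockerfile combining these instructions (plus COPY to add the application); the base image and app.jar are placeholders:

# base image the build starts from
FROM eclipse-temurin:17-jre
# metadata label on the image
LABEL maintainer="demo"
# executed while the image is being built
RUN mkdir -p /opt/app
# copy the (placeholder) application jar into the image
COPY app.jar /opt/app/app.jar
# executed when a container starts; only the last CMD takes effect
CMD ["java", "-jar", "/opt/app/app.jar"]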

Vue

​ Vue is a progressive framework for building user interfaces. Unlike other large frameworks, Vue is designed to be adoptable incrementally, from the bottom up.

Life cycle

Vue's life cycle is usually described as 8 stages: before/after creation, before/after mounting, before/after updating and before/after destruction. The corresponding hooks are beforeCreate, created, beforeMount, mounted, beforeUpdate, updated, beforeDestroy and destroyed.

Vue指令

  • v-text渲染数据,不解析标签。
  • v-html不仅可以渲染数据,而且可以解析标签。
  • v-if:根据表达式的值的真假条件渲染元素。在切换时元素及它的数据绑定 / 组件被销毁并重建。
  • v-show:根据表达式之真假值,切换元素的 display CSS 属性。
  • v-for:循环指令,基于一个数组或者对象渲染一个列表,vue 2.0以上必须需配合 key值 使用。
  • v-bind:动态地绑定一个或多个特性,或一个组件 prop 到表达式。
  • v-on:用于监听指定元素的DOM事件,比如点击事件。绑定事件监听器。
  • v-model:实现表单输入和应用状态之间的双向绑定
  • v-pre:跳过这个元素和它的子元素的编译过程。可以用来显示原始 Mustache 标签。跳过大量没有指令的节点会加快编译。
  • v-once:只渲染元素和组件一次。随后的重新渲染,元素/组件及其所有的子节点将被视为静态内容并跳过。这可以用于优化更新性能。

The difference between v-if and v-show

​ If, once the element has entered the page, its visibility will not change again, v-if is the better choice: when v-if hides the element, the DOM node is not rendered on first load at all, which speeds up page load.

​ If the element's visibility changes frequently after it enters the page, v-show is more appropriate: with v-show, the DOM node is rendered once on first load and is afterwards shown or hidden only by toggling the display CSS property.
