YOLOv7 Training: A Complete Hands-On Guide

This article walks you through using YOLOv7 on a practical mask detection dataset. The goal is that, after this tutorial, you can train, test, and run predictions with YOLOv7 on your own dataset. Links to the code, dataset, and trained models are at the end!

YOLOv7 with merged configuration files, a backbone that can be swapped freely, and TSOCDE (the latest 2023 decoupled head): project link.


Updated on 2022-11-20:

A YOLOv7 video tutorial has been uploaded to Bilibili to accompany this blog post: link.
Dataset link (Bilibili): an object detection dataset of roughly 1.1k samples for recognizing whether a person is wearing a mask, provided in both VOC and YOLO formats. The code and model files trained with YOLOv7 are also available, including trained weights for yolov7-tiny, yolov7, and yolov7-w6 that can be used directly for detection.

Tutorial, video walkthrough, and source code for adding a PyQt5 GUI to YOLOv7: link.


Updated on 2022-12-28:

Heatmap visualization for YOLOv5 and YOLOv7 has been released on Bilibili and GitHub. It requires no modification to the source code and is plug-and-play; check it out if you are interested.

Updated on 2023-1-9:

DAMO-YOLO tutorials have been uploaded to Bilibili and the blog.

Updated on 2023-1-28:

Bilibili tutorial link and blog post: YOLOv7 improvement - adding EIoU, SIoU, Alpha-IoU, and Focal-EIoU.

Updated on 2023-1-31:

Bilibili tutorial link: YOLOv7 improvement - adding attention mechanisms, with code for dozens of attention modules included.

Updated on 2023-2-11:

Bilibili tutorial link: YOLOv7 improvement - Wise-IoU.

Updated on 2023-2-18:

Bilibili tutorial link: YOLOv7 improvement - adding deformable convolution (DCNv2).

Updated on 2023-2-26:

Bilibili tutorial link: visualizing and counting the TP, FP, and FN of the prediction results.

Updated on 2023-2-26:

Bilibili tutorial link: YOLOv7 improvement - adding SAConv.

Big news!!! YOLO Model Improvement Collection Guide (CSDN).

2023-1:

Source code for YOLOv7 + ByteTrack (the 2021 object-tracking SOTA) will be released in January. If you are interested, please like and follow; stay tuned!

One more plug: pytorch-image-classifier, an image classification codebase that I integrated and open-sourced, with complete functionality and visualization; a sample blog post is available if you are interested. Thank you!


The main text begins:

1. Download the source code and dataset

For the source code, simply download it from this link. It includes several convenience tools, such as converting VOC-format annotations to YOLO format and splitting datasets, and it will be kept up to date with the official repository, so do not worry about the code version being outdated!
I have also prepared a mask detection dataset. If you do not currently have a dataset, you can download this one and use it as the hands-on example for this post.

2. Configure the environment

For the environment configuration, you can refer to my other article on flower classification with PyTorch. If you run into any problems during installation, send me a private message or leave a comment and I will do my best to help.

3. Prepare the dataset

As you probably know, most YOLO-series codebases expect annotation files in YOLO format, while most datasets found online are in VOC format, so a format conversion is usually needed. We cover two cases:

VOC dataset format

If your dataset is in VOC format, it usually consists of images plus files with an .xml suffix; these .xml files are the label files of your dataset. In the code, the dataset folder is where the dataset lives: images are stored in the dataset/VOCdevkit/JPEGImages folder, label files go into the dataset/VOCdevkit/Annotations folder, and the dataset/VOCdevkit/txt folder stores the converted YOLO-format label files. (You can create the txt folder or not; the conversion script checks for it and creates it automatically if it does not exist.)
[Screenshots: dataset folder layout]

Of course, you can also modify imgpath (where the images are stored), xmlpath (where the VOC annotations are stored), and txtpath (where the converted YOLO labels are written), but I recommend leaving them at the defaults: if an error appears after changing them, it may be hard for you to fix, so the safest option is to follow the paths and examples used in this post.
After the data is in place, run xml2txt.py. This script has a postfix parameter for the image suffix; the default is jpg, so modify it if your images are bmp or png. Note that mixed suffixes are not supported and will lead to an error about the output file not being found, so watch out for this. The script reads the XML files in the Annotations folder, converts them to YOLO format, and saves the result to the dataset/VOCdevkit/txt folder. A screenshot of a run is shown below:
[Screenshot: xml2txt.py console output]
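At its core, the conversion just maps each VOC bounding box (xmin, ymin, xmax, ymax in pixels) to the YOLO format (class index plus normalized center coordinates, width, and height). The following is only a minimal sketch of that idea, not the actual xml2txt.py; the class list and file paths are placeholders for illustration:

```python
# Minimal sketch of a VOC (.xml) -> YOLO (.txt) conversion.
# Not the actual xml2txt.py; class names and paths are illustrative only.
import xml.etree.ElementTree as ET

classes = ["mask", "no_mask", "mask_incorrect"]  # use the list printed by xml2txt.py

def voc_to_yolo(xml_path, txt_path):
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls_id = classes.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO label line: class x_center y_center width height, all normalized to [0, 1]
        xc = (xmin + xmax) / 2.0 / img_w
        yc = (ymin + ymax) / 2.0 / img_h
        bw = (xmax - xmin) / img_w
        bh = (ymax - ymin) / img_h
        lines.append(f"{cls_id} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))

voc_to_yolo("dataset/VOCdevkit/Annotations/0001.xml", "dataset/VOCdevkit/txt/0001.txt")
```

Each image therefore ends up with a txt label file of the same name, one line per object.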

Each converted file produces a separate line of output. If a file fails to convert, or your annotation folder contains files in other formats, a corresponding message is printed. A failed conversion does not terminate the program; that file is simply skipped. At the end, the script prints a "file convert failure" list: if it is empty, every file converted successfully; otherwise it contains the paths of the annotation files that failed, and you can inspect the error messages (if every file fails, the most likely cause is that the data is stored in the wrong location, so check the messages carefully). The second list printed is the category information found in your dataset. This list is important: copy it manually into the names field of data/data.yaml, as shown in the figure below.
[Screenshot: data/data.yaml]
In the figure, the blue box is the fixed path (assuming you follow this tutorial), and the yellow box is set according to the number of categories in your dataset; the current dataset has three categories, so we set it to 3. The red box is the category list output by xml2txt.py. The category names can be changed: if, say, our third category means a mask worn incorrectly, we can rename it to mask_incorrect, but it is best not to use Chinese characters.
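For reference, a data/data.yaml for a three-class mask dataset might look roughly like the sketch below; the train/val/test paths and class names are placeholders, so keep whatever the repository's own template and the xml2txt.py output give you:

```yaml
# Illustrative sketch only; keep the paths from the repository's own data.yaml.
train: dataset/train.txt
val: dataset/val.txt
test: dataset/test.txt

nc: 3                                          # number of categories (yellow box)
names: ['mask', 'no_mask', 'mask_incorrect']   # names copied from the xml2txt.py output (red box)
```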

YOLO dataset format

If your dataset is already in YOLO format, simply put all images into the dataset/VOCdevkit/VOC2007/JPEGImages folder and the txt label files into dataset/VOCdevkit/VOC2007/txt, then edit the number of categories and the category names in data/data.yaml yourself. Datasets in YOLO format usually come with a separate classes.txt that records the category information.

Split the dataset

Whether your dataset is in VOC or YOLO format, after following the steps above, run split_data.py. This script also has a postfix parameter, which defaults to jpg; change it yourself if your images use a different suffix, and note again that mixed suffixes are not supported. split_data.py also has val_size and test_size parameters, the split ratios, which default to 0.1 and 0.2; modify them if needed. After it runs successfully, it automatically creates the folders shown below and copies the corresponding images and label files into them.
[Screenshot: generated train/val/test folders]
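Conceptually the split is nothing more than shuffling the image list and cutting it by the two ratios. A minimal sketch of the idea (not the actual split_data.py) is:

```python
# Minimal sketch of a random train/val/test split; not the actual split_data.py.
import os
import random

random.seed(0)
images = sorted(f for f in os.listdir("dataset/VOCdevkit/JPEGImages") if f.endswith(".jpg"))
random.shuffle(images)

val_size, test_size = 0.1, 0.2           # same defaults as split_data.py
n_val = int(len(images) * val_size)
n_test = int(len(images) * test_size)

val_set = images[:n_val]
test_set = images[n_val:n_val + n_test]
train_set = images[n_val + n_test:]      # the remaining ~70% is used for training
```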
Once this step is done, the dataset preparation is complete.

4. Training

We split training into two parts, because YOLOv7 has two training scripts: train.py and train_aux.py. If you downloaded the code for this article, the pre-trained weights are already in the weights folder of the project, so the tutorial below starts straight from training:

Training with train.py

Let’s first explain the meaning of the key parameters:

  • weights: path to the pre-trained weight file; these can be found in the weights folder.
  • cfg: path to the model configuration file; these can be found in the cfg/training folder.
  • data: path to the data configuration file; data/data.yaml by default.
  • hyp: path to the hyperparameter configuration file; these can be found in the data folder.
  • epochs: number of training epochs.
  • batch-size: amount of data processed in one iteration.
  • img-size: image input size for training.
  • resume: whether to resume the last unfinished training run.
  • device: the device used for training.
  • label-smoothing: label smoothing value.
  • name: name of the folder where the logs and models are saved.
  • project: path of the parent folder of the folder where the logs and models are saved.
  • workers: number of workers in the dataloader.
  • single-cls: whether to treat all categories as a single category during training, i.e. no classification.
  • multi-scale: multi-scale training.

For most projects, we only need to pay attention to the parameters weights, cfg, epochs, batch-size, and img-size. weights and cfg must match: if you choose the yolov7-tiny configuration file, you must also select the yolov7-tiny weights, as shown below:
[Screenshot: matching weights and cfg settings]
Now we can start training; here yolov7-tiny is used for the demonstration. If you need to train another model, change the --weights and --cfg paths yourself. One thing to note is that the current train.py only supports training three models: yolov7-tiny, yolov7, and yolov7x:
[Screenshot: cfg files supported by train.py]
Other models need to be trained with the other training script, train_aux.py, which we will demonstrate below. The parameter settings are as follows:
[Screenshot: train.py parameter settings]
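For reference, the settings in the screenshot correspond roughly to a command line like the one below; the weight, cfg, and hyp file names are assumptions based on the standard YOLOv7 repository layout, so substitute the files that match the model you actually want to train:

```bash
python train.py --weights weights/yolov7-tiny.pt \
                --cfg cfg/training/yolov7-tiny.yaml \
                --data data/data.yaml \
                --hyp data/hyp.scratch.tiny.yaml \
                --epochs 100 --batch-size 16 --img-size 640 640 \
                --device 0 --name yolov7-tiny-mask
```

Here --device 0 assumes a single GPU; pass --device cpu if you have none.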
Then run train.py; training will take quite a while. When it finishes, we can see the following information on the console:
[Screenshot: training console output]
At the end, the training time, the accuracy metrics, and the path and size of the saved model are printed.

Training with train_aux.py

First of all, the models trained by this script are relatively large; without a server they may be difficult to train at all. Its usage is exactly the same as train.py, except that cfg and weights only support the models listed below, because train_aux.py only supports models with a P6 detection layer:
[Screenshot: cfg files supported by train_aux.py]
Our training parameter settings are as follows:
[Screenshot: train_aux.py parameter settings]
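As a rough illustration, a train_aux.py run for a P6 model might look like the following; again the file names are assumptions, and the batch size usually has to be smaller because these models are larger:

```bash
python train_aux.py --weights weights/yolov7-w6.pt \
                    --cfg cfg/training/yolov7-w6.yaml \
                    --data data/data.yaml \
                    --hyp data/hyp.scratch.p6.yaml \
                    --epochs 50 --batch-size 8 --img-size 1280 1280 \
                    --device 0 --name yolov7-w6-mask
```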
We mainly modify the weights and cfg parameters. Because models with a P6 detection layer are relatively large, epochs is set to only 50 here for the demonstration. Then we can run train_aux.py and wait for the training to complete.

The --hyp parameter

The hyp parameter is the path of the hyperparameter configuration file. For beginners the default is usually enough, and you can get fairly good results without modifying it. If you do want to tune the hyperparameters, open the corresponding file and edit it; the official repository gives a fairly detailed explanation of each parameter in the configuration file.
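To give a feel for what is inside such a file, the first few entries typically look roughly like this; the values shown are only indicative, so always check the actual file in the data folder rather than copying these numbers:

```yaml
lr0: 0.01            # initial learning rate
lrf: 0.1             # final learning rate = lr0 * lrf (one-cycle schedule)
momentum: 0.937      # SGD momentum
weight_decay: 0.0005
warmup_epochs: 3.0
mosaic: 1.0          # probability of mosaic augmentation
mixup: 0.05          # probability of mixup augmentation
```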

5. Testing

The previous section introduced the important parameters and how to use the two training scripts train.py and train_aux.py. This section covers how to use the trained model to compute metrics on the test set. The script for computing metrics is test.py; its key parameters are explained below:

  • weights: path of the trained model weights.
  • data: path of the data configuration file; data/data.yaml by default (if you follow this post).
  • batch-size: amount of data processed in one iteration during testing.
  • img-size: image size for testing; generally the same as during training.
  • conf-thres: confidence threshold for a target.
  • iou-thres: IoU threshold used in NMS.
  • task: task type; supports the train, val, and test splits, with test as the default. It can also measure FPS: just set it to speed.
  • augment: whether to use test-time augmentation (TTA).
  • verbose: the code comments say it displays the AP of each category, but in practice it makes no difference.
  • save-txt: whether to save the recognition results as txt.
  • save-hybrid: during testing this seems no different from save-txt; if you know more about it, please leave a comment.
  • save-conf: whether to save the confidence values; must be used together with save-txt.
  • save-json: whether to save the recognition results in COCO JSON format.
  • name: name of the folder where the metrics are saved.
  • project: path of the parent folder of the folder where the metrics are saved.
After training succeeds, you can find the following files in the folder configured under runs/train:
[Screenshot: runs/train output files]
The weights folder contains the weights saved during training; the rest are metric files. You can open them yourself and have a look; I won't go into detail here, but they are the more common metrics.
Our parameter settings are as follows; the main one is the weights path. Here we choose best.pt, the model that achieved the best accuracy on the validation set during training.
[Screenshot: test.py parameter settings]
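The equivalent command line is roughly the following; the run name is an assumption, so point --weights at whatever folder your own training run produced:

```bash
python test.py --weights runs/train/yolov7-tiny-mask/weights/best.pt \
               --data data/data.yaml \
               --batch-size 16 --img-size 640 \
               --conf-thres 0.001 --iou-thres 0.65 \
               --task test --device 0 --name yolov7-tiny-mask-test
```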
After the run completes, you will see the output below:
[Screenshot: test.py console output]
It shows the metrics for each category and overall, along with some inference-timing information. You can also find the corresponding metric plots in the folder under runs/test:
[Screenshot: metric plots in runs/test]
6. Prediction

This section is a tutorial for the prediction script detect.py. Most of its parameters are similar to those of test.py; the key ones are explained first:

  • weights: path of the trained model weights.
  • source: path of the data to run detection on; supports images, folders of images, and videos.
  • img-size: image size for detection; generally the same as during training.
  • conf-thres: confidence threshold for a target.
  • iou-thres: IoU threshold used in NMS.
  • augment: whether to use test-time augmentation (TTA).
  • verbose: the code comments say it displays the AP of each category, but in practice it makes no difference.
  • save-txt: whether to save the recognition results as txt.
  • save-conf: whether to save the confidence values; must be used together with save-txt.
  • name: name of the folder where the detection results are saved.
  • project: path of the parent folder of the folder where the detection results are saved.

Our parameter settings are as follows:
[Screenshot: detect.py parameter settings]
As with test.py, we use best.pt for detection and set source to the image folder of the test set. After the run completes, you can find the saved result images in the runs/detect folder.
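A corresponding detect.py invocation might look like the following; the source path is a placeholder and depends on where your split put the test images:

```bash
python detect.py --weights runs/train/yolov7-tiny-mask/weights/best.pt \
                 --source dataset/test/images \
                 --img-size 640 --conf-thres 0.25 --iou-thres 0.45 \
                 --device 0 --save-txt --save-conf --name yolov7-tiny-mask-detect
```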

7. Follow-up

A more practical mask detection project based on YOLOv7 will be released later (with a PyQt interface, a larger training set, and better detection results); it could serve as a course project or graduation project, so stay tuned.
Code, dataset, and model link

If the content is helpful to you, please give it a like, thank you!

Copyright statement: this article is an original work by the CSDN blogger "Devil Mask" and follows the CC 4.0 BY-SA license. Please include the original source link and this statement when reposting. Original link: https://blog.csdn.net/qq_37706472/article/details/127796547
