Hands-on project: optimizing an XGBoost classification model (XGBClassifier) in Python with a genetic algorithm (GA)

Note: This is a hands-on machine learning project (with data, code, documentation, and a video walkthrough). To obtain these materials, go directly to the end of this article.




1. Project Background

The Genetic Algorithm (GA) was first proposed by John Holland in the United States in the 1970s. Designed around the evolutionary laws observed in nature, it is a computational model of biological evolution that simulates the natural selection and genetic mechanisms of Darwinian theory, searching for optimal solutions by mimicking the process of natural evolution. Through mathematical and computational simulation, the algorithm converts the problem-solving process into one resembling the crossover and mutation of chromosomal genes in biological evolution. On complex combinatorial optimization problems, it can often reach better solutions faster than conventional optimization algorithms. Genetic algorithms have been widely applied in combinatorial optimization, machine learning, signal processing, adaptive control, artificial life, and other fields.

This project optimizes an XGBoost classification model with a genetic algorithm (GA).

2. Data Acquisition

The modeling data come from the Internet (compiled by the author of this project); the statistics of the data items are summarized as follows:

The data details are as follows (partial display):

 

3. Data preprocessing

3.1 Viewing the data with the Pandas tool

Use the head() method of the Pandas tool to view the first five rows of data:

The key code is as follows:
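Since the original code screenshot is not reproduced here, a minimal sketch of this step (the file name data.csv is an assumption; the article does not show it):

import pandas as pd

df = pd.read_csv('data.csv')  # load the modeling data (file name assumed)
print(df.head())  # view the first five rows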

 

3.2 Checking for missing data

Use the info() method of the Pandas tool to view data information:

As can be seen from the output, there are 9 variables in total and 1000 rows of data, with no missing values.

The key code is as follows:
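A minimal sketch of the missing-value check, with df as loaded in 3.1:

df.info()  # prints column dtypes and non-null counts: 9 columns, 1000 rows, no missing values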

3.3 Descriptive statistics of the data

Use the describe() method of the Pandas tool to view the mean, standard deviation, minimum, quartiles, and maximum of the data.

 

The key code is as follows:
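A minimal sketch of this step:

print(df.describe())  # count, mean, std, min, quartiles, and max for each numeric column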

 

4. Exploratory Data Analysis

4.1 Histogram of the y variable

Use the plot() method of the Matplotlib tool to draw a histogram:
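A sketch of such a plot via the pandas/Matplotlib plot() interface (the label column is named y, consistent with the rest of the article; the title text is assumed):

import matplotlib.pyplot as plt

df['y'].value_counts().plot(kind='bar')  # number of samples in each class
plt.title('Distribution of y')  # title text assumed
plt.show()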

 

4.2 Distribution histogram of x1 for samples with y=1

Use the hist() method of the Matplotlib tool to draw a histogram (the key code for this figure appears at the end of this article):

 

4.3 Correlation analysis

 

As the figure shows, the larger the absolute value, the stronger the correlation; positive values indicate positive correlation and negative values indicate negative correlation.
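The heatmap itself is not reproduced here; a sketch of how such a correlation plot could be drawn (the use of seaborn is an assumption, as the article does not name the plotting library):

import matplotlib.pyplot as plt
import seaborn as sns

corr = df.corr()  # pairwise correlation coefficients between all variables
sns.heatmap(corr, annot=True, cmap='coolwarm')  # annotated correlation heatmap
plt.show()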

5. Feature Engineering

5.1 Establish feature data and label data

The key code is as follows:
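A minimal sketch, assuming the label column is named y and all remaining columns are features:

X = df.drop(columns=['y'])  # feature data: every column except the label
y = df['y']  # label data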

 

5.2 Dataset splitting

Use the train_test_split() method to split the data into an 80% training set and a 20% test set. The key code is as follows:
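A sketch of the split (the random_state value is an assumption for reproducibility):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)  # 80% train / 20% test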

 

6. Constructing a GA-optimized XGBoost classification model

The genetic algorithm is used here to tune the XGBoost classification algorithm for the target classification task.

6.1 Finding the optimal parameter values with the GA

Optimal parameters:  
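The exact optimal values appeared in the original figure and are not reproduced here. The article's own GA code is shown only in part (the population-initialization snippet at the end of this article), so below is a minimal, self-contained sketch of how a GA could tune XGBClassifier hyperparameters, with X_train and y_train from the split in 5.2. The choice of tuned parameters (learning_rate, max_depth), their bounds, and the GA settings are illustrative assumptions, not the article's exact configuration:

import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
N_pop, n_gen, d = 20, 10, 2  # population size, generations, search dimensions (assumed)
Lower_bound = np.array([0.01, 2.0])  # lower bounds for [learning_rate, max_depth] (assumed)
Upper_bound = np.array([0.5, 10.0])  # upper bounds (assumed)

def objfun(sol):
    # Fitness = mean 3-fold cross-validated accuracy of an XGBClassifier
    model = XGBClassifier(learning_rate=float(sol[0]), max_depth=int(sol[1]),
                          eval_metric='logloss')
    return cross_val_score(model, X_train, y_train, cv=3).mean()

Sol = rng.uniform(Lower_bound, Upper_bound, (N_pop, d))  # initial population
Fitness = np.array([objfun(s) for s in Sol])  # initial fitness values

for _ in range(n_gen):
    order = np.argsort(-Fitness)  # sort individuals by descending fitness
    parents = Sol[order[:N_pop // 2]]  # keep the better half as parents (elitist selection)
    children = []
    while len(children) < N_pop - len(parents):
        p1, p2 = parents[rng.integers(len(parents), size=2)]  # pick two random parents
        alpha = rng.random()
        child = alpha * p1 + (1 - alpha) * p2  # arithmetic crossover
        if rng.random() < 0.2:  # mutate with probability 0.2
            child += rng.normal(0, 0.05, size=d) * (Upper_bound - Lower_bound)
        children.append(np.clip(child, Lower_bound, Upper_bound))  # keep within bounds
    Sol = np.vstack([parents] + children)  # next generation
    Fitness = np.array([objfun(s) for s in Sol])

best = Sol[np.argmax(Fitness)]  # best solution found
print('Best learning_rate=%.4f, best max_depth=%d' % (best[0], int(best[1])))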

 

6.2 Building the model with the optimal parameter values
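A sketch of building and fitting the final model, reusing the best solution found by the GA sketch above:

from xgboost import XGBClassifier

model = XGBClassifier(learning_rate=float(best[0]), max_depth=int(best[1]),  # optimal values from the GA (illustrative)
                      eval_metric='logloss')
model.fit(X_train, y_train)  # fit on the training set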

 

7. Model Evaluation

7.1 Evaluation indicators and results

The evaluation metrics mainly include accuracy, precision, recall, and the F1 score.

 

As can be seen from the table above, the F1 score is 0.9158, indicating that the model performs well.

The key code is as follows:  
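A sketch of computing these metrics with scikit-learn, with model, X_test, and y_test as defined above:

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_pred = model.predict(X_test)  # predict on the test set
print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred))
print('recall   :', recall_score(y_test, y_pred))
print('f1 score :', f1_score(y_test, y_pred))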

 

7.2 Classification report

 

As can be seen from the report above, the F1 score of class 0 is 0.92 and the F1 score of class 1 is 0.92.
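A sketch of producing such a report with scikit-learn, with y_pred as computed in 7.1:

from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred))  # per-class precision, recall, and F1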

7.3 Confusion Matrix

 

As can be seen from the confusion matrix above, 5 samples whose actual class is 0 were predicted as something other than 0, and 11 samples whose actual class is 1 were predicted as something other than 1; the overall prediction accuracy is good.
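A sketch of computing the confusion matrix with scikit-learn:

from sklearn.metrics import confusion_matrix

print(confusion_matrix(y_test, y_pred))  # rows = actual class, columns = predicted class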

8. Conclusion and Outlook

To sum up, this article uses a genetic algorithm to find the optimal parameter values for the XGBoost algorithm and builds a classification model with them, showing that the proposed model performs well. The model can be used for everyday product forecasting.

# Initialize the population and the initial solutions
import numpy as np

Sol = np.zeros((N_pop, d))  # initialize positions (N_pop individuals, d dimensions)
Fitness = np.zeros((N_pop, 1))  # initialize fitness values
for i in range(N_pop):  # iterate over the population
    Sol[i] = np.random.uniform(Lower_bound, Upper_bound, (1, d))  # random initial solution within the bounds
    Fitness[i] = objfun(Sol[i])  # evaluate fitness (N_pop, d, the bounds, and objfun as in the GA set-up of 6.1)
 
 
# ******************************************************************************
 
# The materials needed for this hands-on machine learning project are available below:

# Project description:

# Link: https://pan.baidu.com/s/1c6mQ_1YaDINFEttQymp2UQ

# Extraction code: thgk
 
# ******************************************************************************
 
 
# Distribution histogram of x1 for samples with y=1
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 5))  # set the figure size
plt.rcParams['font.sans-serif'] = 'SimHei'  # enable Chinese font display
plt.rcParams['axes.unicode_minus'] = False  # render the minus sign correctly when saving figures
data_tmp = df.loc[df['y'] == 1, 'x1']  # filter the samples with y=1
# Draw the histogram. bins: number of intervals ('auto' picks it automatically); color: bar fill color
plt.hist(data_tmp, bins='auto', color='g')
plt.show()

For more hands-on projects, see the machine learning project collection:

List of hands-on machine learning projects


For project code consultation and access, please see the official account below.


Source: blog.csdn.net/weixin_42163563/article/details/132180818