Python implementation of competitive adaptive reweighted sampling (CARS) for feature variable selection and a LightGBM regression model (LGBMRegressor algorithm): a practical project

Note: This is a practical machine learning project (with data, code, documentation, and a video walkthrough). If you need the data, code, documentation, and video walkthrough, go directly to the end of the article to get them.




1. Project background

Competitive adaptive reweighted sampling (CARS) is a feature variable selection method that combines Monte Carlo sampling with the regression coefficients of a PLS model, imitating the "survival of the fittest" principle of Darwinian evolution (Li et al., 2009). In each iteration of the CARS algorithm, adaptive reweighted sampling (ARS) retains the points whose PLS regression coefficients have larger absolute values as a new subset and removes the points with smaller weights, and a new PLS model is then fitted on that subset. After multiple runs, the wavelengths in the subset whose PLS model has the smallest root mean square error of cross-validation (RMSECV) are selected as the characteristic wavelengths.

This project uses feature selection with the competitive adaptive reweighted sampling method and then builds a LightGBM regression model on the selected features.

2. Data acquisition

The modeling data comes from the Internet (compiled by the author of this project). The statistics of the data items are as follows:

The data details are as follows (partial display):

3. Data preprocessing

3.1 View data with Pandas tools

Use the head() method of the Pandas tool to view the first five rows of data:

The key code is as follows:
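A minimal sketch of this step, assuming the data is stored in an Excel file named data.xlsx (the actual file name is not shown in the article):

import pandas as pd

df = pd.read_excel('data.xlsx')  # hypothetical file name; adjust to the actual data file
print(df.head())                 # first five rows of the data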

3.2 Check missing data

Use the info() method of the Pandas tool to view data information:

As can be seen from the output above, there are 9 variables in total, 1,000 rows of data, and no missing values.

The key code is as follows:
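A minimal sketch of the missing-value check, reusing the df DataFrame loaded above:

df.info()  # column names, non-null counts and dtypes; 1000 non-null entries per column means no missing values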

 

3.3 Data descriptive statistics

Use the describe() method of the Pandas tool to view the mean, standard deviation, minimum, quantiles, and maximum of the data.

The key code is as follows:
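A minimal sketch of the descriptive statistics step:

print(df.describe())  # count, mean, std, min, 25%/50%/75% quantiles and max for each numeric column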

 

4. Exploratory Data Analysis

4.1 Histogram of the y variable

Use the hist() method of the Matplotlib tool to draw a histogram:

As can be seen from the figure above, the y variable is mainly concentrated between -400 and 400.  

4.2 Correlation analysis

As can be seen from the figure above, the larger the absolute value of a coefficient, the stronger the correlation; a positive value indicates a positive correlation and a negative value indicates a negative correlation.
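The correlation code is not shown in the article; below is a minimal sketch of how such a correlation heatmap is commonly drawn with Pandas and Seaborn (the use of Seaborn here is an assumption, not confirmed by the article):

import matplotlib.pyplot as plt
import seaborn as sns

corr = df.corr()  # Pearson correlation matrix of all variables, including y
plt.figure(figsize=(8, 6))
sns.heatmap(corr, annot=True, fmt='.2f', cmap='coolwarm')  # annotate each cell with its coefficient
plt.show()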

5. Feature engineering

5.1 Establish feature data and label data

The key code is as follows:
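A minimal sketch, assuming the label column is named y (as in the histogram code at the end of the article) and all remaining columns are used as features:

X = df.drop(columns=['y'])  # feature data: every column except the target
y = df['y']                 # label data: the target variable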

5.2 CARS for feature selection
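The article does not show its CARS code. Below is a simplified sketch of the procedure described in Section 1, built on scikit-learn's PLSRegression and cross_val_score. The function name cars_selection, its parameters (n_runs, sample_ratio, and so on) and the output file name are illustrative assumptions, and the adaptive reweighted sampling step is simplified to keeping the variables with the largest absolute PLS coefficients, so this is not the author's exact implementation.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score


def cars_selection(X, y, n_runs=50, n_components=5, cv=5, sample_ratio=0.9, seed=0):
    # Simplified CARS: each Monte Carlo run fits a PLS model on a random subset of
    # samples, drops variables with small |regression coefficients| following an
    # exponentially decreasing schedule, and returns the variable subset with the
    # lowest cross-validated RMSE (RMSECV).
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n_samples, n_vars = X.shape
    retained = np.arange(n_vars)                       # start from all variables
    best_subset, best_rmsecv = retained.copy(), np.inf
    # fraction of variables kept in run i, decaying from 1 towards 2 / n_vars
    ratios = np.exp(np.linspace(0.0, np.log(2.0 / n_vars), n_runs))

    for i in range(n_runs):
        # Monte Carlo sampling of calibration samples
        cal = rng.choice(n_samples, int(sample_ratio * n_samples), replace=False)
        pls = PLSRegression(n_components=min(n_components, len(retained)))
        pls.fit(X[np.ix_(cal, retained)], y[cal])
        weights = np.abs(pls.coef_).ravel()            # |PLS regression coefficients|

        n_keep = min(len(retained), max(2, int(np.ceil(ratios[i] * n_vars))))
        retained = retained[np.argsort(weights)[::-1][:n_keep]]   # keep the largest weights

        # RMSECV of a PLS model restricted to the retained variables
        cv_model = PLSRegression(n_components=min(n_components, n_keep))
        mse = -cross_val_score(cv_model, X[:, retained], y, cv=cv,
                               scoring='neg_mean_squared_error').mean()
        rmsecv = float(np.sqrt(mse))
        if rmsecv < best_rmsecv:
            best_rmsecv, best_subset = rmsecv, retained.copy()

    return best_subset


# Usage (variable names follow the earlier sketches): keep only the selected columns
selected = cars_selection(X.values, y.values)
print('Number of selected features:', len(selected))
X_selected = X.iloc[:, selected]
X_selected.join(y).to_excel('cars_selected_features.xlsx', index=False)  # hypothetical output file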

 

The number of features obtained:

Partial display of the data after feature selection (data saved to Excel):

 

5.3 Dataset splitting

Use the train_test_split() method to split the data into an 80% training set and a 20% test set. The key code is as follows:
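A minimal sketch of the split, reusing the CARS-selected feature matrix X_selected and the label series y from the sketches above:

from sklearn.model_selection import train_test_split

# 80% training set, 20% test set; the random_state value is illustrative
X_train, X_test, y_train, y_test = train_test_split(X_selected, y, test_size=0.2, random_state=42)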

6. Build the LightGBM regression model

The LightGBM regression algorithm (LGBMRegressor) is a gradient-boosted decision tree method; here it is used to predict the continuous target variable y.

6.1 Build the model
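The model-building code is not shown in the article; a minimal sketch using lightgbm's LGBMRegressor follows (the hyperparameter values here are illustrative, not the author's actual settings):

from lightgbm import LGBMRegressor

# illustrative hyperparameters; the article does not state the actual settings
model = LGBMRegressor(n_estimators=100, learning_rate=0.1, random_state=42)
model.fit(X_train, y_train)      # fit on the 80% training split
y_pred = model.predict(X_test)   # predictions used in the evaluation section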

7. Model Evaluation

7.1 Evaluation indicators and results

The evaluation indicators mainly include the explained variance score, mean absolute error, mean squared error, the R-squared value, and so on.

 

It can be seen from the above table that the R-squared value is 0.9076, which indicates a good model.

The key code is as follows:   

7.2 Comparison chart of actual values and predicted values

 

From the figure above, it can be seen that the fluctuations of the actual values and the predicted values are basically consistent, and the model fits the data well.
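The article only shows the figure; a minimal sketch of how such a comparison chart can be drawn with Matplotlib, reusing y_test and y_pred from the sketches above:

import matplotlib.pyplot as plt

plt.figure(figsize=(10, 5))
plt.plot(range(len(y_test)), y_test.values, label='actual value')   # test-set ground truth
plt.plot(range(len(y_pred)), y_pred, label='predicted value')       # model predictions
plt.legend()
plt.title('LightGBM regression: actual vs. predicted values')
plt.show()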

8. Conclusion and Outlook

To sum up, this article uses the competitive adaptive reweighted sampling method for feature variable selection to construct a LightGBM regression model, and the results show that the proposed model performs well. The model can be used for everyday forecasting tasks of this kind.

# Histogram of the distribution of the y variable
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 5))  # set the canvas size
plt.rcParams['font.sans-serif'] = 'SimHei'  # enable Chinese font display
plt.rcParams['axes.unicode_minus'] = False  # keep the minus sign from rendering as a square when saving figures
data_tmp = df['y']  # select the samples of the y variable
# Draw the histogram; bins controls the number of intervals ('auto' picks it automatically), color sets the bar fill colour
plt.hist(data_tmp, bins='auto', color='g')
plt.show()

 
# ******************************************************************************

# The materials needed for this practical machine learning project are as follows:

# Project description:

# Link: https://pan.baidu.com/s/1c6mQ_1YaDINFEttQymp2UQ

# Extraction code: thgk

# ******************************************************************************
 
 
from sklearn.metrics import explained_variance_score, mean_absolute_error, mean_squared_error, r2_score

print('LightGBM regression model - R-squared: {}'.format(round(r2_score(y_test, y_pred), 4)))
print('LightGBM regression model - mean squared error: {}'.format(round(mean_squared_error(y_test, y_pred), 4)))
print('LightGBM regression model - explained variance score: {}'.format(round(explained_variance_score(y_test, y_pred), 4)))
print('LightGBM regression model - mean absolute error: {}'.format(round(mean_absolute_error(y_test, y_pred), 4)))

For more practical projects, see the machine learning project collection:

List of practical machine learning projects


For consultation about the project code or to obtain it, please see the official account below.
