Python implementation of the Harris Hawks Optimization (HHO) algorithm to optimize a random forest classification model (RandomForestClassifier): a hands-on project

Explanation: This is a hands-on machine learning project (with data + code + documentation + video explanation). If you need the data, code, documentation, and video explanation, go directly to the end of the article to get them.



 


1. Project background

In 2019, Heidari et al. proposed Harris Hawks Optimization (HHO), which has strong global search capability and the advantage of requiring few parameters to tune.

This project uses the HHO (Harris Hawks Optimization) algorithm to search for the optimal parameter values with which to optimize a random forest classification model.

2. Data acquisition

The modeling data comes from the Internet (compiled by the author of this project). The statistics of the data items are as follows:

 The data details are as follows (partial display):

3. Data preprocessing

3.1 View data with Pandas tools

Use the head() method of the Pandas tool to view the first five rows of data:

The key code is as follows:
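A minimal sketch of this step (the file name data.csv is an assumption; substitute the project's actual data file):

import pandas as pd  # import the Pandas tool

df = pd.read_csv('data.csv')  # load the modeling data (file name assumed)
print(df.head())  # view the first five rows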

 

3.2 Checking for missing data

Use the info() method of the Pandas tool to view data information:

As can be seen from the above figure, there are 11 variables in total and 1,000 rows of data, with no missing values.

The key code is as follows:
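A sketch of this step, reusing the df DataFrame loaded in the sketch above:

df.info()  # variable names, data types, and non-null counts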

3.3 Data descriptive statistics

Use the describe() method of the Pandas tool to view the mean, standard deviation, minimum, quantiles, and maximum of the data.

The key code is as follows:
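A sketch of this step, again reusing the df DataFrame from the sketches above:

print(df.describe())  # mean, standard deviation, minimum, quantiles, and maximum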

 

4. Exploratory Data Analysis

4.1 y variable histogram

Use the plot() method of the Matplotlib tool to draw a histogram:
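A sketch of this step (the label column is assumed to be named y, as in the section title):

import matplotlib.pyplot as plt  # import the Matplotlib tool

df['y'].value_counts().plot(kind='bar')  # counts of each class of the y variable
plt.title('y variable histogram')
plt.show()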

4.2 Distribution histogram of the x1 variable for samples with y=1

Use the hist() method of the Matplotlib tool to draw a histogram:
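A sketch of this step, assuming the columns are named y and x1 as in the section title:

import matplotlib.pyplot as plt

df.loc[df['y'] == 1, 'x1'].hist(bins=20)  # distribution of x1 for samples with y = 1
plt.title('x1 distribution for y = 1')
plt.show()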

 

4.3 Correlation analysis

 

As can be seen from the figure above, the larger the absolute value, the stronger the correlation; a positive value indicates a positive correlation, and a negative value indicates a negative correlation.
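A sketch of how such a correlation figure can be produced (the original post does not state which plotting call was used; seaborn's heatmap is an assumed choice), reusing the df DataFrame from the sketches above:

import seaborn as sns
import matplotlib.pyplot as plt

corr = df.corr()  # pairwise correlation coefficients between the variables
sns.heatmap(corr, annot=True, cmap='coolwarm')  # visualize the correlation matrix
plt.show()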

5. Feature Engineering

5.1 Establish feature data and label data

The key code is as follows:
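A minimal sketch of this step (the label column name y is assumed):

X = df.drop('y', axis=1)  # feature data: all columns except the label
y = df['y']  # label data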

5.2 Dataset splitting

Use the train_test_split() method to split the data into an 80% training set and a 20% test set. The key code is as follows:
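A sketch of this step (the random_state value is an assumption used to make the split reproducible):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)  # 80% training set, 20% test set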

6. Constructing the random forest classification model optimized by the HHO (Harris Hawks Optimization) algorithm

The HHO (Harris Hawks Optimization) algorithm is mainly used here to optimize the random forest classification algorithm for object classification.

6.1 Optimal parameters found by the HHO (Harris Hawks Optimization) algorithm

The key code is as follows:
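The full HHO search loop is lengthy and is not reproduced here. Below is only an illustrative sketch of the kind of fitness evaluation such a hyperparameter search relies on; the function name rf_fitness and the mapping from the candidate vector to n_estimators and max_depth are assumptions, not the project's exact code:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score


def rf_fitness(X_train, y_train, X_test, y_test, candidate):
    # decode the candidate solution into random forest hyperparameters (mapping assumed)
    n_estimators = int(candidate[0])
    max_depth = int(candidate[1])
    model = RandomForestClassifier(n_estimators=n_estimators,
                                   max_depth=max_depth,
                                   random_state=42)
    model.fit(X_train, y_train)
    # HHO minimizes this value, i.e. the error rate on the test set
    return 1 - accuracy_score(y_test, model.predict(X_test))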

Optimization process data for each iteration:

 

Optimal parameters:

 

6.2 Building the model with the optimal parameter values

 
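A sketch of this step; the parameter values below are placeholders, to be replaced by the optimal values found by HHO in the previous step:

from sklearn.ensemble import RandomForestClassifier

best_n_estimators, best_max_depth = 100, 10  # placeholders: substitute the HHO results
model = RandomForestClassifier(n_estimators=best_n_estimators,
                               max_depth=best_max_depth,
                               random_state=42)
model.fit(X_train, y_train)  # fit on the training set
y_pred = model.predict(X_test)  # predict on the test set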

7. Model Evaluation

7.1 Evaluation indicators and results

The evaluation metrics mainly include accuracy, precision, recall, and the F1 score.

It can be seen from the above table that the F1 score is 0.9064, indicating that the model performs well.

The key code is as follows:
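A sketch of this step with scikit-learn, reusing y_test and y_pred from the sketches above:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred))
print('recall   :', recall_score(y_test, y_pred))
print('f1 score :', f1_score(y_test, y_pred))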

 

7.2 Classification report

 

As can be seen from the above figure, the F1 score of class 0 is 0.90 and the F1 score of class 1 is 0.91.
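A sketch of how such a classification report can be produced with scikit-learn, reusing y_test and y_pred from the sketches above:

from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred))  # per-class precision, recall, and F1 score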

7.3 Confusion Matrix

As can be seen from the above figure, 11 samples whose true label is 0 were predicted as a class other than 0, and 8 samples whose true label is 1 were predicted as a class other than 1; the overall prediction accuracy is good.
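A sketch of how the confusion matrix can be computed with scikit-learn, reusing y_test and y_pred from the sketches above:

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)  # rows are true labels, columns are predicted labels
print(cm)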

8. Conclusion and Outlook

To sum up, this article uses the HHO (Harris Hawks Optimization) algorithm to find the optimal parameter values of the random forest classification model, builds the classification model with them, and finally shows that the proposed model performs well. This model can be used for prediction of everyday products.

# Levy flight step used inside the HHO algorithm
import math

import numpy as np


def levy_flight(dim, beta=1.5):  # dim: problem dimension; beta: Levy exponent (1.5 is commonly used)
    # Sigma calculation and assignment
    nume = math.gamma(1 + beta) * np.sin(np.pi * beta / 2)  # numerator
    deno = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)  # denominator
    sigma = (nume / deno) ** (1 / beta)  # Sigma assignment
    # Parameters u & v
    u = np.random.randn(dim) * sigma  # random assignment of parameter u
    v = np.random.randn(dim)  # random assignment of parameter v
    # Step calculation
    step = u / abs(v) ** (1 / beta)  # Levy step
    LF = 0.01 * step  # LF assignment
    return LF


# ******************************************************************************

# The materials needed for this hands-on machine learning project are available below:

# Project description:

# Link: https://pan.baidu.com/s/1c6mQ_1YaDINFEttQymp2UQ

# Extraction code: thgk

# ******************************************************************************


# Define the objective (fitness) function
def Fun(X_train, y_train, X_test, y_test, x, opts):
    # Parameters
    alpha = 0.99  # weight of the error rate
    beta = 1 - alpha  # weight of the selected-feature ratio
    # Number of original features
    max_feat = len(x)
    # Number of selected features
    num_feat = np.sum(x == 1)
    # Handle the case where no feature is selected
    if num_feat == 0:
        cost = 1  # worst possible cost
    else:
        # Call the error-rate calculation function
        error = error_rate(X_train, y_train, X_test, y_test, x, opts)
        # Objective function calculation
        cost = alpha * error + beta * (num_feat / max_feat)

    return cost  # return the fitness value

For more hands-on projects, see the list of machine learning practical projects:

List of machine learning practical projects


Origin blog.csdn.net/weixin_42163563/article/details/130485705