MATLAB simulation of an intelligent garbage classification system based on a deep learning network, with a GUI interface

Table of contents

1. Algorithm simulation effect

2. Summary of theoretical knowledge involved in algorithms

3. MATLAB core program

4. Obtain the complete algorithm code file


1. Algorithm simulation effect

The MATLAB 2022a simulation results are as follows:

2. Summary of theoretical knowledge involved in algorithms

       The rapid growth in the amount of garbage, together with the complexity and diversity of the materials it contains, has caused serious environmental pollution and resource waste. Recycling can reduce waste, but manual pipeline garbage sorting involves harsh working conditions, high labor intensity, and low sorting efficiency. An intelligent garbage classification system is an application of deep learning networks: by learning from a large amount of training data, it can classify garbage automatically. Such systems are mainly based on convolutional neural networks (CNN) or recurrent neural networks (RNN), which learn from large amounts of training data and process input images or text to identify and classify garbage automatically.

Data preprocessing

       First, a large number of garbage images or text samples need to be labeled, dividing the garbage into categories such as recyclable, food waste, hazardous, and other garbage. These data are then preprocessed with operations such as grayscale conversion, scaling, and denoising to improve the learning efficiency of the neural network.
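The preprocessing steps above can be sketched as follows (a minimal illustration using Image Processing Toolbox functions; the file name and sizes are placeholders, not part of the original project):

```matlab
% Preprocessing sketch: grayscale, resize, and denoise one image.
% 'sample.jpg' is a hypothetical file name used only for illustration.
I = imread('sample.jpg');
G = rgb2gray(I);            % grayscale conversion
R = imresize(G, [224 224]); % scale to the network input size
D = medfilt2(R, [3 3]);     % median filtering to suppress noise
```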

Convolutional Neural Network (CNN)

      CNN is a neural network used to process image data. In intelligent garbage classification, CNN can be used to extract features and classify garbage images. Through multiple convolutional layers, pooling layers and fully connected layers, CNN can automatically learn the features in the image and output the probability value of each category in the last layer.

      During the training process, the cross-entropy loss function is used to measure the difference between the predicted probability and the actual label, and the back-propagation algorithm is used to optimize the network parameters.
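As a sketch of such a CNN in MATLAB (Deep Learning Toolbox), the layer sizes and class count below are illustrative assumptions, not the network used in this project:

```matlab
% Illustrative small CNN for image classification; the final
% classificationLayer applies the cross-entropy loss internally.
layers = [
    imageInputLayer([224 224 3])
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(4)   % e.g. four garbage categories
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm', 'MaxEpochs', 10, 'InitialLearnRate', 1e-3);
% net = trainNetwork(imds, layers, options);  % imds: a labeled imageDatastore
```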

Recurrent Neural Network (RNN)

       RNN is a neural network used to process sequence data, and is suitable for time series data such as text. In intelligent garbage classification, RNN can be used to classify garbage names, descriptions, etc. By inputting sequence data into the RNN model, the output results for each category can be obtained. During the training process, the cross-entropy loss function or the logarithmic loss function can be used to measure the difference between the predicted results and the actual labels, and the back-propagation algorithm can be used to optimize the network parameters.
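A text classifier of this kind could be sketched with an LSTM (a common RNN variant); all sizes below are illustrative assumptions:

```matlab
% Illustrative LSTM classifier for garbage-name text sequences.
inputSize  = 50;   % dimension of each word embedding (assumed)
numHidden  = 80;   % LSTM hidden units (assumed)
numClasses = 4;    % e.g. four garbage categories
layers = [
    sequenceInputLayer(inputSize)
    lstmLayer(numHidden, 'OutputMode', 'last')  % keep only the final state
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];                       % cross-entropy loss
```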

Cross-entropy loss function

      The cross-entropy loss function is used to measure the difference between the predicted probability and the actual label. In multi-classification problems, the mathematical formula is:

L(y, y_pred) = -sum(y * log(y_pred))

Among them, y is the actual label (one-hot encoded), y_pred is the predicted probability, and log is the natural logarithm function.
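A quick numeric check of this loss for a three-class example:

```matlab
% Cross-entropy loss for one sample with three classes.
y      = [0 1 0];             % one-hot actual label
y_pred = [0.2 0.7 0.1];       % predicted probabilities
L = -sum(y .* log(y_pred));   % = -log(0.7), about 0.3567
```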

Backpropagation algorithm

       The backpropagation algorithm is used to optimize the parameters of the neural network. By computing the gradient of the loss function with respect to each parameter, the parameters are updated so as to minimize the loss. For parameters w and b with learning rate alpha, the gradient-descent update rule is:

grad_w = (1/m) * sum((h(x) - y) * x)
grad_b = (1/m) * sum(h(x) - y)
w = w - alpha * grad_w
b = b - alpha * grad_b

       Among them, m is the number of samples, x is the input data, y is the actual label, h(x) is the model prediction, alpha is the learning rate, and grad_w and grad_b are the gradients of the loss with respect to the parameters w and b.
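One gradient-descent step can be written out concretely for a linear model h(x) = X*w + b; the data below is synthetic and used only for illustration:

```matlab
% One gradient-descent step for a linear model with squared-error loss.
% All values are synthetic, for illustration only.
X = [1 2; 3 4; 5 6];    % m-by-n input matrix
y = [1; 2; 3];          % m-by-1 labels
w = zeros(2, 1); b = 0; alpha = 0.01;
m  = size(X, 1);
h  = X * w + b;                 % predictions h(x)
gw = (1/m) * (X' * (h - y));    % gradient of the loss w.r.t. w
gb = (1/m) * sum(h - y);        % gradient of the loss w.r.t. b
w  = w - alpha * gw;            % parameter updates
b  = b - alpha * gb;
```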

       By continuously iterating the training and testing process, the intelligent garbage classification system can gradually improve the classification accuracy and robustness, and realize intelligent classification of garbage.

3. MATLAB core program

Name1   = get(handles.edit7,  'String');              % dataset folder path
NEpochs = str2double(get(handles.edit8,  'String'));  % number of training epochs
NMB     = str2double(get(handles.edit9,  'String'));  % mini-batch size
LR      = str2double(get(handles.edit10, 'String'));  % learning rate
Rate    = str2double(get(handles.edit11, 'String'));  % training-set proportion


% Load the image dataset with imageDatastore
Dataset = imageDatastore(Name1, 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
% Split the dataset into training, validation and test sets
[Training_Dataset, Validation_Dataset, Testing_Dataset] = splitEachLabel(Dataset, Rate, (1-Rate)/2, (1-Rate)/2);
% Load the pretrained GoogleNet network
load googlenet.mat
 
 
% Get the size of the network's input layer
Input_Layer_Size = net.Layers(1).InputSize(1:2);

% Resize the image datasets to the pretrained network's input size
Resized_Training_Dataset   = augmentedImageDatastore(Input_Layer_Size, Training_Dataset);
Resized_Validation_Dataset = augmentedImageDatastore(Input_Layer_Size, Validation_Dataset);
Resized_Testing_Dataset    = augmentedImageDatastore(Input_Layer_Size, Testing_Dataset);

% Get the names of the feature-learning layer and the classifier layer
Feature_Learner   = net.Layers(142).Name;
Output_Classifier = net.Layers(144).Name;
% Count the number of classes in the dataset
Number_of_Classes = numel(categories(Training_Dataset.Labels));
% Create a new fully connected feature-learning layer
New_Feature_Learner = fullyConnectedLayer(Number_of_Classes, ...
    'Name', 'Coal Feature Learner', ...
    'WeightLearnRateFactor', 10, ...
    'BiasLearnRateFactor', 10);
% Create a new classifier layer
New_Classifier_Layer = classificationLayer('Name', 'Coal Classifier');
% Get the full network architecture
Network_Architecture = layerGraph(net);
% Replace the feature-learning and classifier layers in the network
New_Network = replaceLayer(Network_Architecture, Feature_Learner, New_Feature_Learner);
New_Network = replaceLayer(New_Network, Output_Classifier, New_Classifier_Layer);
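The modified network would then be trained and evaluated along these lines (a sketch; it reuses the GUI inputs NEpochs, NMB, and LR read above, and the exact training options are an assumption, not the author's verbatim code):

```matlab
% Sketch: train the modified GoogleNet and measure test accuracy.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', NMB, ...
    'MaxEpochs', NEpochs, ...
    'InitialLearnRate', LR, ...
    'ValidationData', Resized_Validation_Dataset);
net2     = trainNetwork(Resized_Training_Dataset, New_Network, options);
Pred     = classify(net2, Resized_Testing_Dataset);
Accuracy = mean(Pred == Testing_Dataset.Labels);
```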

4. Obtain the complete algorithm code file

Origin blog.csdn.net/hlayumi1234567/article/details/133850161