Matlab simulation of a driver phone-call behavior early warning system based on the Yolov2 deep learning network, with GUI interface

Table of contents

1. Algorithm simulation effect

2. Summary of theoretical knowledge involved in algorithms

2.1 Yolov2

2.2 System architecture and working principle

3. MATLAB core program

4. Obtain the complete algorithm code file


1. Algorithm simulation effect

The MATLAB 2022a simulation results are as follows:

2. Summary of theoretical knowledge involved in algorithms

       As the number of cars on the road continues to increase, traffic safety has become an increasingly prominent issue. Among the causes, drivers holding mobile phones while driving is an important source of traffic accidents. To reduce the rate of such accidents, this article proposes a driver hand-held phone behavior early warning system based on the Yolov2 deep learning network. The system monitors the driver's behavior in real time and issues an early warning signal when the driver is detected holding a phone, reminding the driver to concentrate and helping to ensure driving safety.

2.1 Yolov2


      Yolov2 is an object detection algorithm built on a convolutional neural network (CNN). The model automatically learns and extracts image features in order to detect target objects in an image. For face detection, Yolov2 learns facial features automatically and can accurately locate the face in the image.
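In MATLAB, a Yolov2 detection network is typically assembled by attaching a detection head to a CNN backbone. The following is a minimal sketch, assuming a small custom backbone and a single "phone" class; the anchor box sizes are placeholder values (the anchor concept is explained below), and the original post instead loads an already trained detector from model.mat.

imageSize   = [224 224 3];
numClasses  = 1;                           % one class: hand-held phone
anchorBoxes = [43 59; 87 116; 145 173];    % placeholder anchor sizes in pixels

% Small example backbone; any pretrained CNN (e.g. ResNet-18) could be used instead.
backbone = [
    imageInputLayer(imageSize,'Name','input')
    convolution2dLayer(3,16,'Padding','same','Name','conv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    maxPooling2dLayer(2,'Stride',2,'Name','pool1')
    convolution2dLayer(3,32,'Padding','same','Name','conv2')
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')];

% Append the Yolov2 detection sub-network after the 'relu2' feature layer.
lgraph = yolov2Layers(imageSize,numClasses,anchorBoxes,layerGraph(backbone),'relu2');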

       The core idea of the Yolov2 algorithm is to use "anchors" (anchor boxes) to model faces of different sizes and aspect ratios. The algorithm first presets a set of anchors over the image, and then decides whether a face is present, and where, by computing the similarity between each anchor and the ground-truth face.
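In practice, suitable anchor sizes are usually estimated from the labeled training boxes rather than chosen by hand. A minimal sketch using MATLAB's estimateAnchorBoxes follows; the table gTruth (image file names plus phone bounding boxes, for example exported from the Image Labeler app) is an assumed input, not part of the original post.

% Estimate anchor boxes from labeled training data (gTruth is hypothetical).
blds = boxLabelDatastore(gTruth(:,2:end));           % keep only the bounding-box columns
numAnchors = 3;
[anchorBoxes,meanIoU] = estimateAnchorBoxes(blds,numAnchors);
disp(anchorBoxes)                                     % one [height width] row per anchor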

The mathematical formulation of the Yolov2 model mainly includes the following parts:

(1) Anchor similarity calculation: For each anchor, compute its similarity to the ground-truth face. This is usually done with a CNN-based deep learning model. The formula is as follows:

         A(i,j) = f(I,i,j)    (1)

where A(i,j) denotes the similarity between anchor (i,j) and the ground-truth face, and f(I,i,j) denotes the score computed for anchor (i,j) on image I.

(2) Face position regression: Based on the anchor similarities, methods such as non-maximum suppression (NMS) are used to regress the final face position. The formula is as follows:

         B = argmax A * I    (2)

where B denotes the regressed face position, A denotes the similarity matrix between the anchors and the ground-truth face, and I denotes the image.
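Non-maximum suppression keeps the highest-scoring box and discards overlapping, lower-scoring candidates. A minimal sketch using MATLAB's selectStrongestBbox is shown below with made-up example boxes; the detect() call in Section 3 already performs this step internally.

% Example candidate detections: [x y width height] and their scores (made-up values).
bboxes = [100 80 60 60;
          105 85 60 60;     % overlaps the first box
          300 40 50 70];
scores = [0.90; 0.75; 0.60];

% Suppress weaker boxes that overlap a stronger one by more than the threshold.
[selectedBboxes,selectedScores] = selectStrongestBbox(bboxes,scores, ...
    'OverlapThreshold',0.3);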

2.2 System architecture and working principle

         This system consists of three main parts: an image acquisition module, a Yolov2 deep learning network module, and an early warning output module. The working principle is as follows:

       Image acquisition module: collects the driver's facial image in real time through the in-vehicle camera and passes the image to the Yolov2 deep learning network module for processing. In the simulation, test images are used in place of a live camera feed.
       Yolov2 deep learning network module: the core part of the system, responsible for feature extraction and target detection on the input image. Specifically, the module first uses a convolutional neural network (CNN) to extract features from the input image, and then applies the Yolov2 algorithm to the extracted features to determine whether the driver is holding a phone (a training sketch is given after this list).
        Early warning output module: when the Yolov2 deep learning network module detects the driver holding a phone, this module issues an early warning signal to remind the driver to concentrate. The warning signal can be output in several ways, such as text prompts, sound, or lights; on the GUI interface, a text prompt is used.
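The post ships an already trained detector in model.mat and does not include the training script. The following is a minimal training sketch under stated assumptions: trainingData is a datastore (or table) pairing images with phone bounding boxes, and lgraph is a Yolov2 layer graph such as the one sketched in Section 2.1.

% Train a Yolov2 detector and save it for the GUI to load later (hypothetical script).
options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-3, ...
    'MiniBatchSize',8, ...
    'MaxEpochs',20);
detector = trainYOLOv2ObjectDetector(trainingData,lgraph,options);
save model.mat detector            % the GUI callback loads this file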

3. MATLAB core program

....................................................................
% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global im;
global Predicted_Label;
cla(handles.axes1,'reset')
 
axes(handles.axes1);
set(handles.edit2,'string',num2str(0));


[filename,pathname]=uigetfile({'*.bmp;*.jpg;*.png;*.jpeg;*.tif'},'Select an image','F:\test');
str=[pathname filename];
% Check whether a file was actually selected; this check is optional and the image could also be read in directly
% im = imread(str);
% imshow(im)
if isequal(filename,0)||isequal(pathname,0)
    warndlg('please select a picture first!','warning');
    return;
else
    im = imread(str);
    imshow(im);
end
 

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% global im;
%  
% 
% 
% [Predicted_Label, Probability] = classify(net, II);
% imshow(im);
global im;
global Predicted_Label;


load model.mat                     % load the trained Yolov2 detector (variable: detector)
img_size= [224,224];               % input size expected by the detector


axes(handles.axes1);

I               = imresize(im,img_size(1:2));
[bboxes,scores] = detect(detector,I,'Threshold',0.15);
flag=0;
if ~isempty(bboxes) % if a target (hand-held phone) was detected
    [Vs,Is] = max(scores);
    flag    = 1;
    I       = insertObjectAnnotation(I,'rectangle',bboxes(Is,:),Vs,LineWidth=2);% draw the strongest detection on the image
end
imshow(I)
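The excerpt ends here, and the flag variable set above is not used in the listing. A plausible continuation, given as an assumption rather than the original code, would use flag to drive the warning text in the GUI edit box handles.edit2:

% Hypothetical continuation (not in the original excerpt): show the warning text.
if flag == 1
    set(handles.edit2,'string','Warning: hand-held phone detected, please focus on driving!');
else
    set(handles.edit2,'string','No phone-holding behavior detected');
end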

4. Obtain the complete algorithm code file



Source: blog.csdn.net/hlayumi1234567/article/details/134868516