Project Introduction
This project uses TensorFlow 2.x to build a convolutional neural network (CNN) for face recognition (you can even recognize your own face!). The network has a VGG-like structure: convolutional and pooling layers are stacked repeatedly, followed by fully connected layers, and finally a softmax layer maps the output to a probability for each category; the category with the highest probability is the recognition result.
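The VGG-like stacking described above can be sketched in Keras as follows. This is a minimal illustration, not the project's exact configuration: the layer widths, input shape, and number of classes are assumptions for demonstration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg_like(input_shape=(224, 224, 3), num_classes=3):
    """Illustrative VGG-like CNN: repeated Conv+Pool blocks, then dense layers, then softmax."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: convolution followed by pooling
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        # Block 2: repeat the conv + pool pattern with more filters
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        # Flatten, then fully connected layers
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        # Softmax maps the output to one probability per category
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model

model = build_vgg_like()
```

The recognition result is then `argmax` over the softmax probabilities.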
Other projects:
- Fruit and Vegetable Recognition: a fruit recognition project based on a convolutional neural network
- Traffic Sign Recognition: a traffic sign recognition project based on a convolutional neural network
Network structure:
Development environment:
- python==3.7
- tensorflow==2.3
Install Conda and PyCharm
If they are already installed, skip this section.
A sharing link for the installation packages is available in the comments section, covering PyCharm, Anaconda, Miniconda, TeamViewer (remote assistance), and FormatFactory.
Install Conda
You can choose either Anaconda or Miniconda; the installation steps are identical. However, Miniconda is strongly recommended because it is lightweight: it takes up far less disk space and installs much faster than Anaconda!
Click through the installer, and on the options page check every box; otherwise the environment variables will not be added.
Install PyCharm
Click through the installer and check all the options on this page.
Install the remote assistance software
If you need remote assistance, please install TeamViewer in advance.
Dataset:
Image categories:
'COVID-19': 'COVID-19 pneumonia', 'NORMAL': 'Normal', 'Viral Pneumonia': 'Viral Pneumonia'
Note: The dataset comes from the Internet, so the pictures of my own face (Li Peiyu) shown for the face category have been pixelated.
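The category names above can be kept in a small mapping from dataset folder name to display label. This helper is hypothetical (the project's own code may organize labels differently), but it shows the idea:

```python
# Hypothetical mapping from dataset folder names to human-readable display labels
CLASS_NAMES = {
    "COVID-19": "COVID-19 pneumonia",
    "NORMAL": "Normal",
    "Viral Pneumonia": "Viral Pneumonia",
}

def display_label(folder_name: str) -> str:
    """Look up the display label for a dataset folder; fall back to the raw name."""
    return CLASS_NAMES.get(folder_name, folder_name)
```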
Code debugging
After getting the project, unzip the file; the extracted contents are shown in the figure below:
Step1: Open the project folder
Introduction to each file and code:
Step2: Build a development environment
Create a virtual environment
After typing cmd and pressing Enter, a command terminal opens. Now create the virtual environment.
The command is:
conda create -n tf23_py37 python=3.7
After entering the command and pressing Enter, the following prompt appears; press Enter again to continue:
This creates a virtual environment named "tf23_py37" with Python 3.7, as shown in the figure below:
Activate the virtual environment
Copy this command, enter the command line, and activate the virtual environment we created:
conda activate tf23_py37
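Once the environment is active, you can sanity-check that the interpreter is the expected Python version before moving on. This small helper is hypothetical, not part of the project's code:

```python
import sys

def check_python_version(expected=(3, 7)):
    """Return True if the running interpreter's major.minor version matches `expected`."""
    return sys.version_info[:2] == expected

# In the tf23_py37 environment this should print True
print(check_python_version())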
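Once the environment is active, you can sanity-check that the interpreter is the expected Python version before moving on. This small helper is hypothetical, not part of the project's code:

```python
import sys

def check_python_version(expected=(3, 7)):
    """Return True if the running interpreter's major.minor version matches `expected`."""
    return sys.version_info[:2] == expected

# In the tf23_py37 environment this should report True
print(check_python_version())
```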
Install third-party dependencies
Next, install the third-party libraries the project depends on, such as tensorflow, matplotlib, and pyqt5. All of the dependencies are listed in the requirements.txt file.
To install them, enter the following command in the command terminal:
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
Note: in the installation command above, the address after "-i" is a domestic (Chinese) mirror source. If the installation fails with a message that the specified mirror does not have the required version of some library, try a different mirror source.
Commonly used mirror sources in China:
Tsinghua: https://pypi.tuna.tsinghua.edu.cn/simple
Aliyun: https://mirrors.aliyun.com/pypi/simple/
USTC (University of Science and Technology of China): https://pypi.mirrors.ustc.edu.cn/simple/
HUST (Huazhong University of Science and Technology): http://pypi.hustunique.com/
Shandong University of Technology: http://pypi.sdutlinux.org/
Douban: http://pypi.douban.com/simple/
After the installation is successful, as shown below:
Open the project configuration environment
If the following prompt appears:
Select the interpreter (the virtual environment we created above)
After opening PyCharm, click the interpreter selector in the lower-right corner of the window, then choose "Add Interpreter".
Follow the prompts in the picture to add the required Python interpreter; when the lower-right corner of PyCharm displays as shown in the figure below, the setup has succeeded:
Train the neural network model
Open the project's "train_cnn.py" and follow the prompts in the picture:
The effect of successful operation is shown in the figure below:
Once it is running, all that remains is to wait. Training time depends on your computer's configuration, ranging from a few minutes to several hours. If training completes without errors, it was successful.
After the training is successful, the "cnn_fv.h5" file will be generated in the models folder.
After training succeeds, the "results_cnn.png" picture appears in the results folder; it records how accuracy and loss changed during training.
Following the same steps used to run "train_cnn.py", run "train_mobilenet.py" to train the MobileNet network. Its results can then be compared against the CNN's, which is very helpful when writing up your article!
Test
After training the model, we test it (evaluate its performance). Open "test_model.py"
Follow the instructions in the picture.
After it runs successfully, a "heatmap_cnn.png" heat map is generated in the results folder (it shows the prediction accuracy for each category), as shown below:
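The heat map is essentially a confusion matrix: for each true category, it counts how often each category was predicted. A minimal sketch with made-up labels (not the project's actual evaluation code):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Rows are true classes, columns are predicted classes; each cell is a count."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels for three classes (0, 1, 2)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)

# Per-class accuracy: correct predictions on the diagonal, divided by row totals
per_class_acc = cm.diagonal() / cm.sum(axis=1)
```

Plotting `cm` with `matplotlib.pyplot.imshow` gives a heat map like "heatmap_cnn.png".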
Predict
PyQt5 GUI interface
After training and testing, we have neural network weights that can be used for face recognition. Next, we predict on the pictures to be recognized. Open the "windows.py" code and run it directly; the result is as follows:
After it runs successfully, we get a PyQt5 GUI, and we can use it to make predictions on the pictures our project is meant to recognize!
Flask web display
Open the code "app.py" and click Run directly, the result is as follows:
Click the link printed in the console, or open your browser and enter the URL http://127.0.0.1:5000 to reach the Flask front-end page. There you can interact with the model yourself: upload a picture and click Predict. The model outputs a confidence for each category, displayed on the page in descending order, and the category shown at the top is the model's final recognition result.
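The "sort confidences in descending order and take the top category" behavior can be sketched as a tiny Flask route. This is a hypothetical stand-in for the project's app.py: the route name and the hard-coded confidences are illustrative, and a real app would compute them with `model.predict(...)` on the uploaded picture.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical confidences; a real app would get these from model.predict(...)
FAKE_CONFIDENCES = {"COVID-19": 0.08, "NORMAL": 0.81, "Viral Pneumonia": 0.11}

@app.route("/predict")
def predict():
    # Sort categories by confidence, largest first, as the web page displays them
    ranked = sorted(FAKE_CONFIDENCES.items(), key=lambda kv: kv[1], reverse=True)
    # The top-ranked category is the final recognition result
    return jsonify({"ranking": ranked, "result": ranked[0][0]})
```

Starting this app with `app.run(host="127.0.0.1", port=5000)` serves it at the same URL as the project's app.py; it can also be exercised without a server via Flask's test client.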