Building a Convolutional Neural Network (CNN) for COVID-19 Medical Image Recognition with TensorFlow: A Step-by-Step Tutorial

Project Introduction

This project uses TensorFlow 2.x to build a convolutional neural network (CNN) for COVID-19 medical image recognition. The network has a VGG-like structure: convolutional and pooling layers are stacked repeatedly, followed by fully connected layers, and finally a softmax layer maps the output to a probability for each category; the category with the highest probability is the recognition result.
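As a rough sketch of the VGG-like stack described above (the 224x224 input size and the three output categories are assumptions for illustration, not necessarily the project's exact settings):

import tensorflow as tf

def build_cnn(num_classes=3, input_shape=(224, 224, 3)):
    # Repeated Conv + MaxPool blocks, then fully connected layers and a softmax output
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # The class with the highest softmax probability is taken as the recognition result
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model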

Other projects

Fruit and Vegetable Recognition: Fruit Recognition Project Based on Convolutional Neural Network
Traffic Sign Recognition: Traffic Sign Recognition Project Based on Convolutional Neural Network

Network structure:
insert image description here

Development environment:

  • python==3.7
  • tensorflow==2.3

Install conda and pycharm

If they are already installed, you can skip this section.

Download links for the installers are shared in the comment section, including PyCharm, Anaconda, Miniconda, TeamViewer (remote assistance), and FormatFactory.
insert image description here

Install conda
You can choose either Anaconda or Miniconda; the installation steps are exactly the same. However, Miniconda is strongly recommended because it is lightweight, takes up far less space, and installs much more quickly than Anaconda.

Click through the installer with the default options, but on this page check all of the options; otherwise the environment variables will not be added.
insert image description here

Install PyCharm
Click through the installer, and check all of the options on this page.
insert image description here
Install the remote assistance software
If you need remote assistance, please install TeamViewer in advance.
insert image description here

Dataset:
insert image description here

insert image description here

Image categories:
'COVID-19': 'COVID-19 pneumonia', 'NORMAL': 'Normal', 'Viral Pneumonia': 'Viral Pneumonia'

Note: The dataset is collected from the Internet.
insert image description here
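As a sketch of how such a dataset is typically loaded in TensorFlow 2.x, assuming the images are organized into one sub-folder per class (the folder path, image size, and batch size below are assumptions, not necessarily the project's actual layout):

import tensorflow as tf

# Each class (COVID-19, NORMAL, Viral Pneumonia) sits in its own sub-folder of the data directory
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/train",            # hypothetical path
    image_size=(224, 224),   # resize every image to the network's input size
    batch_size=32,
    label_mode="int")        # integer labels, matching sparse_categorical_crossentropy

print(train_ds.class_names)  # e.g. ['COVID-19', 'NORMAL', 'Viral Pneumonia']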

Code debugging

insert image description here

After obtaining the project, unzip the file. The extracted contents are shown in the following figure:
insert image description here

Step 1: Open the project folder

insert image description here

Introduction to each file and code:
insert image description here

Step 2: Set up the development environment

insert image description here

Create a virtual environment

Type cmd and press Enter to open a command terminal, then start creating the virtual environment:
insert image description here
The command to enter is:

conda create -n tf23_py37 python=3.7

After entering the command and pressing Enter, the following prompt appears; press Enter again to continue:
insert image description here

After pressing Enter, a virtual environment named "tf23_py37" with Python 3.7 is created, as shown in the following figure:

insert image description here

Activate the virtual environment

Copy the following command into the command line to activate the virtual environment we just created:

conda activate tf23_py37

insert image description here

Install third-party dependencies

Next, install the third-party libraries used in the project, such as tensorflow, matplotlib, and pyqt5. All of the dependencies are listed in the requirements.txt file. Start the installation as follows:

Enter the following command in the command terminal.

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

Note: In the command above, the "-i" option is followed by a domestic (Chinese) mirror source address. If the installation fails because the specified mirror does not have the required version of a library, try another mirror source.
Commonly used mirror sources in China:

Tsinghua: https://pypi.tuna.tsinghua.edu.cn/simple
Aliyun (Alibaba Cloud): https://mirrors.aliyun.com/pypi/simple/
USTC (University of Science and Technology of China): https://pypi.mirrors.ustc.edu.cn/simple/
HUST (Huazhong University of Science and Technology): http://pypi.hustunique.com/
Shandong University of Technology: http://pypi.sdutlinux.org/
Douban: http://pypi.douban.com/simple/

After the installation is successful, as shown below:

insert image description here
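To quickly check that the environment is working, you can run a short sanity test inside the activated environment (this only assumes that tensorflow was installed from requirements.txt):

import tensorflow as tf
print(tf.__version__)                      # should print 2.3.x
print(tf.config.list_physical_devices())   # lists the CPU/GPU devices TensorFlow can see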

Open the project and configure the environment

insert image description here

If the following prompt appears:
insert image description here

If the following prompt appears:

insert image description here

Select the interpreter (the virtual environment we created above)

After opening PyCharm, click the interpreter selector in the lower-right corner, select "Add Interpreter", and add a new interpreter.

insert image description here

insert image description here

Follow the prompts in the picture to add the Python interpreter we need. When the lower-right corner of PyCharm displays as shown in the figure below, the configuration has succeeded:

insert image description here

Train the neural network model

Open the project's "train_cnn.py" file and follow the prompts in the picture:

insert image description here
The effect of successful operation is shown in the figure below:

insert image description here
After the run starts successfully, all that is left to do is "wait". Training time varies with each computer's hardware, from a few minutes to several hours. Wait for the run to finish; if no error is reported, the training has succeeded.

After the training is successful, the "cnn_fv.h5" file will be generated in the models folder.

insert image description here
After the training is successful, in the results folder, you can see the "results_cnn.png" picture, which records the changes in accuracy and loss during the training process.
insert image description here
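For reference, this kind of curve is usually produced from the History object returned by model.fit; the following is a sketch under that assumption (the output path and metric names mirror the files mentioned above but are not guaranteed to match train_cnn.py exactly):

import matplotlib.pyplot as plt

def plot_history(history, out_path="results/results_cnn.png"):
    # history is the object returned by model.fit(...)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history.history["accuracy"], label="train accuracy")
    ax1.plot(history.history["val_accuracy"], label="val accuracy")
    ax1.set_title("Accuracy"); ax1.legend()
    ax2.plot(history.history["loss"], label="train loss")
    ax2.plot(history.history["val_loss"], label="val loss")
    ax2.set_title("Loss"); ax2.legend()
    fig.savefig(out_path)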
Following the same steps used for "train_cnn.py", run "train_mobilenet.py" to train the MobileNet neural network. Its results can be compared against the CNN, which is helpful when writing up your work; one way such a MobileNet classifier can be built is sketched below.
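A minimal sketch of building a MobileNet classifier with transfer learning in TensorFlow 2.x (the input size, number of classes, and use of MobileNetV2 with ImageNet weights are assumptions, not necessarily what train_mobilenet.py actually does):

import tensorflow as tf

def build_mobilenet(num_classes=3, input_shape=(224, 224, 3)):
    # Pretrained MobileNetV2 backbone with the classification head removed
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained weights
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model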

Test

After training the model, we test it to evaluate its performance. Open "test_model.py":

insert image description here
Follow the instructions in the picture.

After running successfully, a "heatmap_cnn.png" heat map will be generated under the results folder (you can see the prediction accuracy of each category), as shown below:
insert image description here
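Such a per-class heat map is typically a confusion matrix; the sketch below shows one way to build it from model predictions (the model, the test_ds dataset, and the output path are assumptions, not necessarily how test_model.py is written):

import numpy as np
import matplotlib.pyplot as plt

def plot_confusion(model, test_ds, class_names, out_path="results/heatmap_cnn.png"):
    # Collect true and predicted labels over the test set
    y_true, y_pred = [], []
    for images, labels in test_ds:
        probs = model.predict(images)
        y_pred.extend(np.argmax(probs, axis=1))
        y_true.extend(labels.numpy())
    # Build the confusion matrix: rows are true classes, columns are predictions
    n = len(class_names)
    cm = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    fig, ax = plt.subplots()
    im = ax.imshow(cm, cmap="Blues")
    ax.set_xticks(range(n)); ax.set_xticklabels(class_names, rotation=45)
    ax.set_yticks(range(n)); ax.set_yticklabels(class_names)
    for i in range(n):
        for j in range(n):
            ax.text(j, i, cm[i, j], ha="center", va="center")
    ax.set_xlabel("Predicted"); ax.set_ylabel("True")
    fig.colorbar(im)
    fig.tight_layout()
    fig.savefig(out_path)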

Predict

PyQt5 GUI interface

After training and testing, we have obtained neural network weights that can be used for COVID-19 image recognition. Next, we predict on the images we want to recognize. Open the "windows.py" code and click Run directly; the result is as follows:

insert image description here

After running successfully, we get a PyQt5 GUI interface, through which we can predict the images this project is meant to recognize. The core prediction step behind the GUI is sketched below.
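A sketch of that prediction step, independent of the GUI (the class names, model path, image size, and the 1/255 pixel scaling are assumptions based on the files mentioned earlier, not a copy of windows.py):

import numpy as np
import tensorflow as tf

CLASS_NAMES = ['COVID-19', 'NORMAL', 'Viral Pneumonia']

def predict_image(img_path, model_path="models/cnn_fv.h5", img_size=(224, 224)):
    # Load the trained weights and the image to classify
    model = tf.keras.models.load_model(model_path)
    img = tf.keras.preprocessing.image.load_img(img_path, target_size=img_size)
    x = tf.keras.preprocessing.image.img_to_array(img) / 255.0   # scale pixels to [0, 1]
    probs = model.predict(np.expand_dims(x, axis=0))[0]          # add a batch dimension
    return CLASS_NAMES[int(np.argmax(probs))], probs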

Flask web display

Open the "app.py" code and click Run directly; the result is as follows:

insert image description here
Click the link printed by the console, or open your browser and enter the URL http://127.0.0.1:5000, and you will reach the Flask front-end page. There you can interact with the model yourself: upload a picture and click predict. The model outputs a confidence for each category and displays them on the page in descending order, so the category shown at the top is the model's final recognition result.

insert image description here
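A minimal sketch of the kind of Flask route described above (the route name, upload handling, and reuse of the predict_image helper and CLASS_NAMES list from the previous sketch are assumptions, not the project's actual app.py):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Save the uploaded picture, run the model, and return per-class confidences
    f = request.files["image"]
    f.save("tmp_upload.png")
    label, probs = predict_image("tmp_upload.png")  # helper from the earlier sketch
    ranked = sorted(zip(CLASS_NAMES, probs.tolist()),
                    key=lambda kv: kv[1], reverse=True)  # largest confidence first
    return jsonify({"top_category": label, "confidences": ranked})

if __name__ == "__main__":":
    app.run(host="127.0.0.1", port=5000)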

Source: blog.csdn.net/qq_34184505/article/details/130197481