Fundus blood vessel image segmentation with U-Net: a worked example based on the mmsegmentation framework

The experiment was run in the following environment:

[Figure: experiment environment]

I. Introduction

A detailed tutorial on installing the mmlab environment will be released later. There are indeed many pitfalls during installation, but once the mmlab framework is set up it is, in a word, excellent. It supports all the current mainstream network architectures: as long as you write a config file for your needs, a model can be called directly, which is very convenient, so I strongly recommend learning it. mmsegmentation officially supports many datasets, which are clearly described in its prepare-datasets.md. Since the other datasets are relatively large, the CHASE_DB1 dataset is chosen for this experiment.

II. Dataset preparation

The dataset can be downloaded directly from the Internet. Here is a Baidu network-disk link: https://pan.baidu.com/s/16fXFzOj6BvgUbxBtimURXA (extraction code: 1111).
After downloading, use the conversion tool that ships with mmsegmentation to process the archive into the folder structure the framework expects:

python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip

The last argument is the path to your downloaded archive. After the script runs, a data folder is created under the mmsegmentation root, with the structure shown below.
[Figures: the generated data directory structure, sample fundus images, and annotations]
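As a quick sanity check on the conversion, the snippet below counts the files in each split. The layout (`images`/`annotations`, each with `training` and `validation` subfolders) follows mmsegmentation's documented CHASE_DB1 structure; the `data/CHASE_DB1` path is an assumption you should adjust to your own setup.

```python
import os

def count_split(root):
    """Count files in each split of a CHASE_DB1-style layout:
    root/images/{training,validation} and root/annotations/{training,validation}."""
    counts = {}
    for sub in ("images", "annotations"):
        for split in ("training", "validation"):
            path = os.path.join(root, sub, split)
            # Missing directories count as 0 instead of raising.
            counts[f"{sub}/{split}"] = len(os.listdir(path)) if os.path.isdir(path) else 0
    return counts

if __name__ == "__main__":
    # Hypothetical path: adjust to wherever the conversion script put the data.
    print(count_split("data/CHASE_DB1"))
```

If the counts come back as all zeros, the archive path passed to the conversion script was probably wrong.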
The annotations look black here because the foreground label value is 1, which is visually almost indistinguishable from 0. Rescaled for display, an annotation actually looks like this: [Figure: annotation rescaled for display]

III. Training

A very nice feature of mmlab is that once the config file is set up, you can simply run train.py to get the results you need. Opening the train.py file, you will find many required and optional arguments in its argument-parsing section. The most important one is the config argument; the others can be left at their defaults:

[Figure: train.py argument list]
Here we use the U-Net network with the CHASE_DB1 dataset, i.e. find the chase_db1 .py file under the unet directory in the config folder. To start training from the IDE, open the run configuration in the menu bar, enter the config file path as the parameter, and then run train.py directly.

[Figures: locating the config file and setting the run configuration parameters]
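The IDE steps above amount to a single command-line invocation. A sketch, noting that the exact config filename is an assumption based on mmsegmentation's unet configs and may differ by version, so check your local configs/unet directory:

```shell
# Train U-Net on CHASE_DB1; --work-dir (optional) sets where
# checkpoints and logs are written.
python tools/train.py configs/unet/fcn_unet_s5-d16_64x64_40k_chase_db1.py \
    --work-dir work_dirs/unet_chase_db1
```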

IV. Result visualization

After the run you will get a latest.pth file and a log .json file, which store your trained model and the full training log respectively. There are many visualization tools available online, but for some reason I had a lot of trouble using them, so I simply wrote one myself; the code will be attached later. The visualized result is shown below: [Figure: training-curve visualization]
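A minimal sketch of such a visualizer's parsing step: the log .json typically contains one JSON record per line, and the `mode`/`iter`/`loss` keys below are assumptions based on common mmcv logger output, so adapt them to the records you actually see in your file.

```python
import json

def parse_losses(lines):
    """Extract (iter, loss) pairs from an mmsegmentation-style log where
    each line is one JSON record; training records are assumed to carry
    mode == "train" and a "loss" field."""
    points = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        rec = json.loads(line)
        if rec.get("mode") == "train" and "loss" in rec:
            points.append((rec["iter"], rec["loss"]))
    return points

# Hypothetical sample records mimicking the log format:
sample = [
    '{"mode": "train", "iter": 50, "loss": 0.41}',
    '{"mode": "val", "iter": 4000, "mDice": 0.78}',
    '{"mode": "train", "iter": 100, "loss": 0.33}',
]
print(parse_losses(sample))  # [(50, 0.41), (100, 0.33)]
```

Feeding the returned pairs into any plotting library (e.g. matplotlib's `plt.plot`) gives a loss curve like the one in the figure.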

Origin: blog.csdn.net/onepunch_k/article/details/123463607