Building a custom dataset and reproducing U-Net for weld segmentation in PyTorch 1.10

  This is just a brief record of a successful U-Net reproduction, not a step-by-step tutorial.

My code: https://github.com/struggler176393/Unet_Pytorch
Reference code: https://github.com/qiaofengsheng/pytorch-UNet

Environment

  • pytorch 1.10.1
  • python 3.7
  • CUDA 11.3
  • cuDNN 8.2.0
      A few PyTorch function names changed around version 1.7 and had to be updated here, so this code should work on 1.7 and later; versions before 1.7 may also work with minor changes.
      Some commonly used packages such as numpy also need to be installed; I don't remember the exact versions.
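To confirm an interpreter meets the version notes above, the standard library can report what is installed. This is a generic sketch, not part of the repository; note that `importlib.metadata` requires Python 3.8+ (on 3.7, the `importlib_metadata` backport offers the same API):

```python
import importlib.metadata as md

def installed_versions(pkgs):
    """Return {package: version string or None} for each named package.

    Sketch for checking the environment list above (torch >= 1.7, numpy, ...).
    """
    out = {}
    for p in pkgs:
        try:
            out[p] = md.version(p)
        except md.PackageNotFoundError:
            out[p] = None  # not installed in this environment
    return out
```

For example, `installed_versions(["torch", "numpy"])` shows at a glance whether the PyTorch 1.7+ requirement is met.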

Make a dataset

  The U-Net dataset only needs the JPG images and their masks. Labelme is tedious to use; EISeg, which supports intelligent interactive segmentation, is recommended instead.
  Refer to this blog: https://blog.csdn.net/qq_37541097/article/details/120154543
  One pitfall: the pretrained weights may have to be downloaded from somewhere else (the official site); the link given in that blog does not seem to work.
  After annotation, put the JPG files into the JPEGImages folder and the mask files into the SegmentationClass folder.
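With the folders laid out this way, a loader can pair each JPG with its mask by filename stem. The following is a minimal sketch, not the repository's actual dataset class: the folder names follow the layout described here, but the target size, normalisation, and mask handling are illustrative assumptions (in a real training script you would subclass `torch.utils.data.Dataset` and return tensors):

```python
import os
import numpy as np
from PIL import Image

class WeldSegDataset:
    """Pairs JPEGImages/<name>.jpg with SegmentationClass/<name>.png by stem."""

    def __init__(self, root, size=(256, 256)):
        self.img_dir = os.path.join(root, "JPEGImages")
        self.mask_dir = os.path.join(root, "SegmentationClass")
        self.size = size  # (width, height) to resize every sample to
        self.names = sorted(
            os.path.splitext(f)[0]
            for f in os.listdir(self.img_dir)
            if f.lower().endswith(".jpg")
        )

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        img = Image.open(os.path.join(self.img_dir, name + ".jpg")).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name + ".png"))
        if mask.mode not in ("L", "P"):  # palette PNGs already store class IDs
            mask = mask.convert("L")
        # Nearest-neighbour resampling keeps the mask's labels integer-valued.
        img = img.resize(self.size, Image.BILINEAR)
        mask = mask.resize(self.size, Image.NEAREST)
        # HWC uint8 -> CHW float32 in [0, 1]; the mask stays as class IDs.
        x = np.asarray(img, dtype=np.float32).transpose(2, 0, 1) / 255.0
        y = np.asarray(mask, dtype=np.int64)
        return x, y
```

Wrapping the arrays with `torch.from_numpy` makes the samples directly usable in a `DataLoader`.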

Run

  The rest is straightforward. To train, run train.py directly; to test, run test.py directly and enter the absolute path of the image to predict. With a small dataset, one or two hours of training gives quite good results, and even ten minutes of training is not bad.
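After the forward pass, a test.py-style prediction reduces the network's per-class score map to a label image. The sketch below shows only that post-processing step; the two-class palette (background black, weld white) is an illustrative assumption, and in the real script the scores would come from `model(x).detach().cpu().numpy()`:

```python
import numpy as np
from PIL import Image

# Assumed palette: class 0 = background (black), class 1 = weld (white).
PALETTE = np.array([[0, 0, 0], [255, 255, 255]], dtype=np.uint8)

def logits_to_mask(logits):
    """Turn a (num_classes, H, W) score map into an RGB mask image.

    Takes the per-pixel argmax over the class axis, then maps each class
    ID to a colour via PALETTE.
    """
    class_ids = np.argmax(logits, axis=0)       # (H, W) integer labels
    return Image.fromarray(PALETTE[class_ids])  # (H, W, 3) uint8 -> PIL image
```

Calling `logits_to_mask(...).save("result.png")` writes the segmentation result to disk for inspection.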

Source: blog.csdn.net/astruggler/article/details/128354028