Training your own dataset with Semantic-Segmentation-Suite: solving common problems

I had been meaning to try semantic segmentation on my own dataset for a long time and finally got started. I used the Semantic-Segmentation-Suite code from GitHub, which is really quite convenient: after downloading it, you only need to prepare your dataset in the expected layout and you can run it. The problems I encountered, and their solutions, are recorded below:
1)
(No screenshot for this one.) The first problem I hit was a flood of messages about memory usage, something like out-of-memory. I guessed the graphics card did not have enough free memory, but I remembered it had run successfully a few days earlier, so I shut down and restarted the machine; another application must have been holding GPU memory. My card is a GTX 1060 with 5 GB of memory.
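A related mitigation (a sketch, assuming the TensorFlow 1.x sessions this repository uses) is to let TensorFlow allocate GPU memory on demand instead of grabbing the whole card up front, so it can coexist with other processes:

```python
import tensorflow as tf

# Ask TensorFlow 1.x to grow GPU memory usage on demand instead of
# pre-allocating all of it at session creation time.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```

This is a configuration fragment, not part of the repository's train.py; you would pass the config to whatever session the training script creates.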
2)

Traceback (most recent call last):
  File "train.py", line 179, in <module>
    input_image, output_image = data_augmentation(input_image, output_image)
  File "train.py", line 53, in data_augmentation
    input_image, output_image = utils.random_crop(input_image, output_image, args.crop_height, args.crop_width)
  File "/home/wsb/Semantic-Segmentation-Suite/utils/utils.py", line 181, in random_crop
    raise Exception('Crop shape (%d, %d) exceeds image dimensions (%d, %d)!' % (crop_height, crop_width, image.shape[0], image.shape[1]))
Exception: Crop shape (512, 512) exceeds image dimensions (363, 560)!

Looking at the last line, I found that 512 is the model's default input image size, but my sample images were crawled from the web and come in all sizes. Whenever an image's height or width is smaller than the 512×512 crop, this error is raised. So I simply specified the crop size directly on the command line, setting it to 100×80:

python3 train.py --dataset Road/ --crop_height 100 --crop_width 80
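The check that raises this exception is easy to reproduce. Below is my own simplified sketch of a random crop with the same dimension guard (not the exact code from utils.py), using NumPy:

```python
import random
import numpy as np

def random_crop(image, label, crop_height, crop_width):
    """Crop the same random window out of an image and its label map.

    Raises an Exception, like utils.random_crop does, when the requested
    crop is larger than the image in either dimension.
    """
    h, w = image.shape[0], image.shape[1]
    if crop_height > h or crop_width > w:
        raise Exception('Crop shape (%d, %d) exceeds image dimensions (%d, %d)!'
                        % (crop_height, crop_width, h, w))
    y = random.randint(0, h - crop_height)
    x = random.randint(0, w - crop_width)
    return (image[y:y + crop_height, x:x + crop_width],
            label[y:y + crop_height, x:x + crop_width])

img = np.zeros((363, 560, 3))   # the image size from the error above
lab = np.zeros((363, 560))
cropped_img, cropped_lab = random_crop(img, lab, 100, 80)  # 100x80 fits
print(cropped_img.shape)  # (100, 80, 3)
```

A 512×512 crop of the same 363×560 image would raise the exception shown in the traceback, since 512 > 363.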

Running the command then produced a new error:
3) tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,48,6,4] vs. shape[1] = [1,288,6,5]
I found some discussion of this error online but could not make sense of it. Just as I was about to post the question on the project's GitHub issues, I found a reasonable answer there. Its gist is as follows.
By default train.py uses FC-DenseNet56, which contains five pooling layers, each halving the spatial size. If the input image size is not a multiple of 2^5 = 32, a fractional size appears at some layer (as with my choice of 100×80); TensorFlow rounds it, so feature maps from different paths end up with mismatched shapes when concatenated. The input size should therefore be a multiple of 32. Once I chose a size of height = 128, width = 96, the problem was solved.
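The constraint can be verified with a quick sketch in plain Python (no TensorFlow needed): a dimension only survives five halvings as an integer if it is divisible by 2^5 = 32.

```python
def survives_five_poolings(size):
    """Return True if `size` stays an integer through 5 halvings,
    i.e. if it is a multiple of 2**5 = 32."""
    for _ in range(5):
        if size % 2 != 0:
            return False
        size //= 2
    return True

# My first choice, 100x80, fails the check; 128x96 passes.
print(survives_five_poolings(100), survives_five_poolings(80))   # False False
print(survives_five_poolings(128), survives_five_poolings(96))   # True True
```

Equivalently, just check `size % 32 == 0` for both the crop height and the crop width before launching training.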

Origin: blog.csdn.net/weixin_42535742/article/details/90522872