Image Deraining in Practice (1): AttentiveGAN

For a read-through of the paper, see "Attentive Generative Adversarial Network for Raindrop Removal from A Single Image".

First, prepare the environment:

pip3 install -r requirements.txt

Install whatever tools your setup actually needs.

Test model

In this repo I uploaded a model trained on the dataset provided by the original author (origin_dataset).

The trained derain net model weights are stored in the model/ folder.

python3 test_model.py --weights_path model/new_model/derain_gan_2019-01-25-15-55-54.ckpt-200000 --image_path data/test_data/test_2.png

Note: loading the pretrained model provided by the repo author directly may fail, because that model and the current code do not match. The command above is the one I used to test my own trained model, for reference only.

The author's results are as follows:

[Image: test input]

[Image: derained result]

My results are as follows:

[Images: test input, derained result, and attention maps at time steps 1-4]

Train your own model

Data Preparation

First, organize your training data following the data/training_data_example folder structure, and generate a train.txt file that records the image pairs used for training.
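For reference, each line of train.txt pairs a ground-truth (clean) image path with its rainy counterpart, separated by a single space; this is the format the generate_flist.py script further below produces (the file names here are made up for illustration):

/home/gavin/Dataset/attentive-gan-derainnet/train/gt/0001.png /home/gavin/Dataset/attentive-gan-derainnet/train/data/0001.png
/home/gavin/Dataset/attentive-gan-derainnet/train/gt/0002.png /home/gavin/Dataset/attentive-gan-derainnet/train/data/0002.png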

Dataset

The whole dataset can be found here:

https://drive.google.com/open?id=1e7R76s6vwUJxILOcAsthgDLPSnOrQ49K

Training Set:

861 image pairs for training.

Testing Set A:

A subset of testing set B, used for quantitative evaluation where the alignment of the image pairs is good.

Testing Set B:

239 image pairs for testing.

The following script, generate_flist.py, builds that train list by pairing each ground-truth image with its rainy counterpart:

# Split the original dataset into training/validation sets  (by gavin)
import os
import random  # used by the commented-out random split below
import argparse

# number of samples to hold out for validation (e.g. 20000); 0 keeps everything
_NUM_TEST = 0

parser = argparse.ArgumentParser()
parser.add_argument('--folder_path', default='/home/gavin/Dataset/attentive-gan-derainnet/train', type=str,
                    help='The folder path')
parser.add_argument('--train_filename', default='./data/training_data_example/train_test.txt', type=str,
                    help='The train filename.')
parser.add_argument('--validation_filename', default='./data/training_data_example/validation.txt', type=str,
                    help='The validation filename.')

# e.g. /home/gavin/Dataset/attentive-gan-derainnet/test_b
def _get_filenames(dataset_dir):
    image_list = os.listdir(dataset_dir)
    photo_filenames = [os.path.join(dataset_dir, name) for name in image_list]
    return photo_filenames


if __name__ == "__main__":

    args = parser.parse_args()

    data_dir = os.path.join(args.folder_path, 'data')
    data_dir_gt = os.path.join(args.folder_path, 'gt')

    # collect all file names from the rainy (data) and clean (gt) folders
    photo_filenames = _get_filenames(data_dir)
    photo_file_gt = _get_filenames(data_dir_gt)

    print("size of dataset is %d" % len(photo_filenames))
    print("size of dataset_gt is %d" % len(photo_file_gt))

    # sort both lists so the i-th rainy image lines up with the i-th ground truth
    photo_filenames.sort()
    photo_file_gt.sort()

    # one "gt_path data_path" pair per line
    training_file_names = []
    for i in range(len(photo_file_gt)):
        string_filename = photo_file_gt[i] + ' ' + photo_filenames[i]
        training_file_names.append(string_filename)

    '''
    # split the data into training/test sets
    random.seed(0)
    random.shuffle(photo_filenames)
    training_file_names = photo_filenames[_NUM_TEST:]

    print("training file size:", len(training_file_names))
    '''

    # create the output file's directory if needed; open(..., "w") creates the file
    os.makedirs(os.path.dirname(args.train_filename), exist_ok=True)

    with open(args.train_filename, "w") as fo:
        fo.write("\n".join(training_file_names))

    print("Written file is: ", args.train_filename)



python3 generate_flist.py --folder_path /home/gavin/Dataset/attentive-gan-derainnet/test_b
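After generating the list, a quick sanity check (my own addition, not part of the repo) confirms that every line points at two files that actually exist:

# check that every "gt_path data_path" line in the generated list is valid
import os

with open('./data/training_data_example/train_test.txt') as f:
    for line_no, line in enumerate(f, 1):
        for p in line.strip().split(' '):
            if not os.path.exists(p):
                print('line %d: missing %s' % (line_no, p))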

Each training sample consists of two components: a clean label image free from raindrops and an original image degraded by raindrops.

All your training images will be rescaled to the same size according to the config file.
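As a rough sketch of what that rescaling amounts to (the real resize happens inside the repo's input pipeline, and the config field names used here are assumptions; check global_configuration/config.py for the actual ones):

# minimal sketch: rescale one image to the training size from the config
# CFG.TRAIN.IMG_WIDTH / CFG.TRAIN.IMG_HEIGHT are assumed field names
import cv2
from global_configuration import config

CFG = config.cfg
img = cv2.imread('data/test_data/test_1.png')
img = cv2.resize(img, (CFG.TRAIN.IMG_WIDTH, CFG.TRAIN.IMG_HEIGHT))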

Train model

In my experiment the number of training epochs is 200010, the batch size is 1, and the initial learning rate is 0.002. For details on the training parameters, check global_configuration/config.py.

# train
python3 train_model.py --dataset_dir data/training_data_example/

# continue train
python3 train_model.py --dataset_dir data/training_data_example/  --weights_path model/new_model/derain_gan_2019-01-25-15-55-54.ckpt-200000

# test derain_gan_2019-01-25-15-55-54.ckpt-200000
python3 test_model.py --weights_path model/new_model/derain_gan_2019-01-25-15-55-54.ckpt-200000 --image_path data/test_data/test_1.png

Other real-world test images (retrained on natural rainy-scene photos):

Dataset preparation (splitting): each DID-MDN image stores two views side by side, so the script below crops the left half into data/ (rainy) and the right half into gt/ (clean):

# coding=utf-8
# Batch-crop / batch-resize images
# usage example: imageResize(r"D:\tmp", r"D:\tmp\3", 0.7)
import os
from PIL import Image

def image_Crop(input_path, output_path):
    files = os.listdir(input_path)
    os.chdir(input_path)
    path_data = os.path.join(output_path, 'data')
    path_gt = os.path.join(output_path, 'gt')
    # create the output folders if they do not exist
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    if not os.path.exists(path_data):
        os.makedirs(path_data)
    if not os.path.exists(path_gt):
        os.makedirs(path_gt)
    for file in files:
        # only process files, skip directories
        if os.path.isfile(file):
            print(file)
            img = Image.open(file)
            # left half -> rainy image, right half -> ground truth
            box1 = (0, 0, img.size[0] // 2, img.size[1])
            box2 = (img.size[0] // 2, 0, img.size[0], img.size[1])
            roi1 = img.crop(box1)
            roi2 = img.crop(box2)
            roi1.save(os.path.join(path_data, "data" + file))
            roi2.save(os.path.join(path_gt, "gt" + file))


def imageResize(input_path, output_path, scale):
    # list all files in the input folder and switch the working directory
    files = os.listdir(input_path)
    os.chdir(input_path)
    # create the output folder if it does not exist
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    for file in files:
        # only process files, skip directories
        if os.path.isfile(file):
            img = Image.open(file)
            width = int(img.size[0] * scale)
            height = int(img.size[1] * scale)
            # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is the equivalent
            img = img.resize((width, height), Image.LANCZOS)
            img.save(os.path.join(output_path, "New_" + file))

if __name__ == '__main__':
    input_path = '/home/gavin/Dataset/DID-MDN-datasets/DID-MDN-test'
    output_path = '/home/gavin/Dataset/attentive-gan-derainnet/train_mdn'
    image_Crop(input_path, output_path)
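After splitting, it is worth confirming that data/ and gt/ hold the same number of files (a small check of my own, not part of the repo):

# confirm the rainy/clean folders are in one-to-one correspondence
import os

output_path = '/home/gavin/Dataset/attentive-gan-derainnet/train_mdn'
n_data = len(os.listdir(os.path.join(output_path, 'data')))
n_gt = len(os.listdir(os.path.join(output_path, 'gt')))
print('data: %d, gt: %d' % (n_data, n_gt))
assert n_data == n_gt, 'split produced mismatched pair counts'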

Postscript

The repo author later revised the PSNR and SSIM computation, added data augmentation, modified the configuration, and added a random-flip augmentation function. After this update some files need changes; the source has a bug, and my fixed version is shown below.
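For reference, the standard definitions of those two metrics can be computed with scikit-image; this is a generic check with placeholder file names, not the repo's revised code:

# generic PSNR/SSIM between a derained output and its ground truth
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

output = cv2.imread('derain_ret.png')  # placeholder paths
gt = cv2.imread('gt.png')
psnr = peak_signal_noise_ratio(gt, output)
# scikit-image < 0.19 uses multichannel=True instead of channel_axis
ssim = structural_similarity(gt, output, channel_axis=-1)
print('PSNR: %.4f, SSIM: %.4f' % (psnr, ssim))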

First, preprocess the data into tfrecords files to speed up training. One thing to stress: you must add one line of code before the last line of data_feed_pipline.py so the tfrecords folder is generated automatically; otherwise training will keep throwing errors later. The fixed __main__ block looks like this:

if __name__ == '__main__':
    # init args
    args = init_args()

    assert ops.exists(args.dataset_dir), '{:s} not exist'.format(args.dataset_dir)

    producer = DerainDataProducer(dataset_dir=args.dataset_dir)
    tf_save_dir = ops.join(args.tfrecords_dir, 'tfrecords')  # added by gavin
    producer.generate_tfrecords(save_dir=tf_save_dir, step_size=5000)

This creates a tfrecords folder under the data folder automatically, and the tfrecords files are stored there.
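If the folder still fails to appear on your setup (I have not checked whether generate_tfrecords creates it internally), an explicit makedirs right after the join is a harmless safeguard, assuming os is imported in data_feed_pipline.py:

# defensive addition: make sure the tfrecords output folder exists
os.makedirs(tf_save_dir, exist_ok=True)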

Run the demo

# new demo data prepare
python3 data_provider/data_feed_pipline.py --dataset_dir /home/gavin/Dataset/attentive-gan-derainnet/train --tfrecords_dir ./data

# train
python3 tools/train_model.py --dataset_dir ./data

# test
python3 tools/test_model.py --weights_path model/derain_gan/derain_gan.ckpt-100000 --image_path data/test_data/test_1.png

The main configuration is as follows:

# Set the shadownet training epochs
__C.TRAIN.EPOCHS = 100010
# Set the initial learning rate
__C.TRAIN.LEARNING_RATE = 0.0002

The test images follow; I ran just one full pass (100010 epochs) and the effect is already very noticeable:

The results really are striking. Thanks again to the author for this great piece of engineering!

Reposted from blog.csdn.net/Gavinmiaoc/article/details/86700738