Pedestrian Detection 0-06: LFFD Source Code Explained In Full Detail (1) - Annotated Overview of the Training Code

Copyright notice: this is an original article by the blogger, released under the CC 4.0 BY-SA license. Please attach the original source link and this notice when reposting.
本文链接: https://blog.csdn.net/weixin_43013761/article/details/102668471

The link below collects all of my posts on LFFD (pedestrian detection). If you spot any mistakes, please point them out and I will correct them right away. If you are interested, add me on WeChat (a944284742) to discuss. If this helped you, please remember to leave a like - it is the biggest encouragement for me. May you be young and promising!
Pedestrian Detection 0-00: LFFD - the latest, most detailed walkthrough: https://blog.csdn.net/weixin_43013761/article/details/102592374

Training Framework Overview

This post does not really contain anything important; it just adds some simple annotations. From the earlier posts we know that the training code is pedestrian_detection/data_iterator_farm/multithread_dataiter_for_cross_entropy_v1.py (the configuration script annotated below imports that file as its training data iterator).

# -*- coding: utf-8 -*-

import sys
import datetime
import os
import math
import logging

sys.path.append('../')
sys.path.append('../data_provider_farm')

from ChasingTrainFramework_GeneralOneClassDetection import logging_GOCD
from ChasingTrainFramework_GeneralOneClassDetection import train_GOCD


# add the mxnet python package to sys.path if needed
mxnet_python_path = '/home/heyonghao/libs/incubator-mxnet/python'
sys.path.append(mxnet_python_path)
import mxnet

'''
init logging
'''
param_log_mode = 'w'
param_log_file_path = '../log/%s_%s.log' % (os.path.basename(__file__)[:-3], datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S'))  # build the log file path

'''
    data setting
'''
# pickle file path for the train set
param_trainset_pickle_file_path = os.path.join(os.path.dirname(__file__), '../data_provider_farm/data_folder/data_list_caltech_train_source.pkl')
# pickle file path for the val (test) set
param_valset_pickle_file_path = os.path.join(os.path.dirname(__file__), '../data_provider_farm/data_folder/data_list_caltech_test_source.pkl')


'''
    training settings
'''

# batch size for training
param_train_batch_size = 32

# the ratio of negative images in each batch
param_neg_image_ratio = 0.1

# GPU indexes for training (single machine, multiple GPUs)
param_GPU_idx_list = [0]
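# e.g. param_GPU_idx_list = [0, 1] would train on two GPUs; each index i is
# turned into a context via mxnet.gpu(i) inside run() below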

# input height of the network
param_net_input_height = 480

# input width of the network
param_net_input_width = 480

# the number of training loops (presumably the total number of iterations)
param_num_train_loops = 1000000

# the number of threads used by the train dataiter
param_num_thread_train_dataiter = 1

# the number of threads used by the val dataiter
param_num_thread_val_dataiter = 1

# start index of training (the iteration index training starts from)
param_start_index = 0


# evaluation frequency: evaluate the current model every param_validation_interval loops
param_validation_interval = 10000

# batch size for validation
param_val_batch_size = 20

# the number of loops for each evaluation
param_num_val_loops = 0

# the path of a pre-trained model; set this if pre-trained params should be loaded
param_pretrained_model_param_path = ''

# the frequency of display, namely displaying every param_display_interval loops
param_display_interval = 10

# the frequency of metric updates; fewer updates boost the training speed
# (the value should be less than param_display_interval)
param_train_metric_update_frequency = 2


# set the save prefix for model checkpoints (built automatically)
param_save_prefix = '../saved_model/' + os.path.basename(__file__)[:-3] + '_' + datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S') + \
                    '/' + os.path.basename(__file__)[:-3].replace('configuration', 'train')
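# e.g. for a config file named configuration_xxx.py (hypothetical name, for illustration only),
# this resolves to ../saved_model/configuration_xxx_<timestamp>/train_xxx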

# the frequency of model saving, namely saving the model params every param_model_save_interval loops
param_model_save_interval = 50000


# hard negative mining ratio, needed by the loss layer
# (controls how many negative samples are selected when computing the loss)
param_hnm_ratio = 5
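# e.g. with a ratio of 5, roughly 5 hard negatives are kept per positive sample
# in the classification loss (see the loss layer for the exact selection rule)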

# initial learning rate
param_learning_rate = 0.1
# weight decay
param_weight_decay = 0.00001
# momentum
param_momentum = 0.9

# learning rate scheduler -- MultiFactorScheduler
# decay the learning rate when the iteration count reaches these steps
scheduler_step_list = [300000, 600000, ]
# multiplicative factor of the scheduler
scheduler_factor = 0.1


# construct the learning rate scheduler (this defines how the learning rate decays)
param_lr_scheduler = mxnet.lr_scheduler.MultiFactorScheduler(step=scheduler_step_list, factor=scheduler_factor)
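# with the values above, MultiFactorScheduler multiplies the learning rate by
# scheduler_factor each time the update count crosses an entry of scheduler_step_list:
#   updates [0, 300000):       lr = 0.1
#   updates [300000, 600000):  lr = 0.01
#   updates [600000, ...):     lr = 0.001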
# to use the adam optimizer instead:
# param_optimizer_name = 'adam'
# param_optimizer_params = {'learning_rate': param_learning_rate,
#                           'wd': param_weight_decay,
#                           'lr_scheduler': param_lr_scheduler,
#                           'begin_num_update': param_start_index}

# use the sgd optimizer
param_optimizer_name = 'sgd'
param_optimizer_params = {'learning_rate': param_learning_rate,
                          'wd': param_weight_decay,
                          'lr_scheduler': param_lr_scheduler,
                          'momentum': param_momentum,
                          'begin_num_update': param_start_index}



'''
    data augmentation
'''
# trigger for horizontal flip
param_enable_horizon_flip = True

# trigger for vertical flip
param_enable_vertical_flip = False

# trigger for random brightness
param_enable_random_brightness = True
param_brightness_factors = {'min_factor': 0.5, 'max_factor': 1.5}

# trigger for random saturation
param_enable_random_saturation = True
param_saturation_factors = {'min_factor': 0.5, 'max_factor': 1.5}

# trigger for random contrast
param_enable_random_contrast = True
param_contrast_factors = {'min_factor': 0.5, 'max_factor': 1.5}

# trigger for random blur
param_enable_blur = False
param_blur_factors = {'mode': 'random', 'sigma': 1}
param_blur_kernel_size_list = [3]


# resize factor interval for negative images
param_neg_image_resize_factor_interval = [0.5, 3.5]


'''
    algorithm settings
'''
# the number of input image channels
param_num_image_channel = 3

# the number of output scales, equal to the number of loss branches
param_num_output_scales = 4

# feature map size for each scale (one entry per loss branch)
param_feature_map_size_list = [59, 29, 14, 6]
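# each size is roughly the input size divided by the corresponding RF stride
# (480/8, 480/16, ...); the exact values depend on the backbone's conv padding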


# the settings below determine which branch (scale) each ground-truth bbox is assigned to
# bbox lower bound for each scale
param_bbox_small_list = [30, 60, 100, 180]
assert len(param_bbox_small_list) == param_num_output_scales
# bbox upper bound for each scale
param_bbox_large_list = [60, 100, 180, 320]
assert len(param_bbox_large_list) == param_num_output_scales


# bbox gray-zone lower bound for each scale (the gray band of each scale)
param_bbox_small_gray_list = [math.floor(v * 0.9) for v in param_bbox_small_list]
# bbox gray upper bound for each scale
param_bbox_large_gray_list = [math.ceil(v * 1.1) for v in param_bbox_large_list]


# the RF (receptive field) size of each scale, used for normalization;
# param_bbox_large_list is used here for better regression
param_receptive_field_list = param_bbox_large_list

# RF stride for each scale
param_receptive_field_stride = [8, 16, 32, 64]

# the start location (center coordinate) of the first RF of each scale
param_receptive_field_center_start = [7, 15, 31, 63]
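# the RF centers of scale i are laid out on the input image as
#   center(k) = param_receptive_field_center_start[i] + k * param_receptive_field_stride[i]
# e.g. scale 0 (start 7, stride 8) places centers at x = 7, 15, 23, ...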

# the total number of output channels: 2 for classification (pos/neg) and 4 for bbox regression
param_num_output_channels = 6
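# (the 2 classification channels score pos/neg at each RF location; the 4 regression
#  channels encode the bbox relative to the RF center, normalized by the RF size above)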


# -------------------------------------------------------------------------------------------
# collect and print all params
orig_param_dict = vars()
param_names = [name for name in orig_param_dict.keys() if name.startswith('param_')]
param_dict = dict()
for name in param_names:
    param_dict[name] = orig_param_dict[name]


def run():
    # init logging (the parameter settings are written to the log)
    logging_GOCD.init_logging(log_file_path=param_log_file_path,
                              log_file_mode=param_log_mode)

    logging.info('Preparing before training.')
    sys.path.append('..')
    # import the network definition
    from symbol_farm import symbol_30_320_20L_4scales_v1 as net
    
    # build the network symbol
    net_symbol, data_names, label_names = net.get_net_symbol()
    #mxnet.viz.print_summary(net_symbol, shape={"data": (32, 3, 480,480 )})

    net_initializer = mxnet.initializer.Xavier()

    logging.info('Get net symbol successfully.')

    # -----------------------------------------------------------------------------------------------
    # init dataiters (the data loading/reading iterators)
    from data_provider_farm.pickle_provider import PickleProvider
    from data_iterator_farm.multithread_dataiter_for_cross_entropy_v1 import Multithread_DataIter_for_CrossEntropy as DataIter

    # configure the training data iterator
    train_data_provider = PickleProvider(param_trainset_pickle_file_path)
    train_dataiter = DataIter(
        mxnet_module=mxnet,
        num_threads=param_num_thread_train_dataiter,
		......
		......
        neg_image_resize_factor_interval=param_neg_image_resize_factor_interval
    )

    # configure the validation data iterator
    val_dataiter = None
    if param_valset_pickle_file_path != '' and param_val_batch_size != 0 and param_num_val_loops != 0 and param_num_thread_val_dataiter != 0:
        val_data_provider = PickleProvider(param_valset_pickle_file_path)
        val_dataiter = DataIter(
            mxnet_module=mxnet,
 			......
 			......
            neg_image_resize_factor_interval=param_neg_image_resize_factor_interval

        )
    # ---------------------------------------------------------------------------------------------
    # init metric
    from metric_farm.metric_default import Metric

    # metric configuration
    train_metric = Metric(param_num_output_scales)
    val_metric = None
    if val_dataiter is not None:
        val_metric = Metric(param_num_output_scales)

    # pass in the training params and start the training loop
    train_GOCD.start_train(
        param_dict=param_dict,
        mxnet_module=mxnet,
        context=[mxnet.gpu(i) for i in param_GPU_idx_list],
        train_dataiter=train_dataiter,
		......
		......
        start_index=param_start_index)

if __name__ == '__main__':
    run()

That is all of the annotations. The only part that may need extra explanation is:

# bbox gray lower bound for each scale (the gray band of each scale)
param_bbox_small_gray_list = [math.floor(v * 0.9) for v in param_bbox_small_list]
# bbox gray upper bound for each scale
param_bbox_large_gray_list = [math.ceil(v * 1.1) for v in param_bbox_large_list]

If this is not clear, first look at Figure 1 of the paper:
(Figure 1 from the LFFD paper)
The so-called gray region is the RF-ERF area, i.e. the part of the receptive field (RF) that lies outside the effective receptive field (ERF). A detailed explanation will follow in a later post; the next post covers the data pre-processing pipeline (it is important, do not skip it).
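To make the gray bounds more concrete, here is a minimal sketch of how a ground-truth box of a given side length could be routed to the loss branches. assign_scale is a hypothetical helper written only for illustration, not code from the repository, and it assumes that a box falling inside a gray band is simply ignored by that branch:

import math

# bounds copied from the configuration above
bbox_small_list = [30, 60, 100, 180]
bbox_large_list = [60, 100, 180, 320]
bbox_small_gray_list = [math.floor(v * 0.9) for v in bbox_small_list]   # [27, 54, 90, 162]
bbox_large_gray_list = [math.ceil(v * 1.1) for v in bbox_large_list]    # [66, 110, 198, 352]

def assign_scale(box_size):
    """For each scale, a box is 'pos' (a positive sample for that branch) when it
    falls inside [small, large], and 'gray' (ignored by that branch) when it falls
    in the gray band just outside those bounds."""
    assignments = []
    for i in range(len(bbox_small_list)):
        if bbox_small_list[i] <= box_size <= bbox_large_list[i]:
            assignments.append((i, 'pos'))
        elif bbox_small_gray_list[i] <= box_size < bbox_small_list[i] \
                or bbox_large_list[i] < box_size <= bbox_large_gray_list[i]:
            assignments.append((i, 'gray'))
    return assignments

print(assign_scale(58))   # [(0, 'pos'), (1, 'gray')]: positive for scale 0, gray for scale 1

With a rule like this, box sizes that land right at a branch boundary neither reward nor penalize the network for the neighboring branch, which is exactly what the gray bands are for.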
