Training optimization: reduce loss


Concept:

Most machine learning algorithms have an objective function, and training the algorithm is the process of optimizing that objective function. In classification and regression problems, a loss function (also called a cost function) usually serves as the objective function. The loss function measures how far the model's predictions deviate from the actual values; a well-chosen loss function leads to a better-performing model.
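As a concrete illustration of "loss measures the gap between predictions and actual values", here is a minimal mean-squared-error loss in plain Python (the function name and toy values are my own, not from the original post):

```python
def mse(y_true, y_pred):
    """Mean squared error: average squared gap between predictions and targets."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# A model whose predictions sit closer to the targets has a lower loss.
far_off = mse([1.0, 0.0, 1.0], [0.2, 0.9, 0.1])   # poor predictions
close   = mse([1.0, 0.0, 1.0], [0.9, 0.1, 0.95])  # good predictions
```

Minimizing this quantity over the training set is exactly what "reducing the loss" means during training.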

Reference: https://blog.csdn.net/weixin_37933986/article/details/68488339

________________________________________________________________________________________________

The following results come from training the faster_rcnn_inception_resnet_v2_atrous_coco model on a multi-class detection task in which the sample counts per class are not uniform.

 

Ways to make the loss drop quickly:

Training set:

Config settings:

a. Use dropout

use_dropout: true #false
dropout_keep_probability: 0.6
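A dropout_keep_probability of 0.6 means each activation survives with probability 0.6 (i.e., a 0.4 drop rate). A minimal sketch of inverted dropout in plain Python — not the TensorFlow implementation, just the idea the config setting controls:

```python
import random

def dropout(activations, keep_prob=0.6):
    """Inverted dropout: at train time, randomly zero each activation with
    probability (1 - keep_prob), scaling survivors by 1/keep_prob so the
    expected value of the layer output is unchanged."""
    return [a / keep_prob if random.random() < keep_prob else 0.0
            for a in activations]
```

At inference time dropout is disabled, which in this formulation simply means using the activations unscaled.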

b. Use a multi-stage learning rate: start with a high value and decrease it in steps as training iterates, until the loss is low enough

initial_learning_rate: 0.003
schedule {
  step: 0
  learning_rate: .003
}
schedule {
  step: 30000
  learning_rate: .0003
}
schedule {
  step: 45000
  learning_rate: .00003
}
......
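The schedule above is piecewise constant: the learning rate holds each value until the next step boundary is reached. A small Python sketch of the same lookup logic (the function name is illustrative, not part of the config API):

```python
def learning_rate(step, schedule=((0, 3e-3), (30000, 3e-4), (45000, 3e-5))):
    """Piecewise-constant schedule: return the rate attached to the last
    boundary whose step is <= the current training step."""
    rate = schedule[0][1]
    for boundary, lr in schedule:
        if step >= boundary:
            rate = lr
    return rate
```

Each tenfold drop lets the optimizer take finer steps once the loss has plateaued at the previous rate.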

c. To detect more boxes, adjust the IoU threshold (this has no effect on the loss)

first_stage_nms_iou_threshold: 0.4
second_stage_post_processing {
  batch_non_max_suppression {
    score_threshold: 0.0
    iou_threshold: 0.5
    max_detections_per_class: 100
    max_total_detections: 100
  }
  score_converter: SOFTMAX
}

Effect:

 
