Deep Learning Interview Summary

1. NMS and Soft-NMS implementation process

(1) NMS implementation process

Non-maximum suppression starts with a list of detection boxes B, the corresponding confidence scores S, an empty output list D, and an overlap threshold N_t. First, find the box M with the highest confidence score, remove it from B, and add it to the final list D. Then compute the overlap (IoU) between M and every remaining box in B, and remove from B every box whose IoU with M is greater than the threshold N_t. Repeat this process on the boxes remaining in B until B is empty.
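A minimal NumPy sketch of this procedure is shown below; the box format [x1, y1, x2, y2] and the helper names are assumptions for illustration, not part of the original article.

import numpy as np

def iou(box, boxes):
    # Intersection-over-union between one box and an array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_boxes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_box + area_boxes - inter)

def nms(B, S, N_t=0.5):
    B, S = np.asarray(B, dtype=float), np.asarray(S, dtype=float)
    D = []                        # final kept boxes
    while len(B) > 0:
        m = np.argmax(S)          # box M with the highest score
        M = B[m]
        D.append(M)
        B = np.delete(B, m, axis=0)
        S = np.delete(S, m)
        if len(B) == 0:
            break
        keep = iou(M, B) < N_t    # drop boxes overlapping M by more than N_t
        B, S = B[keep], S[keep]
    return np.array(D)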

(2) What problem does Soft-NMS solve

A major problem with NMS is that every box whose overlap with M exceeds the threshold is removed from B outright. As a result, if another object really is present in the neighborhood of M, its detection box is also suppressed simply because its overlap with M is greater than the threshold. That object then goes undetected, which lowers the overall average precision.

(3) Soft-NMS penalty functions

The penalty function f(iou(M, b_i)) used in Soft-NMS comes in two forms:

The first is a linear penalty function, which is expressed as:

s_{i}=\begin{cases} s_{i}, & iou(M,b_{i}) < N_{t} \\ s_{i}\,(1-iou(M,b_{i})), & iou(M,b_{i}) \ge N_{t} \end{cases}

This penalty function follows the principle that a box far from M is not affected, while a box closer to M is penalized heavily. However, the function is not continuous: the penalty kicks in abruptly as soon as a box reaches the overlap threshold N_t.

The second is the Gaussian penalty function, which is expressed as follows:

s_{i} = s_{i}\, e^{-\tfrac{iou(M,b_{i})^{2}}{\sigma}}

(4) Code implementation
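Below is a minimal Soft-NMS sketch supporting both penalty functions. It reuses the iou helper from the NMS sketch above; the box format and the default values (sigma = 0.5, score_thresh = 0.001) are assumptions for illustration.

import numpy as np

def soft_nms(B, S, N_t=0.5, sigma=0.5, score_thresh=0.001, method='linear'):
    B, S = np.asarray(B, dtype=float), np.asarray(S, dtype=float).copy()
    D, D_scores = [], []
    while len(B) > 0:
        m = np.argmax(S)                   # pick the current highest-scoring box M
        M, s_M = B[m], S[m]
        D.append(M)
        D_scores.append(s_M)
        B = np.delete(B, m, axis=0)
        S = np.delete(S, m)
        if len(B) == 0:
            break
        ious = iou(M, B)                   # iou helper from the NMS sketch above
        if method == 'linear':
            weight = np.where(ious >= N_t, 1.0 - ious, 1.0)   # linear penalty
        else:
            weight = np.exp(-(ious ** 2) / sigma)             # Gaussian penalty
        S = S * weight                     # decay scores instead of deleting boxes
        keep = S > score_thresh            # discard boxes whose score has decayed away
        B, S = B[keep], S[keep]
    return np.array(D), np.array(D_scores)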

2. Focal loss

Focal loss addresses the problem of class imbalance, where easy, well-classified examples dominate the loss and drown out the hard examples.

import numpy as np
import matplotlib.pyplot as plt

# Plot the focal-loss term y = -(1 - x)^2 * log(x), where x plays the role of
# the predicted probability of the true class (gamma = 2, no alpha weighting).
x = np.linspace(1e-3, 1, 500)   # start slightly above 0 to avoid log(0)
y = -np.power(1 - x, 2) * np.log(x)
plt.figure()
plt.plot(x, y)
plt.xlabel('x (predicted probability of the true class)')
plt.ylabel('y (focal loss)')
plt.show()

Analyzing the plot shows the effect of x on y: when x (the predicted probability of the true class) is close to 1, the (1 - x)^2 factor drives the loss toward 0, so easy examples contribute little; when x is small, the loss stays large, so hard examples dominate training.
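For reference, here is a minimal focal-loss sketch in PyTorch; the defaults gamma = 2.0 and alpha = 0.25 are commonly used values and are assumptions, not values taken from this article.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # logits: raw predictions; targets: 0/1 labels of the same shape
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)           # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()     # down-weight easy examples

# Easy examples (p_t close to 1) are suppressed; hard examples keep a large loss.
logits = torch.tensor([2.0, -2.0, 0.1])
targets = torch.tensor([1.0, 1.0, 0.0])
print(focal_loss(logits, targets))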

