


I. Preparing RepVGG

1. Reference: YOLOv7

In the YOLOv7 Git repository, common.py provides a detailed RepConv implementation together with the related CSP RepBottleneck modules.

The C3RepVGG and RepBottleneck code is as follows (example):

class C3RepVGG(nn.Module):
    # CSP RepBottleneck with 3 convolutions, modified by wqt
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, act=True):  # ch_in, ch_out, number, shortcut, groups, expansion
        super(C3RepVGG, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1, act=act)
        self.cv2 = Conv(c1, c_, 1, 1, act=act)
        self.cv3 = Conv(2 * c_, c2, 1, act=act)  # act=FReLU(c2)
        # self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])   #original RepBottleneck format
        self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=0.5) for _ in range(n)])   # changed to e=0.5; otherwise a channel mismatch occurs (see the RepBottleneck note below)

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))


class RepBottleneck(Bottleneck):
    # Standard bottleneck with the 3x3 conv replaced by RepConv.
    # Note: super().__init__ hard-codes e=0.5, so self.cv1 outputs int(c2 * 0.5) channels,
    # while cv2 is rebuilt below from the passed-in e. Callers therefore need e=0.5 as well,
    # which is why C3RepVGG above builds RepBottleneck with e=0.5.
    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion
        super().__init__(c1, c2, shortcut=True, g=1, e=0.5)
        c_ = int(c2 * e)  # hidden channels
        self.cv2 = RepConv(c_, c2, 3, 1, g=g)


class RepBottleneckCSPA(BottleneckCSPA):   # equivalent to a C3 module with Rep applied
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])

The RepConv code block itself is fairly long, so it is omitted here.
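Although the full RepConv is omitted, the core re-parameterization idea behind it can be illustrated with a minimal sketch (illustrative only; the class and method names below are made up and this is not YOLOv7's RepConv): during training the block runs a 3x3 conv+BN branch, a 1x1 conv+BN branch and, when shapes allow, an identity BN branch in parallel; at deploy time all three are folded into a single 3x3 convolution.

import torch
import torch.nn as nn

class TinyRepConv(nn.Module):
    # Minimal RepVGG-style block (illustrative only, not YOLOv7's RepConv).
    def __init__(self, c1, c2, s=1):
        super().__init__()
        self.dense = nn.Sequential(nn.Conv2d(c1, c2, 3, s, 1, bias=False), nn.BatchNorm2d(c2))  # 3x3 branch
        self.cheap = nn.Sequential(nn.Conv2d(c1, c2, 1, s, 0, bias=False), nn.BatchNorm2d(c2))  # 1x1 branch
        self.ident = nn.BatchNorm2d(c1) if c1 == c2 and s == 1 else None                        # identity branch
        self.act = nn.SiLU()
        self.deploy_conv = None  # single fused 3x3 conv, created by switch_to_deploy()

    def forward(self, x):
        if self.deploy_conv is not None:                      # deploy path: one conv, one activation
            return self.act(self.deploy_conv(x))
        out = self.dense(x) + self.cheap(x)                   # training path: sum of parallel branches
        if self.ident is not None:
            out = out + self.ident(x)
        return self.act(out)

    @staticmethod
    def _fuse_conv_bn(conv, bn):
        # fold BN statistics into the preceding conv's kernel and bias
        std = (bn.running_var + bn.eps).sqrt()
        w = conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1)
        b = bn.bias - bn.weight * bn.running_mean / std
        return w, b

    def switch_to_deploy(self):
        # merge the three branches into one 3x3 convolution
        k3, b3 = self._fuse_conv_bn(self.dense[0], self.dense[1])
        k1, b1 = self._fuse_conv_bn(self.cheap[0], self.cheap[1])
        k = k3 + nn.functional.pad(k1, [1, 1, 1, 1])          # zero-pad the 1x1 kernel to 3x3
        b = b3 + b1
        if self.ident is not None:                            # identity branch expressed as a 3x3 kernel
            c = self.ident.num_features
            kid = torch.zeros(c, c, 3, 3)
            for i in range(c):
                kid[i, i, 1, 1] = 1.0
            std = (self.ident.running_var + self.ident.eps).sqrt()
            k = k + kid * (self.ident.weight / std).reshape(-1, 1, 1, 1)
            b = b + self.ident.bias - self.ident.weight * self.ident.running_mean / std
        conv = self.dense[0]
        self.deploy_conv = nn.Conv2d(conv.in_channels, conv.out_channels, 3, conv.stride, 1, bias=True)
        self.deploy_conv.weight.data = k.detach()
        self.deploy_conv.bias.data = b.detach()

YOLOv7's fuse_repvgg_block follows essentially the same recipe, just with more bookkeeping (groups, padding variants, deploy flags); the key point is that after fusion the block costs exactly one 3x3 convolution at inference.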

II. Using RepVGG

1. Assembling the blocks

Add the module to the *.yaml file that defines the model under the config directory:

backbone:
  # [from, number, module, args]
  [ 
    # [ -1, 1, Focus, [ 64, 3 ] ],  # 0-P1/2
    [ -1, 1, Conv, [ 128, 4, 4, 0 ] ],  # 1-P2/4
    [ -1, 3, C3RepVGG, [ 128 ] ],    # C3 modified with RepConv: a C3RepVGG layer is inserted here
    [ -1, 1, Conv, [ 256, 3, 2 ] ],  # 3-P3/8
    [ -1, 9, C3, [ 256 ] ],
    [ -1, 1, Conv, [ 512, 3, 2 ] ],  # 5-P4/16
    [ -1, 9, C3, [ 512 ] ],
    [ -1, 1, Conv, [ 768, 3, 2 ] ],  # 7-P5/32
    [ -1, 3, C3, [ 768 ] ],
    [ -1, 1, Conv, [ 1024, 3, 2 ] ],  # 9-P6/64
    [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
    [ -1, 3, C3, [ 1024, False ] ],  # 11
  ]

2. Parsing the model

After the model is defined, its arguments still need to be parsed, so the newly added RepVGG modules have to be handled there as well. Find parse_model in yolo.py:

        if m in [DeConv, Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, DWConv, MixConv2d, Focus, ConvFocus, CrossConv, BottleneckCSP,
                 C3, C3RepVGG, RepBottleneckCSPA, C3TR]:
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in [BottleneckCSP, C3, C3RepVGG, RepBottleneckCSPA, C3TR]:
                args.insert(2, n)  # number of repeats
                n = 1
            if m in [DeConv, Conv, GhostConv, Bottleneck, GhostBottleneck, DWConv, MixConv2d, Focus, ConvFocus, CrossConv, BottleneckCSP,
                     RepBottleneckCSPA, C3, C3RepVGG, C3TR]:
                if 'act' in d.keys():
                    args_dict = {"act": d['act']}

At the inference stage, fuse may also come into play:

    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
        print('Fusing layers... ')
        for m in self.model.modules():
            if isinstance(m, RepConv):
                #print(f" fuse_repvgg_block")
                m.fuse_repvgg_block()
            elif isinstance(m, RepConv_OREPA):
                #print(f" switch_to_deploy")
                m.switch_to_deploy()
            elif type(m) is Conv and hasattr(m, 'bn'):
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.fuseforward  # update forward
        self.info()
        return self
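A minimal usage sketch of how fuse() is typically invoked before inference is shown below; attempt_load, the checkpoint name and the 640x640 input size are assumptions based on the stock YOLOv5/YOLOv7-style codebase (some versions of attempt_load already call fuse() internally, in which case the explicit call can be dropped):

import torch
from models.experimental import attempt_load   # path as in stock YOLOv5/YOLOv7

model = attempt_load('best.pt', map_location='cpu')   # checkpoint name is illustrative
model = model.fuse().eval()                           # RepConv branches merged, Conv+BN folded
with torch.no_grad():
    pred = model(torch.zeros(1, 3, 640, 640))         # dummy forward pass to sanity-check the fused graph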

Once the code above has been added, the train function can be run.

3. Inference

At the test stage we compare the parameter count and FLOPs of the two models, and find that RepVGG trims the parameters and FLOPs slightly while accuracy stays essentially on par:

Method                  Stage   Params   FLOPs   mAP@0.5   mAP@0.5:0.95
YOLO5FacePose           Train   13.13    17.1    96.8      90.8
YOLO5FacePose +RepVGG   Train   13.126   16.8    96.2      90.3
YOLO5FacePose +RepVGG   Test    13.13    17.1    96.7      90.9
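For reference, Params/FLOPs numbers of this kind can be reproduced with a counter such as thop (whether the original figures were produced with thop is not stated, so treat this as one possible way to measure them):

import torch
from thop import profile   # pip install thop

img = torch.zeros(1, 3, 640, 640)                       # input resolution is an assumption
macs, params = profile(model, inputs=(img,), verbose=False)
print(f'Params: {params / 1e6:.3f} M, MACs: {macs / 1e9:.1f} G')   # thop counts MACs; some reports double this as FLOPs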

Summary

Following the steps above, training runs normally. However, the benefit of re-parameterization only shows up at the inference stage, so further inference testing is needed before a final conclusion can be drawn.
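One quick form of that further inference testing is to verify that the fused model is numerically equivalent to the training-time model. A minimal sketch, assuming a YOLO-style model whose eval-mode forward returns a tuple with the detection output first and a 640x640 input:

import copy
import torch

model.eval()                                       # BN must be in eval mode for the fusion to be exact
x = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    y_train = model(x)[0]                          # output before re-parameterization
    y_deploy = copy.deepcopy(model).fuse()(x)[0]   # output after fuse() on a copy
print('max abs diff:', (y_train - y_deploy).abs().max().item())    # expect on the order of 1e-5 or smaller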

References

There are some other reference implementations as well, which may be useful but need to be verified in practice:
YOLOv5-Lite: experiments and thoughts on RepVGG re-parameterization for industrial YOLO deployment
YOLOv5-Lite
YOLOv7 common.py

Reposted from blog.csdn.net/qq_46130027/article/details/129932045