A simple conv1d implementation

The conv1d code

Start from the simplest case: no bias, no padding, stride=1, and no grouped convolution. These features can be added step by step later; for now we implement the most basic version to understand the underlying process.

# -*- coding: utf-8 -*-
"""
Created on Sat Mar 12 15:04:51 2022

@author: masteryi
"""


def myconv1d(infeat, convkernel, padding=0, stride=1):
    b, c, h = len(infeat), len(infeat[0]), len(infeat[0][0])
    out_c, in_c, lenk = len(convkernel), len(convkernel[0]), len(convkernel[0][0])
    # no grouped convolution, so c == in_c

    res = [[[0] * (h-lenk+1) for _ in range(out_c)] for _ in range(b)]
    # final output shape: b * out_c * (h-lenk+1)

    for i in range(b):
        # the batch dimension is processed serially for now

        for j in range(out_c):
            # compute the result for each output channel

            for m in range(c):
                for n in range(h-lenk+1):
                    # compute the value at each output position

                    ans = 0
                    for k in range(lenk):
                        ans += infeat[i][m][n+k] * convkernel[j][m][k]
                    res[i][j][n] += ans
    return res


# my convolution
infeat = [[[1,2,3,4], [1,2,4,3]]]
convkernel = [[[0,1,2], [0,2,1]], [[1,0,2], [1,2,0]], [[2,0,1], [2,1,0]]]
outfeat = myconv1d(infeat, convkernel)
print(outfeat)


# result computed by PyTorch's own conv1d
from torch.nn.functional import conv1d
import torch
import numpy

infeat = torch.tensor(numpy.array(infeat))
convkernel = torch.tensor(numpy.array(convkernel))

outfeat_pytorch = conv1d(infeat, convkernel)
print(outfeat_pytorch)

The output is shown below; it matches PyTorch's official result:

[[[16, 22], [12, 20], [9, 16]]]
tensor([[[16, 22],
         [12, 20],
         [ 9, 16]]], dtype=torch.int32)
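The intro notes that padding and stride can be added later. Here is a minimal sketch of that extension (the function name myconv1d_padded is my own, not from the original post): each channel is zero-padded on both sides, and the window start steps by stride instead of 1.

```python
def myconv1d_padded(infeat, convkernel, padding=0, stride=1):
    b = len(infeat)
    out_c, in_c, lenk = len(convkernel), len(convkernel[0]), len(convkernel[0][0])

    # zero-pad every channel on both sides
    padded = [[[0] * padding + ch + [0] * padding for ch in sample] for sample in infeat]
    h = len(padded[0][0])

    out_h = (h - lenk) // stride + 1  # standard output-length formula
    res = [[[0] * out_h for _ in range(out_c)] for _ in range(b)]

    for i in range(b):
        for j in range(out_c):
            for m in range(in_c):
                for n in range(out_h):
                    start = n * stride  # window slides by `stride`
                    for k in range(lenk):
                        res[i][j][n] += padded[i][m][start + k] * convkernel[j][m][k]
    return res


infeat = [[[1, 2, 3, 4], [1, 2, 4, 3]]]
convkernel = [[[0, 1, 2], [0, 2, 1]], [[1, 0, 2], [1, 2, 0]], [[2, 0, 1], [2, 1, 0]]]
print(myconv1d_padded(infeat, convkernel))  # matches [[[16, 22], [12, 20], [9, 16]]]
print(myconv1d_padded(infeat, convkernel, stride=2))
```

With padding=0 and stride=1 it reduces to the basic version above, which is a quick sanity check before trusting the new parameters.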

Thoughts

  1. For an input with in_c channels and an output with out_c channels, the convolution layer consists of out_c groups of kernels; each group has in_c kernels, and each kernel has size k_h * k_w.
  2. The kernels within a group each have their own weights and do not affect one another, and the kernel weights also differ from group to group; so the layer's parameter count is c_out * c_in * k_h * k_w.
  3. Reference: Zhihu: 卷积神经网络CNN(2),详细认识卷积过程
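Point 2 can be checked directly against the nested kernel list used in this post (for conv1d the kernel is one-dimensional, so k_h * k_w is just the kernel length k):

```python
convkernel = [[[0, 1, 2], [0, 2, 1]], [[1, 0, 2], [1, 2, 0]], [[2, 0, 1], [2, 1, 0]]]
out_c, in_c, k = len(convkernel), len(convkernel[0]), len(convkernel[0][0])

# parameter count c_out * c_in * k (no bias term in this post)
n_params = out_c * in_c * k
print(n_params)  # 3 * 2 * 3 = 18

# cross-check by counting every weight in the nested list
flat = sum(len(kernel) for group in convkernel for kernel in group)
assert flat == n_params
```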


Reposted from blog.csdn.net/qq_45510888/article/details/123446171