Implementing a dynamic for loop in Python

Background

Incoming pictures and tensors may have different numbers of dimensions. If you want to process them all with the same function, a dynamic for loop can do it.

Implementation

Use the PyTorch tensor as the data container.

Here the shape is passed in directly as a parameter rather than read from the tensor's shape attribute. The point is to make the construction of the dynamic for loop easier to follow.
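As an aside (an addition to this post, not part of the original walkthrough), in practice the shape would not need to be passed by hand; it can be read straight off the tensor:

import torch

t = torch.ones(400).reshape([5, 8, 10])
data_size = list(t.shape)  # [5, 8, 10] -- exactly the value the functions below expect
print(t.ndim, data_size)   # 3 [5, 8, 10]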

Implementing summation

Implement a dynamic for loop that computes the sum of all elements of pictures and tensors of different dimensions.

import torch


def dynamicFor(data, data_size):
    '''
    Dynamic for loop: summation.
    :param data: the input data to be summed
    :param data_size: the shape of the data
    :return: the sum of all elements in data
    '''
    count = 0
    place = 0
    ndim = len(data_size)  # work out the number of dimensions
    sum_num = 1
    sum_list = []
    sumx = 1
    for i in range(ndim):
        sum_num *= data_size[i]  # total number of iterations, i.e. the element count
        if i != ndim - 1:
            # divisor (stride) for each dimension, so the exact position can be
            # recovered later by division and remainder
            sumx *= data_size[-(i + 1)]
            sum_list.append(sumx)
    sum_list.reverse()  # remember to reverse, so indexing lines up with the dimensions
    # print(sum_list)
    data = torch.Tensor(data)  # convert once, outside the loop
    while place < sum_num:
        position = []  # stores the position
        current_place = place
        for i in range(ndim):
            if i != ndim - 1:
                position.append(current_place // sum_list[i])
                # This step matters: strip off the part of the flat index already
                # accounted for, so the smaller divisors that follow cannot
                # produce out-of-range positions
                current_place = current_place - current_place // sum_list[i] * sum_list[i]
            else:
                position.append(current_place)  # whatever remains is the last index
        result = data
        for position_num in position:  # walk to the exact position so any operation can be applied, here summation
            result = result[position_num]
        count += result  # at the exact element, apply the operation: add it to the sum
        place += 1
    return count

Check if it is correct:

dim3 = torch.ones(400).reshape([5, 8, 10])
print(dynamicFor(dim3, [5, 8, 10]))

dim4 = torch.ones(400).reshape([5, 5, 4, 4])
print(dynamicFor(dim4, [5, 5, 4, 4]))

dim5 = torch.ones(400).reshape([2, 4, 2, 5, 5])
print(dynamicFor(dim5, [2, 4, 2, 5, 5]))

The result is as follows:

tensor(400.)
tensor(400.)
tensor(400.)
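As a quick sanity check (an addition to this post, not in the original), the totals agree with PyTorch's built-in reduction, and the position decoding inside dynamicFor is the standard flat-index-to-multi-index conversion (the same thing numpy.unravel_index computes):

import torch

dim3 = torch.ones(400).reshape([5, 8, 10])

# The hand-rolled loop should agree with the built-in reduction
print(dim3.sum())  # tensor(400.), same as dynamicFor(dim3, [5, 8, 10])

# Decoding flat index 123 with the divisors [80, 10] used by dynamicFor:
print(123 // 80, (123 % 80) // 10, 123 % 10)  # 1 4 3 -> position [1, 4, 3]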

Implementing the norm

(The reason for building this dynamic for loop: while putting together a network, I tried to write a loss function that computes the L2 norm of the network weights, and realized it might have to handle data of different dimensions.)

When building a dynamic for loop to compute norms over data of different dimensions, note that the last two dimensions are not included in the position index; the position index only locates each matrix, i.e. only the leading dimensions are indexed.

import torch


def dynamicFor4Norm(data, data_size):
    '''
    Dynamic for loop: matrix norms, i.e. the norm of each matrix formed by
    the last two dimensions.
    :param data: the input data
    :param data_size: the shape of the data
    :return: the sum of the norms of all last-two-dimension matrices in data
    '''
    count = 0
    place = 0
    ndim = len(data_size)  # work out the total number of dimensions
    if ndim < 2:  # fewer than 2 dimensions should be a bias, so skip it
        return 0
        # raise Exception("Dimension of input is less than 2!")
    elif ndim == 2:  # exactly 2 dimensions: take the norm directly
        return torch.Tensor(data).norm()
    else:
        sum_num = 1
        sum_list = []
        sumx = 1
        # the last two dimensions are kept intact, so drop them from the
        # iteration count and the per-dimension divisors
        for i in range(ndim - 2):
            sum_num *= data_size[i]  # total number of iterations over the leading dimensions
            if i != ndim - 3:
                # divisor (stride) for each leading dimension, so the matrix
                # position can be recovered later by division and remainder
                sumx *= data_size[-2 - (i + 1)]
                sum_list.append(sumx)
        sum_list.reverse()  # remember to reverse, so indexing lines up with the dimensions
        # print(sum_list)
        data = torch.Tensor(data)  # convert once, outside the loop
        while place < sum_num:
            position = []  # stores the matrix position, to visit every matrix in turn
            current_place = place
            for i in range(ndim - 2):
                if i != ndim - 3:
                    position.append(current_place // sum_list[i])
                    # This step matters: strip off the part of the flat index already
                    # accounted for, so the smaller divisors that follow cannot
                    # produce out-of-range positions
                    current_place = current_place - current_place // sum_list[i] * sum_list[i]
                else:
                    position.append(current_place)  # whatever remains is the last leading index
            result = data
            for position_num in position:  # walk to the exact matrix so any operation can be applied, here the norm
                result = result[position_num]
            count += result.norm()  # at the exact matrix, apply the operation: take the norm
            place += 1
    return count

Test that it is correct:

# use the same dim4 and dim5 as before
dim4 = torch.ones(400).reshape([5, 5, 4, 4])
dim5 = torch.ones(400).reshape([2, 4, 2, 5, 5])

print(dynamicFor4Norm(dim4, [5, 5, 4, 4]))
print(dynamicFor4Norm(dim5, [2, 4, 2, 5, 5]))

The result is as follows:

tensor(100.)
tensor(80.)
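For comparison (again an addition to this post, assuming a reasonably recent PyTorch), torch.linalg.matrix_norm computes the Frobenius norm over the last two dimensions directly and gives the same totals:

import torch

dim4 = torch.ones(400).reshape([5, 5, 4, 4])
dim5 = torch.ones(400).reshape([2, 4, 2, 5, 5])

# one Frobenius norm per trailing 2-D matrix, then summed (needs torch >= 1.9)
print(torch.linalg.matrix_norm(dim4).sum())  # tensor(100.)
print(torch.linalg.matrix_norm(dim5).sum())  # tensor(80.)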

A small note

Generally, in a loss function it is the square of the weight's L2 norm that is used, as in L2 regularization, where a penalty proportional to the sum of the squared weight norms is added to the loss.

At this point, the norm-accumulating line in dynamicFor4Norm above (count += result.norm()) has to be changed to:

count += result.norm().pow(2)  # yes, it just adds .pow(2) to square the norm

With the same inputs, the result is now:

tensor(400.)
tensor(400.)
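To close the loop on the motivation above, here is a minimal sketch (the model, the data, and the coefficient lam are assumptions for illustration, not from the original code) of how this squared-norm variant could feed a regularized loss. Conv weights are 4-D, so they take the dynamic path, while 1-D biases are skipped because dynamicFor4Norm returns 0 for them:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 4, 3))
criterion = nn.MSELoss()
lam = 1e-4  # regularization strength, an assumed value

inputs = torch.randn(2, 3, 16, 16)
targets = torch.randn(2, 4, 12, 12)

l2_term = 0
for param in model.parameters():
    # detach() keeps this sketch purely illustrative; torch.Tensor(data)
    # inside dynamicFor4Norm copies the data and breaks the autograd graph anyway
    l2_term += dynamicFor4Norm(param.detach(), list(param.shape))
loss = criterion(model(inputs), targets) + lam * l2_term
print(loss)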

Original article: blog.csdn.net/m0_46948660/article/details/129407470