[PyTorch tensor] Advanced Tensor operations

Topic

This post records advanced tensor operations. A beginner-level version is available here .
It will be updated as new cases come up.

Gradient-related

  • torch.stack: when we have a Python list of tensors, building a single higher-dimensional tensor through the constructor torch.tensor first requires a chain of operations such as detach, clone, cpu, numpy on each element. However, detach cuts the tensor out of the autograd graph, so the gradient information is lost. When we want to combine the current data while keeping it connected to the computation graph, we should use torch.stack instead. Conveniently, it accepts the list of tensors directly, and the stacked result still tracks gradients. So for list of tensors -> higher-dim tensor, simply call torch.stack, then manually disable or clear the gradient if needed, i.e. test_tensor = torch.stack(list_of_tensor) followed by test_tensor.grad = None.
    Document source
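A minimal sketch of the contrast described above (the variable names and example data are my own, not from the original post): torch.stack keeps the result on the autograd graph, while torch.tensor only accepts plain data, forcing a detach that drops gradient tracking.

```python
import torch

# Hypothetical list of tensors that are part of an autograd graph
# (non-leaf tensors produced by a differentiable multiplication).
list_of_tensor = [torch.ones(2, requires_grad=True) * i for i in range(3)]

# torch.stack joins the list along a new dimension and the result
# still tracks gradients, so backprop through it remains possible.
stacked = torch.stack(list_of_tensor)
print(stacked.shape)          # torch.Size([3, 2])
print(stacked.requires_grad)  # True

# torch.tensor, by contrast, needs raw data: each element must first be
# detached (and converted), which severs it from the computation graph.
copied = torch.tensor([t.detach().numpy() for t in list_of_tensor])
print(copied.requires_grad)   # False
```

If the gradients are not needed afterwards, the stacked tensor can be taken off the graph explicitly, e.g. with stacked.detach().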

Origin blog.csdn.net/Petersburg/article/details/124194379