Tensor Dimension Transformations in PyTorch

When working with PyTorch's basic data object, the Tensor, we frequently need to change its dimensions before passing the data on to later stages of a computation. This article lists the common dimension-transformation methods together with examples, for easy reference.

Viewing dimensions: torch.Tensor.size()

Returns the dimensions (shape) of the current tensor.

For example:

>>> import torch
>>> a = torch.Tensor([[[1, 2], [3, 4], [5, 6]]])
>>> a.size()
torch.Size([1, 3, 2])
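
For reference, size() has a few close relatives that answer the same kind of question: Tensor.shape is an alias of size(), dim() returns the number of dimensions, and numel() returns the total number of elements. Continuing with the tensor a above:

>>> a.shape     # alias of a.size()
torch.Size([1, 3, 2])
>>> a.dim()     # number of dimensions
3
>>> a.numel()   # total number of elements
6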

Reshaping a tensor: torch.Tensor.view(*args) → Tensor

Returns a tensor with the same data but a different shape. The returned tensor must contain the same data and the same number of elements as the original, but it may have a different size or number of dimensions. A tensor must be contiguous (contiguous()) before it can be viewed.

For example:

>>> x = torch.randn(2, 9)
>>> x.size()
torch.Size([2, 9])
>>> x
tensor([[-1.6833, -0.4100, -1.5534, -0.6229, -1.0310, -0.8038, 0.5166, 0.9774,
     0.3455],
    [-0.2306, 0.4217, 1.2874, -0.3618, 1.7872, -0.9012, 0.8073, -1.1238,
     -0.3405]])
>>> y = x.view(3, 6)
>>> y.size()
torch.Size([3, 6])
>>> y
tensor([[-1.6833, -0.4100, -1.5534, -0.6229, -1.0310, -0.8038],
    [ 0.5166, 0.9774, 0.3455, -0.2306, 0.4217, 1.2874],
    [-0.3618, 1.7872, -0.9012, 0.8073, -1.1238, -0.3405]])
>>> z = x.view(2, 3, 3)
>>> z.size()
torch.Size([2, 3, 3])
>>> z
tensor([[[-1.6833, -0.4100, -1.5534],
     [-0.6229, -1.0310, -0.8038],
     [ 0.5166, 0.9774, 0.3455]],

    [[-0.2306, 0.4217, 1.2874],
     [-0.3618, 1.7872, -0.9012],
     [ 0.8073, -1.1238, -0.3405]]])

As you can see, x, y, and z all contain the same data and the same number of elements; only the shape (and the number of dimensions) has changed.
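
The contiguity requirement matters in practice: operations such as transpose() return a non-contiguous view, and calling view() on it raises a RuntimeError. A minimal sketch of two common workarounds, calling contiguous() first or using reshape(), which copies only when necessary:

>>> t = torch.randn(2, 9).t()            # a transposed view; no longer contiguous
>>> t.is_contiguous()
False
>>> t.contiguous().view(3, 6).size()     # view() works after making the data contiguous
torch.Size([3, 6])
>>> t.reshape(3, 6).size()               # reshape() copies automatically when needed
torch.Size([3, 6])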

Squeezing / unsqueezing a tensor: torch.squeeze(), torch.unsqueeze()

  • torch.squeeze(input, dim=None, out=None)

Removes all dimensions of size 1 from the shape of the input tensor and returns the result. If the input has shape (A × 1 × B × 1 × C × 1 × D), the output has shape (A × B × C × D).

When dim is given, the squeeze is applied only to that dimension. For example, if the input has shape (A × 1 × B), squeeze(input, 0) leaves the tensor unchanged, while squeeze(input, 1) changes the shape to (A × B).

The returned tensor shares memory with the input tensor, so changing the contents of one also changes the other (a demonstration follows the unsqueeze examples below).

For example:

>>> x = torch.randn(3, 1, 2)
>>> x
tensor([[[-0.1986, 0.4352]],

    [[ 0.0971, 0.2296]],

    [[ 0.8339, -0.5433]]])
>>> x.squeeze().size()  # without arguments, all dimensions of size 1 are removed
torch.Size([3, 2])
>>> x.squeeze()
tensor([[-0.1986, 0.4352],
    [ 0.0971, 0.2296],
    [ 0.8339, -0.5433]])
>>> torch.squeeze(x, 0).size()  # with dim=0: no effect, because dimension 0 has size 3, not 1
torch.Size([3, 1, 2])
>>> torch.squeeze(x, 1).size()  # with dim=1: dimension 1 has size 1, so it is removed
torch.Size([3, 2])

As you can see, when a dim argument is given, the dimension disappears only if its size is 1.

  • torch.unsqueeze(input, dim, out=None)

Returns a new tensor with a dimension of size 1 inserted at the specified position of the input.

The returned tensor shares memory with the input tensor, so changing the contents of one also changes the other.

If dim is negative, it is converted to dim + input.dim() + 1.

Continuing with the tensor x from the example above:

>>> x.unsqueeze(0).size()
torch.Size([1, 3, 1, 2])
>>> x.unsqueeze(0)
tensor([[[[-0.1986, 0.4352]],

     [[ 0.0971, 0.2296]],

     [[ 0.8339, -0.5433]]]])
>>> x.unsqueeze(-1).size()
torch.Size([3, 1, 2, 1])
>>> x.unsqueeze(-1)
tensor([[[[-0.1986],
     [ 0.4352]]],

    [[[ 0.0971],
     [ 0.2296]]],

    [[[ 0.8339],
     [-0.5433]]]])

As you can see, a dimension of size 1 is added at the specified position.
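
To see the memory sharing mentioned for squeeze() and unsqueeze(), here is a minimal sketch (the concrete values are only for illustration): writing through the squeezed view also changes the original tensor.

>>> x = torch.zeros(3, 1, 2)
>>> y = x.squeeze()      # shape (3, 2), shares storage with x
>>> y[0, 0] = 100        # write through the view ...
>>> x[0, 0, 0]           # ... and the original tensor changes too
tensor(100.)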

Expanding a tensor: torch.Tensor.expand(*sizes) → Tensor

Returns a new view of the tensor with singleton dimensions expanded to a larger size. The tensor can also be expanded to a higher number of dimensions, and the new dimensions are appended at the front. Expanding a tensor does not allocate new memory: it only creates a new view in which each dimension of size 1 is expanded by setting its stride to 0. Any dimension of size 1 can therefore be expanded to an arbitrary size without allocating new memory.

For example:

>>> x = torch.Tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[1., 1., 1., 1.],
    [2., 2., 2., 2.],
    [3., 3., 3., 3.]])
>>> x.expand(3, -1)
tensor([[1.],
    [2.],
    [3.]])

The original data has 3 rows and 1 column; after expansion it has 3 rows and 4 columns. Passing -1 has the same effect as passing 1: only dimensions of size 1 can be expanded, dimensions whose size is not 1 must keep their original size, and -1 means "leave this dimension unchanged".
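
Two details of expand() are worth seeing in code (a minimal sketch, continuing with the x above): expanding to a higher number of dimensions appends the new dimensions at the front, and the expanded dimensions simply get stride 0, so no new memory is allocated.

>>> x.expand(2, 3, 4).size()   # the new leading dimension is added at the front
torch.Size([2, 3, 4])
>>> x.stride()                 # strides of the original (3, 1) tensor
(1, 1)
>>> x.expand(3, 4).stride()    # the expanded dimension has stride 0
(1, 0)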

Repeating a tensor: torch.Tensor.repeat(*sizes)

Repeats the tensor along the specified dimensions. Unlike expand(), this function copies the tensor's data.

For example:

>>> x = torch.Tensor([1, 2, 3])
>>> x.size()
torch.Size([3])
>>> x.repeat(4, 2)
tensor([[1., 2., 3., 1., 2., 3.],
    [1., 2., 3., 1., 2., 3.],
    [1., 2., 3., 1., 2., 3.],
    [1., 2., 3., 1., 2., 3.]])
>>> x.repeat(4, 2).size()
torch.Size([4, 6])

The original data is 1 row of 3 elements; it is repeated 4 times along the row direction and 2 times along the column direction, yielding 4 rows and 6 columns.

In other words, the original data is treated as a single block that is tiled according to the given sizes: here it is laid out as a 4 × 2 grid of blocks, and each block is a copy of the original data.
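
Because repeat() copies the data, the result does not share memory with the original tensor, whereas an expand() view does. A minimal sketch of the difference:

>>> x = torch.Tensor([1, 2, 3])
>>> r = x.repeat(4, 2)                # repeat() makes a real copy
>>> r[0, 0] = 100
>>> x                                 # the original is unaffected
tensor([1., 2., 3.])
>>> e = x.unsqueeze(0).expand(4, 3)   # expand() only creates a view of x
>>> x[0] = 100                        # modify the original in place
>>> e[3, 0]                           # every expanded row sees the change
tensor(100.)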

Matrix transpose: torch.t(input, out=None) → Tensor

Expects the input to be a matrix (a 2-D tensor) and transposes dimensions 0 and 1. It can be seen as shorthand for transpose(input, 0, 1).

For example:

>>> x = torch.randn(3, 5)
>>> x
tensor([[-1.0752, -0.9706, -0.8770, -0.4224, 0.9776],
    [ 0.2489, -0.2986, -0.7816, -0.0823, 1.1811],
    [-1.1124, 0.2160, -0.8446, 0.1762, -0.5164]])
>>> x.t()
tensor([[-1.0752, 0.2489, -1.1124],
    [-0.9706, -0.2986, 0.2160],
    [-0.8770, -0.7816, -0.8446],
    [-0.4224, -0.0823, 0.1762],
    [ 0.9776, 1.1811, -0.5164]])
>>> torch.t(x)  # the equivalent function form
tensor([[-1.0752, 0.2489, -1.1124],
    [-0.9706, -0.2986, 0.2160],
    [-0.8770, -0.7816, -0.8446],
    [-0.4224, -0.0823, 0.1762],
    [ 0.9776, 1.1811, -0.5164]])

It can only be used when the tensor is two-dimensional, i.e., a matrix.
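
As a quick check that t() really is just shorthand for transpose(input, 0, 1), continuing with the x above:

>>> torch.equal(x.t(), torch.transpose(x, 0, 1))
True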

Swapping dimensions: torch.transpose(), torch.Tensor.permute()

  • torch.transpose(input, dim0, dim1, out=None) → Tensor

Returns a transposed version of the input tensor, with dimensions dim0 and dim1 swapped. The output tensor shares memory with the input tensor, so modifying one also modifies the other (a demonstration follows the permute example below).

For example:

>>> x = torch.randn(2, 4, 3)
>>> x
tensor([[[-1.2502, -0.7363, 0.5534],
     [-0.2050, 3.1847, -1.6729],
     [-0.2591, -0.0860, 0.4660],
     [-1.2189, -1.1206, 0.0637]],

    [[ 1.4791, -0.7569, 2.5017],
     [ 0.0098, -1.0217, 0.8142],
     [-0.2414, -0.1790, 2.3506],
     [-0.6860, -0.2363, 1.0481]]])
>>> torch.transpose(x, 1, 2).size()
torch.Size([2, 3, 4])
>>> torch.transpose(x, 1, 2)
tensor([[[-1.2502, -0.2050, -0.2591, -1.2189],
     [-0.7363, 3.1847, -0.0860, -1.1206],
     [ 0.5534, -1.6729, 0.4660, 0.0637]],

    [[ 1.4791, 0.0098, -0.2414, -0.6860],
     [-0.7569, -1.0217, -0.1790, -0.2363],
     [ 2.5017, 0.8142, 2.3506, 1.0481]]])
>>> torch.transpose(x, 0, 1).size()
torch.Size([4, 2, 3])
>>> torch.transpose(x, 0, 1)
tensor([[[-1.2502, -0.7363, 0.5534],
     [ 1.4791, -0.7569, 2.5017]],

    [[-0.2050, 3.1847, -1.6729],
     [ 0.0098, -1.0217, 0.8142]],

    [[-0.2591, -0.0860, 0.4660],
     [-0.2414, -0.1790, 2.3506]],

    [[-1.2189, -1.1206, 0.0637],
     [-0.6860, -0.2363, 1.0481]]])

It can transpose multi-dimensional tensors: any two dimensions can be swapped.

  • torch.Tensor.permute(dims)

Permutes the dimensions of the tensor.

Continuing with the tensor x from the example above:

>>> x.size()
torch.Size([2, 4, 3])
>>> x.permute(2, 0, 1).size()
torch.Size([3, 2, 4])
>>> x.permute(2, 0, 1)
tensor([[[-1.2502, -0.2050, -0.2591, -1.2189],
     [ 1.4791, 0.0098, -0.2414, -0.6860]],

    [[-0.7363, 3.1847, -0.0860, -1.1206],
     [-0.7569, -1.0217, -0.1790, -0.2363]],

    [[ 0.5534, -1.6729, 0.4660, 0.0637],
     [ 2.5017, 0.8142, 2.3506, 1.0481]]])

You pass the new order of the dimension indices directly, and the tensor's dimensions are rearranged accordingly; unlike transpose(), it is not limited to swapping two dimensions at a time.
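
As noted above, transpose() returns a view that shares memory with the input, and the same holds for permute(). Both also produce non-contiguous tensors, which is why a contiguous() call is sometimes needed before view(). A minimal sketch (the concrete values are only for illustration):

>>> x = torch.zeros(2, 4, 3)
>>> y = torch.transpose(x, 1, 2)   # a view with dimensions 1 and 2 swapped
>>> y[0, 2, 3] = 7                 # write through the view ...
>>> x[0, 3, 2]                     # ... and the original changes too
tensor(7.)
>>> y.is_contiguous()              # transposed / permuted tensors are not contiguous
False
>>> x.permute(2, 0, 1).is_contiguous()
False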

That's all for this article. I hope it is helpful for your studies, and thank you for your support.

Time: 2019-08-17
