Tensor basic operations
100+ tensor-related operations are covered in the official documentation.
Tensors are similar to NumPy's ndarrays.
new
We can create a new tensor in the following ways:
- Create an uninitialized 5x3 matrix (torch.empty returns whatever values happen to be in memory, so the contents are arbitrary):
x = torch.empty(5, 3)
print(x)
Output:
tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
- Create a randomly initialized matrix (torch.rand samples uniformly from [0, 1)):
x = torch.rand(5, 3)
print(x)
Output:
tensor([[0.4494, 0.6230, 0.8681],
[0.0780, 0.2643, 0.0934],
[0.1205, 0.0813, 0.9454],
[0.4212, 0.2899, 0.8791],
[0.1500, 0.6572, 0.4772]])
- Create a matrix of zeros with data type long:
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
Output:
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
- Create a tensor directly from data (a small extension of this follows the list):
x = torch.tensor([5.5, 3])
print(x)
Output:
tensor([5.5000, 3.0000])
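As a small extension of the last bullet (not from the original text, but standard torch.tensor behavior), torch.tensor also accepts nested lists together with an explicit dtype; torch is assumed to be imported as in the snippets above, and the values are illustrative:
# build a 2x2 float tensor from a nested Python list
m = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
print(m)        # tensor([[1., 2.], [3., 4.]])
print(m.dtype)  # torch.float32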
Alternatively, we can build a new tensor from an existing one. Unless we pass new values for its attributes (such as dtype), the new tensor reuses the attributes of the source tensor.
x = x.new_ones(5, 3, dtype=torch.double)  # new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float)  # override the dtype; the size stays the same
print(x)
Output:
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[ 0.2552, 2.0007, 0.0682],
[-0.8530, -0.1174, -0.6569],
[-1.1001, 0.8416, -1.3575],
[-1.0513, 0.4601, 0.6628],
[ 2.0841, -0.4303, 0.3235]])
Output the tensor size:
print(x.size())
The result is a torch.Size, which is in fact a tuple, so it supports all tuple operations:
torch.Size([5, 3])
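Since torch.Size behaves like a tuple, it can be unpacked directly; a minimal sketch (variable names are illustrative):
rows, cols = x.size()
print(rows, cols)  # 5 3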
add
- x + y
- torch.add(x, y)
- torch.add(x, y, out=result): an output tensor can be passed via the out argument to receive the result
y = torch.rand(5, 3)  # y was used above; defined here so the snippet runs on its own
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
Output:
tensor([[ 0.7021, 2.1474, 0.0886],
[-0.5905, 0.0338, 0.2445],
[-0.9172, 1.5455, -1.1381],
[-0.6434, 0.5016, 1.0220],
[ 2.9464, -0.1195, 1.0920]])
- y.add_(x)
Tip: sometimes you will see an underscore _ suffix after an operation's name. It means the operation does not make a copy but changes the tensor's values in place, i.e. it is the "in-place" version (as opposed to the out-of-place version). Some operations, such as add, have both versions, but this is not true for all operations: narrow has no in-place version, so .narrow_ does not exist; similarly, fill_ has no out-of-place version, so .fill does not exist.
# add x to y
y.add_(x)
print(y)
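To make the in-place/out-of-place distinction concrete, here is a minimal sketch (the variable names are illustrative, not from the original):
a = torch.ones(2, 2)
b = torch.ones(2, 2)
c = a.add(b)  # out-of-place: a is unchanged, the result goes into c
a.add_(b)     # in-place: a itself is updated
print(c)  # tensor([[2., 2.], [2., 2.]])
print(a)  # tensor([[2., 2.], [2., 2.]])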
We can also index tensors with NumPy-style indexing:
print(x[:, 1])
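Other NumPy-style indexing works the same way; a short sketch on a fresh, illustrative tensor:
t = torch.randn(5, 3)
print(t[0])        # first row
print(t[2:4, :2])  # rows 2 and 3, first two columns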
view
Its main purpose is resizing/reshaping a tensor, equivalent to the resize() function in NumPy. For in-place resizing, x.resize_(2, 3) can also be used.
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
A -1 in the size argument means that dimension's size is inferred from the total number of elements and the other dimensions.
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
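One property worth knowing (standard PyTorch semantics, not spelled out in the original text): view returns a tensor that shares data with the source, so writes through the view are visible in the original. A minimal sketch:
t = torch.zeros(2, 3)
v = t.view(6)
v[0] = 7.0
print(t[0, 0])  # tensor(7.): the original tensor sees the change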
item
If a tensor holds exactly one element, .item() can be used to obtain its value as a plain Python number.
x = torch.randn(1)
print(x)
print(x.item())
Output:
tensor([0.5746])
0.5746164917945862
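A related caveat (standard behavior, noted here as an aside): .item() only applies to one-element tensors; calling it on a larger tensor raises an error. A minimal sketch:
t = torch.tensor([1.5])
print(t.item())  # 1.5, a plain Python float
# torch.randn(2).item() would raise an error: only one-element tensors convert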
NumPy Bridge
Torch tensor -> NumPy array
a = torch.ones(5)
print(a)
Output:
tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b)
Output:
[1. 1. 1. 1. 1.]
The tensor and the NumPy array share the same underlying memory, so modifying one in place also changes the other:
a.add_(1)
print(a)
print(b)
Output:
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
NumPy array -> Torch tensor
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
Output:
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
Note that b changes along with a: torch.from_numpy shares the underlying memory with the array rather than copying it.
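Only in-place NumPy operations propagate to the tensor; rebinding the name to a new array does not. A minimal sketch (illustrative):
a = np.ones(3)
b = torch.from_numpy(a)
a += 1     # in-place: b sees the change
print(b)   # tensor([2., 2., 2.], dtype=torch.float64)
a = a + 1  # creates a brand-new array; b still views the old memory
print(b)   # tensor([2., 2., 2.], dtype=torch.float64)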
CUDA Tensors
We can move tensors onto any device using the .to method.
# let us run this cell only if CUDA is available
# we will use torch.device objects to move tensors in and out of the GPU
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on the GPU
    x = x.to(device)                       # or just use strings: .to("cuda")
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # .to can also change the dtype at the same time
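A common device-agnostic pattern building on the above (a sketch; the tensor here is illustrative):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = torch.randn(5, 3).to(device)  # runs unchanged on both CPU and GPU
print(t.device)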