Course Record
Tensor creation, transformation, and operations
Course code
1. create_tensor
import torch
import numpy as np
a = np.ones((3, 3))
print(a, id(a))
b = torch.tensor(a)
print(b, id(b), b.device)
# b_gpu = torch.tensor(a, device = 'cuda')
b_gpu = torch.tensor(a, device = 'cpu')
print(b_gpu, id(b_gpu), b_gpu.device)
c = torch.from_numpy(a)
print(c, id(c))
a[0, 0] = 2
print(a, c)
c[0, 1] = 3
print(a, c)
d = torch.zeros((3, 3, 3))
print(d, d.dtype, d.shape)
dd = torch.zeros_like(d)
print(dd, dd.dtype, dd.shape)
e = torch.full((2, 2), 233)
print(e, e.dtype)
ee = torch.full((2, 2), 233.)
print(ee, ee.dtype)
f = torch.arange(1, 5)
print(f, f.dtype)
ff = torch.arange(1., 5.1)
print(ff, ff.dtype)
g = torch.linspace(1, 6, 6)
print(g, g.dtype)
h = torch.normal(0, 1, (3, 3))
print(h, h.dtype)
hh = torch.randn((3, 3))
print(hh, hh.dtype)
i = torch.rand((2, 2))
print(i)
ii = torch.randint(1, 5, (2, 2))
print(ii)
j = torch.randperm(20)
print(j, j.dtype)
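A quick way to confirm what the id() and mutation checks above show (torch.tensor(a) copies the ndarray, while torch.from_numpy(a) shares its memory) is to compare the underlying data pointers; a small sketch continuing with a, b, and c from the listing above:
print(a.__array_interface__['data'][0])  # address of the ndarray's data buffer
print(b.data_ptr())  # torch.tensor(a): a different address, i.e. a copy
print(c.data_ptr())  # torch.from_numpy(a): the same address as the ndarray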
2. reshape_tensor
import torch
import numpy as np
a = torch.arange(0, 10, dtype = torch.int64)
b = torch.reshape(a, (2, 5))
print(b)
b_T = torch.t(b)
print(b_T, b_T.shape)
c = torch.reshape(torch.arange(0, 24, dtype = torch.int64), (2, 3, 4))
print(c)
d = torch.transpose(c, 0, 1)
print(d)
e = torch.tensor([1])
print(e, e.shape)
f = torch.squeeze(e)
print(f, f.shape)
f = f * 2
print(f, e)
ee = torch.unsqueeze(f, dim = 0)
print(ee)
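Note that torch.reshape returns a view of the input whenever the shape and strides allow it, so the reshaped tensor can share memory with the source; a small check continuing with a and b from the listing above:
b[0, 0] = 100
print(a)  # a[0] is now 100 too, because b is a view of a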
3. concat_split_tensor
import torch
import numpy as np
t1 = torch.ones((2, 2))
t2 = torch.zeros((2, 2))
a = torch.cat([t1, t2], dim = 0)
print(a, a.shape)
b = torch.stack([t1, t2], dim = 0)
print(b, b.shape)
print(b[0], b[1])
x = torch.split(b, [1, 1], dim = 0)
print(type(x))
c, d = x
print(c, d)
e = torch.index_select(a, dim = 0, index = torch.tensor([0, 2]))
print(e)
mask = a.ge(1)
f = torch.masked_select(a, mask)
print(mask, f)
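For reference: torch.cat joins tensors along an existing dimension (a above is 4x2), while torch.stack creates a new dimension (b above is 2x2x2). torch.chunk is the counterpart of torch.split that simply cuts a tensor into a given number of pieces; a small sketch:
g, h = torch.chunk(b, chunks=2, dim=0)
print(g.shape, h.shape)  # both torch.Size([1, 2, 2])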
4. tensor_operator
# Use simple univariate linear regression to practice and demonstrate common tensor operations
import torch
import numpy as np
torch.manual_seed(10)
# data
x = torch.rand((20, 1)) * 10
y = 2 * x + 5 + torch.randn(20, 1)
# model
w = torch.tensor(np.asarray([0.3]), requires_grad=True)
b = torch.tensor(np.asarray([0.]), requires_grad=True)
print(w, b)
# iteration
for _ in range(1000):
    # forward pass
    y_pre = w * x + b
    loss = ( 0.5 * (y_pre - y) ** 2 ).mean()
    # backward pass
    loss.backward()
    # gradient descent update (lr = 0.05), then clear the gradients
    w.data.sub_(0.05 * w.grad)
    b.data.sub_(0.05 * b.grad)
    w.grad.zero_()
    b.grad.zero_()
    # show progress
    if _ % 100 == 0:
        print(str(_) + ' loss is', loss.data.numpy())
    if loss.data.numpy() < 0.47:
        break
print('finish...')
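The manual w.data.sub_ / b.data.sub_ updates above can also be written with torch.optim; a minimal sketch of the same regression using SGD with the same learning rate (the zero initial values here are illustrative):
import torch
torch.manual_seed(10)
x = torch.rand((20, 1)) * 10
y = 2 * x + 5 + torch.randn(20, 1)
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([w, b], lr=0.05)
for step in range(1000):
    loss = (0.5 * (w * x + b - y) ** 2).mean()
    optimizer.zero_grad()  # clear old gradients
    loss.backward()        # compute dloss/dw and dloss/db
    optimizer.step()       # in-place SGD update of w and b
    if loss.item() < 0.47:
        break
print('w =', w.item(), 'b =', b.item(), 'loss =', loss.item())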
Exercises
1. Install Anaconda, PyCharm, CUDA + cuDNN (optional), a virtual environment, and PyTorch, then implement "hello pytorch" to print the PyTorch version.
2. What is the relationship between tensors and matrices, vectors, and scalars?
3. What functionality does Variable "give" a tensor?
4. Create a tensor with torch.from_numpy, then print the addresses to see how the ndarray and the tensor data are related.
5. Implement the four modes of torch.normal() for creating tensors.
1. Installation environment
- conda create -n torch_p36 python=3.6.5
- conda activate torch_p36
- pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
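A minimal "hello pytorch" check for exercise 1 (run inside the torch_p36 environment) to print the installed version and whether CUDA is usable:
import torch
print('hello pytorch, version:', torch.__version__)  # e.g. 1.7.1+cu110
print('CUDA available:', torch.cuda.is_available())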
2. Concept explanation
Scalar
A scalar is a single number, which distinguishes it from most of the other objects studied in linear algebra.
Vector
A vector is an ordered sequence of numbers; each individual number can be identified by its index in the sequence.
Matrix
A matrix is a collection of objects that share the same features, arranged as a two-dimensional data table: each object is represented as a row of the matrix, each feature as a column, and every feature has a numeric value.
Tensor
In some cases we need to discuss arrays with more than two axes. In general, an array whose elements are laid out on a regular grid with a variable number of axes is called a tensor.
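A quick illustration of the four concepts in torch: a 0-dimensional tensor is a scalar, 1-dimensional a vector, 2-dimensional a matrix, and 3 or more dimensions a general tensor (the variable names below are just examples):
import torch
s = torch.tensor(3.)            # scalar: 0 dimensions
v = torch.tensor([1., 2., 3.])  # vector: 1 dimension
m = torch.ones((2, 3))          # matrix: 2 dimensions
t = torch.ones((2, 3, 4))       # tensor: 3 dimensions
print(s.ndim, v.ndim, m.ndim, t.ndim)  # 0 1 2 3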
3. What Variable "gives" a tensor
Variable is a data type in torch.autograd. It is mainly used to wrap a Tensor so that the tensor supports automatic differentiation.
It has five main attributes:
1. data: the wrapped Tensor
2. grad: the gradient of data
3. grad_fn: the Function that created the Tensor (i.e. the operation that produced it, such as an addition or a multiplication); this is the key to automatic differentiation
4. requires_grad: indicates whether the tensor needs a gradient; tensors that do not need one can set it to False
5. is_leaf: indicates whether the tensor is a leaf node in the computation graph
Variable no longer needs to appear in code: since PyTorch 0.4 it has been merged into Tensor, which additionally carries:
- dtype
- shape
- device
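A small sketch showing these attributes on a concrete tensor (the values are illustrative):
import torch
x = torch.tensor([2.], requires_grad=True)
y = x * 3 + 1
y.backward()
print(x.data, x.grad)        # wrapped data and gradient dy/dx = 3
print(x.grad_fn, y.grad_fn)  # None for the leaf x; AddBackward0 for y
print(x.requires_grad, x.is_leaf, y.is_leaf)  # True True False
print(y.dtype, y.shape, y.device)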
4. Create Tensor
(The code here was identical to the create_tensor listing in section 1 above; the torch.from_numpy part of that listing already prints the ids and, by modifying a[0, 0] and c[0, 1], shows that the ndarray and the tensor share the same underlying data, whereas torch.tensor(a) makes a copy.)
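5. Four modes of torch.normal()
torch.normal can be called in four ways, depending on whether mean and std are tensors or scalars; a small sketch (the values are illustrative):
import torch
mean_t = torch.arange(1., 5.)      # tensor of means
std_t = torch.linspace(1., 4., 4)  # tensor of standard deviations
n1 = torch.normal(mean_t, std_t)              # tensor mean, tensor std (element-wise)
n2 = torch.normal(mean=mean_t, std=1.0)       # tensor mean, scalar std
n3 = torch.normal(mean=0.0, std=std_t)        # scalar mean, tensor std
n4 = torch.normal(0.0, 1.0, size=(4,))        # scalar mean, scalar std, explicit size
print(n1, n2, n3, n4)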