PyTorch in Practice - A Complete Guide to Tensor Operations, the Foundation of Neural Network Image Classification (2)


Preface

Among the three mainstream frameworks, PyTorch is arguably the most suitable for beginners; its simplicity and ease of use make it the first choice for newcomers. One point I want to emphasize is that a framework is like a programming language: it is only a tool for getting a project done, the wheels we use to build the car. What we need to focus on is understanding how to use Torch to implement functionality, without worrying too much about how the wheels are made, which would cost us far too much study time. A future series of articles will explain deep learning frameworks themselves in detail, but that only makes sense once we are more familiar with the theory and practice of deep learning. What we need most at this stage is to learn how to use these tools.

Deep learning is not easy to master. It involves a great deal of mathematical theory and many formula derivations, and without hands-on practice it is hard to understand what role the code we write actually plays inside a neural network framework. However, I will do my best to simplify the material and translate it into concepts we are already familiar with, so that everyone can understand and become comfortable with neural network frameworks, keeping the reasoning smooth and avoiding excessive mathematical formulas and specialized theory. The goal is to understand and implement the algorithm within a single article and master the material in the most efficient way.


The blogger has focused on data modeling for four years, has taken part in dozens of mathematical modeling competitions large and small, and understands the principles of the various models, how each model is built, and the different ways of analyzing a problem. The purpose of this column is to get you using various mathematical models, machine learning, deep learning, and code quickly, starting from scratch; every article contains a practical project and runnable code. The blogger keeps up with the mathematical modeling competitions, and for each one writes up the latest ideas and code in this column, with detailed reasoning and complete code. I hope readers who need it will not miss this carefully crafted column:
Quick Learning in One Article - Commonly Used Models in Mathematical Modeling


This article continues from the previous one; there was too much content to fit in a single post: PyTorch in Practice - A Complete Guide to Tensor Operations, the Foundation of Neural Network Image Classification (1)_fanstuck's blog-CSDN blog

PyTorch data structure - Tensor

5. Tensor mathematical operations

First, let's create two example tensors so that we can observe how the operations behave:

import torch

# Create two example tensors
tensor_a = torch.tensor([[1, 2], [3, 4]])
tensor_b = torch.tensor([[5, 6], [7, 8]])

1. Addition and subtraction of tensors

# Tensor addition
tensor_sum = tensor_a + tensor_b
tensor_sum
tensor([[ 6,  8],
        [10, 12]])
# Tensor subtraction
tensor_diff = tensor_a - tensor_b
tensor_diff

tensor([[-4, -4],
        [-4, -4]])

Addition and subtraction are performed element-wise: corresponding entries of the two tensors are added or subtracted.
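PyTorch also exposes these operations as functions. As a minimal sketch reusing tensor_a and tensor_b from above, torch.add and torch.sub behave like + and -, and torch.add takes an optional alpha factor that scales the second operand before adding:

# Functional forms of the same element-wise operations
tensor_sum2 = torch.add(tensor_a, tensor_b)    # same as tensor_a + tensor_b
tensor_diff2 = torch.sub(tensor_a, tensor_b)   # same as tensor_a - tensor_b

# alpha scales the second operand before the addition: a + 2 * b
tensor_scaled = torch.add(tensor_a, tensor_b, alpha=2)
print(tensor_scaled)   # tensor([[11, 14], [17, 20]])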

2. Multiplication and division of tensors

# Tensor multiplication (element-wise)
tensor_product = tensor_a * tensor_b
tensor_product
tensor([[ 5, 12],
        [21, 32]])
# Tensor division (element-wise)
tensor_div = tensor_a / tensor_b
tensor_div

tensor([[0.2000, 0.3333],
        [0.4286, 0.5000]])

Multiplication with * and division with / are also element-wise. Note that / performs true division and returns a floating-point result even for integer inputs.
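If you want integer (floor) division instead of true division, recent PyTorch versions let you pass rounding_mode to torch.div; a small sketch with the same tensors:

# Floor division of the same tensors (requires a reasonably recent PyTorch)
tensor_floor_div = torch.div(tensor_a, tensor_b, rounding_mode='floor')
print(tensor_floor_div)   # tensor([[0, 0], [0, 0]])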

3. Square and square root of tensors

# Square of the tensor
tensor_square = torch.square(tensor_a)
tensor_square

tensor([[ 1,  4],
        [ 9, 16]])
# Square root of the tensor
tensor_sqrt = torch.sqrt(tensor_a)
tensor_sqrt

tensor([[1.0000, 1.4142],
        [1.7321, 2.0000]])

Both operations are applied element-wise to every entry of the tensor.
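Squaring is just a special case of torch.pow, which raises every element to an arbitrary power; a quick sketch with the same tensor:

# Element-wise powers: square, cube, and square root via exponent 0.5
print(torch.pow(tensor_a, 2))   # same result as torch.square(tensor_a)
print(torch.pow(tensor_a, 3))   # tensor([[ 1,  8], [27, 64]])
print(tensor_a ** 0.5)          # same result as torch.sqrt(tensor_a)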

4. Exponential operation of tensor

# Exponential of the tensor (e raised to each element)
tensor_exp = torch.exp(tensor_a)
tensor_exp

tensor([[ 2.7183,  7.3891],
        [20.0855, 54.5981]])

5. Logarithmic operations on tensors

# Natural logarithm of the tensor
tensor_log = torch.log(tensor_a)
tensor_log

tensor([[0.0000, 0.6931],
        [1.0986, 1.3863]])
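torch.log is the natural logarithm (base e), so it undoes torch.exp; torch.log2 and torch.log10 are the base-2 and base-10 variants. A quick consistency check:

# log(exp(x)) should recover x, up to floating-point error
x = tensor_a.float()
print(torch.allclose(torch.log(torch.exp(x)), x))   # True

# Logarithms with other bases
print(torch.log2(x))
print(torch.log10(x))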

6. Element-wise multiplication (torch.mul)

# Element-wise multiplication of the tensors
tensor_mul = torch.mul(tensor_a, tensor_b)
tensor_mul

tensor([[ 5, 12],
        [21, 32]])

The result is identical to the * operator above: torch.mul is simply the functional form of element-wise multiplication.
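A quick check confirming that the two forms really produce the same tensor:

# torch.mul and the * operator give identical results
print(torch.equal(tensor_a * tensor_b, torch.mul(tensor_a, tensor_b)))   # True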

7. Tensor matrix multiplication

# Matrix multiplication
tensor_mm = torch.mm(tensor_a, tensor_b)
tensor_mm

tensor([[19, 22],
        [43, 50]])
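For 2-D tensors, torch.mm, torch.matmul and the @ operator all compute the same matrix product; torch.matmul additionally supports batched and higher-dimensional inputs. A small sketch:

# Equivalent ways to write the same matrix product
print(tensor_a @ tensor_b)                # tensor([[19, 22], [43, 50]])
print(torch.matmul(tensor_a, tensor_b))   # same result as torch.mm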

6. Broadcasting Principles

When performing tensor operations in PyTorch, if the shapes of two tensors do not match, PyTorch will try to make their shapes compatible through broadcasting.

import torch

# Create a 3x2 tensor
tensor_a = torch.tensor([[1, 2], [3, 4], [5, 6]])

# Create a 1x2 tensor
tensor_b = torch.tensor([[10, 20]])

# Add the two tensors
result = tensor_a + tensor_b

# Print the result
print(result)

tensor_b has shape (1, 2) while tensor_a has shape (3, 2). PyTorch automatically replicates tensor_b along the first dimension so that its shape matches tensor_a, and then performs the addition.

tensor([[11, 22],
        [13, 24],
        [15, 26]])
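To see what broadcasting is doing, you can materialize the expanded view yourself with expand (no data is copied; the tensor is simply viewed as if it were repeated along the broadcast dimension); a small sketch with the same tensors:

# Manually expand tensor_b from shape (1, 2) to shape (3, 2)
expanded_b = tensor_b.expand(3, 2)
print(expanded_b)
# tensor([[10, 20],
#         [10, 20],
#         [10, 20]])

# Adding the expanded tensor gives the same result as broadcasting
print(torch.equal(tensor_a + expanded_b, tensor_a + tensor_b))   # True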

7. Tensor aggregation operations

1. Sum

import torch

# Create a tensor
tensor = torch.tensor([[1, 2], [3, 4], [5, 6]])

# Sum all elements
result_sum = torch.sum(tensor)
print(result_sum)

tensor(21)

2. Mean

import torch

# Create a tensor with a floating-point dtype
tensor = torch.tensor([[1, 2], [3, 4], [5, 6]], dtype=torch.float32)

# Compute the mean of all elements
result_mean = torch.mean(tensor)
print(result_mean)

tensor(3.5000)
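The dtype matters here: in current PyTorch, torch.mean is only defined for floating-point (and complex) tensors, so an integer tensor has to be converted first; a small sketch:

# torch.mean raises an error on integer tensors, so convert first
int_tensor = torch.tensor([[1, 2], [3, 4], [5, 6]])
print(torch.mean(int_tensor.float()))   # tensor(3.5000)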

3. Maximum value (Max) and minimum value (Min)

Maximum value:

import torch

# Create a tensor
tensor = torch.tensor([[1, 2], [3, 4], [5, 6]])

# Find the maximum of all elements
result_max = torch.max(tensor)
print(result_max)

tensor(6)

Minimum value:

import torch

# Create a tensor
tensor = torch.tensor([[1, 2], [3, 4], [5, 6]])

# Find the minimum of all elements
result_min = torch.min(tensor)
print(result_min)

tensor(1)
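When a dim argument is given, torch.max returns both the maximum values and their indices along that dimension, and torch.argmax returns only the index; a short sketch on the same tensor:

# Maximum of each row (dim=1) returns values and indices
values, indices = torch.max(tensor, dim=1)
print(values)    # tensor([2, 4, 6])
print(indices)   # tensor([1, 1, 1])

# Index of the overall maximum in the flattened tensor
print(torch.argmax(tensor))   # tensor(5)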

4. Aggregation operations along a specified dimension

import torch

# Create a tensor
tensor = torch.tensor([[1, 2], [3, 4], [5, 6]])

# Sum over dim 0 (collapse the rows), giving the sum of each column
result_row_sum = torch.sum(tensor, dim=0)
print(result_row_sum)

tensor([ 9, 12])
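The same pattern works for any aggregation and any dimension: dim=1 collapses the columns instead, and keepdim=True keeps the reduced dimension so the result can still broadcast against the original tensor. A small sketch:

# Sum over dim 1 (one value per row)
print(torch.sum(tensor, dim=1))                 # tensor([ 3,  7, 11])

# keepdim=True keeps the reduced dimension: shape (3, 1) instead of (3,)
print(torch.sum(tensor, dim=1, keepdim=True))   # tensor([[ 3], [ 7], [11]])

# Mean of each column (requires a floating-point tensor)
print(torch.mean(tensor.float(), dim=0))        # tensor([3., 4.])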

8. Matrix operations

Matrix multiplication was already shown above. Here is a more complete overview of common matrix operations:

1. Matrix transpose

import torch

# Create a matrix
A = torch.tensor([[1, 2], [3, 4], [5, 6]])

# Transpose the matrix
result = torch.transpose(A, 0, 1)
print(result)
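The transpose of the 3x2 matrix above is the 2x3 matrix [[1, 3, 5], [2, 4, 6]]. For 2-D tensors, A.t() and the A.T attribute are convenient shorthands for torch.transpose(A, 0, 1); a small sketch:

# Shorthand transposes for a 2-D tensor
print(A.t())   # same as torch.transpose(A, 0, 1)
print(A.T)     # tensor([[1, 3, 5],
               #         [2, 4, 6]])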

2. Matrix inverse

import torch

# Create an invertible matrix
A = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float32)

# Compute the inverse
result = torch.inverse(A)
print(result)

 

tensor([[-2.0000,  1.0000],
        [ 1.5000, -0.5000]])
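A useful sanity check is that a matrix multiplied by its inverse should be (approximately) the identity matrix; in recent PyTorch versions torch.linalg.inv is the newer equivalent of torch.inverse. A small sketch:

# A @ A^(-1) should be close to the 2x2 identity matrix
print(torch.allclose(A @ result, torch.eye(2), atol=1e-6))   # True

# Newer API (recent PyTorch versions)
print(torch.linalg.inv(A))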

3. Matrix trace

import torch

# Create a matrix
A = torch.tensor([[1, 2], [3, 4]])

# Compute the trace (the sum of the diagonal elements)
result = torch.trace(A)
print(result)

 

tensor(5)
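The trace is simply the sum of the diagonal entries (1 + 4 = 5), which can also be computed directly; a quick check:

# The trace equals the sum of the diagonal elements
print(torch.diagonal(A).sum())   # tensor(5)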

The basic Tensor operations we have covered so far are enough to support our use of PyTorch. Next, we need to learn about Variable before we can start on practical projects.

Please follow so you don't lose track of the series. If there are any mistakes, please leave a comment to point them out. Thank you very much.

That’s all for this issue. My name is fanstuck. If you have any questions, feel free to leave a message for discussion. See you in the next issue.



Origin blog.csdn.net/master_hunter/article/details/132858669