[Deep Learning] PyTorch Series Tutorials (10): PyTorch Data Structure: 2. Tensor Operations: (4) Indexing and Slicing in Detail

Table of contents

1. Introduction

2. Experimental environment

3. PyTorch data structure

0. Classification

1. Tensor

2. Tensor Operations

1. Mathematical operations

2. Statistical calculations

3. Tensor reshaping

4. Indexing and slicing

Access a single element using an index

Use slices to access subsets

Modify using indexing and slicing

Boolean indexing

Advanced slicing


1. Introduction

ChatGPT:

        PyTorch is an open source machine learning framework that is widely used in the field of deep learning. It provides a rich set of tools and libraries for building and training various types of neural network models. Here is a detailed introduction to PyTorch:

  1. Dynamic computation graph: PyTorch uses a dynamic computation graph, which means the graph can be defined, modified, and adjusted at runtime. This makes model construction and debugging more flexible and intuitive.

  2. Powerful GPU acceleration: PyTorch makes full use of the GPU for computation, which can greatly speed up training and inference. It provides GPU-optimized operations and memory management, making it more efficient to run models on the GPU.

  3. Automatic differentiation: PyTorch has built-in automatic differentiation (autograd), which computes gradients automatically from the recorded computation graph. This simplifies the implementation of backpropagation and makes training neural network models more convenient (a minimal sketch follows this list).

  4. A large number of pre-trained models and model libraries: The PyTorch ecosystem offers many pre-trained models and model libraries, such as TorchVision, TorchText, and TorchAudio. These can be easily loaded and used to speed up model development.

  5. High-level abstraction interfaces: PyTorch provides high-level interfaces, such as nn.Module and nn.functional, for quickly building neural network models. These interfaces encapsulate commonly used neural network layers and functions, simplifying model definition and training.

  6. Distributed training support: PyTorch supports distributed training across multiple GPUs and multiple machines, which accelerates training and makes it possible to handle large-scale data and models.

        Overall, PyTorch provides a flexible and powerful platform that makes deep learning research and development more convenient and efficient. Its simple API and rich functionality allow users to quickly implement complex neural network models and achieve excellent performance on a variety of tasks.
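
        As a small illustration of the autograd feature described in point 3, here is a minimal sketch (the variable names are my own, for illustration only) showing how PyTorch records operations and computes gradients:

import torch

# A tensor with requires_grad=True is tracked by autograd
x = torch.tensor(2.0, requires_grad=True)

# Build a tiny computation graph: y = x^2 + 3x
y = x ** 2 + 3 * x

# Backpropagate to compute dy/dx
y.backward()

print(x.grad)  # dy/dx = 2x + 3 = 7 at x = 2

Output:

tensor(7.)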

2. Experimental environment

        This series of experiments uses the following environment:

conda create -n DL python=3.7 
conda activate DL
pip install torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
conda install matplotlib
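
After installation, a quick sanity check (a minimal sketch; the output depends on your machine) confirms the installed version and whether the GPU is visible:

import torch

print(torch.__version__)          # e.g. 1.8.1+cu102
print(torch.cuda.is_available())  # True if the CUDA build can see a GPU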

For environment configuration issues, you can refer to the painful experience documented in the previous posts:

3. PyTorch data structure

0. Classification

  • Tensor : Tensor is the most basic data structure in PyTorch, similar to a multi-dimensional array. It can represent a scalar, vector, matrix, or an array of arbitrary dimensions.
    • Tensor operations : PyTorch provides a rich set of functions for operating on Tensors, such as mathematical operations, statistical calculations, tensor reshaping, and indexing and slicing. These functions can make efficient use of the GPU for parallel computation, accelerating model training.
  • Variable : Variable is a wrapper around Tensor used for automatic differentiation. In PyTorch, Variables automatically track and record the operations performed on them, building a computation graph to support automatic differentiation. In PyTorch 0.4.0 and later, Variable is deprecated and Tensor can be used directly for automatic differentiation.
  • Dataset : Dataset is an abstract class used to represent datasets. By subclassing Dataset, you can define a custom dataset and implement data loading, preprocessing, and sample retrieval. PyTorch also provides built-in dataset classes, such as MNIST and CIFAR-10, for conveniently loading commonly used datasets.
  • DataLoader : DataLoader loads the data in a Dataset in batches and provides multi-threaded/multi-process data prefetching. It can efficiently load large-scale datasets and supports random shuffling, parallel loading, and data augmentation (a minimal Dataset/DataLoader sketch follows this list).
  • Module : Module is the base class for building models in PyTorch. By subclassing Module, you can define your own model and implement its forward pass (the backward pass is derived automatically by autograd). Module provides parameter management, model saving and loading, and other facilities that ease model training and deployment.
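
As a small illustration of the Dataset and DataLoader classes mentioned above, here is a minimal sketch (the dataset and its contents are hypothetical, for illustration only):

import torch
from torch.utils.data import Dataset, DataLoader

# A minimal custom Dataset: pairs each number with its square
class SquaresDataset(Dataset):
    def __init__(self, n):
        self.x = torch.arange(n, dtype=torch.float32)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        # Return an (input, target) pair
        return self.x[idx], self.x[idx] ** 2

# Load the dataset in shuffled batches of 4
loader = DataLoader(SquaresDataset(8), batch_size=4, shuffle=True)
for inputs, targets in loader:
    print(inputs, targets)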

1. Tensor

        

See the earlier post in this series: [Deep Learning] Pytorch Series Tutorials (1): PyTorch Data Structure: 1. Tensor: Dimensions, Data Types (QomolangmaH's Blog, CSDN): https://blog.csdn.net/m0_63834988/article/details/132909219

2. Tensor Operations

        PyTorch provides a rich set of functions for operating on Tensors, such as mathematical operations, statistical calculations, tensor reshaping, and indexing and slicing. These functions can make efficient use of the GPU for parallel computation, accelerating model training.

1. Mathematical operations

2. Statistical calculations

3. Tensor reshaping

4. Indexing and slicing

        In PyTorch, you can use indexing and slicing operations to access and modify specific elements or subsets of tensors.

  • Access a single element using an index

Use square brackets and index values to access individual elements of a tensor. Indices start at 0 and are specified along each dimension.

import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x[0, 1])  # access the element at row 0, column 1

Output:

tensor(2)
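
Negative indices count from the end of a dimension, as in Python lists; a minimal sketch:

import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x[-1, -1])  # last row, last column

Output:

tensor(6)
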
  • Use slices to access subsets

        You can use the colon (:) to slice a tensor and access a subset of its elements. A slice can specify the start index, end index, and step size.

import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x[:, 1:])  # access columns 1 onward for all rows

Output:

tensor([[2, 3],
        [5, 6]])
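
The example above omits the step; here is a minimal sketch using all three parts of start:stop:step (note that PyTorch, unlike NumPy, does not support negative steps in slices):

import torch

x = torch.tensor([1, 2, 3, 4, 5, 6])
print(x[0:5:2])  # elements at indices 0, 2, 4

Output:

tensor([1, 3, 5])
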
  • Modify using indexing and slicing

        Indexing and slicing operations can be used to modify specific elements or subsets of a tensor.

import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
x[0, 1] = 9  # set the element at row 0, column 1 to 9
print(x)

Output:

tensor([[1, 9, 3],
        [4, 5, 6]])
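
Assignment also works on whole slices; here is a minimal sketch that overwrites an entire column at once:

import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
x[:, 0] = 0  # zero out the first column of every row
print(x)

Output:

tensor([[0, 2, 3],
        [0, 5, 6]])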

  • Boolean indexing

        Using a Boolean tensor as an index selects the elements whose corresponding positions in the Boolean tensor are True. The shape of the Boolean tensor must match the shape of the tensor being indexed.

import torch

tensor = torch.tensor([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])

# select elements using a boolean index
bool_index = tensor[tensor > 5]
print("Elements selected by boolean index:", bool_index)

Output:

Elements selected by boolean index: tensor([6, 7, 8, 9])
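
Boolean indexing can also appear on the left-hand side of an assignment; here is a minimal sketch that zeroes the selected elements:

import torch

tensor = torch.tensor([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])

tensor[tensor > 5] = 0  # overwrite every element greater than 5
print(tensor)

Output:

tensor([[1, 2, 3],
        [4, 5, 0],
        [0, 0, 0]])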

  • Advanced slicing

    In addition to basic slicing, you can combine multiple slices with commas to slice along different dimensions at the same time.

import torch

tensor = torch.tensor([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])

# select a subset using advanced slicing
advanced_slice = tensor[1:, ::2]
print("Subset selected by advanced slicing:\n", advanced_slice)

Output:

Subset selected by advanced slicing:
 tensor([[4, 6],
        [7, 9]])

This advanced slice selects the subset of the tensor from the second row to the last row, taking every other column (columns 0 and 2).
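
Steps can likewise be combined across more than one dimension at once; a minimal sketch:

import torch

tensor = torch.tensor([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])

print(tensor[::2, 1:])  # every other row, columns 1 onward

Output:

tensor([[2, 3],
        [8, 9]])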
