PyTorch study notes 17: a small summary of the official API documentation

1. torch: the commonly used APIs are basically functions for data processing

① Operating on data:

torch.is_tensor, torch.set_default_dtype, torch.get_default_dtype, torch.cat, torch.index_select, torch.reshape, torch.squeeze, torch.t, torch.unsqueeze, torch.transpose, torch.take, torch.where
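
A small sketch of a few of these manipulation functions (the values and shapes are arbitrary examples):

import torch

a = torch.tensor([[1, 2], [3, 4]])
b = torch.cat([a, a], dim=0)                               # stack along rows -> shape (4, 2)
c = a.unsqueeze(0)                                         # add a leading dimension -> (1, 2, 2)
d = torch.squeeze(c)                                       # drop size-1 dimensions -> (2, 2)
e = torch.where(a > 2, a, torch.zeros_like(a))             # keep values > 2, otherwise 0
f = torch.index_select(a, dim=1, index=torch.tensor([1]))  # pick column 1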

② Defining data (tensors):

torch.tensor, torch.empty, torch.empty_like, torch.full, torch.full_like, torch.ones,
torch.ones_like, torch.zeros, torch.zeros_like, torch.range, torch.arange, torch.linspace, torch.logspace
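
For example, a minimal sketch of some of these creation functions (shapes and fill values are just illustrative):

import torch

a = torch.zeros(2, 3)                # 2x3 tensor of zeros
b = torch.full((2, 3), 7.0)          # 2x3 tensor filled with 7.0
c = torch.arange(0, 10, 2)           # tensor([0, 2, 4, 6, 8])
d = torch.linspace(0, 1, steps=5)    # 5 evenly spaced points in [0, 1]
e = torch.ones_like(a)               # same shape and dtype as a, filled with ones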

③ Data conversion (mainly from NumPy ndarray):

torch.from_numpy, torch.as_tensor
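
A small sketch of the conversion paths (note that torch.from_numpy shares memory with the ndarray):

import numpy as np
import torch

arr = np.array([[1.0, 2.0], [3.0, 4.0]])
t1 = torch.from_numpy(arr)   # shares memory with arr
t2 = torch.as_tensor(arr)    # avoids a copy when possible
t3 = torch.tensor(arr)       # always copies the data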

④ Random numbers:

torch.normal, torch.rand, torch.rand_like, torch.randint, torch.randint_like, torch.randn, torch.randn_like
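
A quick sketch of the random constructors (the shapes are arbitrary):

import torch

x = torch.rand(2, 3)                               # uniform on [0, 1)
y = torch.randn(2, 3)                              # standard normal
z = torch.randint(0, 10, (2, 3))                   # integers in [0, 10)
w = torch.normal(mean=0.0, std=1.0, size=(2, 3))   # normal with the given mean/std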

⑤ Mathematical calculations:

General math operations: torch.abs, torch.add, torch.clamp, torch.exp, torch.log, torch.log10, torch.mm, torch.mul, torch.pow, torch.round, torch.sigmoid, torch.sin, torch.sqrt

Statistics of the data (mean, variance, etc.): torch.argmax, torch.argmin, torch.median, torch.norm, torch.std, torch.sum, torch.unique, torch.var

Comparison and inspection of data: torch.eq, torch.equal, torch.isfinite, torch.isinf, torch.isnan, torch.sort

Spectral computation (e.g. FFT, IFFT): torch.fft, torch.ifft, torch.rfft, torch.hamming_window (Hamming window)
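
A small sketch combining a few of the math and statistics functions above:

import torch

x = torch.tensor([[1.0, -2.0], [3.0, 4.0]])
y = torch.clamp(x, min=0.0, max=3.0)    # clip values into [0, 3]
s = torch.sigmoid(x)                    # element-wise sigmoid
m = torch.mm(x, x.t())                  # 2x2 matrix product
print(torch.argmax(x), torch.mean(x), torch.std(x))
print(torch.isnan(x).any(), torch.sort(x, dim=1).values)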

⑥ Setting the compute device:

device = torch.device("cpu:0") or torch.device("cuda:0"); you can also write "cpu" or "cuda" directly
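
For example, a sketch that falls back to the CPU when no GPU is available:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)   # move a tensor to the chosen device
print(x.device)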

2. torch.nn: the commonly used APIs are mainly used to build networks and configure their behaviour

① torch.nn.Module: almost all model layers inherit from the torch.nn.Module class, so such models have the following methods, called in the form model.xxx:

add_module(name, module), zero_grad(), apply(fn), cpu(), cuda(device=None), eval(), train(mode=True), float(), to(device=None, dtype=None), requires_grad_(requires_grad=True)

② Viewing the layers or parameters of a model:

modules(), named_modules(), ._modules, parameters(), named_parameters(), children(), named_children() (the named_ variants return the layers together with their names); state_dict() (a dictionary), load_state_dict(state_dict)
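
A minimal sketch of inspecting a model (the two-layer model here is only an assumption for illustration):

import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for name, p in model.named_parameters():   # names plus parameter tensors
    print(name, p.shape)
for name, m in model.named_children():     # direct sub-modules with their names
    print(name, m)
sd = model.state_dict()                    # an ordered dict of all parameters/buffers
model.load_state_dict(sd)                  # load them back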

③ Building the model:

torch.nn.Sequential, torch.nn.ModuleList, torch.nn.ModuleDict
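
For example, a sketch of the three container types (layer sizes are arbitrary):

import torch
import torch.nn as nn

seq = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(10, 10) for _ in range(3)])
        self.heads = nn.ModuleDict({"reg": nn.Linear(10, 1), "cls": nn.Linear(10, 5)})

    def forward(self, x):
        for block in self.blocks:
            x = torch.relu(block(x))
        return self.heads["cls"](x)

out = MyNet()(torch.randn(4, 10))   # shape (4, 5)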

④ Adding model parameters:

torch.nn.ParameterList, torch.nn.ParameterDict
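
A small sketch of registering extra learnable parameters with ParameterList (the weighted-sum module is a made-up example):

import torch
import torch.nn as nn

class WeightedSum(nn.Module):
    def __init__(self, n=3):
        super().__init__()
        # each entry is registered as a learnable parameter of the module
        self.weights = nn.ParameterList([nn.Parameter(torch.randn(1)) for _ in range(n)])

    def forward(self, xs):
        return sum(w * x for w, x in zip(self.weights, xs))

out = WeightedSum()([torch.randn(2), torch.randn(2), torch.randn(2)])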

⑤ Layers (a small example combining several of these follows the list):

Convolutional layers: torch.nn.Conv2d, torch.nn.ConvTranspose2d
Linear layers: torch.nn.Linear, torch.nn.Bilinear (bilinear)
Pooling layers: torch.nn.MaxPool2d, torch.nn.AvgPool2d
Flattening layer: torch.nn.Flatten
Normalization layer: torch.nn.BatchNorm2d
Dropout layers: torch.nn.Dropout, torch.nn.AlphaDropout (after dropping part of the data, the original mean and variance are preserved)
Activation functions: torch.nn.LeakyReLU, torch.nn.LogSigmoid, torch.nn.ReLU, torch.nn.SELU, torch.nn.Sigmoid, torch.nn.Tanh, torch.nn.Softmax
Loss functions: torch.nn.MSELoss, torch.nn.CrossEntropyLoss, torch.nn.L1Loss, torch.nn.BCELoss
Upsampling: torch.nn.Upsample, torch.nn.UpsamplingNearest2d, torch.nn.UpsamplingBilinear2d
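
A sketch that strings several of these layers together (the 1x28x28 input size is just an assumption, e.g. MNIST-like images):

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 16x14x14
    nn.Flatten(),                                 # -> 16*14*14
    nn.Dropout(0.5),
    nn.Linear(16 * 14 * 14, 10),
)
loss_fn = nn.CrossEntropyLoss()
logits = net(torch.randn(8, 1, 28, 28))
loss = loss_fn(logits, torch.randint(0, 10, (8,)))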

3. torch.nn.functional: the commonly used APIs are mainly loss functions and layers, and are very similar to torch.nn

1. Almost all of its functions can also be found under torch.nn, especially the layers, but these same-named functions in the two different namespaces differ in several ways:
(The differences were shown as screenshots in the original post.)

2. Of course, some have no torch.nn counterpart, or exist in both but are more commonly used from torch.nn.functional:

① One-hot encoding: torch.nn.functional.one_hot(tensor, num_classes=-1)
② Loss functions (these are generally used rather than the torch.nn versions):
torch.nn.functional.binary_cross_entropy_with_logits, torch.nn.functional.binary_cross_entropy, torch.nn.functional.cross_entropy, torch.nn.functional.l1_loss, torch.nn.functional.mse_loss, torch.nn.functional.nll_loss
③ Upsampling: torch.nn.functional.upsample, torch.nn.functional.upsample_bilinear, torch.nn.functional.upsample_nearest
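
A minimal sketch of these functional APIs (for upsampling, torch.nn.functional.interpolate is the newer name; the upsample variants are deprecated aliases):

import torch
import torch.nn.functional as F

labels = torch.tensor([0, 2, 1])
onehot = F.one_hot(labels, num_classes=3)   # shape (3, 3)

logits = torch.randn(3, 3)
loss = F.cross_entropy(logits, labels)      # softmax + NLL in one call

img = torch.randn(1, 1, 8, 8)
up = F.interpolate(img, scale_factor=2, mode="bilinear", align_corners=False)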

4. torch.Tensor: first define tensor = torch.tensor(...), then the following operations can be performed on the tensor

1. The data types in torch are listed in three columns: dtype, CPU tensor, and GPU tensor.
(The dtype table from the official documentation appeared here as screenshots.)
2. Attributes and methods of the tensor itself. Many of these mirror the functions under torch, but they belong to the tensor and are called as tensor.xxx, whereas the former are called as torch.xxx.

① Defining data: tensor.new_tensor, tensor.new_full, tensor.new_ones, tensor.new_empty, tensor.new_zeros

② Viewing a tensor's attributes: tensor.is_cuda, tensor.device, tensor.grad (commonly used), tensor.ndim, tensor.requires_grad

③ Calculation operations on a tensor: tensor.T, tensor.abs(), tensor.abs_(), tensor.add(value), tensor.add_(value), tensor.argmax(), tensor.argmin(), tensor.backward() (commonly used), clamp(min, max), clamp_(min, max), cos(), cos_(), div(), div_(), double(), dot(), eq(), eq_(), equal(), exp(), exp_(), min(), max(), mean(), median(), pow(), pow_(), repeat() (commonly used), sort(), sqrt(), sqrt_()

④ Index operations: index_add, index_add_, index_fill_, index_fill, index_select

⑤ Converting a tensor: bool(), byte(), char(), clone() (commonly used), cuda() (commonly used), cpu() (commonly used), detach() (commonly used), detach_() (commonly used), item() (commonly used), numpy() (commonly used), permute() (commonly used, swaps dimensions), requires_grad_(requires_grad=True) (commonly used), reshape(*shape),
reshape_as(other), resize_(*sizes), resize_as_(other), to(device=None, dtype=None), view(*shape), view_as(other), where(condition, y)

⑥ Operations on bool-type tensors: all(), any()
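
A quick sketch of some of these tensor methods (values are arbitrary):

import torch

t = torch.randn(2, 3, requires_grad=True)
s = t.sum()
s.backward()                   # populates t.grad
print(t.grad.shape)

u = t.detach().cpu().numpy()   # break the graph, move to CPU, convert to ndarray
v = t.permute(1, 0)            # swap dimensions -> shape (3, 2)
w = t.reshape(3, 2)
print(t.abs().max().item())    # .item() extracts a Python number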

5. torch.cuda: commonly used APIs

torch.cuda.current_device(), torch.cuda.device_count(), torch.cuda.get_device_name, torch.cuda.init(), torch.cuda.is_available(), torch.cuda.is_initialized(), torch.cuda.set_device
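
For example, a sketch that only queries the GPU when one is present:

import torch

if torch.cuda.is_available():
    print(torch.cuda.device_count())       # number of visible GPUs
    print(torch.cuda.current_device())     # index of the current GPU
    print(torch.cuda.get_device_name(0))   # e.g. the GPU model name
    torch.cuda.set_device(0)
else:
    print("no CUDA device available")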

6. torch.nn.init: commonly used APIs, used to initialize tensors

torch.nn.init.uniform_(tensor, a=0.0, b=1.0), torch.nn.init.normal_(tensor, mean=0.0, std=1.0), torch.nn.init.constant_(tensor, val), torch.nn.init.ones_(tensor), torch.nn.init.zeros_(tensor)
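
A small sketch of initializing a layer's parameters in place (the Linear layer is just an example):

import torch.nn as nn

layer = nn.Linear(4, 3)
nn.init.normal_(layer.weight, mean=0.0, std=0.01)   # in-place, hence the trailing underscore
nn.init.zeros_(layer.bias)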

7. torch.optim: commonly used APIs, used to select an optimization algorithm and define the optimizer

① Common optimization algorithms: torch.optim.Adam, torch.optim.SGD; they all have a .step() method
② Examples: three ways of passing the parameters; the last one is suitable for transfer learning, because different layers need different optimization strengths

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)


③ Methods for adjusting the learning rate lr: torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1),
torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1)
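
A sketch of a typical loop combining the optimizer and a scheduler (the tiny model and the decay rule are assumptions):

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)

for epoch in range(5):
    optimizer.zero_grad()
    loss = model(torch.randn(16, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()    # update the parameters
    scheduler.step()    # then update the learning rate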

8. torch.utils.data: commonly used APIs for building datasets

Building datasets: torch.utils.data.Dataset, torch.utils.data.DataLoader (parameters such as batch_size and shuffle can be set)
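
A minimal sketch of a custom Dataset wrapped in a DataLoader (the random data is just a placeholder):

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    def __init__(self, n=100):
        self.x = torch.randn(n, 4)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)   # torch.Size([16, 4]) torch.Size([16])
    break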

9. torch.hub: commonly used APIs, mainly used to download some models

torch.hub.list(github, force_reload=False)
torch.hub.help(github, model, force_reload=False)
torch.hub.load(github, model, *args, **kwargs)
torch.hub.download_url_to_file(url, dst, hash_prefix=None, progress=True)
torch.hub.load_state_dict_from_url(url, model_dir=None, map_location=None, progress=True, check_hash=False)
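
For example (a sketch; the 'pytorch/vision' repo and 'resnet18' entry point are assumptions, and the calls download files from the internet):

import torch

# list the entry points published by a hub repo, then load one of them
entrypoints = torch.hub.list('pytorch/vision', force_reload=False)
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()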

10. torchvision: commonly used APIs, mainly used to download datasets and models, and also to transform data

① torchvision.datasets:

torchvision.datasets.MNIST(root, train=True, transform=None, target_transform=None, download=False)
torchvision.datasets.FashionMNIST(root, train=True, transform=None, target_transform=None, download=False)
torchvision.datasets.CocoCaptions(root, annFile, transform=None, target_transform=None, transforms=None)
torchvision.datasets.ImageFolder(root, transform=None, target_transform=None, loader=<function default_loader>, is_valid_file=None)

② Image data transforms:

torchvision.transforms.Compose(transforms), where transforms is a list of processing steps, such as:
CenterCrop, ToTensor, RandomCrop, RandomHorizontalFlip, RandomResizedCrop, Resize

③ Transforms on tensors, generally used after the image transforms above, because the image has already been converted to a tensor:

Normalize

④ Converting the tensor back into a PIL-format image:

torchvision.transforms.ToPILImage
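
A small sketch tying ②③④ together (the fake image and the (0.5, 0.5, 0.5) normalization statistics are arbitrary example values):

import torch
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(32),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),                                     # PIL image -> float tensor in [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),    # then normalize the tensor
])

fake = transforms.ToPILImage()(torch.rand(3, 32, 32))   # ④: tensor -> PIL image
out = transform(fake)                                   # apply ② and ③ to the PIL image
print(out.shape)                                        # torch.Size([3, 32, 32])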

Origin blog.csdn.net/qq_39507748/article/details/110756605