Notes on common PyTorch functions

torch.ones(*sizes, out=None):

Returns a tensor filled with the scalar value 1, whose shape is defined by the variadic parameter sizes

# example: set the true weights to 0.01 and the true bias to 0.05
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05

torch.zeros(*sizes, out=None):

Returns a tensor filled with the scalar value 0, whose shape is defined by the variadic parameter sizes
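A minimal sketch of both constructors (the shapes here are arbitrary, chosen only for illustration):

```python
import torch

# a 3x2 tensor of ones and a length-3 tensor of zeros
w = torch.ones(3, 2)
b = torch.zeros(3)
print(w.sum().item())  # 6.0
print(b.sum().item())  # 0.0
print(w.shape)         # torch.Size([3, 2])
```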

torch.normal(mean, std, out=None):

Returns a tensor of random numbers drawn from separate normal distributions whose per-element mean and standard deviation are given. The shapes of mean and std do not have to match, but the two tensors must contain the same number of elements

  1. mean (Tensor) - the tensor of per-element means
  2. std (Tensor) - the tensor of per-element standard deviations
  3. out (Tensor, optional) - optional output tensor
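A short sketch of the per-element behavior: each output element is one draw from its own N(mean[i], std[i]). The particular values below are arbitrary; both tensors just need the same element count (5 here):

```python
import torch

# one draw per (mean, std) pair
samples = torch.normal(mean=torch.arange(1., 6.),
                       std=torch.arange(1., 0., -0.2))
print(samples.shape)  # torch.Size([5])
```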

PyTorch's sequential container torch.nn.Sequential()

How to use:

# Method 1
net = nn.Sequential(
    nn.Linear(num_inputs, 1)
    # other layers can be passed in here
    )

# Method 2
net = nn.Sequential()
net.add_module('linear', nn.Linear(num_inputs, 1))
# net.add_module ......

# Method 3
from collections import OrderedDict
net = nn.Sequential(OrderedDict([
          ('linear', nn.Linear(num_inputs, 1))
          # ......
        ]))

  1. Method 1:
    nn.Sequential is an ordered container: the neural network modules passed to its constructor are added to the computation graph and executed in the order they were given.
  2. Method 2:
    You can also use the add_module function to append a named submodule to the computation graph. add_module is a method of the neural network module base class (torch.nn.Module) and adds a child module to an existing module.
  3. Method 3:
    You can also pass an ordered dictionary (OrderedDict) whose entries map a name to a specific neural network module.

torch.nn.MSELoss()

Computes the mean squared error loss between a prediction and a target

MSE: Mean Squared Error, the average of the squared differences between the predicted values and the true values:

    MSE = (1/n) * Σᵢ (ŷᵢ − yᵢ)²

The current version of nn.MSELoss() has only one parameter, reduction, which controls whether and how the output is reduced. There are three options:

  1. ‘none’: no reduction will be applied.
  2. ‘mean’: the sum of the output will be divided by the number of elements in the output.
  3. ‘sum’: the output will be summed.

If the reduction parameter is not set, the default is 'mean'.
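A small sketch of the three reduction modes on hand-picked values, so each result can be checked by hand:

```python
import torch
from torch import nn

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 1.0, 1.0])

loss_mean = nn.MSELoss()(pred, target)                # (0 + 1 + 4) / 3
loss_sum = nn.MSELoss(reduction='sum')(pred, target)  # 0 + 1 + 4
loss_none = nn.MSELoss(reduction='none')(pred, target)
print(loss_sum.item())   # 5.0
print(loss_none)         # tensor([0., 1., 4.])
```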

Reference: Program example blog


Origin blog.csdn.net/weixin_43629813/article/details/119298139