PyTorch study notes:

PyTorch 0.4 changed substantially compared to 0.3; for details, see the following link:

https://blog.csdn.net/jacke121/article/details/80597759
-----------------------------------------------------------------------
w1.grad.zero_(): the purpose of .zero_() is that after w1's gradient has been computed and w1 has been updated, w1's gradient must be reset to zero.

Note: all gradients must be zeroed before the next backward pass, because .backward() accumulates into .grad instead of overwriting it.
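
A minimal sketch of this accumulation behavior (the tensor name w1 and the toy loss are illustrative):

import torch

w1 = torch.ones(2, requires_grad=True)

loss = (w1 * 3).sum()
loss.backward()
print(w1.grad)     # tensor([3., 3.])

loss = (w1 * 3).sum()
loss.backward()    # without zeroing, the new gradient is added on top
print(w1.grad)     # tensor([6., 6.])

w1.grad.zero_()    # reset the accumulated gradient in place
print(w1.grad)     # tensor([0., 0.])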

--------------------------------------------------------------------------------------------------------------

loss.item(): .item() returns the value of a single-element tensor as a standard Python number, so a scalar loss becomes a plain float.
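
For example, assuming loss is a scalar (0-dim) tensor:

import torch

loss = torch.tensor(0.25)   # a 0-dim (scalar) tensor
print(loss)                 # tensor(0.2500)
print(loss.item())          # 0.25, a plain Python float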

--------------------------------------------------------------------------------------------------------------

@staticmethod: for its meaning, see: https://www.cnblogs.com/elie/p/5876210.html
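
One common place @staticmethod appears in PyTorch is a custom torch.autograd.Function, whose forward and backward are declared static and receive a context object ctx instead of self. A minimal sketch (the class name MyReLU is illustrative):

import torch

class MyReLU(torch.autograd.Function):
    # forward/backward take a context object `ctx`, not `self`,
    # which is why they are defined with @staticmethod.
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0   # zero gradient where input was negative
        return grad_input

x = torch.randn(3, requires_grad=True)
y = MyReLU.apply(x)
y.sum().backward()
print(x.grad)   # 1.0 where x > 0, else 0.0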

-----------------------------------------------------------------------------------------------------------------------------------

The effect of with torch.no_grad(): tensors computed inside the with block are marked as not requiring gradients, i.e. y below is excluded from the computation graph.

"with torch.no_grad() or .data to avoid tracking history in autograd" -- official documentation

import torch
x = torch.zeros(1, requires_grad=True)
with torch.no_grad():      # exclude y from the computation graph
    y = x * 2
print(y.requires_grad)

--------------------
Output: False

---------------------------------------------------------------------------------------------------------------------------------------

loss.backward() computes gradients for every tensor that requires them, but applying those gradients to update the corresponding values is a manual step: you must choose which weights to update with their gradients, as sketched below.
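
A minimal sketch of such a manual update (the names w1, lr and the toy loss are illustrative; the update runs under torch.no_grad() so autograd does not track it):

import torch

w1 = torch.randn(3, 1, requires_grad=True)
x = torch.randn(5, 3)
target = torch.randn(5, 1)
lr = 0.1

loss = ((x @ w1 - target) ** 2).mean()
loss.backward()            # fills w1.grad

with torch.no_grad():      # the update step itself must not be tracked
    w1 -= lr * w1.grad     # we choose which weights to update, and how
w1.grad.zero_()            # reset for the next iteration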

------------------------------------------------------------------------------------------------------------------------

torchvision.transforms.Normalize(mean, std)

Transforms a tensor: given a per-channel mean (R, G, B) and standard deviation (R, G, B), it normalizes the tensor channel by channel, i.e. Normalized_image = (image - mean) / std.
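
A quick sketch; the mean/std values below are the widely used ImageNet statistics, taken here purely as an example:

import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

img = torch.rand(3, 32, 32)     # a fake C x H x W image in [0, 1]
out = normalize(img)            # (image - mean) / std, per channel
print(out.shape)                # torch.Size([3, 32, 32])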

-----------------------------------------------------------------------------------------------------------------

torch.nn.Linear(in_features, out_features, bias=True)

Applies a linear transformation to the incoming data, y = x A^T + b, where the weight A has shape (out_features, in_features).
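
For example:

import torch
import torch.nn as nn

layer = nn.Linear(20, 30)   # in_features=20, out_features=30
x = torch.randn(128, 20)
y = layer(x)                # y = x @ layer.weight.t() + layer.bias
print(y.shape)              # torch.Size([128, 30])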

 

Reposted from blog.csdn.net/Strive_For_Future/article/details/83183789