PyTorch 3.1 Warm-up: PyTorch Basics

PyTorch GitHub code link: https://github.com/L1aoXingyu/pytorch-beginner

Chapter 3: Multi-layer Fully Connected Neural Networks

3.1 Warm-up: PyTorch Basics

3.1.1 Tensor

  1. Tensors of different data types:
32-bit floating point: torch.FloatTensor   # the default Tensor data type
64-bit floating point: torch.DoubleTensor
16-bit integer: torch.ShortTensor
32-bit integer: torch.IntTensor
64-bit integer: torch.LongTensor
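
Each of these types also has a matching cast method on the tensor itself. A minimal sketch (the cast methods are standard PyTorch; the values are only illustrative):

import torch

t = torch.Tensor([1, 2, 3])   # torch.FloatTensor by default
t_double = t.double()         # -> torch.DoubleTensor
t_short = t.short()           # -> torch.ShortTensor
t_int = t.int()               # -> torch.IntTensor
t_long = t.long()             # -> torch.LongTensor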

A tensor usage example:

import torch

a = torch.Tensor([[1, 2], [2, 3], [3, 4]])
print('a is:{}'.format(a))
print('a size is :{}'.format(a.size()))

# construct a tensor of a specific type
b = torch.LongTensor([[1, 2], [2, 3], [3, 4]])
print('b is:{}'.format(b))

# create an all-zero tensor
c = torch.zeros((3, 2))
print('c is:{}'.format(c))

# draw random initial values from a standard normal distribution
d = torch.randn((3, 2))
print('d is:{}'.format(d))

Output:

a is:
 1  2
 2  3
 3  4
[torch.FloatTensor of size 3x2]

a size is :torch.Size([3, 2])
b is:
 1  2
 2  3
 3  4
[torch.LongTensor of size 3x2]

c is:
 0  0
 0  0
 0  0
[torch.FloatTensor of size 3x2]

d is:
-0.2037 -0.6000
-0.4322  0.2700
 0.6836  0.3145
[torch.FloatTensor of size 3x2]
  2. Like NumPy, elements can be read by index and assigned new values, for example:
a[0, 1] = 100
print('a is:{}'.format(a))

Output:

a is:
   1  100
   2    3
   3    4
[torch.FloatTensor of size 3x2]
  3. In addition, a Tensor can be converted to and from a numpy.ndarray:
    b.numpy() converts the tensor b to a NumPy array;
    torch.from_numpy(e) converts the NumPy array e to a tensor.
numpy_b = b.numpy()
print('b convert to numpy is:{}'.format(numpy_b))

Output:

b convert to numpy is:[[1 2]
 [2 3]
 [3 4]]

import numpy as np

e = np.array([[2, 3], [4, 5]])
print('e is:{}'.format(e))
torch_e = torch.from_numpy(e)
print('torch_e is:{}'.format(torch_e))

Output:

e is:[[2 3]
 [4 5]]
torch_e is:[[2 3]
 [4 5]]
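
One detail worth knowing: torch.from_numpy shares memory with the source array instead of copying it, so in-place changes to the NumPy array show up in the tensor. A small self-contained sketch (g and torch_g are just illustrative names):

import numpy as np
import torch

g = np.array([1, 2, 3])
torch_g = torch.from_numpy(g)
g[0] = 100        # modify the NumPy array in place
print(torch_g)    # the tensor shows 100 as well: the memory is shared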

To change a tensor's data type, simply call the conversion method for the desired type on the tensor.

f_torche = torch_e.float()
print('f_torche is:{}'.format(f_torche))

Output:

f_torche is:
 2  3
 4  5
[torch.FloatTensor of size 2x2]

To move a Tensor onto the GPU, a.cuda() is all it takes to put tensor a on the GPU.

if torch.cuda.is_available():   # check whether a GPU is available
    a_cuda = a.cuda()
    print(a_cuda)
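
Moving back is symmetric: .cpu() returns a CPU copy of the tensor. Note that .numpy() only works on CPU tensors, so a GPU tensor has to be brought back first. A minimal sketch along the lines of the snippet above:

if torch.cuda.is_available():
    a_cuda = a.cuda()        # copy to the GPU
    a_back = a_cuda.cpu()    # copy back to the CPU
    print(a_back.numpy())    # .numpy() requires a CPU tensor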

3.1.2 Variable

A Variable provides automatic differentiation.
To turn a tensor a into a Variable, Variable(a) is all that is needed.
A Variable has three important attributes: data, grad, and grad_fn.
data holds the tensor value inside the Variable; grad_fn records the operation that produced the Variable, e.g. an addition or a multiplication; and grad holds the gradient of this Variable computed during backpropagation.

1) Differentiating a scalar

Notes:
1. When constructing a Variable, mind the argument requires_grad=True; it controls whether a gradient is computed for this variable and defaults to False (no gradient).
2. The line y.backward() performs the automatic differentiation. It is equivalent to y.backward(torch.FloatTensor([1])); for a scalar the argument can simply be omitted.
Automatic differentiation never requires spelling out which function is differentiated with respect to which variable: this one call computes the gradients of every variable that requires one, and x.grad then returns the gradient of x.

import torch
from torch.autograd import Variable

# create the variables
x = Variable(torch.Tensor([1]), requires_grad=True)
w = Variable(torch.Tensor([2]), requires_grad=True)
b = Variable(torch.Tensor([3]), requires_grad=True)

# build the computational graph
y = w * x + b     # y = 2*x + 3

# compute the gradients
y.backward()

# print the gradients
print(x.grad)   # x.grad = 2
print(w.grad)   # w.grad = 1
print(b.grad)   # b.grad = 1

Output:

Variable containing:
 2
[torch.FloatTensor of size 1]

Variable containing:
 1
[torch.FloatTensor of size 1]

Variable containing:
 1
[torch.FloatTensor of size 1]
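
The other two attributes can be inspected on the same graph. One more behavior worth knowing: gradients accumulate across backward() calls, which is exactly why training loops zero them before every step (the optimizer.zero_grad() calls later in this chapter). A self-contained sketch:

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([1]), requires_grad=True)
w = Variable(torch.Tensor([2]), requires_grad=True)
b = Variable(torch.Tensor([3]), requires_grad=True)

y = w * x + b
print(y.data)        # the tensor held by y: 5
print(y.grad_fn)     # the operation that produced y (an add)

y.backward()
y = w * x + b        # rebuild the graph before a second backward pass
y.backward()
print(x.grad)        # 4, not 2: the two calls accumulated
x.grad.data.zero_()  # reset the buffer before the next computation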

2) Differentiating a vector

x = torch.randn(3)
print('x is :{}'.format(x))
x = Variable(x, requires_grad=True)

y = x * 2
print(y)

y.backward(torch.FloatTensor([1,0.1,0.01]))

print(x.grad)

This amounts to running the computation on a three-dimensional vector, so the result y is itself a vector. Such a vector cannot be differentiated with a bare y.backward(); the program would raise an error.
An argument must be passed in instead. For example, y.backward(torch.FloatTensor([1, 1, 1])) yields the gradient of each component, while y.backward(torch.FloatTensor([1, 0.1, 0.01])) yields each component's gradient multiplied by 1, 0.1, and 0.01 respectively.
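
Put differently, the argument to backward() is a weight vector v, and what is computed is v multiplied into the Jacobian of y with respect to x. For y = 2x every per-component derivative is 2, so the result is just each weight doubled. A quick check:

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([1, 2, 3]), requires_grad=True)
y = x * 2                                      # dy_i/dx_i = 2 for every i
y.backward(torch.FloatTensor([1, 0.1, 0.01]))
print(x.grad)                                  # [2, 0.2, 0.02]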


3.2.3 Multi-dimensional Linear Regression

3.2.4 One-dimensional Linear Regression

Article link: 10分钟快速入门 PyTorch (1) - 线性回归

Training data
A scatter plot of the data, drawn with matplotlib, is shown below.
(matplotlib usage: python学习之matplotlib绘制散点图实例)

import numpy as np
import matplotlib.pyplot as plt

x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168], [9.779],
                    [6.182], [7.59], [2.167], [7.042], [10.791], [5.313], [7.997], [3.1]],
                   dtype=np.float32)
y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573], [3.366], [2.596],
                    [2.53], [1.22], [2.827], [3.465], [1.65], [2.904], [1.3]],
                   dtype=np.float32)

plt.scatter(x_train, y_train, s=50)
# set the chart title and axis labels
plt.title('Numbers', fontsize=24)
plt.xlabel('x_Value', fontsize=14)
plt.ylabel('y_Value', fontsize=14)

# set the tick label size
plt.tick_params(axis='both', which='major', labelsize=14)

# set the range of each axis
# axis() takes four values: [xmin, xmax, ymin, ymax]
plt.axis([0, 15, 0, 4])
plt.show()

[Figure: scatter plot of the training data]

What we want is to find a straight line that approximates these points, such that the total distance between the line and the points is minimized.
1) First convert the numpy.array to a Tensor, since the Tensor is PyTorch's unit of computation.

x_train = torch.from_numpy(x_train)
y_train = torch.from_numpy(y_train)

2) Next, build the model:

class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # input and output are 1-dimensional

    def forward(self, x):
        out = self.linear(x)
        return out

if torch.cuda.is_available():
    model = LinearRegression().cuda()
else:
    model = LinearRegression()
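
The nn.Linear(1, 1) layer holds exactly two parameters, the slope and the intercept of the line to be fitted, and they can be inspected directly (attribute names follow the model definition above):

print(model.linear.weight.size())  # torch.Size([1, 1]), the slope
print(model.linear.bias.size())    # torch.Size([1]), the intercept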

3) Then define the loss and the optimizer, i.e. the error function and the optimization routine:

criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-4)
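
nn.MSELoss computes the mean of the squared differences, loss = (1/n) * sum((out_i - target_i)^2). A toy check with made-up values (using the imports from the complete script below):

a = Variable(torch.Tensor([1, 2, 3]))
b = Variable(torch.Tensor([1, 2, 5]))
print(criterion(a, b))  # (0 + 0 + 4) / 3 = 1.3333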

4) Now the model can be trained:

# start training
num_epochs = 1000
for epoch in range(num_epochs):
    # keep the data on the same device as the model
    inputs = Variable(x_train).cuda() if torch.cuda.is_available() else Variable(x_train)
    target = Variable(y_train).cuda() if torch.cuda.is_available() else Variable(y_train)

    # forward
    out = model(inputs)
    loss = criterion(out, target)
    # backward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if (epoch+1) % 20 == 0:
        print('Epoch[{}/{}], loss: {:.6f}'
              .format(epoch+1, num_epochs, loss.data[0]))

5) Once training is done, we can check the predictions:

model.eval()
x_var = Variable(x_train).cuda() if torch.cuda.is_available() else Variable(x_train)
predict = model(x_var).cpu().data.numpy()
plt.plot(x_train.numpy(), y_train.numpy(), 'ro', label='Original data')
plt.plot(x_train.numpy(), predict, label='Fitting Line')

The complete implementation:

'''
__author__ = 'SherlockLiao'

'''
import torch
from torch import nn, optim
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt

x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                    [9.779], [6.182], [7.59], [2.167], [7.042],
                    [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)

y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                    [3.366], [2.596], [2.53], [1.221], [2.827],
                    [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)


x_train = torch.from_numpy(x_train)

y_train = torch.from_numpy(y_train)


# Linear Regression Model
class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # input and output are 1-dimensional

    def forward(self, x):
        out = self.linear(x)
        return out

if torch.cuda.is_available():
    model = LinearRegression().cuda()
else:
    model = LinearRegression()


# define the loss and the optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-4)

# start training
num_epochs = 1000
for epoch in range(num_epochs):
    # keep the data on the same device as the model
    inputs = Variable(x_train).cuda() if torch.cuda.is_available() else Variable(x_train)
    target = Variable(y_train).cuda() if torch.cuda.is_available() else Variable(y_train)

    # forward
    out = model(inputs)
    loss = criterion(out, target)
    # backward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if (epoch+1) % 20 == 0:
        print('Epoch[{}/{}], loss: {:.6f}'
              .format(epoch+1, num_epochs, loss.data[0]))

model.eval()
x_var = Variable(x_train).cuda() if torch.cuda.is_available() else Variable(x_train)
predict = model(x_var).cpu().data.numpy()
plt.plot(x_train.numpy(), y_train.numpy(), 'ro', label='Original data')
plt.plot(x_train.numpy(), predict, label='Fitting Line')
# show the legend
plt.legend() 
plt.show()

# save the model
torch.save(model.state_dict(), './linear.pth')
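
To reuse the saved weights later, load the state dict back into a freshly constructed model; a short sketch (the path matches the torch.save call above):

model = LinearRegression()
model.load_state_dict(torch.load('./linear.pth'))
model.eval()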

3.2.5 Polynomial Regression

Link: Pytorch 系列教程之一 使用Pytorch拟合多项式(多项式回归)
Implementation:

import torch
from torch.autograd import Variable
from torch import nn
from torch import optim
import matplotlib.pyplot as plt
import numpy as np

def make_features(x):
    '''Build the feature matrix [x, x^2, x^3] from a 1-D input.'''
    x = x.unsqueeze(1)
    return torch.cat([x ** i for i in range(1, 4)], 1)

def f(x):
    '''The target polynomial: y = 0.9 + 0.5*x + 3*x^2 + 2.4*x^3.'''
    return x.mm(w_target) + b_target[0]

def get_batch(batch_size=32):
    random = torch.randn(batch_size)
    x = make_features(random)

    '''Compute the actual results'''

    y = f(x)
    if torch.cuda.is_available():
        return Variable(x).cuda(), Variable(y).cuda()
    else:
        return Variable(x), Variable(y)

class poly_model(nn.Module):
    def __init__(self):
        super(poly_model, self).__init__()
        self.poly = nn.Linear(3,1)

    def forward(self, x):
        out = self.poly(x)
        return out

# target coefficients: y = 0.9 + 0.5*x + 3*x^2 + 2.4*x^3
w_target = torch.FloatTensor([0.5, 3, 2.4]).unsqueeze(1)
b_target = torch.FloatTensor([0.9])

if torch.cuda.is_available():
    model = poly_model().cuda()
else:
    model = poly_model()

criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3)

epoch = 0
while True:
    batch_x, batch_y = get_batch()
    output = model(batch_x)
    loss = criterion(output, batch_y)
    print_loss = loss.data[0]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    epoch += 1
    if print_loss < 1e-3:
        break

'''Generate some random numbers to see the results'''

x_test = np.linspace(-5, 5, 50).astype(np.float32)
y_test = 0.9 + 0.5 * x_test + 3 * np.square(x_test) + 2.4 * np.power(x_test, 3)

model.eval()
x_var = make_features(Variable(torch.from_numpy(x_test)))
if torch.cuda.is_available():
    x_var = x_var.cuda()   # the model may live on the GPU
predict = model(x_var)
predict = predict.cpu().data.numpy()

plt.figure()
plt.plot(x_test, y_test, '-r', label='Original Data')
plt.scatter(x_test, predict)
plt.legend()

plt.show()
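
Since the model is a single nn.Linear(3, 1) over the features [x, x^2, x^3], the learned parameters can be compared directly against w_target and b_target once training stops:

print(model.poly.weight.data)  # should be close to [0.5, 3, 2.4]
print(model.poly.bias.data)    # should be close to 0.9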

Logistic Regression

__author__ = 'SherlockLiao'

import torch
from torch import nn, optim
#import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision import datasets
import time
# define the hyperparameters
batch_size = 32
learning_rate = 1e-3
num_epoches = 100

# download the MNIST handwritten-digit dataset
train_dataset = datasets.MNIST(
    root='./data', train=True, transform=transforms.ToTensor(), download=True)

test_dataset = datasets.MNIST(
    root='./data', train=False, transform=transforms.ToTensor())

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
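
Each batch yielded by a loader is an (image, label) pair; the images come out as [batch_size, 1, 28, 28], which is why the training loop below flattens them into 784-dimensional vectors. A quick look at one batch:

img, label = next(iter(train_loader))
print(img.size())    # torch.Size([32, 1, 28, 28])
print(label.size())  # torch.Size([32])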


# define the logistic regression model
class LogisticRegression(nn.Module):
    def __init__(self, in_dim, n_class):
        super(LogisticRegression, self).__init__()
        self.logistic = nn.Linear(in_dim, n_class)

    def forward(self, x):
        out = self.logistic(x)
        return out


model = LogisticRegression(28 * 28, 10)  # the images are 28x28 pixels
use_gpu = torch.cuda.is_available()  # check whether GPU acceleration is available
if use_gpu:
    model = model.cuda()
# define the loss and the optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
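
nn.CrossEntropyLoss applies LogSoftmax and NLLLoss internally, so the model outputs raw scores (logits) rather than probabilities. To inspect class probabilities at inference time, apply softmax explicitly; a sketch with made-up logits (assumes a PyTorch version where F.softmax accepts dim):

import torch.nn.functional as F

logits = Variable(torch.randn(2, 10))  # hypothetical scores: 2 images, 10 classes
probs = F.softmax(logits, dim=1)       # each row now sums to 1
print(probs.sum(1))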

# start training
for epoch in range(num_epoches):
    print('*' * 10)
    print('epoch {}'.format(epoch + 1))
    since = time.time()
    running_loss = 0.0
    running_acc = 0.0
    for i, data in enumerate(train_loader, 1):
        img, label = data
        img = img.view(img.size(0), -1)  # flatten each 28x28 image into a 784-dim vector
        if use_gpu:
            img = Variable(img).cuda()
            label = Variable(label).cuda()
        else:
            img = Variable(img)
            label = Variable(label)
        # forward pass
        out = model(img)
        loss = criterion(out, label)
        running_loss += loss.data[0] * label.size(0)
        _, pred = torch.max(out, 1)
        num_correct = (pred == label).sum()
        running_acc += num_correct.data[0]
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if i % 300 == 0:
            print('[{}/{}] Loss: {:.6f}, Acc: {:.6f}'.format(
                epoch + 1, num_epoches, running_loss / (batch_size * i),
                running_acc / (batch_size * i)))
    print('Finish {} epoch, Loss: {:.6f}, Acc: {:.6f}'.format(
        epoch + 1, running_loss / (len(train_dataset)), running_acc / (len(
            train_dataset))))
    model.eval()
    eval_loss = 0.
    eval_acc = 0.
    for data in test_loader:
        img, label = data
        img = img.view(img.size(0), -1)
        if use_gpu:
            img = Variable(img, volatile=True).cuda()
            label = Variable(label, volatile=True).cuda()
        else:
            img = Variable(img, volatile=True)
            label = Variable(label, volatile=True)
        out = model(img)
        loss = criterion(out, label)
        eval_loss += loss.data[0] * label.size(0)
        _, pred = torch.max(out, 1)
        num_correct = (pred == label).sum()
        eval_acc += num_correct.data[0]
    print('Test Loss: {:.6f}, Acc: {:.6f}'.format(eval_loss / (len(
        test_dataset)), eval_acc / (len(test_dataset))))
    print('Time:{:.1f} s'.format(time.time() - since))
    print()

# save the model
torch.save(model.state_dict(), './logistic.pth')


Reposted from blog.csdn.net/zhenaoxi1077/article/details/80538314