Saving Deep Learning Models and Their Parameters

This post briefly introduces how to save whole models, and how to save just the model parameters (weights), in the Keras and PyTorch deep learning frameworks.

Keras:

import keras
from keras.models import load_model, Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(units=10, input_shape=(8,), name='dense'))  # input_shape=(8,) is just an example
model.compile(loss='mse', optimizer='adam')
model.fit(x_train, y_train)  # x_train/y_train: your own training data
# This saves the whole model (architecture + weights + optimizer state)
model.save('/home/path/model.h5')  # replace 'path' with your own directory
# Loading the model back
model = load_model('model.h5')  # I'll skip the full path here, because I'm lazy
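A quick way to convince yourself that model.save really captured everything (a minimal sketch; x_test and the shapes here are stand-ins I made up, not from the original post): predictions from the reloaded model should match the original exactly.

import numpy as np

x_test = np.random.rand(5, 8)      # dummy input matching the input_shape=(8,) example above
before = model.predict(x_test)
model.save('model.h5')
restored = load_model('model.h5')
after = restored.predict(x_test)
assert np.allclose(before, after)  # architecture and weights survived the round trip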

If you only want to save the model's weights:

# First build a model as above
model = Sequential()
model.add(Dense(12, input_shape=(8,), name='dense'))
model.save_weights('weights.h5')
# To load the weights back
model.load_weights('weights.h5')
# load_weights actually takes one more argument, by_name; with by_name=True only
# the layers whose names match get their saved weights loaded
model.load_weights('weights.h5', by_name=True)
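To make by_name concrete, here is a minimal sketch (the layer sizes and names are invented for illustration): save weights from one model, then restore only the layer whose name matches into a second, differently shaped model.

from keras.models import Sequential
from keras.layers import Dense

# original model: two named layers
src = Sequential()
src.add(Dense(16, input_shape=(4,), name='shared'))
src.add(Dense(1, name='head'))
src.save_weights('weights.h5')

# new model: keeps 'shared', swaps in a different head
dst = Sequential()
dst.add(Dense(16, input_shape=(4,), name='shared'))
dst.add(Dense(3, name='new_head'))
dst.load_weights('weights.h5', by_name=True)  # only 'shared' matches, so only it is restored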

PyTorch:

I've only just started with PyTorch, so let's go straight to the code. First, a simple skeleton:

import time
import torch
import torch.optim as optim
import torch.nn as nn
from torch.autograd import Variable

class NeuralModel(nn.Module):  # nn.Module is the base class for PyTorch networks
    def __init__(self):
        super(NeuralModel, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # channel sizes are just examples
            nn.BatchNorm2d(16),  # must match the conv layer's output channels
            nn.ReLU(inplace=True))
        self.fc1 = nn.Linear(input_dim, units)  # input_dim/units: fill in your own sizes
    def forward(self, x):
        out = self.conv(x)
        out = out.view(out.size(0), -1)  # flatten before the fully connected layer
        out = self.fc1(out)
        return out

model = NeuralModel().cuda()  # use .cuda() if you're rich enough to own a GPU
print(model)  # print the model structure directly
lr = 0.1
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True, weight_decay=0.0001)
total_epoch = 100              # number of epochs; an example value
model_file = 'best_model.pkl'  # where the best weights will be saved; an example name
acc_best = 0
is_train = True
if is_train:
    for epoch in range(total_epoch):
        tims = time.time()  # start timing this epoch
        model.train()
        for i, (features, labels) in enumerate(train_loader):  # train_loader: a standard PyTorch DataLoader, assumed to be built elsewhere
            features = Variable(features).cuda()
            # .cuda() is for the GPU-rich, so use it with care: once the inputs are
            # cuda tensors, the model must be on the GPU too (that's the
            # model = NeuralModel().cuda() above; don't leave it out), and the same
            # applies at test time: the inputs must be cuda tensors there as well
            labels = Variable(labels).cuda()
            optimizer.zero_grad()
            outputs = model(features)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            if (i+1) % 100 == 0:
                print("Epoch [%d/%d], Iter [%d/%d] Loss: %.4f" % (epoch+1, total_epoch, i+1, len(train_loader), loss.item()))
        print('the epoch takes time:',time.time()-tims)
        print('evaluate test set:')
        acc = test(model, test_loader, btrain=True)  # test() and test_loader are assumed to be defined elsewhere
        if acc > acc_best:
             acc_best = acc
             print('current best acc,', acc_best)
             torch.save(model.state_dict(), model_file)
        # Decaying Learning Rate
        if (epoch + 1) / float(total_epoch) in (0.3, 0.6, 0.9):
            lr /= 10
            print('reset learning rate to:', lr)
            for param_group in optimizer.param_groups:
                param_group['lr'] = lr
                print(param_group['lr'])
            # optimizer = torch.optim.Adam(model.parameters(), lr=lr)
            # optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True, weight_decay=0.0001)
    # Save the Model
    torch.save(model.state_dict(), 'last_model_92_sgd.pkl')
else:
    model.load_state_dict(torch.load(model_file))  # this loads the saved parameters into the model
    model.eval()
    pred = model(test_features)  # test_features: your test inputs, as a cuda tensor here

This PyTorch code already sketches a simple training framework; we'll get into PyTorch details in a later post. The key call for saving model parameters is torch.save(model.state_dict(), model_file),
and the call that loads the parameters back into the model is model.load_state_dict(torch.load(model_file)).
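For symmetry with the Keras section, note that PyTorch can also save the whole model object rather than just its parameters. A minimal sketch (not from the original post): torch.save pickles the entire module, so the saved file only loads where the class definition is importable, which is why the state_dict approach above is usually preferred.

# save the entire model object (structure + parameters) via pickling
torch.save(model, 'whole_model.pkl')
# load it back without instantiating NeuralModel first
# (the NeuralModel class must still be importable, though)
model = torch.load('whole_model.pkl')
model.eval()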

Reprinted from blog.csdn.net/baidu_36161077/article/details/81057217