Kaggle Project: Spaceship Titanic

I worked through Kaggle's Spaceship Titanic competition as a wrap-up of my CNN study, and as a response to the Chapter 3 exercise that suggests trying Kaggle's Titanic case. The code is below:

# Define the network model
import torch
from torch import nn


class mymodule(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 64)  # 20 input features after preprocessing
        self.fcc = nn.Linear(64, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 2)   # two classes: Transported True/False
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(p=0.2)
        self.bn1 = nn.BatchNorm1d(64)
        self.bn2 = nn.BatchNorm1d(32)

    def forward(self, x):
        x = x.to(torch.float32)  # inputs arrive as float64; Linear/BatchNorm expect float32
        x = self.relu(self.fc1(x))
        x = self.bn1(x)
        x = self.drop(x)
        x = self.relu(self.fcc(x))
        x = self.bn1(x)  # note: bn1 is shared between the two 64-unit layers
        x = self.drop(x)
        x = self.relu(self.fc2(x))
        x = self.bn2(x)
        x = self.fc3(x)
        # Return raw logits: CrossEntropyLoss applies log-softmax itself,
        # so adding a sigmoid here would distort the loss.
        return x
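Since `nn.CrossEntropyLoss` applies log-softmax internally, the network should output raw logits rather than probabilities. A minimal self-contained sketch (using a stand-in head with the same 20-in/2-out shape, not the actual model above) shows the expected shapes and usage:

```python
import torch
from torch import nn

# Stand-in for the model above: same 20-feature input, 2-class output.
head = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

x = torch.randn(8, 20)         # a dummy batch of 8 samples
logits = head(x)               # raw logits, no sigmoid/softmax
y = torch.randint(0, 2, (8,))  # dummy integer class labels

# CrossEntropyLoss combines log-softmax and negative log-likelihood,
# so it consumes logits and integer labels directly.
loss = nn.CrossEntropyLoss()(logits, y)
print(logits.shape, loss.item())
```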


'''
----------------------------------------------------------------------
'''

# Fix the random seed so results are reproducible
seed = 548814
torch.manual_seed(seed)  # seeds the CPU RNG
# torch.cuda.manual_seed(seed)  # uncomment to seed the GPU RNG as well
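For stricter reproducibility one can seed every RNG in play, not only PyTorch's CPU generator. A small sketch of that idea (the helper name `set_seed` is illustrative, not from the original code):

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    # Seed Python, NumPy and PyTorch RNGs in one place (illustrative helper).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU generator
    torch.cuda.manual_seed_all(seed)  # all GPU generators (no-op without CUDA)

set_seed(548814)
a = torch.randn(3)
set_seed(548814)
b = torch.randn(3)
print(torch.equal(a, b))  # identical seeds give identical draws
```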


# Train the model
from torch import optim

model = mymodule()  # instantiate the model
# Move the model to the available device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)


# Adam optimizer over the model parameters, learning rate 0.001,
# L2 regularization via weight_decay=0.01
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()  # cross-entropy loss
num_epochs = 70  # number of training epochs

for epoch in range(num_epochs + 1):  # training loop
    model.train()  # enable dropout and batch-norm statistics updates
    for x, y in train_dr:  # draw a mini-batch from the DataLoader
        x, y = x.to(device), y.to(device)  # keep data on the same device as the model
        pred = model(x)  # forward pass
        loss = criterion(pred, y)  # compute the loss
        optimizer.zero_grad()  # clear accumulated gradients
        loss.backward()  # backward pass
        optimizer.step()  # update the parameters
    # Evaluate accuracy
    model.eval()  # switch dropout/batch-norm to inference mode
    with torch.no_grad():  # tensors created here have requires_grad=False
        y_pred = model(x_train_tensor)  # predictions on the training set
        acc_train = (y_pred.argmax(dim=1) == y_train_tensor).float().mean().item()  # training accuracy
        y_pred = model(x_test_tensor)  # predictions on the held-out test set
        acc_test = (y_pred.argmax(dim=1) == y_test_tensor).float().mean().item()  # test accuracy
        print('epoch:', epoch, '  Accuracy for train:', acc_train, '  Accuracy for test:', acc_test)
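After training, the competition expects a `submission.csv` with a `PassengerId` column and a boolean `Transported` column. A hedged sketch of that final step, with a dummy stand-in model and made-up ids, since the real tensors come from the preprocessing post:

```python
import csv
import torch
from torch import nn

# Stand-ins: in the real pipeline these would be the trained model and the
# preprocessed features/ids from test.csv.
model = nn.Linear(20, 2)
x_submit = torch.randn(4, 20)
passenger_ids = ["0013_01", "0018_01", "0019_01", "0021_01"]  # hypothetical ids

model.eval()
with torch.no_grad():
    # class index 1 is taken to mean Transported == True
    preds = model(x_submit).argmax(dim=1).bool().tolist()

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["PassengerId", "Transported"])
    for pid, p in zip(passenger_ids, preds):
        writer.writerow([pid, p])
```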

The above is an attempt using only fully connected layers; the CNN-based code (and the data preprocessing) was given in an earlier post. The final Kaggle Score was 0.80079, a decent first attempt and a good start. Onward!

Reposted from blog.csdn.net/Morganfs/article/details/124206600