Text Classification with BERT and Adversarial Training

Thanks to the power of BERT, text classification with it achieves very strong results, and improving model robustness through adversarial training is a research direction well worth exploring. Below we work through code examples to explore how adversarial training can be applied to BERT-based text classification.

Contents

1. BERT text classification: adding perturbations to input_ids

2. BERT text classification: adding perturbations to the embeddings

3. BERT text classification: using FGSM to perturb the embeddings

4. BERT text classification: perturbing the embeddings with adversarial-example defense in mind

5. Methods for defending against adversarial examples


1. BERT text classification: adding perturbations to input_ids

To apply adversarial learning to BERT-based text classification in Python, we can use PyTorch and the Transformers library to build a BERT text classifier and apply adversarial training during training to improve the model's robustness.

First, install the required libraries:

pip install torch transformers -i https://pypi.tuna.tsinghua.edu.cn/simple

Next, let's write the BERT-based text classification model:

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertForTextClassification(nn.Module):
    def __init__(self, bert_model_path, num_classes):
        super().__init__()
        self.bert_model = BertModel.from_pretrained(bert_model_path)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert_model.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert_model(input_ids, attention_mask=attention_mask)
        pooled_output = outputs[1]
        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)
        return logits

This model uses a pre-trained BERT model as the feature extractor and adds a linear layer for classification. Dropout is used to reduce overfitting.
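As a quick sanity check of the classifier's interface (a sketch only; the example sentence and the two-class setup are placeholders):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForTextClassification('bert-base-uncased', num_classes=2)

# Tokenize a dummy sentence and run a forward pass
encoded = tokenizer("adversarial training makes models more robust", return_tensors='pt')
logits = model(encoded['input_ids'], encoded['attention_mask'])
print(logits.shape)  # torch.Size([1, 2])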

Next, we define the training and evaluation functions:

def train(model, train_dataloader, optimizer, criterion, epsilon):
    model.train()
    total_loss, total_correct = 0., 0.
    for batch in train_dataloader:
        batch = tuple(t.to(device) for t in batch)
        input_ids, attention_mask, labels = batch

        # adversarial training: input_ids are discrete token IDs, so the
        # "perturbation" here is a small random integer offset clamped to the
        # valid vocabulary range (a naive illustration; the later sections
        # perturb the continuous embeddings instead)
        if epsilon:
            delta = torch.randint_like(input_ids, -int(epsilon), int(epsilon) + 1)
            perturbed_ids = torch.clamp(input_ids + delta, 0, model.bert_model.config.vocab_size - 1)
            output = model(perturbed_ids, attention_mask=attention_mask)
        else:
            output = model(input_ids, attention_mask=attention_mask)

        loss = criterion(output, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()

        _, predicted = torch.max(output.data, 1)
        total_correct += (predicted == labels).sum().item()

    return total_loss / len(train_dataloader), total_correct / len(train_dataloader.dataset)


def evaluate(model, test_dataloader):
    model.eval()
    total_loss, total_correct = 0., 0.
    with torch.no_grad():
        for batch in test_dataloader:
            batch = tuple(t.to(device) for t in batch)
            input_ids, attention_mask, labels = batch
            output = model(input_ids, attention_mask=attention_mask)
            loss = criterion(output, labels)
            total_loss += loss.item()

            _, predicted = torch.max(output.data, 1)
            total_correct += (predicted == labels).sum().item()

    return total_loss / len(test_dataloader), total_correct / len(test_dataloader.dataset)

The train function above supports adversarial training, controlled by the epsilon parameter. If epsilon is 0, standard training is used; if it is a positive integer, a small random offset of up to epsilon is added to each token ID (clamped to the vocabulary range) to produce a perturbed sample. Because input_ids are discrete, this is only a crude form of perturbation; perturbing the continuous embeddings, as in the following sections, is the more common practice. The evaluate function measures the model's performance on the test data.

Finally, we write the main function to load the dataset and start training:

def load_data():
    # your code to load dataset here
    pass

if __name__ == '__main__':
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    bert_model_path = 'bert-base-uncased'
    num_classes = 2
    epsilon = 0  # set to a positive integer (e.g. 1) to enable adversarial training

    # load data
    train_dataloader, test_dataloader = load_data()

    # create model, optimizer, loss function
    model = BertForTextClassification(bert_model_path, num_classes)
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
    criterion = nn.CrossEntropyLoss()

    # train and evaluate the model
    for epoch in range(10):
        train_loss, train_acc = train(model, train_dataloader, optimizer, criterion, epsilon=epsilon)
        test_loss, test_acc = evaluate(model, test_dataloader)
        print(f"Epoch {epoch+1} - Train Loss: {train_loss:.4f} - Train Acc: {train_acc:.4f} - Test Loss: {test_loss:.4f} - Test Acc: {test_acc:.4f}")

The load_data function above should be implemented by the user to load and preprocess the desired dataset; in this example we simply assume a binary classification task (num_classes = 2).
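As a minimal sketch of what load_data might look like (the toy texts and labels below are placeholders, not part of the original example), it can tokenize the raw texts and return DataLoaders whose batches unpack as (input_ids, attention_mask, labels), matching what train and evaluate expect above:

import torch
from torch.utils.data import TensorDataset, DataLoader
from transformers import BertTokenizer

def load_data(batch_size=32, max_length=128):
    # Placeholder texts/labels; replace with your own dataset loading logic.
    train_texts, train_labels = ["a good movie", "a terrible movie"], [1, 0]
    test_texts, test_labels = ["not bad at all"], [1]

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

    def make_loader(texts, labels, shuffle):
        enc = tokenizer(texts, padding=True, truncation=True,
                        max_length=max_length, return_tensors='pt')
        dataset = TensorDataset(enc['input_ids'], enc['attention_mask'],
                                torch.tensor(labels))
        return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)

    return make_loader(train_texts, train_labels, True), make_loader(test_texts, test_labels, False)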

2. BERT text classification: adding perturbations to the embeddings

This section applies adversarial learning that perturbs the embeddings in BERT-based text classification. The example below, built with PyTorch and the Transformers library, trains on adversarial samples to improve the model's robustness.

In this example, we use FGSM (the Fast Gradient Sign Method) to perturb the embedding vectors and thereby generate adversarial text samples. FGSM is a simple but effective adversarial attack technique: its basic idea is to use the gradient of the loss to perturb the input so that the model produces an incorrect prediction.
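Formally, FGSM perturbs an input x into x_adv = x + ε · sign(∇_x L(x, y)), where L is the loss function and ε controls the perturbation size; in this section x is the embedding tensor rather than the raw token IDs.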

Here is the example code:

import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification, BertTokenizer

# Load the BERT model and tokenizer
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Define the adversarial perturbation (FGSM) function
def fgsm_attack(embeddings, gradient, epsilon):
    # Take the sign of the gradient
    sign_gradient = gradient.sign()
    # Perturb the embeddings along the gradient sign
    perturbed_embeddings = embeddings + epsilon*sign_gradient
    # Clamp the perturbed embeddings to [-1, 1] (a heuristic bound chosen by this example)
    perturbed_embeddings = torch.clamp(perturbed_embeddings, min=-1, max=1)
    return perturbed_embeddings

# Train the model
def train(model, optimizer, train_loader, epsilon):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        data = data.to(device)
        target = target.to(device)
        # Look up the word embeddings of the input token IDs and keep their gradient
        embeddings = model.bert.embeddings.word_embeddings(data)
        embeddings.retain_grad()  # non-leaf tensor: retain_grad() so .grad gets populated
        output = model(inputs_embeds=embeddings, attention_mask=(data>0).to(data.device))
        loss = F.cross_entropy(output.logits, target)
        loss.backward()  # accumulates the clean gradients and fills embeddings.grad
        # Generate the adversarial embeddings with FGSM
        gradient = embeddings.grad.detach()
        perturbed_embeddings = fgsm_attack(embeddings.detach(), gradient, epsilon)
        # Forward pass on the adversarial embeddings
        adv_logits = model(inputs_embeds=perturbed_embeddings, attention_mask=(data>0).to(data.device)).logits
        # Adversarial loss: its gradients accumulate on top of the clean gradients,
        # which is equivalent to optimizing loss + adv_loss
        adv_loss = F.cross_entropy(adv_logits, target)
        adv_loss.backward()
        optimizer.step()

# Test the model
def test(model, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data = data.to(device)
            target = target.to(device)
            output = model(data, attention_mask=(data>0).to(data.device))
            test_loss += F.cross_entropy(output.logits, target, reduction='sum').item()
            pred = output.logits.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    accuracy = 100. * correct / len(test_loader.dataset)
    print('Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)'.format(
        test_loss, correct, len(test_loader.dataset), accuracy))


# Load the data
def load_data():
    # Load the dataset
    # TODO: implement this function for your own dataset
    pass

# Define hyperparameters
batch_size = 32
learning_rate = 5e-5
epochs = 10
epsilon = 0.1

# Load the data
train_loader, test_loader = load_data()

# Move the model to the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Define the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(epochs):
    print("Epoch:", epoch+1)
    train(model, optimizer, train_loader, epsilon)
    test(model, test_loader)

You will need to replace the load_data function with an implementation for your own dataset and adjust the hyperparameters as needed. In addition, running this code requires the PyTorch and Transformers (Hugging Face's NLP library) packages.
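As a quick sanity check of the fgsm_attack function defined above on dummy tensors (the shapes below are arbitrary and purely illustrative):

import torch

dummy_embeddings = torch.zeros(2, 8, 768)             # (batch, seq_len, hidden)
dummy_gradient = torch.randn_like(dummy_embeddings)   # stand-in for a real gradient
perturbed = fgsm_attack(dummy_embeddings, dummy_gradient, epsilon=0.1)

print((perturbed - dummy_embeddings).abs().max())       # tensor(0.1000): each element moves by epsilon
print(perturbed.min().item(), perturbed.max().item())   # stays inside the [-1, 1] clamp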

3. BERT text classification: using FGSM to perturb the embeddings

This section implements adversarial learning with embedding perturbations for BERT-based Chinese text classification in Python, with the goal of improving classification accuracy.

We use a pre-trained BERT model in PyTorch for Chinese text classification and train with adversarial samples to improve the model's robustness.

Adversarial learning here consists of three steps:

  1. Generate adversarial samples: perturb the text embeddings with the FGSM algorithm to obtain adversarial text samples.
  2. Compute the adversarial loss: run both the adversarial and the clean samples through the classifier and compute their losses.
  3. Update the model parameters: update the parameters using both the adversarial and the clean losses (the exact weighting used in this example is spelled out below).
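Concretely, the implementation below combines the two losses with equal weights, i.e. L_total = 0.5 · L_clean + 0.5 · L_adv; the 0.5/0.5 split is simply the choice made in this example, and other weightings are possible.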

Below is the complete implementation; you only need to adapt the code in the load_data function to your own dataset.

import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Load the pre-trained BERT tokenizer (BertClassifier below loads its own BertModel)
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')

# Classification model
class BertClassifier(nn.Module):
    def __init__(self, num_classes):
        super(BertClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-chinese')
        self.fc = nn.Linear(768, num_classes)

    def forward(self, input_ids=None, attention_mask=None, inputs_embeds=None):
        # Accept either token IDs or pre-computed embeddings (used for the FGSM step below)
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds)
        pooled_output = outputs[1]
        out = self.fc(pooled_output)
        return out

# FGSM adversarial perturbation function
def fgsm_attack(embeds, grad, epsilon=0.3):
    sign_grad = grad.sign()
    perturb = epsilon * sign_grad
    perturb_embeds = embeds + perturb
    return perturb_embeds

# Load the dataset
def load_data(file_path):
    # Implement your dataset loading here
    return train_data, valid_data, test_data

# Train the model
def train_model(model, criterion, optimizer, scheduler, train_data, valid_data, num_epochs=10):
    best_acc = 0.0
    for epoch in range(num_epochs):
        model.train()
        running_loss = 0.0
        correct = 0
        total = 0
        for data in train_data:
            texts, labels = data.text, data.label
            input_ids = []
            attention_masks = []
            for text in texts:
                encoded_dict = tokenizer.encode_plus(
                                    text,                       # the raw text
                                    add_special_tokens = True,  # add '[CLS]' and '[SEP]'
                                    max_length = 128,           # maximum sequence length
                                    padding = 'max_length',     # pad to max_length
                                    truncation = True,          # truncate longer sequences
                                    return_attention_mask = True,  # return the attention mask
                                    return_tensors = 'pt',      # return PyTorch tensors
                               )
                input_ids.append(encoded_dict['input_ids'])
                attention_masks.append(encoded_dict['attention_mask'])

            # Convert the lists of tensors into batched tensors
            input_ids = torch.cat(input_ids, dim=0).to(device)
            attention_masks = torch.cat(attention_masks, dim=0).to(device)
            labels = labels.to(device)

            optimizer.zero_grad()

            # Adversarial training: FGSM on the word embeddings
            embeds = model.bert.embeddings.word_embeddings(input_ids)
            output = model(attention_mask=attention_masks, inputs_embeds=embeds)
            loss = criterion(output, labels)
            # Gradient of the clean loss w.r.t. the embeddings (keep the graph so the
            # combined loss can still backpropagate through the clean forward pass)
            grad = torch.autograd.grad(loss, embeds, retain_graph=True)[0]
            perturb_embeds = fgsm_attack(embeds.detach(), grad, epsilon=0.3)
            perturb_output = model(attention_mask=attention_masks, inputs_embeds=perturb_embeds)
            perturb_loss = criterion(perturb_output, labels)
            adv_loss = 0.5 * loss + 0.5 * perturb_loss

            _, predicted = torch.max(output.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

            adv_loss.backward()
            optimizer.step()

            running_loss += adv_loss.item()

        scheduler.step()

        # Compute accuracy on the validation set
        model.eval()
        valid_loss = 0.0
        valid_correct = 0
        valid_total = 0
        with torch.no_grad():
            for data in valid_data:
                texts, labels = data.text, data.label
                input_ids = []
                attention_masks = []
                for text in texts:
                    encoded_dict = tokenizer.encode_plus(
                                        text,                       # the raw text
                                        add_special_tokens = True,  # add '[CLS]' and '[SEP]'
                                        max_length = 128,           # maximum sequence length
                                        padding = 'max_length',     # pad to max_length
                                        truncation = True,          # truncate longer sequences
                                        return_attention_mask = True,  # return the attention mask
                                        return_tensors = 'pt',      # return PyTorch tensors
                                   )
                    input_ids.append(encoded_dict['input_ids'])
                    attention_masks.append(encoded_dict['attention_mask'])

                # Convert the lists of tensors into batched tensors
                input_ids = torch.cat(input_ids, dim=0).to(device)
                attention_masks = torch.cat(attention_masks, dim=0).to(device)
                labels = labels.to(device)

                output = model(input_ids, attention_masks)
                loss = criterion(output, labels)
                valid_loss += loss.item()

                _, predicted = torch.max(output.data, 1)
                valid_total += labels.size(0)
                valid_correct += (predicted == labels).sum().item()

        epoch_loss = running_loss / len(train_data)
        epoch_acc = correct / total
        valid_loss = valid_loss / len(valid_data)
        valid_acc = valid_correct / valid_total
        print('Epoch [{}/{}], train_loss: {:.4f}, train_acc: {:.4f}, valid_loss: {:.4f}, valid_acc: {:.4f}'.format(epoch+1, num_epochs, epoch_loss, epoch_acc, valid_loss, valid_acc))

        if valid_acc > best_acc:
            best_acc = valid_acc

    return model

# Load the data
train_data, valid_data, test_data = load_data('./data/train.csv')

# Set hyperparameters
num_classes = 2
learning_rate = 2e-5
num_epochs = 10
batch_size = 32

# Build the BERT classifier
classifier = BertClassifier(num_classes).to(device)

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=classifier.parameters(), lr=learning_rate)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)

# Train the model
trained_model = train_model(classifier, criterion, optimizer, scheduler, train_data, valid_data, num_epochs=num_epochs)

# Evaluate on the test set
trained_model.eval()
test_loss = 0.0
test_correct = 0
test_total = 0
with torch.no_grad():
    for data in test_data:
        texts, labels = data.text, data.label
        input_ids = []
        attention_masks = []
        for text in texts:
            encoded_dict = tokenizer.encode_plus(
                                text,                       # the raw text
                                add_special_tokens = True,  # add '[CLS]' and '[SEP]'
                                max_length = 128,           # maximum sequence length
                                padding = 'max_length',     # pad to max_length
                                truncation = True,          # truncate longer sequences
                                return_attention_mask = True,  # return the attention mask
                                return_tensors = 'pt',      # return PyTorch tensors
                           )
            input_ids.append(encoded_dict['input_ids'])
            attention_masks.append(encoded_dict['attention_mask'])

        # Convert the lists of tensors into batched tensors
        input_ids = torch.cat(input_ids, dim=0).to(device)
        attention_masks = torch.cat(attention_masks, dim=0).to(device)
        labels = labels.to(device)

        output = trained_model(input_ids, attention_masks)
        loss = criterion(output, labels)
        test_loss += loss.item()

        _, predicted = torch.max(output.data, 1)
        test_total += labels.size(0)
        test_correct += (predicted == labels).sum().item()

test_loss = test_loss / len(test_data)
test_acc = test_correct / test_total
print('Test_loss: {:.4f}, Test_acc: {:.4f}'.format(test_loss, test_acc))

Note that this example only considers attacking clean samples; in practice you also need to consider defending against adversarial samples. For more on defense methods, see the Adversarial Robustness Toolbox (ART): https://github.com/Trusted-AI/adversarial-robustness-toolbox

4. BERT text classification: perturbing the embeddings with adversarial-example defense in mind

This section implements adversarial learning with embedding perturbations for BERT-based Chinese text classification in Python, aiming to improve accuracy while also taking the defense against adversarial samples into account.

import random
import jieba
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel
from torch.utils.data import Dataset, DataLoader
from torch.optim import Adam
from torch.nn.utils.rnn import pad_sequence
from torch.utils.tensorboard import SummaryWriter


# Set random seeds
random.seed(42)
torch.manual_seed(42)

# BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
bert_model = BertModel.from_pretrained('bert-base-chinese')

# Custom Chinese text classifier
class TextClassifier(nn.Module):
    def __init__(self, bert_model, num_classes):
        super(TextClassifier, self).__init__()
        self.bert = bert_model
        self.fc = nn.Linear(768, num_classes)

    def forward(self, input_ids=None, attention_mask=None, inputs_embeds=None):
        # Accept either token IDs or pre-computed embeddings (used for the FGSM step in train)
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds)
        pooled_output = outputs[1]
        logits = self.fc(pooled_output)
        return logits

# Dataset loading function
def load_data(file_path):
    data = []
    with open(file_path, encoding='utf-8') as f:
        for line in f:
            text, label = line.strip().split('\t')
            label = int(label)
            seg_list = jieba.cut(text)
            text = ' '.join(seg_list)
            data.append((text, label))
    return data

# Build the Dataset and DataLoader
class TextDataset(Dataset):
    def __init__(self, data, tokenizer):
        self.sentences = [item[0] for item in data]
        self.labels = [item[1] for item in data]
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.sentences)

    def __getitem__(self, index):
        sentence = self.sentences[index]
        label = self.labels[index]
        encoded_dict = self.tokenizer.encode_plus(
                            sentence,                       # the input text
                            add_special_tokens = True,      # add '[CLS]' and '[SEP]'
                            max_length = 32,                # pad & truncate to this length
                            padding = 'max_length',
                            truncation = True,
                            return_attention_mask = True,   # return the attention mask
                            return_tensors = 'pt',          # return PyTorch tensors
                      )
        input_ids = encoded_dict['input_ids'][0]
        attention_mask = encoded_dict['attention_mask'][0]
        return input_ids, attention_mask, label

def collate_fn(batch):
    # Sort the batch by input length (not strictly required here, since the tokenizer
    # already pads every sequence to the same max_length)
    batch.sort(key=lambda x: len(x[0]), reverse=True)
    input_ids, attention_masks, labels = zip(*batch)

    input_ids = pad_sequence(input_ids, batch_first=True)
    attention_masks = pad_sequence(attention_masks, batch_first=True)

    return input_ids, attention_masks, torch.tensor(labels)

def train(model, train_loader, optimizer, criterion, num_epochs):
    writer = SummaryWriter()
    global_step = 0
    for epoch in range(num_epochs):
        total_loss = 0
        for i, (input_ids, attention_mask, labels) in enumerate(train_loader):
            optimizer.zero_grad()
            # Clean forward pass through the word embeddings so we can perturb them
            embeddings = model.bert.embeddings.word_embeddings(input_ids)
            embeddings.retain_grad()  # non-leaf tensor: keep its gradient for FGSM
            logits = model(attention_mask=attention_mask, inputs_embeds=embeddings)
            loss = criterion(logits, labels)
            total_loss += loss.item()
            loss.backward()  # accumulates the clean gradients and fills embeddings.grad

            # Perturb the embedding vectors with FGSM
            epsilon = 0.5
            perturbation = epsilon * embeddings.grad.detach().sign()
            perturbed_embeddings = embeddings.detach() + perturbation
            logits_perturbed = model(attention_mask=attention_mask, inputs_embeds=perturbed_embeddings)
            loss_perturbed = criterion(logits_perturbed, labels)
            loss_perturbed.backward()  # adversarial gradients accumulate with the clean ones

            optimizer.step()

            global_step += 1
            if global_step % 10 == 0:
                writer.add_scalar('training_loss', loss.item(), global_step)

        avg_loss = total_loss / len(train_loader)
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, avg_loss))
    writer.close()

def test(model, test_loader):
    total = 0
    correct = 0
    with torch.no_grad():
        for input_ids, attention_mask, labels in test_loader:
            logits = model(input_ids, attention_mask)
            predicted = torch.argmax(logits, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = correct / total
    print('Accuracy on test set: {:.4f}'.format(accuracy))
    
def main():
    train_data = load_data('train.txt')
    test_data = load_data('test.txt')
    train_dataset = TextDataset(train_data, tokenizer)
    test_dataset = TextDataset(test_data, tokenizer)
    train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True, collate_fn=collate_fn)
    test_loader = DataLoader(test_dataset, batch_size=128, shuffle=False, collate_fn=collate_fn)

    num_classes = 10
    model = TextClassifier(bert_model, num_classes)
    optimizer = Adam(model.parameters(), lr=2e-5)
    criterion = nn.CrossEntropyLoss()

    num_epochs = 3
    train(model, train_loader, optimizer, criterion, num_epochs)
    test(model, test_loader)

if __name__ == '__main__':
    main()

This example uses FGSM to add the adversarial perturbation. For every batch we first run a clean forward pass on the word embeddings and call embeddings.retain_grad() so that the backward pass fills embeddings.grad; we then take an FGSM step, perturbation = epsilon * embeddings.grad.sign(), add it to the (detached) embeddings, and backpropagate the loss on the perturbed embeddings as well, so the parameter update reflects both the clean loss and the adversarial loss. As for defending against adversarial samples, common approaches include adversarial training itself, model ensembling, and data augmentation.
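To check whether the adversarial training actually improves robustness, one possible addition (a sketch, not part of the original code; it reuses the TextClassifier, criterion, and FGSM step defined above) is to measure accuracy on FGSM-perturbed test batches:

def test_adversarial(model, test_loader, criterion, epsilon=0.5):
    # Accuracy when the word embeddings of the test inputs are perturbed with FGSM
    model.eval()
    total, correct = 0, 0
    for input_ids, attention_mask, labels in test_loader:
        embeddings = model.bert.embeddings.word_embeddings(input_ids)
        embeddings.retain_grad()
        logits = model(attention_mask=attention_mask, inputs_embeds=embeddings)
        loss = criterion(logits, labels)
        loss.backward()  # gradients are only used to build the perturbation here
        perturbed = embeddings.detach() + epsilon * embeddings.grad.detach().sign()
        with torch.no_grad():
            adv_logits = model(attention_mask=attention_mask, inputs_embeds=perturbed)
        predicted = torch.argmax(adv_logits, dim=1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
        model.zero_grad()  # discard the gradients accumulated while building the attack
    print('Adversarial accuracy: {:.4f}'.format(correct / total))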

5. Methods for defending against adversarial examples

There are many ways to defend against adversarial examples; the following are some common ones:

  1. Adversarial Training: adversarial training strengthens the model by adding adversarial samples to the training data. Concretely it has two steps: generating adversarial samples (for example with FGSM or PGD) and training on them. The adversarial samples are mixed with the original samples, and the model is trained on the mixture, which makes it more robust against adversarial attacks.

  2. Model Compression: model compression reduces the model's complexity, which can also help robustness. It typically involves two steps: pruning and quantization. Pruning removes redundant neurons or layers, and quantization converts floating-point parameters into lower-precision (e.g. integer) parameters, reducing storage and compute. A simpler model can be harder to attack with adversarial examples.

  3. Randomization Defense: randomization introduces randomness to make attacks harder. It includes input randomization, where the input is randomly perturbed (noise, small distortions, rotations, and so on), and model randomization, where randomness is introduced into the model itself (for example random convolution kernels or random connections). This makes the model harder to attack with adversarial examples.

  4. Model Ensemble: ensembling combines several models to increase robustness. It involves training multiple models (with different initializations, hyperparameters, or training data) and then combining them by voting, weighted averaging, or model fusion (a simple probability-averaging sketch is shown at the end of this section). This makes the prediction more stable and harder to attack.

All of these methods can be used to defend against adversarial examples and improve the model's overall robustness. Note, however, that each has its own trade-offs, and the right choice depends on the specific application scenario and requirements.
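As an illustration of the model-ensemble idea (a minimal sketch, assuming several independently trained classifiers that share the interface of the classifiers defined earlier in this post):

import torch

def ensemble_predict(models, input_ids, attention_mask):
    # Average the softmax probabilities of several independently trained classifiers
    with torch.no_grad():
        probs = [torch.softmax(m(input_ids, attention_mask), dim=1) for m in models]
        avg_probs = torch.stack(probs, dim=0).mean(dim=0)
    return torch.argmax(avg_probs, dim=1)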



Reposted from blog.csdn.net/weixin_43734080/article/details/130888433