PyTorch: Word Embeddings and N-Gram

Copyright notice: this is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/xckkcxxck/article/details/83007983

This article is based on the book 《深度学习入门之Pytorch》 (Deep Learning Introduction with PyTorch).

For image classification problems we can represent the labels with one-hot encoding, but in NLP, where the vocabulary easily contains tens of thousands of distinct words, one-hot vectors become huge, sparse, and carry no notion of similarity between words. This is where word embeddings come in.

A word vector is, simply put, a vector used to represent a word. These vectors should not be random, since a random vector carries no meaning; instead, each word needs its own specific vector. Moreover, some words are close in meaning, such as "like" and "love", and we want such words to have similar vector representations. How do we measure and define how close two vectors are? Very simply: by the angle between them. The smaller the angle, the more similar the words, which gives us a complete definition of similarity.
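Both ideas can be seen in a few lines of PyTorch. The following is a minimal sketch (the vocabulary size, the word indices, and the untrained random weights are all just for illustration): nn.Embedding maps a word index to a dense vector, and the cosine of the angle between two vectors measures how similar the corresponding words are.

import torch
from torch import nn
import torch.nn.functional as F

# A toy vocabulary of 5 words, each represented by a 10-dimensional vector.
# The weights here are randomly initialized; in a real model they are learned.
embedding = nn.Embedding(5, 10)

like_idx = torch.LongTensor([0])  # hypothetical index of "like"
love_idx = torch.LongTensor([1])  # hypothetical index of "love"

like_vec = embedding(like_idx)  # shape: (1, 10)
love_vec = embedding(love_idx)  # shape: (1, 10)

# Cosine similarity is the cosine of the angle between the two vectors:
# close to 1 means similar, close to 0 means unrelated.
print(F.cosine_similarity(like_vec, love_vec, dim=1).item())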

N-Gram is based on one assumption: the n-th word depends only on the preceding n−1 words and on no other word (this is a Markov assumption, the same kind used in hidden Markov models). The probability of a whole sentence is then the product of the conditional probabilities of its words, and each of these probabilities can be estimated by counting occurrences in a corpus. Suppose the sentence T consists of the word sequence w1, w2, w3, …, wn; the N-Gram language model can be written as:

P(T) = P(w1, w2, …, wn) = P(w1) · P(w2|w1) · P(w3|w1, w2) ⋯ P(wn|w1, w2, …, w(n−1))

Under the n-gram assumption each factor is truncated to the previous n−1 words; for a trigram model (n = 3), for example, P(wi|w1, …, w(i−1)) ≈ P(wi|w(i−2), w(i−1)).
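As a concrete illustration of estimating these probabilities from corpus counts, here is a small sketch with a made-up toy corpus; the maximum-likelihood estimate of a bigram probability is P(w2|w1) = count(w1, w2) / count(w1):

from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()  # toy corpus

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

# Maximum-likelihood estimate: P(w2 | w1) = count(w1, w2) / count(w1)
def bigram_prob(w1, w2):
    return bigram_counts[(w1, w2)] / unigram_counts[w1]

print(bigram_prob('the', 'cat'))  # 2/3: "the" occurs 3 times, followed by "cat" twice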

Here is the code for an n-gram model:

# -*- coding: utf-8 -*-
"""
Created on Thu Oct 11 10:06:50 2018

@author: www
"""
# CONTEXT_SIZE is how many preceding words we use to predict the next word; here we use two
CONTEXT_SIZE = 2 # number of context words
EMBEDDING_DIM = 10 # dimensionality of the word vectors
# We use one of Shakespeare's sonnets as the corpus
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()

# Build the trigram dataset: each sample pairs two context words with the word that follows them
trigram = [((test_sentence[i], test_sentence[i+1]), test_sentence[i+2]) 
            for i in range(len(test_sentence)-2)]

# Total number of samples
len(trigram) # 113

# Look at the first sample
trigram[0]
# (('When', 'forty'), 'winters')

# Build a mapping between each word and an integer index; the embedding is built from these indices
vocb = set(test_sentence) # use a set to remove duplicate words
word_to_idx = {word: i for i, word in enumerate(vocb)}
idx_to_word = {word_to_idx[word]: word for word in word_to_idx}

# Each word now corresponds to a unique index, and all the words are distinct
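# A quick round-trip check of the two mappings (illustrative only; the actual
# index of a word depends on the set's iteration order):
idx = word_to_idx['winters']
assert idx_to_word[idx] == 'winters'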

import torch
from torch import nn


# Define the model
class n_gram(nn.Module):
    def __init__(self, vocab_size, context_size=CONTEXT_SIZE, n_dim=EMBEDDING_DIM):
        super(n_gram, self).__init__()
        
        self.embed = nn.Embedding(vocab_size, n_dim)
        self.classify = nn.Sequential(
            nn.Linear(context_size * n_dim, 128),
            nn.ReLU(True),
            nn.Linear(128, vocab_size)
        )
        
    def forward(self, x):
        voc_embed = self.embed(x) # look up the word embeddings: (context_size, n_dim)
        voc_embed = voc_embed.view(1, -1) # concatenate the context vectors: (1, context_size * n_dim)
        out = self.classify(voc_embed) # raw scores (logits) over the vocabulary: (1, vocab_size)
        return out
        
net = n_gram(len(word_to_idx))

criterion = nn.CrossEntropyLoss() # applies log-softmax internally, so the net outputs raw logits
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, weight_decay=1e-5)

for e in range(100):
    train_loss = 0
    for word, label in trigram: # train on all 113 trigram samples
        word = torch.LongTensor([word_to_idx[i] for i in word]) # the two context words as input
        label = torch.LongTensor([word_to_idx[label]])
        # forward pass
        out = net(word)
        loss = criterion(out, label)
        train_loss += loss.item()
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if (e + 1) % 20 == 0:
        print('epoch: {}, Loss: {:.6f}'.format(e + 1, train_loss / len(trigram)))

net = net.eval()
# test the result on one sample
word, label = trigram[24]
print('input: {}'.format(word))
print('label: {}'.format(label))
print()
word = torch.LongTensor([word_to_idx[i] for i in word])
out = net(word)
pred_label_idx = out.max(1)[1].item() # index of the highest-scoring word
predict_word = idx_to_word[pred_label_idx]
print('real word is {}, predicted word is {}'.format(label, predict_word))


# The network predicts the training set almost perfectly, but with so few samples it overfits very easily
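Once training is finished, the learned embedding matrix net.embed.weight can be inspected directly. The sketch below (nearest_words is just an illustrative helper built on the net, word_to_idx, and idx_to_word defined above) finds the words whose vectors have the smallest angle to a given word's vector; with a corpus this tiny the neighbours will not be very meaningful, but the mechanics are the same for a real model:

import torch.nn.functional as F

def nearest_words(word, k=3):
    weights = net.embed.weight.data # (vocab_size, EMBEDDING_DIM)
    vec = weights[word_to_idx[word]].unsqueeze(0) # (1, EMBEDDING_DIM)
    sims = F.cosine_similarity(vec, weights, dim=1) # similarity to every word in the vocabulary
    top = sims.topk(k + 1).indices.tolist() # k+1 because the word itself ranks first
    return [idx_to_word[i] for i in top if idx_to_word[i] != word][:k]

print(nearest_words('beauty'))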

  
