Natural Language Processing | (5) English Text Processing with spaCy

In this post we use spaCy to perform some common processing of English text. spaCy not only covers the fundamental text-processing operations, it also ships pretrained models and word vectors. We will study more advanced models and methods later, but these basics are worth mastering: they preprocess our data into input for more advanced models and tools, and they can also serve as baselines for the corresponding tasks.

Contents

1. Introduction

2. English tokenization

3. Part-of-speech tagging

4. Named entity recognition

5. Chunking (shallow parsing)

6. Dependency parsing

7. Using word vectors

8. Word and document similarity



1. Introduction

spaCy is an open-source, industrial-strength NLP library for Python. It bundles a fast tokenizer with pretrained pipelines for part-of-speech tagging, named entity recognition and dependency parsing, plus word vectors, which is exactly the toolchain this post walks through.

2. English tokenization

#!pip install spacy                         # install spaCy
#!python -m spacy download en_core_web_sm   # download the English model
import spacy

nlp = spacy.load('en_core_web_sm')  # the 'en' shortcut only works in spaCy 1.x/2.x
doc = nlp('Hello World! My name is CoreJT.')
for token in doc:
    print('"'+token.text+'"')
print(list(doc))
print([token for token in doc])
print([token.text for token in doc])
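
spacy.load returns a full processing pipeline: the tokenizer runs first, followed by the other components. A small sketch using the standard pipe_names attribute shows what was loaded; components you don't need can be disabled at load time for speed.

# list the components bundled in the loaded pipeline
# (typically tagger, parser and ner for the English models)
print(nlp.pipe_names)

# disable components you don't need for faster processing
nlp_light = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
print(nlp_light.pipe_names)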

Each token object carries a rich set of attributes; the snippet below prints a few of them.

doc = nlp("Next week I'll   be in Shanghai.")
for token in doc:
    print("{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}\t{7}".format(
        token.text, #单词
        token.idx,  #单词起始索引
        token.lemma_, 
        token.is_punct, #是否为标点
        token.is_space, #是否为空格
        token.shape_, #形式 Xxx(如第一个字母大写,其余小写)
        token.pos_, #词性标注
        token.tag_
    ))

spaCy also handles sentence segmentation, as shown below:

doc = nlp("Hello World! My name is CoreJT")
for sent in doc.sents:
    print(sent)
print(list(doc.sents))
print([sent for sent in doc.sents])
print([sent.text for sent in doc.sents])
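
To segment or tag many texts, nlp.pipe streams them through the pipeline in batches, which is much faster than calling nlp on each string. A minimal sketch:

texts = [
    "Hello World! My name is CoreJT.",
    "Next week I'll be in Shanghai.",
]
# nlp.pipe batches the texts and yields one Doc per input
for doc in nlp.pipe(texts):
    print([sent.text for sent in doc.sents])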

3. Part-of-speech tagging

# part-of-speech tagging
doc = nlp("Next week I'll be in Shanghai.")
print([(token.text, token.tag_) for token in doc])

The fine-grained tag_ codes follow the Penn Treebank tag set (e.g. NNP = proper noun singular, VB = verb in base form, JJ = adjective).
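
Instead of consulting a tag table, you can ask spaCy itself what a code means via spacy.explain:

# spacy.explain maps a tag or label code to a human-readable description
for token in nlp("Next week I'll be in Shanghai."):
    print(token.text, token.tag_, spacy.explain(token.tag_))
# e.g. spacy.explain('NNP') -> 'noun, proper singular'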

4. Named entity recognition

doc = nlp("Next week I'll be in Shanghai.")
for ent in doc.ents:
    print(ent.text, ent.label_)

from nltk.chunk import conlltags2tree

doc = nlp("Next week I'll be in Shanghai.")
iob_tagged = [
    (
        token.text, 
        token.tag_, 
        "{0}-{1}".format(token.ent_iob_, token.ent_type_) if token.ent_iob_ != 'O' else token.ent_iob_
    ) for token in doc
]
 
print(iob_tagged)
# display in nltk.Tree format (requires NLTK)
print(conlltags2tree(iob_tagged))

spaCy recognizes a rich set of entity types, as the following example shows:

doc = nlp("I just bought 2 shares at 9 a.m. because the stock went up 30% in just 2 days according to the WSJ")
for ent in doc.ents:
    print(ent.text, ent.label_)
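
Each ent is a Span, so besides text and label you also get character offsets back into the original string, and spacy.explain works for entity labels as well:

for ent in doc.ents:
    # start_char/end_char are character offsets in the original text
    print(ent.text, ent.start_char, ent.end_char,
          ent.label_, spacy.explain(ent.label_))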

displaCy also provides an attractive visualization:

from spacy import displacy
 
doc = nlp('I just bought 2 shares at 9 a.m. because the stock went up 30% in just 2 days according to the WSJ')
displacy.render(doc, style='ent', jupyter=True)
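
jupyter=True renders inline in a notebook. Outside a notebook, displacy.render returns the markup as a string that you can save yourself (the file name below is just an example), or displacy.serve can host it on a local web server:

# outside Jupyter: render to an HTML string and write it to a file
html = displacy.render(doc, style='ent', jupyter=False, page=True)
with open('ents.html', 'w', encoding='utf-8') as f:
    f.write(html)
# alternatively: displacy.serve(doc, style='ent')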

5. Chunking (shallow parsing)

spaCy automatically detects noun phrases (NPs) and exposes each phrase's root word, e.g. "Journal", "piece" and "currencies" below:

doc = nlp("Wall Street Journal just published an interesting piece on crypto currencies")
for chunk in doc.noun_chunks:
    print(chunk.text, chunk.label_, chunk.root.text)
print([(chunk.text,chunk.label_,chunk.root.text) for chunk in doc.noun_chunks])
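
The chunk root also tells you how each noun phrase attaches to the rest of the sentence, through root.dep_ (its syntactic relation) and root.head (the word it hangs off):

for chunk in doc.noun_chunks:
    # dep_ = relation of the chunk root; head = the token it attaches to
    print(chunk.text, '->', chunk.root.dep_, '->', chunk.root.head.text)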

6. Dependency parsing

spaCy ships a powerful dependency parser; let's parse a sentence with it.

doc = nlp('Wall Street Journal just published an interesting piece on crypto currencies')
 
for token in doc:
    print("{0}/{1} <--{2}-- {3}/{4}".format(
        token.text, token.tag_, token.dep_, token.head.text, token.head.tag_))
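
The parse is a tree you can walk directly: token.head moves up, token.children moves down, and token.subtree yields everything a word governs. A small sketch:

# the sentence root is its own head
root = [token for token in doc if token.head == token][0]
print(root.text, '->', [child.text for child in root.children])

# the subtree of "piece" spans the whole object noun phrase
piece = [token for token in doc if token.text == 'piece'][0]
print([t.text for t in piece.subtree])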

from spacy import displacy
# visualize the dependency tree
doc = nlp('Wall Street Journal just published an interesting piece on crypto currencies')
displacy.render(doc, style='dep', jupyter=True, options={'distance': 90})

7. Using word vectors

nlp = spacy.load('en_core_web_lg')      # the large model includes pretrained word vectors
print(nlp.vocab['banana'].vector)       # the vector for "banana"
print(len(nlp.vocab['banana'].vector))  # vector dimensionality

from scipy import spatial

# cosine similarity
cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)

# vectors for man, woman, queen and king
man = nlp.vocab['man'].vector
woman = nlp.vocab['woman'].vector
queen = nlp.vocab['queen'].vector
king = nlp.vocab['king'].vector
 
# simple vector arithmetic: "man" - "woman" + "queen"
maybe_king = man - woman + queen
computed_similarities = []

# compare against every word vector in the vocabulary and find the closest
for word in nlp.vocab:
    if not word.has_vector:
        continue
 
    similarity = cosine_similarity(maybe_king, word.vector)
    computed_similarities.append((word, similarity))

# sort and show the closest matches
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])
print([w[0].text for w in computed_similarities[:10]])
print([(w[0].text, w[1]) for w in computed_similarities[:10]])
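
Scanning nlp.vocab like this also surfaces casing variants and junk entries; filtering to lowercase, purely alphabetic lexemes (standard Lexeme flags) gives a cleaner analogy result:

computed_similarities = []
for word in nlp.vocab:
    # keep only lowercase, alphabetic entries that actually have a vector
    if word.has_vector and word.is_lower and word.is_alpha:
        computed_similarities.append((word, cosine_similarity(maybe_king, word.vector)))

computed_similarities.sort(key=lambda item: -item[1])
print([w[0].text for w in computed_similarities[:10]])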

8. Word and document similarity

# semantic similarity (relatedness) between words
banana = nlp.vocab['banana']
dog = nlp.vocab['dog']
fruit = nlp.vocab['fruit']
animal = nlp.vocab['animal']
 
print(dog.similarity(animal), dog.similarity(fruit)) # 0.6618534 0.23552845
print(banana.similarity(fruit), banana.similarity(animal)) # 0.67148364 0.2427285

# semantic similarity (relatedness) between documents
target = nlp("Cats are beautiful animals.")
 
doc1 = nlp("Dogs are awesome.")
doc2 = nlp("Some gorgeous creatures are felines.")
doc3 = nlp("Dolphins are swimming mammals.")
 
print(target.similarity(doc1))  # 0.8901765218466683
print(target.similarity(doc2))  # 0.9115828449161616
print(target.similarity(doc3))  # 0.7822956752876101
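
By default Doc.similarity is just the cosine between the two documents' averaged token vectors (doc.vector), so the number can be reproduced by hand:

import numpy as np

# doc.vector defaults to the average of the token vectors
cos = target.vector.dot(doc1.vector) / (
    np.linalg.norm(target.vector) * np.linalg.norm(doc1.vector))
print(cos)  # matches target.similarity(doc1)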

 


Reposted from blog.csdn.net/sdu_hao/article/details/86755107