- Detailed usage of Chinese word segmentation with the jieba package in Python (2)
- 01. Preface
- 02. Keyword extraction
- 02.01 Keyword extraction based on the TF-IDF algorithm
- 02.02 Part-of-speech tagging
- 02.03 Parallel word segmentation
- 02.04 Tokenize: return the start and end positions of words in the original text
- 02.05 ChineseAnalyzer for the Whoosh search engine
- 03. Lazy loading
- 04. Other dictionaries
- Write at the end
Detailed usage of Chinese word segmentation with the jieba package in Python (2)
01. Preface
Detailed usage of Chinese word segmentation with the jieba package in Python (1) introduced the basics of jieba word segmentation; this article covers the remaining features: keyword extraction, part-of-speech tagging, parallel segmentation, Tokenize, and the ChineseAnalyzer for Whoosh.
02. Keyword extraction
02.01 Keyword extraction based on the TF-IDF algorithm
import jieba.analyse
- jieba.analyse.extract_tags(sentence, topK=20, withWeight=False, allowPOS=())
Note that:
1. sentence is the text to extract keywords from
2. topK is the number of keywords with the largest TF-IDF weight to return; the default is 20
3. withWeight controls whether the weight value is returned along with each keyword; the default is False
4. allowPOS restricts the result to words with the specified parts of speech; the default is empty, i.e. no filtering
- jieba.analyse.TFIDF(idf_path=None) creates a new TFIDF instance; idf_path is the path of the IDF frequency file
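The ranking that extract_tags performs can be pictured with a simplified, pure-Python sketch. The words and IDF values below are invented for illustration; jieba's real implementation uses its bundled IDF table and stop-word handling:

```python
from collections import Counter

def tfidf_rank(words, idf, topk=3):
    """Rank words by term frequency times inverse document frequency."""
    total = len(words)
    tf = {w: c / total for w, c in Counter(words).items()}
    scores = {w: tf[w] * idf.get(w, 1.0) for w in tf}
    # highest score first, like jieba.analyse.extract_tags
    return sorted(scores, key=scores.get, reverse=True)[:topk]

# toy document and made-up IDF table
words = ["jieba", "segmentation", "segmentation", "corpus"]
idf = {"jieba": 2.0, "segmentation": 1.2, "corpus": 6.0}
print(tfidf_rank(words, idf, topk=2))  # ['corpus', 'segmentation']
```

Here "corpus" outranks "segmentation" despite appearing only once, because its higher IDF marks it as rarer and therefore more distinctive.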
Code example:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2018-05-05 22:15:13
# @Author  : JackPI ([email protected])
# @Link    : https://blog.csdn.net/meiqi0538
# @Version : $Id$
import jieba
import jieba.analyse
# Read the file into a single string using UTF-8; the file sits in the same directory as this script
content = open('人民的名义.txt', 'r', encoding='utf-8').read()
tags = jieba.analyse.extract_tags(content, topK=10)
print(",".join(tags))
Result:
Building prefix dict from the default dictionary ...
Dumping model to file cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.280 seconds.
Prefix dict has been built succesfully.
侯亮,李达康,高育良,祁同伟,高小琴,瑞金,陈海,老师,丁义珍,成功
[Finished in 5.9s]
The Inverse Document Frequency (IDF) corpus used for keyword extraction can be switched to a custom corpus:
- Usage: jieba.analyse.set_idf_path(file_name)  # file_name is the path of the custom corpus
- Example of a custom IDF corpus:
劳动防护 13.900677652
勞動防護 13.900677652
生化学 13.900677652
生化學 13.900677652
奥萨贝尔 13.900677652
奧薩貝爾 13.900677652
考察队员 13.900677652
考察隊員 13.900677652
岗上 11.5027823792
崗上 11.5027823792
倒车档 12.2912397395
倒車檔 12.2912397395
编译 9.21854642485
編譯 9.21854642485
蝶泳 11.1926274509
外委 11.8212361103
- Usage example
import jieba
import jieba.analyse
# Read the file into a single string using UTF-8; the file sits in the same directory as this script
content = open('idf.txt.big', 'r', encoding='utf-8').read()
tags = jieba.analyse.extract_tags(content, topK=10)
print(",".join(tags))
Result:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.186 seconds.
Prefix dict has been built succesfully.
13.2075304714,13.900677652,12.8020653633,12.5143832909,12.2912397395,12.1089181827,11.9547675029,11.8212361103,11.7034530746,11.598092559
[Finished in 20.9s]
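As the corpus example above shows, a custom IDF file is plain text with one word and its weight per line, separated by whitespace. A small helper (a sketch, not part of jieba's API) illustrates the format that jieba.analyse.set_idf_path expects:

```python
def load_idf(lines):
    """Parse 'word weight' lines (the idf.txt.big format) into a dict."""
    idf = {}
    for line in lines:
        parts = line.strip().split()
        if len(parts) == 2:
            word, weight = parts
            idf[word] = float(weight)
    return idf

# two lines copied from the custom corpus example, plus a blank line
sample = ["劳动防护 13.900677652", "编译 9.21854642485", ""]
table = load_idf(sample)
print(table["编译"])  # 9.21854642485
```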
The stop-word corpus used for keyword extraction can also be switched to a custom corpus:
- Usage: jieba.analyse.set_stop_words(file_name)  # file_name is the path of the custom corpus
- Example of a custom stop-word corpus:
!
"
#
$
%
&
'
(
)
*
+
,
-
--
.
..
...
......
...................
./
.一
记者
数
年
月
日
时
分
秒
/
//
0
1
2
3
4
- Usage example
import jieba
import jieba.analyse
# Read the file into a single string using UTF-8; the file sits in the same directory as this script
content = open('人民的名义.txt', 'r', encoding='utf-8').read()
jieba.analyse.set_stop_words("stopwords.txt")
tags = jieba.analyse.extract_tags(content, topK=10)
print(",".join(tags))
Result:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.316 seconds.
Prefix dict has been built succesfully.
侯亮,李达康,高育良,祁同伟,高小琴,瑞金,陈海,老师,丁义珍,成功
[Finished in 5.2s]
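Conceptually, set_stop_words just removes the listed words before ranking. A minimal sketch of that filter (not jieba's actual code path), using a tiny inline stop-word set instead of stopwords.txt:

```python
def remove_stop_words(words, stop_words):
    """Drop tokens that appear in the stop-word set before ranking."""
    return [w for w in words if w not in stop_words]

# stopwords.txt would normally be read line by line into a set
stop_words = {"年", "月", "日", "记者"}
tokens = ["记者", "报道", "2018", "年", "5", "月", "新闻"]
print(remove_stop_words(tokens, stop_words))  # ['报道', '2018', '5', '新闻']
```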
Example returning the weight value along with each keyword:
import jieba
import jieba.analyse
# Read the file into a single string using UTF-8; the file sits in the same directory as this script
content = open('人民的名义.txt', 'r', encoding='utf-8').read()
jieba.analyse.set_stop_words("stopwords.txt")
tags = jieba.analyse.extract_tags(content, topK=10, withWeight=True)
for tag in tags:
    print("tag:%s\t\t weight:%f" % (tag[0], tag[1]))
Result:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.115 seconds.
Prefix dict has been built succesfully.
tag:侯亮 weight:0.257260
tag:李达康 weight:0.143901
tag:高育良 weight:0.108856
tag:祁同伟 weight:0.098479
tag:高小琴 weight:0.062259
tag:瑞金 weight:0.060405
tag:陈海 weight:0.054036
tag:老师 weight:0.051980
tag:丁义珍 weight:0.049729
tag:成功 weight:0.046647
[Finished in 5.3s]
02.02 Part-of-speech tagging
- jieba.posseg.POSTokenizer(tokenizer=None) creates a new custom tokenizer; the tokenizer parameter specifies the jieba.Tokenizer used internally. jieba.posseg.dt is the default part-of-speech tagging tokenizer.
- Labels the part of speech of each word after sentence segmentation, using a notation compatible with ictclas.
- Usage example
>>> import jieba.posseg as pseg
>>> words = pseg.cut("我爱北京天安门")
>>> for word, flag in words:
... print('%s %s' % (word, flag))
...
我 r
爱 v
北京 ns
天安门 ns
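The (word, flag) pairs returned by pseg.cut are easy to filter by flag, e.g. to keep only place names (ns). The sketch below operates on hard-coded pairs matching the output above, so it runs without jieba:

```python
def keep_flags(pairs, flags):
    """Keep the words of (word, flag) pairs whose flag is in the given set."""
    return [word for word, flag in pairs if flag in flags]

# pairs as produced by jieba.posseg.cut("我爱北京天安门")
pairs = [("我", "r"), ("爱", "v"), ("北京", "ns"), ("天安门", "ns")]
print(keep_flags(pairs, {"ns"}))  # ['北京', '天安门']
```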
Part-of-speech table
part-of-speech code | part-of-speech name | notes |
---|---|---|
Ag | adjective morpheme | Adjective-like morphemes. The adjective code a, capitalized, followed by the morpheme code g. |
a | adjective | Takes the first letter of the English word adjective. |
ad | adverbial adjective | Adjectives used directly as adverbials. The adjective code a and the adverb code d are combined. |
an | nominal adjective | Adjectives that function as nouns. The adjective code a and the noun code n are combined. |
b | distinguishing word | Takes the initial of the Chinese 别 (bie). |
c | conjunction | Takes the first letter of the English word conjunction. |
dg | adverbial morpheme | Adverb-like morphemes. The adverb code d followed by the morpheme code g. |
d | adverb | Takes the second letter of adverb, since the first letter is already used for adjectives. |
e | interjection | Takes the first letter of the English word exclamation. |
f | locative word | Takes the initial of the Chinese 方 (fang). |
g | morpheme | Most morphemes can serve as the root of compound words; takes the initial of the Chinese 根 (gen, "root"). |
h | prefix component | Takes the first letter of the English word head. |
i | idiom | Takes the first letter of the English word idiom. |
j | abbreviation | Takes the initial of the Chinese 简 (jian). |
k | suffix component | |
l | idiomatic phrase | Not yet a full idiom, somewhat "temporary"; takes the initial of the Chinese 临 (lin). |
m | numeral | Takes the third letter of the English word numeral; n and u have other uses. |
Ng | noun morpheme | Noun-like morphemes. The noun code n, capitalized, followed by the morpheme code g. |
n | noun | Takes the first letter of the English word noun. |
nr | person name | The noun code n combined with the initial of the Chinese 人 (ren). |
ns | place name | The noun code n combined with the place-word code s. |
nt | organization name | The initial of 团 (tuan) is t; the codes n and t are combined. |
nz | other proper noun | The initial of 专 (zhuan) starts with z; the codes n and z are combined. |
o | onomatopoeia | Takes the first letter of the English word onomatopoeia. |
p | preposition | Takes the first letter of the English word preposition. |
q | quantifier | Takes the first letter of the English word quantity. |
r | pronoun | Takes the second letter of the English word pronoun, since p is already used for prepositions. |
s | place word | Takes the first letter of the English word space. |
tg | time morpheme | Time-word morphemes. The time-word code t followed by the morpheme code g. |
t | time word | Takes the first letter of the English word time. |
u | particle | From the English word auxiliary. |
vg | verb morpheme | Verb-like morphemes. The verb code v followed by the morpheme code g. |
v | verb | Takes the first letter of the English word verb. |
vd | adverbial verb | Verbs used directly as adverbials. The verb and adverb codes are combined. |
vn | nominal verb | Verbs that function as nouns. The verb and noun codes are combined. |
w | punctuation | |
x | non-morpheme character | A non-morpheme character is only a symbol; x is conventionally used for unknowns and symbols. |
y | modal particle | Takes the initial of the Chinese 语 (yu). |
z | status word | Takes the first letter of the initial of the Chinese 状 (zhuang). |
un | unknown word | Unrecognized words and user-defined phrases. Takes the first two letters of the English word unknown. (Non-PKU standard; defined in CSW segmentation.) |
02.03 Parallel word segmentation
- Principle: the target text is split by lines, the lines are distributed across multiple Python processes for parallel segmentation, and the results are merged, yielding a considerable speedup.
- Based on Python's built-in multiprocessing module; Windows is currently not supported.
- Usage (official example):
jieba.enable_parallel(4)  # enable parallel segmentation; the argument is the number of processes
jieba.disable_parallel()  # disable parallel segmentation
import sys
import time
sys.path.append("../../")
import jieba
jieba.enable_parallel()
url = sys.argv[1]
content = open(url,"rb").read()
t1 = time.time()
words = "/ ".join(jieba.cut(content))
t2 = time.time()
tm_cost = t2-t1
log_f = open("1.log","wb")
log_f.write(words.encode('utf-8'))
print('speed %s bytes/second' % (len(content)/tm_cost))
- Note: Parallel tokenization only supports the default tokenizers jieba.dt and jieba.posseg.dt.
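The split-by-lines/merge principle described above can be sketched with a worker pool and a stand-in segmenter. This is an illustration only: the real parallelism happens inside jieba when enable_parallel is on, and it uses OS processes, whereas this sketch uses a thread pool for portability:

```python
# jieba uses real OS processes; a thread pool keeps this sketch portable
from multiprocessing.dummy import Pool

def segment_line(line):
    """Stand-in for jieba.cut on a single line (here: whitespace split)."""
    return line.split()

def parallel_segment(text, workers=2):
    """Split text by lines, segment each line concurrently, merge in order."""
    with Pool(workers) as pool:
        return pool.map(segment_line, text.splitlines())

text = "hello world\nparallel segmentation"
print(parallel_segment(text))  # [['hello', 'world'], ['parallel', 'segmentation']]
```

pool.map preserves line order, which is why the merged result matches the original text's line sequence.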
02.04 Tokenize: return the start and end positions of words in the original text
Note that the input parameter accepts only unicode strings.
Default mode
import jieba
import jieba.analyse
result = jieba.tokenize(u'永和服装饰品有限公司')
for tk in result:
    print("word %s\t\t start: %d \t\t end:%d" % (tk[0],tk[1],tk[2]))
Result:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.054 seconds.
Prefix dict has been built succesfully.
word 永和 start: 0 end:2
word 服装 start: 2 end:4
word 饰品 start: 4 end:6
word 有限公司 start: 6 end:10
[Finished in 3.3s]
- Search mode
result = jieba.tokenize(u'永和服装饰品有限公司', mode='search')
for tk in result:
    print("word %s\t\t start: %d \t\t end:%d" % (tk[0],tk[1],tk[2]))
Result:
word 永和 start: 0 end:2
word 服装 start: 2 end:4
word 饰品 start: 4 end:6
word 有限 start: 6 end:8
word 公司 start: 8 end:10
word 有限公司 start: 6 end:10
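The start/end offsets returned by jieba.tokenize always slice back to the token, which makes them handy for highlighting. A small check, using the pairs printed above as hard-coded sample data so it runs without jieba:

```python
def check_offsets(sentence, tokens):
    """Verify each (word, start, end) token slices out of the sentence."""
    return all(sentence[start:end] == word for word, start, end in tokens)

# tokens as printed by jieba.tokenize in default mode
sentence = "永和服装饰品有限公司"
tokens = [("永和", 0, 2), ("服装", 2, 4), ("饰品", 4, 6), ("有限公司", 6, 10)]
print(check_offsets(sentence, tokens))  # True
```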
02.05 ChineseAnalyzer for the Whoosh search engine
- Import: from jieba.analyse import ChineseAnalyzer
- Official example
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import sys,os
sys.path.append("../")
from whoosh.index import create_in,open_dir
from whoosh.fields import *
from whoosh.qparser import QueryParser
from jieba.analyse import ChineseAnalyzer
analyzer = ChineseAnalyzer()
schema = Schema(title=TEXT(stored=True), path=ID(stored=True), content=TEXT(stored=True, analyzer=analyzer))
if not os.path.exists("tmp"):
    os.mkdir("tmp")
ix = create_in("tmp", schema) # for create new index
#ix = open_dir("tmp") # for read only
writer = ix.writer()
writer.add_document(
    title="document1",
    path="/a",
    content="This is the first document we’ve added!"
)
writer.add_document(
    title="document2",
    path="/b",
    content="The second one 你 中文测试中文 is even more interesting! 吃水果"
)
writer.add_document(
    title="document3",
    path="/c",
    content="买水果然后来世博园。"
)
writer.add_document(
    title="document4",
    path="/c",
    content="工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作"
)
writer.add_document(
    title="document4",
    path="/c",
    content="咱俩交换一下吧。"
)
writer.commit()
searcher = ix.searcher()
parser = QueryParser("content", schema=ix.schema)
for keyword in ("水果世博园", "你", "first", "中文", "交换机", "交换"):
    print("result of ", keyword)
    q = parser.parse(keyword)
    results = searcher.search(q)
    for hit in results:
        print(hit.highlights("content"))
    print("=" * 10)
for t in analyzer("我的好朋友是李明;我爱北京天安门;IBM和Microsoft; I have a dream. this is interesting and interested me a lot"):
    print(t.text)
03. Lazy loading
jieba loads lazily: import jieba and jieba.Tokenizer() do not immediately load the dictionary; the prefix dictionary is built only when it is first needed. You can also initialize jieba manually:
import jieba
jieba.initialize()  # manual initialization (optional)
Official use case
#encoding=utf-8
from __future__ import print_function
import sys
sys.path.append("../")
import jieba
def cuttest(test_sent):
    result = jieba.cut(test_sent)
    print(" ".join(result))

def testcase():
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。")
    cuttest("我不喜欢日本和服。")
    cuttest("雷猴回归人间。")
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")
    cuttest("我需要廉租房")
    cuttest("永和服装饰品有限公司")
    cuttest("我爱北京天安门")
    cuttest("abc")
    cuttest("隐马尔可夫")
    cuttest("雷猴是个好网站")

if __name__ == "__main__":
    testcase()
    jieba.set_dictionary("foobar.txt")
    print("================================")
    testcase()
04. Other dictionaries
1. A dictionary file with a smaller memory footprint: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.small
2. A dictionary file with better support for traditional Chinese: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.big
Download the dictionary you need and overwrite jieba/dict.txt with it, or call jieba.set_dictionary('data/dict.txt.big').
Write at the end
Since jieba covers a lot of ground and its features are quite powerful, the author has only annotated and explained the official documentation here. If you are interested in natural language processing, you can follow my personal subscription account, where I share learning materials on natural language processing, machine learning, and more.