The previous chapter described the prefix dictionary that jieba builds before segmenting; this chapter introduces jieba's main segmentation function: jieba.cut.
jieba supports three segmentation modes: full mode, precise mode, and search engine mode. Full mode and precise mode are both implemented by jieba.cut, while search engine mode corresponds to cut_for_search. A third parameter, HMM, controls whether new-word (unknown-word) recognition is enabled. The official examples:
# encoding=utf-8
import jieba

seg_list = jieba.cut("我来到北京清华大学", cut_all=True)
print("Full Mode: " + "/ ".join(seg_list))  # full mode
# [Full mode]: 我/ 来到/ 北京/ 清华/ 清华大学/ 华大/ 大学

seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
print("Default Mode: " + "/ ".join(seg_list))  # precise mode
# [Precise mode]: 我/ 来到/ 北京/ 清华大学

seg_list = jieba.cut("他来到了网易杭研大厦")  # precise mode by default
print(", ".join(seg_list))
# [New word recognition]: 他, 来到, 了, 网易, 杭研, 大厦
# ("杭研" is not in the dictionary, but is still recognized, via the Viterbi algorithm)

seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所,后在日本京都大学深造")  # search engine mode
print(", ".join(seg_list))
# [Search engine mode]: 小明, 硕士, 毕业, 于, 中国, 科学, 学院, 科学院, 中国科学院, 计算, 计算所, 后, 在, 日本, 京都, 大学, 日本京都大学, 深造
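The search engine mode output above comes from cut_for_search, which first segments in precise mode and then, for each resulting word, also emits its length-2 and length-3 sub-words that appear in the dictionary. A minimal sketch of that re-scanning step (cut_for_search_sketch, the input word list, and the toy dictionary are illustrative assumptions, not jieba's real code or data):

```python
def cut_for_search_sketch(words, freq):
    # for each precise-mode word, also emit its in-dictionary
    # sub-words of length 2 and 3, then the word itself
    for w in words:
        for n in (2, 3):
            if len(w) > n:
                for i in range(len(w) - n + 1):
                    gram = w[i:i + n]
                    if gram in freq:
                        yield gram
        yield w

# toy dictionary: only the entries needed for this example
freq = {"中国", "科学", "学院", "科学院", "中国科学院"}
print(list(cut_for_search_sketch(["小明", "硕士", "中国科学院"], freq)))
# → ['小明', '硕士', '中国', '科学', '学院', '科学院', '中国科学院']
```

Note how this reproduces the ordering in the official example: the short sub-words of 中国科学院 come out before the full word.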
jieba.cut
The main segmentation function:
def cut(self, sentence, cut_all=False, HMM=True):
    '''
    jieba's main segmentation function; returns a generator.
    Parameters:
    - sentence: the text to segment.
    - cut_all: segmentation mode. True for full mode, False for precise mode.
    - HMM: whether to use the Hidden Markov Model.
    '''
    sentence = strdecode(sentence)  # decode sentence to unicode

    if cut_all:
        # re_han_cut_all = re.compile("([\u4E00-\u9FD5]+)", re.U)
        re_han = re_han_cut_all
        # re_skip_cut_all = re.compile("[^a-zA-Z0-9+#\n]", re.U)
        re_skip = re_skip_cut_all
    else:
        # re_han_default = re.compile("([\u4E00-\u9FD5a-zA-Z0-9+#&\._%]+)", re.U)
        re_han = re_han_default
        # re_skip_default = re.compile("(\r\n|\s)", re.U)
        re_skip = re_skip_default

    if cut_all:
        cut_block = self.__cut_all         # cut_all=True, HMM=True or False
    elif HMM:
        cut_block = self.__cut_DAG         # cut_all=False, HMM=True
    else:
        cut_block = self.__cut_DAG_NO_HMM  # cut_all=False, HMM=False

    blocks = re_han.split(sentence)
    for blk in blocks:
        if not blk:
            continue
        if re_han.match(blk):  # block matched by re_han: segment it
            for word in cut_block(blk):
                yield word
        else:  # otherwise split the block on re_skip
            tmp = re_skip.split(blk)
            for x in tmp:
                if re_skip.match(x):
                    yield x
                elif not cut_all:
                    for xx in x:
                        yield xx
                else:
                    yield x
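The effect of re_han.split with a capturing group is to partition the sentence into alternating matched and unmatched blocks while keeping both; the matched blocks then go to cut_block, and the rest is split again on re_skip. A quick check with the default patterns (the sample sentence is made up for illustration):

```python
import re

# the default patterns quoted in the comments of jieba.cut
re_han_default = re.compile("([\u4E00-\u9FD5a-zA-Z0-9+#&._%]+)", re.U)
re_skip_default = re.compile("(\r\n|\s)", re.U)

# because the pattern has a capturing group, split() keeps the matched
# blocks in the result alongside the unmatched separators
blocks = re_han_default.split("我来到Beijing, 清华大学")
print(blocks)  # → ['', '我来到Beijing', ', ', '清华大学', '']

# a non-matching block such as ", " is then split on whitespace by re_skip;
# the ' ' is yielded as-is, the ',' character by character in precise mode
print(re_skip_default.split(", "))  # → [',', ' ', '']
```

Note that letters and digits fall inside re_han_default, so "我来到Beijing" stays one block and is segmented as a whole.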
As the code shows, jieba.cut returns a generator, so the segmented words are obtained by iterating over it, e.g. with a for loop (jieba.lcut returns the result directly as a list of words). The dispatch on cut_all and HMM selects one of three block-level segmentation functions:
- cut_all=True, HMM=True or False: full mode, i.e. every dictionary word occurring in the sentence is cut out; implemented by __cut_all;
- cut_all=False, HMM=False: precise mode without the HMM; finds the word combination with the maximum joint unigram probability; implemented by __cut_DAG_NO_HMM;
- cut_all=False, HMM=True: precise mode with the HMM; on top of the maximum-joint-probability combination, the HMM is used to recognize unknown words; implemented by __cut_DAG.
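The maximum-joint-probability search can be sketched end to end with a toy dictionary. The frequencies below are invented for illustration (jieba's real dictionary has hundreds of thousands of entries), but the DAG construction and the right-to-left dynamic programming over log probabilities mirror get_DAG and calc:

```python
import math

# toy dictionary with invented frequencies (not jieba's real data)
FREQ = {"北京": 50, "清华": 30, "清华大学": 60, "华大": 5, "大学": 40,
        "北": 2, "京": 2, "清": 2, "华": 2, "大": 2, "学": 2}
total = sum(FREQ.values())

def get_DAG(sentence):
    # DAG[i] = list of end indices j such that sentence[i:j+1] is a word
    DAG = {}
    N = len(sentence)
    for i in range(N):
        ends = [j for j in range(i, N) if sentence[i:j + 1] in FREQ]
        DAG[i] = ends or [i]  # fall back to the single character
    return DAG

def calc(sentence, DAG):
    # dynamic programming from right to left: route[i] holds the best
    # (summed log probability, end index of first word) from position i
    N = len(sentence)
    route = {N: (0, 0)}
    logtotal = math.log(total)
    for i in range(N - 1, -1, -1):
        route[i] = max((math.log(FREQ.get(sentence[i:j + 1]) or 1)
                        - logtotal + route[j + 1][0], j) for j in DAG[i])
    return route

def cut_no_hmm(sentence):
    # walk the best route left to right, yielding one word per hop
    route = calc(sentence, get_DAG(sentence))
    x = 0
    while x < len(sentence):
        y = route[x][1] + 1
        yield sentence[x:y]
        x = y

print("/".join(cut_no_hmm("北京清华大学")))  # → 北京/清华大学
```

Even though 清华, 华大, and 大学 are all dictionary words, the whole word 清华大学 wins because its single log probability beats any sum of sub-word log probabilities.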
def __cut_DAG_NO_HMM(self, sentence):
    DAG = self.get_DAG(sentence)  # build the directed acyclic graph (DAG)
    route = {}
    self.calc(sentence, DAG, route)  # dynamic programming for the best path
    x = 0
    N = len(sentence)
    buf = ''
    while x < N:
        y = route[x][1] + 1
        l_word = sentence[x:y]
        if re_eng.match(l_word) and len(l_word) == 1:
            # buffer consecutive single ASCII letters/digits
            buf += l_word
            x = y
        else:
            if buf:
                yield buf
                buf = ''
            yield l_word
            x = y
    if buf:
        yield buf
        buf = ''
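The buf variable handles ASCII characters inside a Chinese block: single letters and digits are not dictionary words, so the route cuts them out one character at a time, and the buffer stitches consecutive ones back into a single token. A standalone replay of that loop over an already-cut token stream (merge_eng is a hypothetical helper written for this illustration, not part of jieba):

```python
import re

re_eng = re.compile('[a-zA-Z0-9]', re.U)  # jieba's pattern for ASCII word chars

def merge_eng(tokens):
    # replay the buffering loop of __cut_DAG_NO_HMM: consecutive
    # single ASCII tokens are concatenated before being yielded
    buf = ''
    for tok in tokens:
        if re_eng.match(tok) and len(tok) == 1:
            buf += tok
        else:
            if buf:
                yield buf
                buf = ''
            yield tok
    if buf:  # flush a trailing buffer at the end of the block
        yield buf

# the route splits "jieba" into single letters; the buffer reassembles it
print(list(merge_eng(['我', '爱', 'j', 'i', 'e', 'b', 'a'])))
# → ['我', '爱', 'jieba']
```

Without the buffer, a query like "我爱jieba" would be segmented into one token per Latin letter, which is rarely what a caller wants.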