【346】TF-IDF

Ref: Text mining preprocessing: vectorization and the Hash Trick (文本挖掘预处理之向量化与Hash Trick)

Ref: Text mining preprocessing: TF-IDF (文本挖掘预处理之TF-IDF)

>>> from sklearn.feature_extraction.text import TfidfTransformer
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> corpus=["I come to China to travel", 
    "This is a car polupar in China",          
    "I love tea and Apple ",   
    "The work is to write some papers in science"]
>>> vectorizer=CountVectorizer()
>>> transformer = TfidfTransformer()
>>> tfidf = transformer.fit_transform(vectorizer.fit_transform(corpus))
>>> print(tfidf)
  (0, 16)	0.4424621378947393
  (0, 15)	0.697684463383976
  (0, 4)	0.4424621378947393
  (0, 3)	0.348842231691988
  (1, 14)	0.45338639737285463
  (1, 9)	0.45338639737285463
  (1, 6)	0.3574550433419527
  (1, 5)	0.3574550433419527
  (1, 3)	0.3574550433419527
  (1, 2)	0.45338639737285463
  (2, 12)	0.5
  (2, 7)	0.5
  (2, 1)	0.5
  (2, 0)	0.5
  (3, 18)	0.3565798233381452
  (3, 17)	0.3565798233381452
  (3, 15)	0.2811316284405006
  (3, 13)	0.3565798233381452
  (3, 11)	0.3565798233381452
  (3, 10)	0.3565798233381452
  (3, 8)	0.3565798233381452
  (3, 6)	0.2811316284405006
  (3, 5)	0.2811316284405006
>>> print(vectorizer.get_feature_names())
['and', 'apple', 'car', 'china', 'come', 'in', 'is', 'love', 'papers', 'polupar', 'science', 'some', 'tea', 'the', 'this', 'to', 'travel', 'work', 'write']

Note: an entry such as (0, 16) means the word at column index 16 of the first document; that index corresponds to "travel" in the feature-name list above, and the other entries read the same way.
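The same mapping can also be done programmatically. Below is a minimal sketch (my own addition, not part of the original session) that reuses the tfidf and vectorizer objects from above; it calls get_feature_names() as in the post, while newer scikit-learn versions use get_feature_names_out():

names = vectorizer.get_feature_names()
coo = tfidf.tocoo()                                # COO format exposes parallel row/col/data arrays
for row, col, value in zip(coo.row, coo.col, coo.data):
    print(row, names[col], value)                  # e.g. 0 travel 0.4424...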

Continuing from the above, we can read off the tf-idf value of each term: the tfidf variable holds a (4, 19) matrix, with one row per sentence and one column per term.

>>> tfidf_array = tfidf.toarray()    # get the dense array; iterate over it and convert each row to a list
>>> names_list = vectorizer.get_feature_names()    # get the list of feature names
>>> for i in range(0, len(corpus)):
	print(corpus[i],'\n')
	tmp_list = tfidf_array[i].tolist()
	for j in range(0, len(names_list)):
		if tmp_list[j] != 0:
			if len(names_list[j])>=7:
				print(names_list[j],'\t',tmp_list[j])
			else:
				print(names_list[j],'\t\t',tmp_list[j])
	print('')

	
I come to China to travel 

china 		 0.348842231691988
come 		 0.4424621378947393
to 		 0.697684463383976
travel 		 0.4424621378947393

This is a car polupar in China 

car 		 0.45338639737285463
china 		 0.3574550433419527
in 		 0.3574550433419527
is 		 0.3574550433419527
polupar 	 0.45338639737285463
this 		 0.45338639737285463

I love tea and Apple  

and 		 0.5
apple 		 0.5
love 		 0.5
tea 		 0.5

The work is to write some papers in science 

in 		 0.2811316284405006
is 		 0.2811316284405006
papers 		 0.3565798233381452
science 	 0.3565798233381452
some 		 0.3565798233381452
the 		 0.3565798233381452
to 		 0.2811316284405006
work 		 0.3565798233381452
write 		 0.3565798233381452

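Where do these weights come from? With TfidfTransformer's defaults (smooth_idf=True, norm='l2'), idf(t) = ln((1 + n) / (1 + df(t))) + 1 and each row is then L2-normalized. A minimal sketch (not from the original post) reproducing the first sentence's row by hand:

import math

n_docs = 4
# term counts in "I come to China to travel"; the single-character token "I"
# is dropped by CountVectorizer's default tokenizer
tf = {'china': 1, 'come': 1, 'to': 2, 'travel': 1}
# document frequencies of those terms across the 4-sentence corpus
df = {'china': 2, 'come': 1, 'to': 2, 'travel': 1}

raw = {t: tf[t] * (math.log((1 + n_docs) / (1 + df[t])) + 1) for t in tf}
norm = math.sqrt(sum(v * v for v in raw.values()))
for t, v in raw.items():
    print(t, v / norm)    # china ≈ 0.3488, come ≈ 0.4425, to ≈ 0.6977, travel ≈ 0.4425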

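As a side note, the two-step CountVectorizer + TfidfTransformer pipeline can be collapsed into a single TfidfVectorizer; a minimal sketch (same defaults, so it should produce the same (4, 19) matrix as above):

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vec = TfidfVectorizer()
tfidf2 = tfidf_vec.fit_transform(corpus)    # equivalent to CountVectorizer + TfidfTransformer
print(tfidf2.toarray())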

Reposted from www.cnblogs.com/alex-bn-lee/p/10212235.html