A Summary of Commonly Used TensorFlow APIs

API 1: tf.contrib.learn.preprocessing.VocabularyProcessor

import tensorflow as tf
tf.contrib.learn.preprocessing.VocabularyProcessor(max_document_length, min_frequency=0, vocabulary=None, tokenizer_fn=None)

Parameters:

max_document_length: the maximum document length. Texts longer than this are truncated; shorter texts are padded with 0.
min_frequency: the minimum word frequency; words that occur too rarely are not added to the vocabulary (see the sketch after the example below).
vocabulary: a CategoricalVocabulary object.
tokenizer_fn: the tokenizer function (also shown in the sketch below).

Code:

from tensorflow.contrib import learn
import numpy as np

max_document_length = 4
x_text = [
    'i love you',
    'me too'
]
# Build the vocabulary from the training texts.
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
vocab_processor.fit(x_text)
# Map a new sentence to word ids; slots beyond the sentence length are padded with 0.
print(next(vocab_processor.transform(['i me too'])).tolist())
# fit_transform builds the vocabulary and converts the texts in one step.
x = np.array(list(vocab_processor.fit_transform(x_text)))
print(x)

Output:

[1, 4, 5, 0]
[[1 2 3 0]
 [4 5 0 0]]
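
The min_frequency and tokenizer_fn parameters are not exercised in the example above, so here is a minimal sketch of how they can be passed. It assumes TensorFlow 1.x with tf.contrib available; the whitespace_tokenizer helper is illustrative, not part of the library, and the exact frequency cutoff (strict vs. inclusive) should be checked against the contrib source.

from tensorflow.contrib import learn
import numpy as np

# Illustrative tokenizer_fn: it must accept an iterable of documents
# and yield a list of tokens for each document.
def whitespace_tokenizer(iterator):
    for document in iterator:
        yield document.split()

x_text = [
    'i love you',
    'i love tensorflow',
    'me too'
]

# min_frequency trims rare words out of the vocabulary; trimmed and
# unknown words map to id 0, as does padding.
vocab_processor = learn.preprocessing.VocabularyProcessor(
    max_document_length=4,
    min_frequency=1,
    tokenizer_fn=whitespace_tokenizer)

x = np.array(list(vocab_processor.fit_transform(x_text)))
print(x)                                 # ids for frequent words, 0 for trimmed words and padding
print(len(vocab_processor.vocabulary_))  # vocabulary size, including the id-0 entry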

Reprinted from blog.csdn.net/DylanYuan/article/details/83746440