13 Spam Classification 2

1. Reading the data

# ① Read the data
import csv

sms = open("./data/SMSSpamCollection", 'r', encoding='utf-8')   # open the SMS dataset
sms_data = []    # list of message texts
sms_label = []   # list of labels (ham/spam)
csv_reader = csv.reader(sms, delimiter='\t')

# ② Preprocess the data
for line in csv_reader:                              # preprocess each message
    sms_label.append(line[0])
    sms_data.append(preprocessing(line[1]))          # preprocessing() returns a string of valid words
sms.close()                                          # close the file

2. Data preprocessing

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

def preprocessing(text):
    # 1-2. Split the text into sentences, then into words, and collect the tokens
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]

    # 3. Remove stop words (e.g. i, me, my)
    stops = stopwords.words("english")
    tokens = [token for token in tokens if token not in stops]

    # 4. Convert to lowercase and drop tokens shorter than 3 characters
    tokens = [token.lower() for token in tokens if len(token) >= 3]

    # NLTK part-of-speech tagging (the result is not used further here)
    nltk.pos_tag(tokens)

    # 5. Lemmatisation
    lemmatizer = WordNetLemmatizer()                                      # create the lemmatizer
    tokens = [lemmatizer.lemmatize(token, pos='n') for token in tokens]   # nouns (singular/plural)
    tokens = [lemmatizer.lemmatize(token, pos='v') for token in tokens]   # verbs (tense)
    tokens = [lemmatizer.lemmatize(token, pos='a') for token in tokens]   # adjectives (degree)

    return " ".join(tokens)  # return the processed text as a single string of valid words

[For details on the two steps above, see Assignment 12: Naive Bayes spam classification]

3. Data split: training set and test set

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(data, target, test_size=0.2, random_state=0, stratify=target)

# ③ Data split: training set and test set
# Split training and test sets at an 8:2 ratio
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(sms_data, sms_label, test_size=0.2, random_state=0, stratify=sms_label)
print("Total samples:", len(sms_data), "Training samples:", len(x_train), "Test samples:", len(x_test))

4. Text feature extraction

sklearn.feature_extraction.text.CountVectorizer

https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html?highlight=sklearn%20feature_extraction%20text%20tfidfvectorizer

sklearn.feature_extraction.text.TfidfVectorizer

https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html?highlight=sklearn%20feature_extraction%20text%20tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf2 = TfidfVectorizer()

Observe the relationship between emails and their vectors.

Restore a vector back to the email's words (see the sketch below).
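A minimal sketch of one way to do both tasks, assuming x_train and x_test are the preprocessed strings produced by the split above (the names X_train and X_test are added here for illustration and are not from the original post):

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf2 = TfidfVectorizer()
X_train = tfidf2.fit_transform(x_train)    # learn the vocabulary from the training texts
X_test = tfidf2.transform(x_test)          # reuse the same vocabulary for the test texts

# Observe the relationship between an email and its vector
print(X_train.shape)                       # (number of training emails, vocabulary size)
print(X_train.toarray()[0])                # TF-IDF weights of the first training email

# Restore a vector back to the words of the email (word order and duplicates are lost)
print(tfidf2.inverse_transform(X_train[0]))
print(x_train[0])                          # the original preprocessed text, for comparison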

5. Model selection

from sklearn.naive_bayes import GaussianNB

from sklearn.naive_bayes import MultinomialNB

Explain why this model is chosen. (One possible answer is sketched below.)
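One common justification: the TF-IDF features are sparse, non-negative, frequency-like values, which fits the multinomial assumption, whereas GaussianNB assumes continuous, normally distributed features and needs a dense array. A minimal training sketch under that choice, reusing the X_train/X_test matrices and y_train labels assumed above:

from sklearn.naive_bayes import MultinomialNB

mnb = MultinomialNB()              # multinomial NB works directly on sparse TF-IDF matrices
mnb.fit(X_train, y_train)          # train on the TF-IDF features of the training set
y_predict = mnb.predict(X_test)    # predict labels for the test set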

6. Model evaluation: confusion matrix and classification report

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_predict)   # avoid shadowing the imported function name

Explain the meaning of the confusion matrix. (A reading guide is sketched below.)
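A hedged reading guide, not the original post's code: with this dataset's 'ham'/'spam' labels, scikit-learn orders the matrix by sorted label, so rows are true classes and columns are predicted classes, and treating 'spam' as the positive class gives:

# cm[i][j] = number of samples whose true class is i and predicted class is j
# With labels sorted as ['ham', 'spam']:
#   cm[0][0]  ham  correctly predicted as ham    (true negatives for "spam")
#   cm[0][1]  ham  wrongly   predicted as spam   (false positives)
#   cm[1][0]  spam wrongly   predicted as ham    (false negatives)
#   cm[1][1]  spam correctly predicted as spam   (true positives)
tn, fp, fn, tp = cm.ravel()
print("TN:", tn, "FP:", fp, "FN:", fn, "TP:", tp)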

from sklearn.metrics import classification_report

Explain what accuracy, precision, recall, and the F-value each mean. (The standard definitions are noted in the sketch below.)
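A minimal usage sketch, assuming y_test and y_predict from the steps above; the definitions in the comments are the standard ones:

from sklearn.metrics import classification_report

# accuracy  = (TP + TN) / total      proportion of all samples classified correctly
# precision = TP / (TP + FP)         of the samples predicted as a class, how many really belong to it
# recall    = TP / (TP + FN)         of the samples truly in a class, how many were found
# F1        = 2 * precision * recall / (precision + recall)    harmonic mean of precision and recall
print(classification_report(y_test, y_predict))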

7. Comparison and summary

If CountVectorizer is used to generate the text features instead, how does the result compare with TfidfVectorizer? (A comparison sketch follows.)
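A sketch of one way to run that comparison, reusing the preprocessed x_train/x_test texts and the labels from above (variable names are illustrative). CountVectorizer keeps raw term counts, while TfidfVectorizer down-weights words that appear in many messages; which one scores higher has to be checked on this dataset.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

count_vec = CountVectorizer()
X_train_cnt = count_vec.fit_transform(x_train)   # raw term counts instead of TF-IDF weights
X_test_cnt = count_vec.transform(x_test)

mnb_cnt = MultinomialNB()
mnb_cnt.fit(X_train_cnt, y_train)
y_predict_cnt = mnb_cnt.predict(X_test_cnt)

print(classification_report(y_test, y_predict_cnt))   # compare with the TF-IDF report above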


Reposted from www.cnblogs.com/HvYan/p/12938308.html