Using CountVectorizer and TF-IDF to encode text / DNA sequences

In text recognition and natural language processing, CountVectorizer and TF-IDF are two common ways of encoding text. Once DNA is broken into k-mers, the k-mers can be encoded exactly like words. For background on encoding DNA, see my earlier post on encoding strings (such as DNA and protein sequences) for machine learning and neural networks.

This post shows how to break DNA sequences into k-mers and then encode them with CountVectorizer and TF-IDF. The same pipeline encodes ordinary text directly; if your input is plain text, start from step 2.

1. Break DNA into k-mers and format them as words

# Read FASTA sequences with Biopython
from Bio import SeqIO
import pandas as pd

chromid = []
chromseq = []
for sample in SeqIO.parse("your_file.fa", "fasta"):
    chromid.append(sample.id)
    chromseq.append(str(sample.seq))
chrom = pd.DataFrame({"id": chromid, "seq": chromseq})  # collect ids and sequences
chrom.head()

The sequences are read in and stored as strings:

    id            seq
0   HF952106      atagtgaaaaagagatatttaacttgttgtctgatcttcgtgaaaa...
1   APOS01000035  ataaataaagtagcttgttgtgattttcggattaaaattgcgtcaa...
2   JKMH01000030  gctggtgaactccggcaccgaggccactatgagcgccgtgcggctg...
3   JDYX01000005  aaatgcttgagggttcgtaattaacttgaatacatcttcatcaata...
4   ALQS01000022  aggattggatcgaggtcaatgactaaagttgttttagtgacgggct...

# Break each sequence into overlapping 6-mers

def getKmers(sequence, size=6):
    # Slide a window of length `size` along the sequence, one base at a time
    return [sequence[x:x+size].lower() for x in range(len(sequence) - size + 1)]

chrom['6mers'] = chrom.apply(lambda x: getKmers(x['seq']), axis=1)
chrom.head()
    id            seq                                                 6mers
0   HF952106      atagtgaaaaagagatatttaacttgttgtctgatcttcgtgaaaa...   [atagtg, tagtga, agtgaa, gtgaaa, tgaaaa, gaaaa...
1   APOS01000035  ataaataaagtagcttgttgtgattttcggattaaaattgcgtcaa...   [ataaat, taaata, aaataa, aataaa, ataaag, taaag...
2   JKMH01000030  gctggtgaactccggcaccgaggccactatgagcgccgtgcggctg...   [gctggt, ctggtg, tggtga, ggtgaa, gtgaac, tgaac...
3   JDYX01000005  aaatgcttgagggttcgtaattaacttgaatacatcttcatcaata...   [aaatgc, aatgct, atgctt, tgcttg, gcttga, cttga...
4   ALQS01000022  aggattggatcgaggtcaatgactaaagttgttttagtgacgggct...   [aggatt, ggattg, gattgg, attgga, ttggat, tggat...
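As a quick sanity check, applying getKmers to a short made-up sequence shows the overlapping sliding window at work:

# Toy input (made up for illustration): an 8-base sequence yields
# 8 - 6 + 1 = 3 overlapping, lowercased 6-mers
print(getKmers("ATAGTGAA"))
# ['atagtg', 'tagtga', 'agtgaa']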
# Convert the k-mer lists into space-separated "words"
textword = list(chrom["6mers"])    # list of k-mer lists
for i in range(len(textword)):     # join each list into one long string
    textword[i] = ' '.join(textword[i])
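
Each element of textword is now one space-separated "sentence" of 6-mers, which is exactly the input format CountVectorizer and TfidfVectorizer expect. The same conversion can also be written as a one-line pandas expression:

# Equivalent one-liner: join each k-mer list into a space-separated string
textword = chrom["6mers"].apply(' '.join).tolist()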

2. Encoding k-mers with CountVectorizer (word-count vectors)

If your input is plain text, skip step 1 and start here; a list of text documents is handled exactly like textword.
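
For example, a minimal sketch on a made-up two-sentence English corpus (the sentences are placeholders, not this post's data):

from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat",   # each document is a single string,
        "the dog ate the cat"]      # just like an entry of textword
cv_demo = CountVectorizer()
X_demo = cv_demo.fit_transform(docs)
print(cv_demo.get_feature_names_out())
# ['ate' 'cat' 'dog' 'mat' 'on' 'sat' 'the']
print(X_demo.toarray())
# [[0 1 0 1 1 1 2]
#  [1 1 1 0 0 0 2]]

Returning to the k-mer corpus textword: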

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer()  # default ngram_range=(1, 1)
# cv = CountVectorizer(ngram_range=(4, 4))  # count 4-grams of tokens instead
X_cv = cv.fit_transform(textword)

# Get the column names (the vocabulary of 6-mers)
kmers = cv.get_feature_names_out()  # use get_feature_names() on scikit-learn < 1.0

Inspect the transformed matrix:
import pandas as pd
pd.DataFrame(X_cv.toarray(), columns=kmers).head()
aaaaaa	aaaaac	aaaaag	aaaaat	aaaaca	aaaacc	aaaacg	aaaact	aaaaga	aaaagc	...	ttttcg	ttttct	ttttga	ttttgc	ttttgg	ttttgt	ttttta	tttttc	tttttg	tttttt
0	3	2	3	6	3	1	0	0	8	0	...	0	3	1	1	4	1	8	2	2	2
1	1	1	1	3	1	1	0	2	0	1	...	1	0	0	1	2	1	3	1	2	2
2	0	0	0	0	0	0	0	0	0	0	...	0	0	0	0	0	0	0	0	0	0
3	1	0	1	1	1	0	1	1	0	1	...	2	2	2	1	0	4	7	5	1	7
4	0	1	0	1	1	0	0	2	4	2	...	1	0	5	1	4	1	0	1	3	2
5 rows × 4096 columns

The resulting word-count matrix X_cv can be fed directly into a machine learning model for training.

3. Encoding with TF-IDF

3.1 TF-IDF encoding directly from text

from sklearn.feature_extraction.text import TfidfVectorizer as TFIDF

vec = TFIDF()  # defaults include l2 normalization (norm='l2')

tfidf = vec.fit(textword)
X1 = tfidf.transform(textword)
# Inspect the tf-idf matrix
pd.DataFrame(X1.toarray(), columns=kmers).head()
	aaaaaa	aaaaac	aaaaag	aaaaat	aaaaca	aaaacc	aaaacg	aaaact	aaaaga	aaaagc	...	ttttcg	ttttct	ttttga	ttttgc	ttttgg	ttttgt	ttttta	tttttc	tttttg	tttttt
0	0.034601	0.022988	0.033851	0.065821	0.035696	0.012726	0.000000	0.000000	0.097071	0.000000	...	0.000000	0.035694	0.011770	0.012287	0.053774	0.012437	0.092475	0.022493	0.022888	0.023118
1	0.012082	0.012040	0.011820	0.034474	0.012464	0.013331	0.000000	0.027011	0.000000	0.012556	...	0.014536	0.000000	0.000000	0.012871	0.028164	0.013028	0.036326	0.011781	0.023976	0.024216
2	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	...	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000
3	0.011787	0.000000	0.011531	0.011211	0.012160	0.000000	0.013553	0.013175	0.000000	0.012249	...	0.028362	0.024318	0.024057	0.012556	0.000000	0.050838	0.082690	0.057466	0.011695	0.082686
4	0.000000	0.011881	0.000000	0.011340	0.012299	0.000000	0.000000	0.026654	0.050170	0.024780	...	0.014344	0.000000	0.060834	0.012701	0.055585	0.012856	0.000000	0.011625	0.035489	0.023896
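
For reference, scikit-learn's default TF-IDF computes the smoothed idf(t) = ln((1+n)/(1+df(t))) + 1 and then l2-normalizes each row. A minimal sketch reproducing X1 by hand from the raw counts of step 2:

import numpy as np

counts = X_cv.toarray().astype(float)        # raw 6-mer counts per sequence
n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)                # document frequency of each 6-mer
idf = np.log((1 + n_docs) / (1 + df)) + 1    # smoothed idf (sklearn default)
tfidf_manual = counts * idf                  # tf * idf
tfidf_manual /= np.linalg.norm(tfidf_manual, axis=1, keepdims=True)  # l2 rows
# tfidf_manual now matches X1.toarray() up to floating-point error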

3.2 Converting CountVectorizer word-count vectors to TF-IDF

# Convert the word-count vectors into tf-idf
from sklearn.feature_extraction.text import TfidfTransformer

transformer = TfidfTransformer()
training_tfidf = transformer.fit_transform(X_cv)

# Inspect
pd.DataFrame(training_tfidf.toarray(), columns=kmers).head()
aaaaaa	aaaaac	aaaaag	aaaaat	aaaaca	aaaacc	aaaacg	aaaact	aaaaga	aaaagc	...	ttttcg	ttttct	ttttga	ttttgc	ttttgg	ttttgt	ttttta	tttttc	tttttg	tttttt
0	0.034601	0.022988	0.033851	0.065821	0.035696	0.012726	0.000000	0.000000	0.097071	0.000000	...	0.000000	0.035694	0.011770	0.012287	0.053774	0.012437	0.092475	0.022493	0.022888	0.023118
1	0.012082	0.012040	0.011820	0.034474	0.012464	0.013331	0.000000	0.027011	0.000000	0.012556	...	0.014536	0.000000	0.000000	0.012871	0.028164	0.013028	0.036326	0.011781	0.023976	0.024216
2	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	...	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000	0.000000
3	0.011787	0.000000	0.011531	0.011211	0.012160	0.000000	0.013553	0.013175	0.000000	0.012249	...	0.028362	0.024318	0.024057	0.012556	0.000000	0.050838	0.082690	0.057466	0.011695	0.082686
4	0.000000	0.011881	0.000000	0.011340	0.012299	0.000000	0.000000	0.026654	0.050170	0.024780	...	0.014344	0.000000	0.060834	0.012701	0.055585	0.012856	0.000000	0.011625	0.035489	0.023896
5 rows × 4096 columns

As shown, the two routes produce exactly the same TF-IDF matrix.
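This can be checked numerically:

import numpy as np
# The TfidfVectorizer route (X1) and the CountVectorizer + TfidfTransformer
# route (training_tfidf) yield the same matrix
assert np.allclose(X1.toarray(), training_tfidf.toarray())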

4. Training a machine learning model

This example trains a multinomial naive Bayes classifier, but any other model can be substituted. The CountVectorizer word-count vectors are used here; to train on TF-IDF features instead, simply replace X_cv with the TF-IDF matrix.

from sklearn.model_selection import train_test_split

# `target` is the per-sequence label vector (0/1 in this example); it is assumed
# to have been defined alongside the sequences and is not shown in this post.
X_train, X_test, Y_train, Y_test = train_test_split(
    X_cv, target, test_size=0.3, random_state=420)  # for tf-idf features, replace X_cv with X1

from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB(alpha=0.1)
classifier.fit(X_train, Y_train)
y_pred = classifier.predict(X_test)

# Compute accuracy, precision, recall, and f1
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
print("Confusion matrix\n")
print(pd.crosstab(pd.Series(Y_test, name='Actual'), pd.Series(y_pred, name='Predicted')))

def get_metrics(Y_test, y_predicted):
    accuracy = accuracy_score(Y_test, y_predicted)
    precision = precision_score(Y_test, y_predicted, average='weighted')
    recall = recall_score(Y_test, y_predicted, average='weighted')
    f1 = f1_score(Y_test, y_predicted, average='weighted')
    return accuracy, precision, recall, f1

accuracy, precision, recall, f1 = get_metrics(Y_test, y_pred)
print("accuracy = %.3f \nprecision = %.3f \nrecall = %.3f \nf1 = %.3f" % (accuracy, precision, recall, f1))
Confusion matrix

Predicted     0     1
Actual               
0          1802  1323
1          1099  2085
accuracy = 0.616 
precision = 0.616 
recall = 0.616 
f1 = 0.615
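
To swap in a different model, only the classifier lines change. A sketch with logistic regression (an arbitrary alternative, not from the original experiment):

from sklearn.linear_model import LogisticRegression

# Any scikit-learn classifier accepts the same sparse feature matrix;
# max_iter is raised because high-dimensional count features converge slowly
classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, Y_train)
y_pred = classifier.predict(X_test)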

5. Training with a neural network

Naive Bayes and similar classical machine learning methods used to give good results, but with the rapid progress of deep learning, LSTM and GRU models now dominate natural language processing (NLP). Once DNA has been turned into k-mers it can be processed like natural language; see my post "Tensorflow 2.0 LSTM训练模型" for details. The last part of that post, on encoding words as unique integers for a TensorFlow embedding layer, can consume the textword corpus produced in step 1 directly.
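
A minimal sketch of that idea (the layer sizes and sequence length below are illustrative placeholders, not values from the referenced post): integer-encode the textword sentences and feed them to an embedding + LSTM classifier.

import numpy as np
import tensorflow as tf

# Give each distinct 6-mer a unique integer id, then pad to equal length
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(textword)
seqs = tokenizer.texts_to_sequences(textword)
seqs = tf.keras.preprocessing.sequence.pad_sequences(seqs, maxlen=500)

# Embedding + LSTM binary classifier (matching the 0/1 labels above)
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(tokenizer.word_index) + 1,
                              output_dim=32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(seqs, np.array(target), epochs=5, validation_split=0.3)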

Reposted from blog.csdn.net/weixin_44022515/article/details/104103895