Learn Python with Me: Machine Learning, Data Preprocessing (Confusion Matrix, Pima Indians Diabetes Example) (Day 2)

First, a quick note: I'm using the Jupyter Notebook introduced in the previous post, so some cells are Markdown and don't carry "#" comments. This walkthrough again uses the two libraries covered last time, sklearn and pandas, for loading and preprocessing the data. The example is the Pima Indians diabetes dataset; the data file is shared via this Baidu netdisk link: https://pan.baidu.com/s/11UPMzeugLPZ4Kce-FXGH-g
Extraction code: 3lbm




## Load the built-in iris dataset from sklearn
from sklearn import datasets
iris=datasets.load_iris()
## Display the iris data
print(iris.data)
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]
 [5.4 3.9 1.7 0.4]
 ...
 [6.5 3.  5.2 2. ]
 [6.2 3.4 5.4 2.3]
 [5.9 3.  5.1 1.8]]
(output abridged: the full 150 x 4 iris feature matrix prints here)
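For reference, the object returned by load_iris() also exposes the column and class names through standard sklearn attributes; a minimal sketch:

# Quick look at the dataset's metadata (standard sklearn Bunch attributes)
print(iris.feature_names)   # the 4 measurement columns printed above
print(iris.target_names)    # the 3 iris species
print(iris.target.shape)    # (150,) class labels aligned with iris.data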
# Data preprocessing: load the Pima Indians diabetes dataset with pandas
import pandas as pd

path='data/pima-indians-diabetes.data.csv'
pima=pd.read_csv(path)
pima.head()
pregnant	glucose	bp	skin	insulin	bimi	pedigree	age	label
0	6	148	72	35	0	33.6	0.627	50	1
1	1	85	66	29	0	26.6	0.351	31	0
2	8	183	64	0	0	23.3	0.672	32	1
3	1	89	66	23	94	28.1	0.167	21	0
4	0	137	40	35	168	43.1	2.288	33	1
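Before picking features it helps to glance at the overall shape and statistics of the table. A minimal sketch using standard pandas calls (column names follow the header above, including the 'bimi' spelling); note that in this dataset a value of 0 in columns such as glucose or insulin typically stands in for a missing measurement:

# Basic overview of the loaded DataFrame
print(pima.shape)        # number of rows and columns
print(pima.describe())   # per-column statistics
# count the zeros that act as missing-value placeholders
print((pima[['glucose','bp','skin','insulin','bimi']]==0).sum())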
# Select the four feature columns and the label column
feature_names=['pregnant','insulin','bimi','age']
X=pima[feature_names]
y=pima.label
# Confirm the dimensions of X and y
print(X.shape)
print(y.shape)
(768, 4)
(768,)
# Split the data into training and test sets
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=0)
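train_test_split defaults to a 75/25 split, so with 768 rows this leaves 576 training and 192 test samples; a quick sketch to confirm (the stratify option shown in the comment is optional and not used in this post):

# Confirm the split sizes produced by the default test_size=0.25
print(X_train.shape, X_test.shape)   # (576, 4) (192, 4)
print(y_train.shape, y_test.shape)   # (576,) (192,)
# optional: stratify=y keeps the 0/1 ratio the same in both splits
# X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=0,stratify=y)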
# Train the model
from sklearn.linear_model import LogisticRegression
logreg=LogisticRegression()
logreg.fit(X_train,y_train)
D:\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
  FutureWarning)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=100,
                   multi_class='warn', n_jobs=None, penalty='l2',
                   random_state=None, solver='warn', tol=0.0001, verbose=0,
                   warm_start=False)
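The FutureWarning above only says that sklearn's default solver will change in version 0.22. A sketch that silences it by naming a solver explicitly (liblinear matches the old default; the fitted coefficients may differ very slightly):

# Specify the solver explicitly to silence the FutureWarning
logreg=LogisticRegression(solver='liblinear')
logreg.fit(X_train,y_train)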
# Predict on the test set
y_pred=logreg.predict(X_test)
# Evaluate with accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test,y_pred))
0.6927083333333334
# Check the number of positive and negative samples
y_test.value_counts()
0    130
1     62
Name: label, dtype: int64
# Proportion of 1s (positive class)
y_test.mean()
0.3229166666666667
# Proportion of 0s (negative class)
1-y_test.mean()
0.6770833333333333
# Null accuracy: the accuracy of always predicting the majority class
max(y_test.mean(),1-y_test.mean())
0.6770833333333333
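This "null accuracy" is simply the score of a model that always predicts the majority class (0). The same baseline can be reproduced with sklearn's DummyClassifier; a minimal sketch:

# Baseline classifier that always predicts the most frequent training class
from sklearn.dummy import DummyClassifier
dummy=DummyClassifier(strategy='most_frequent')
dummy.fit(X_train,y_train)
print(dummy.score(X_test,y_test))   # should match max(y_test.mean(), 1-y_test.mean())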
# Compute and display the confusion matrix
print(metrics.confusion_matrix(y_test,y_pred))
[[118  12]
 [ 47  15]]
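For a labeled view of the same table, pandas (imported above as pd) can cross-tabulate the actual and predicted labels; rows are actual, columns are predicted:

# Same confusion matrix with row/column labels
print(pd.crosstab(y_test,y_pred,rownames=['actual'],colnames=['predicted']))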
A confusion matrix (also called an error matrix) measures how accurate a classification algorithm is:
True Positives (TP): correctly predicted samples that are actually positive (actual 1, predicted 1)
True Negatives (TN): correctly predicted samples that are actually negative (actual 0, predicted 0)
False Positives (FP): incorrectly predicted samples that are actually negative (actual 0, predicted 1)
False Negatives (FN): incorrectly predicted samples that are actually positive (actual 1, predicted 0)
# Show the first 25 actual vs. predicted results
print("true:",y_test.values[0:25])
print("pred:",y_pred[0:25])
true: [1 0 0 1 0 0 1 1 0 0 1 1 0 0 0 0 1 0 0 0 1 1 0 0 0]
pred: [0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
# Assign the four confusion-matrix entries
confusion=metrics.confusion_matrix(y_test,y_pred)
TN=confusion[0,0]
FP=confusion[0,1]
FN=confusion[1,0]
TP=confusion[1,1]
print(TN,FP,FN,TP)
118 12 47 15
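Equivalently, since sklearn lays out a binary confusion matrix as [[TN, FP], [FN, TP]], ravel() unpacks the four entries in one line:

# Same four values in one line
TN,FP,FN,TP=metrics.confusion_matrix(y_test,y_pred).ravel()
print(TN,FP,FN,TP)   # 118 12 47 15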
### Confusion matrix metrics
# Accuracy: proportion of correctly predicted samples among all samples
# Accuracy = (TP+TN)/(TP+TN+FP+FN)
accuracy=(TP+TN)/(TP+TN+FP+FN)
print(accuracy)
print(metrics.accuracy_score(y_test,y_pred))
0.6927083333333334
0.6927083333333334
# Misclassification rate: proportion of incorrectly predicted samples among all samples
# Misclassification Rate = (FP+FN)/(TP+TN+FP+FN)
mis_rate=(FP+FN)/(TP+TN+FP+FN)
print(mis_rate)
print(1-metrics.accuracy_score(y_test,y_pred))
0.3072916666666667
0.30729166666666663
# Sensitivity (recall): among actual positives, the proportion predicted correctly
# Sensitivity = Recall = TP/(TP+FN)
recall=TP/(TP+FN)
print(recall)
0.24193548387096775
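As a cross-check, sklearn's built-in recall_score computes the same TP/(TP+FN):

# Should print the same 0.24193548387096775
print(metrics.recall_score(y_test,y_pred))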
# Specificity: among actual negatives, the proportion predicted correctly
# Specificity = TN/(TN+FP)
specificity=TN/(TN+FP)
print(specificity)
0.9076923076923077
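sklearn has no dedicated specificity function, but specificity is just the recall of the negative class, so pos_label=0 reproduces the same number:

# Specificity as the recall of class 0
print(metrics.recall_score(y_test,y_pred,pos_label=0))   # 0.9076923076923077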
# Precision: among samples predicted positive, the proportion predicted correctly
# Precision = TP/(TP+FP)
precision=TP/(TP+FP)
print(precision)
0.5555555555555556
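Again, the built-in precision_score gives the same TP/(TP+FP) as a cross-check:

# Should print the same 0.5555555555555556
print(metrics.precision_score(y_test,y_pred))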
# F1 score: a single metric that combines Precision and Recall
# F1 Score = 2 * Precision * Recall / (Precision + Recall)
F1_score=2*precision*recall/(precision+recall)
print(F1_score)
0.3370786516853933
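metrics.f1_score returns the same value, and classification_report summarizes precision, recall, and F1 for both classes in one call; a quick sketch:

# Built-in F1 and a per-class summary
print(metrics.f1_score(y_test,y_pred))              # same 0.3370786516853933
print(metrics.classification_report(y_test,y_pred))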

Reposted from blog.csdn.net/vs20s18/article/details/104711434