Chapter 5 (Drug Selection Decision Support Analysis)

I. Connecting to the Database

Installing MySQL also installs an ODBC driver. Search for "ODBC" in the Start menu, open the ODBC Data Source Administrator, and add the database you want to connect to under the User DSN tab.
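
As a quick way to check the DSN from Python, here is a minimal sketch using pyodbc; the DSN name "nw_dsn" is a placeholder for whatever name you gave the data source.

import pyodbc

# "nw_dsn" is a placeholder; replace it with the User DSN configured above
conn = pyodbc.connect("DSN=nw_dsn")
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM 订单")   # count rows in the Orders table
print(cursor.fetchone()[0])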

1. Select the top 10 customers by spending

Using the Orders (订单), Order Details (订单明细), and Customers (客户) tables, write a SQL statement that sums the amount per order ID, sorts by total amount in descending order and keeps the top 10, joins the result back to the Orders table on order ID, and finally joins to the Customers table on customer ID.

Importing the data into MySQL: open SQLyog, click the target database — Import External Data — Start New Job — Next — System/User DSN — pick the data source added in ODBC earlier — select the relevant tables (客户, 订单, 订单明细) — Next — OK.
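
Alternatively, the same tables can be pulled straight into pandas over the ODBC connection, reusing the conn object from the sketch above:

import pandas as pd

# read each table into a DataFrame over the same ODBC connection
customers = pd.read_sql("SELECT * FROM 客户", conn)
orders = pd.read_sql("SELECT * FROM 订单", conn)
details = pd.read_sql("SELECT * FROM 订单明细", conn)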

The query in MySQL:

SELECT * FROM 客户 AS d
INNER JOIN
    (SELECT b.订单ID, 客户ID, 总金额
     FROM 订单 AS a
     INNER JOIN (SELECT 订单ID, SUM(单价*数量*(1-折扣)) AS 总金额
                 FROM 订单明细
                 GROUP BY 订单ID
                 ORDER BY 总金额 DESC
                 LIMIT 10) AS b
     ON a.`订单ID` = b.订单ID) AS c
ON d.`客户ID` = c.客户ID;

II. Analyzing Associations Between Products

The data structure is shown in the figure below.

Steps: Source node — Type node — Web node.

Thick lines indicate strong associations; thin lines indicate weak ones.
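
There is no Web node in pandas, but the same idea — counting how often two products appear in the same order — can be sketched roughly as below; the file path and the 订单ID/产品 column names are assumptions, since the actual layout is only shown in the figure.

import pandas as pd
from itertools import combinations
from collections import Counter

# assumed layout: one row per (order, product) pair; hypothetical path
detail = pd.read_csv(r"C:\Users\Administrator\Desktop\订单明细.csv", encoding="gbk")

pair_counts = Counter()
for _, products in detail.groupby("订单ID")["产品"]:
    # every unordered pair of distinct products bought in the same order
    for a, b in combinations(sorted(set(products)), 2):
        pair_counts[(a, b)] += 1

# the most frequent pairs correspond to the thick (strong) lines in the Web node
print(pair_counts.most_common(10))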

III. Direct Mail Target Customer Mining

Data structure

Variables: age, sex, region (where the customer lives), income, married, number of children, car ownership, savings account (save_act), current account (current_act), mortgage, and the response (pep: yes = responded, no = did not respond).

Modeling

Steps: Source node — Partition (split into training and test sets) — Type (define inputs and the target) — C5.0 (a decision tree that splits on information gain ratio) — Analysis (model evaluation).

Evaluation:

With 10-fold cross-validation, the average accuracy is 91.3%.

Note: in SPSS Modeler, categorical fields whose values are strings do not need to be converted to numbers or one-hot encoded (dummy variables) the way they do for sklearn; SPSS Modeler encodes them automatically.
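
For comparison, the one-hot route in pandas would look something like this (the column names come from the direct-mail data described above):

import pandas as pd

df = pd.read_csv(r"C:\Users\Administrator\Desktop\mailshot.csv", encoding="utf-8")
# expand each string-valued categorical column into 0/1 indicator columns
dummies = pd.get_dummies(df[["sex", "region", "married", "car"]], drop_first=True)
print(dummies.head())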

Applying the model:

Let's try it with sklearn.

import pandas as pd
import numpy as np

data = pd.read_csv(r"C:\Users\Administrator\Desktop\mailshot.csv",encoding = "utf-8")

data.drop(["id"], inplace=True, axis=1)  # drop the id column

# map the categorical string values to numbers
tra_dict = {"MALE" : 0,"FEMALE":1}
data["sex"] = data["sex"].map(tra_dict)
tra_dict1 = {"NO":0,"YES":1}
for u in ["married","car","save_act","current_act","mortgage","pep"]:
    data[u] = data[u].map(tra_dict1)  
tra_dic2 = {"INNER_CITY":0,"TOWN":1,"RURAL":2,"SUBURBAN":3}
data.region = data.region.map(tra_dic2)

# separate the features (x) from the target (y)
col_list = list(data.columns)
col_list.pop()
x_data = data[col_list]
y_data = data.pep

# modeling
# tuning with cross-validation, so no separate test/validation split is carved out, which helps guard against overfitting
from sklearn import tree
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold

clf = tree.DecisionTreeClassifier(criterion="gini",random_state = 1)
param_grid = dict(max_depth=[3,4,5,6])
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
GS = GridSearchCV(clf, param_grid, scoring='accuracy', n_jobs=-1,cv=list(kfold.split(x_data,y_data)))
GS_result = GS.fit(x_data, y_data)

print('Best: %f using %s'% (GS_result.best_score_, GS_result.best_params_))

Best: 0.836667 using {'max_depth': 6}

The accuracy is noticeably lower than C5.0 in SPSS Modeler; criterion = "entropy" turned out even lower, so the default gini criterion was kept.

# after some further parameter tuning, the best score was 0.876667
Best: 0.876667 using {'min_samples_leaf': 2, 'splitter': 'best', 'max_depth': 6}
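
The original post does not show the expanded grid, but a sketch of the kind of search that produces a result like the one above might be:

# broader grid over depth, leaf size and splitter, reusing kfold, x_data, y_data from above
param_grid = dict(max_depth=[4, 5, 6, 7],
                  min_samples_leaf=[1, 2, 3, 4],
                  splitter=["best", "random"])
GS = GridSearchCV(tree.DecisionTreeClassifier(criterion="gini", random_state=1),
                  param_grid, scoring="accuracy", n_jobs=-1,
                  cv=list(kfold.split(x_data, y_data)))
GS_result = GS.fit(x_data, y_data)
print('Best: %f using %s' % (GS_result.best_score_, GS_result.best_params_))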

IV. Drug Selection Decision Support

Historical data table

1. Exploratory data analysis (EDA)

Examine how each variable relates to the target variable, drug (药物).

Continuous predictors vs. the categorical target (histograms, normalized by color)

Age

Potassium level

As the patient's potassium level increases, the proportion treated with drugX rises noticeably.

Sodium level

Categorical predictors

Distribution plots, normalized by color

Sex

Blood pressure

Cholesterol

2. Modeling

Build C5.0, logistic regression, and neural network models.

The neural network performs best, reaching over 99% accuracy on the test set; after deriving a new variable, every model's accuracy on the test set improves.

Implementation in Python

Age

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")

data = pd.read_csv(r"C:\Users\Administrator\Desktop\历史数据.csv",encoding = "gb2312")

bins = [0,6,12,18,24,30,36,42,48,54,60,66,72,78]
data["age"] = pd.cut(data.年龄,bins = bins)

data.groupby(["age","药物"]).药物.count().unstack()

data_age = data.groupby(["age","药物"]).药物.count().unstack().apply(lambda x:x/x.sum(),axis = 1)

ax = data_age.plot(kind  = "bar",stacked = True,figsize = (10,5))
ax.legend(loc = "upper right")

Sodium level

data["Na"] = pd.cut(data["钠含量"],bins=10)
data_Na = data.groupby(['Na',"药物"]).药物.count().unstack().apply(lambda x:x/x.sum(),axis = 1)
ax = data_Na.plot(kind = "bar",stacked = True,figsize = (10,5))
ax.legend(loc = "upper right")

Potassium level

data["k"] = pd.cut(data["钾含量"],bins=10)
data_Na = data.groupby(['k',"药物"]).药物.count().unstack().apply(lambda x:x/x.sum(),axis = 1)
ax = data_Na.plot(kind = "bar",stacked = True,figsize = (10,5))
ax.legend(loc = "upper right")

Sex

# make matplotlib display Chinese labels correctly
plt.rcParams['font.sans-serif']=['SimHei']
plt.rcParams['axes.unicode_minus'] = False

ax = data.groupby(['性别','药物']).药物.count().unstack().apply(lambda x:x/x.sum(),axis=1).plot(kind = "bar",stacked = True,figsize = (10,10))
ax.legend(loc = "upper right")

Cholesterol

ax = data.groupby(['胆固醇','药物']).药物.count().unstack().apply(lambda x:x/x.sum(),axis=1).plot(kind = "bar",stacked = True,figsize = (10,10))
ax.legend(loc = "upper right")

Blood pressure

ax = data.groupby(['血压','药物']).药物.count().unstack().apply(lambda x:x/x.sum(),axis=1).plot(kind = "bar",stacked = True,figsize = (10,10))
ax.legend(loc = "upper right")

Modeling

import pandas as pd
import numpy as np

data = pd.read_csv(r"C:\Users\Administrator\Desktop\历史数据.csv",encoding = "gbk")

tra_sex = {"F":0,"M":1}
tra_blood = {"HIGH":2,"LOW":0,"NORMAL":1}
tra_cho = {"HIGH":1,"NORMAL":0}
tra_drug = {"drugA":0,"drugB":1,"drugC":2,"drugX":3,"drugY":4}

data.性别 = data.性别.map(tra_sex)
data.血压 = data.血压.map(tra_blood)
data.胆固醇 = data.胆固醇.map(tra_cho)
data.药物 = data.药物.map(tra_drug)

x_data = data[["性别","年龄","血压","胆固醇","钠含量","钾含量"]]
y_data = data["药物"]

from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x_data,y_data,test_size = 0.33,random_state = 1)  # split off a test set in the same proportion as in SPSS Modeler

from sklearn import tree
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold

clf1 = tree.DecisionTreeClassifier(random_state = 1)  # decision tree
param_grid = dict(max_depth=[5,6,7,8],min_samples_leaf=[2,3,4,5])
kfold = StratifiedKFold(n_splits = 10,shuffle = True,random_state = 1)
model = GridSearchCV(clf1,param_grid,scoring = "accuracy",n_jobs = -1,cv = list(kfold.split(x_train,y_train)))
model_result = model.fit(x_train,y_train)

print('train_data_Best: %f using %s'% (model_result.best_score_, model_result.best_params_))

predicts = model_result.predict(x_test)
accuracy = sum(predicts == y_test)/len(y_test)

print("test_data_score:%f"%(accuracy))
train_data_Best: 0.949005 using {'max_depth': 7, 'min_samples_leaf': 3}
test_data_score:0.964646
# quite good
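
To see where the remaining test-set errors fall (not part of the original run), a confusion matrix and per-class report can be added:

from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(y_test, predicts))
print(classification_report(y_test, predicts))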

Logistic regression

from sklearn import linear_model

clf2 = linear_model.LogisticRegression(penalty = "l1", solver = "liblinear")  # liblinear supports the l1 penalty
param_grid = dict(C=[2.8,3,3.2,3.3,3.4,3.5])
kfold = StratifiedKFold(n_splits = 10,shuffle = True,random_state = 1)
model = GridSearchCV(clf2,param_grid,scoring = "accuracy",n_jobs = -1,cv = list(kfold.split(x_train,y_train)))
model_result = model.fit(x_train,y_train)

print('train_data_Best: %f using %s'% (model_result.best_score_, model_result.best_params_))

predicts = model_result.predict(x_test)
accuracy = sum(predicts == y_test)/len(y_test)

print("test_data_score:%f"%(accuracy))
train_data_Best: 0.939055 using {'C': 3.3}
test_data_score:0.939394
# slightly worse than the decision tree

Neural network

from sklearn.neural_network import MLPClassifier

clf3 = MLPClassifier(hidden_layer_sizes=(200,),solver = "lbfgs", max_iter=2000)

param_grid = dict(alpha = [1e-4],activation = ['tanh'])
kfold = StratifiedKFold(n_splits = 5,shuffle = True,random_state = 1)
model = GridSearchCV(clf3,param_grid,scoring = "accuracy",n_jobs = -1,cv = list(kfold.split(x_train,y_train)))
model_result = model.fit(x_train,y_train)

print('train_data_Best: %f using %s'% (model_result.best_score_, model_result.best_params_))

predicts = model_result.predict(x_test)
accuracy = sum(predicts == y_test)/len(y_test)

print("test_data_score:%f"%(accuracy))

Cross-validated tuning for the neural network in sklearn is really slow, so I am just posting a parameter setting that gave decent accuracy.

train_data_Best: 0.937811 using {'activation': 'tanh', 'alpha': 0.0001}
test_data_score:0.979798
# the best of the three models, but still somewhat worse than SPSS Modeler
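
MLPs are usually sensitive to feature scale, so standardizing the inputs might close part of the gap; a pipeline sketch, not part of the original run:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

scaled_mlp = make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(200,), solver="lbfgs",
                                         activation="tanh", alpha=1e-4, max_iter=2000))
scaled_mlp.fit(x_train, y_train)
print("scaled test_data_score:%f" % scaled_mlp.score(x_test, y_test))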

Accuracy after feature engineering

x_data["N_k_pro"] = x_data["钠含量"]/x_data["钾含量"]
#重新设置下x_data,训练的代码不变
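
Concretely, that means re-running the same split so the new column reaches x_train and x_test:

x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.33, random_state=1)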

Decision tree

train_data_Best: 0.992537 using {'max_depth': 5, 'min_samples_leaf': 3}
test_data_score:0.997475

Logistic regression

train_data_Best: 0.961443 using {'C': 2.8}
test_data_score:0.964646

Neural network

train_data_Best: 0.951493 using {'activation': 'tanh', 'alpha': 0.0001}
test_data_score:0.972222

At least the training accuracy improved for all three models, but the neural network's test accuracy actually dropped a little, which may be a sign of overfitting (cross-validated tuning for the network is really slow, so I didn't keep tuning to push the accuracy higher; next time I'll look into TensorFlow).

Reposted from blog.csdn.net/weixin_40300458/article/details/81081063