Kaggle Titanic: Machine Learning from Disaster (a beginner's attempt)

  • Task description:
    In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
    (In other words: analyze which kinds of passengers were more likely to survive, and predict which passengers in the test set survived.)

  • Variables

Variable   Definition
survival   Whether the passenger survived
pclass     Ticket class
sex        Sex
age        Age
sibsp      Number of siblings/spouses aboard the Titanic
parch      Number of parents/children aboard the Titanic
ticket     Ticket number
fare       Passenger fare
cabin      Cabin number
embarked   Port of embarkation
  • Import the required modules
# Load in our libraries
import pandas as pd
import numpy as np
import re
import sklearn
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls

import warnings
warnings.filterwarnings('ignore')

# Going to use these 5 base models for the stacking
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier, 
                              GradientBoostingClassifier, ExtraTreesClassifier)
from sklearn.svm import SVC
from sklearn.model_selection import KFold
  • Feature engineering and data cleaning
  • Load the data
# Load in the train and test datasets
train = pd.read_csv('C:\\ProgramData\\Anaconda3\\kaggle_titanic\\train.csv')
test = pd.read_csv('C:\\ProgramData\\Anaconda3\\kaggle_titanic\\test.csv')

# Store our passenger ID for easy access
PassengerId = test['PassengerId']

# Preview the first rows
print(train.head(3))
  • Feature engineering
full_data = [train, test]
# Some features of my own that I have added in
# Gives the length of the name
train['Name_length'] = train['Name'].apply(len) 
test['Name_length'] = test['Name'].apply(len)
# Feature that tells whether a passenger had a cabin on the Titanic
train['Has_Cabin'] = train["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
test['Has_Cabin'] = test["Cabin"].apply(lambda x: 0 if type(x) == float else 1)

Explanation:
apply(function) applies the given function to each element of the column.
lambda defines an anonymous function inline.
Here we first compute the length of each name; then, since pandas stores a missing Cabin value as NaN (which has type float), the lambda encodes a missing cabin number as 0 and a present one as 1.
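As a quick standalone check of why the type(x) == float test works (values made up; 'C85' is just an example cabin string):

import numpy as np

# pandas stores a missing Cabin as NaN, and NaN is a Python float,
# so "type(x) == float" is True exactly when the cabin is missing
has_cabin = lambda x: 0 if type(x) == float else 1
print(type(np.nan))                          # <class 'float'>
print(has_cabin(np.nan), has_cabin('C85'))   # 0 1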

# Create new feature FamilySize as a combination of SibSp and Parch
for dataset in full_data:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# Create new feature IsAlone from FamilySize
for dataset in full_data:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1

Explanation:
Create a new feature FamilySize as SibSp + Parch + 1. When SibSp + Parch is 0, FamilySize is 1, meaning the passenger has no family aboard. Then create a new feature IsAlone, which is 1 when the passenger has no family aboard (FamilySize == 1) and 0 otherwise.
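A small check of this logic on made-up rows:

import pandas as pd

demo = pd.DataFrame({'SibSp': [0, 1, 3], 'Parch': [0, 2, 0]})
demo['FamilySize'] = demo['SibSp'] + demo['Parch'] + 1   # 1, 4, 4
demo['IsAlone'] = 0
demo.loc[demo['FamilySize'] == 1, 'IsAlone'] = 1         # 1, 0, 0
print(demo)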

# Remove all NULLS in the Embarked column
for dataset in full_data:
    dataset['Embarked'] = dataset['Embarked'].fillna('S')
# Remove all NULLS in the Fare column and create a new feature CategoricalFare
for dataset in full_data:
    dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median()) 
train['CategoricalFare'] = pd.qcut(train['Fare'], 4)

Explanation:
dataset['Embarked'] = dataset['Embarked'].fillna('S'): fill missing Embarked values with 'S'.
dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median()): fill missing Fare values with the median fare of the training set (the median, not the mean).
train['CategoricalFare'] = pd.qcut(train['Fare'], 4): cut Fare into four equal-frequency (quartile) buckets. The interval edges this produces (7.91, 14.454, 31) are exactly the thresholds reused later when mapping Fare to the integers 0-3, which is the point of creating this temporary feature.
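A quick illustration of pd.qcut on made-up fares:

import pandas as pd

# pd.qcut splits a column into equal-frequency buckets and reports the edges
fares = pd.Series([5, 8, 10, 15, 20, 40, 80, 120])
print(pd.qcut(fares, 4))   # four quartile intervals with their edges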

# Create a new feature CategoricalAge
for dataset in full_data:
    age_avg = dataset['Age'].mean()
    age_std = dataset['Age'].std()
    age_null_count = dataset['Age'].isnull().sum()
    age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
    # Use .loc rather than chained indexing so the assignment actually
    # modifies the DataFrame (chained assignment can fail silently)
    dataset.loc[np.isnan(dataset['Age']), 'Age'] = age_null_random_list
    dataset['Age'] = dataset['Age'].astype(int)
train['CategoricalAge'] = pd.cut(train['Age'], 5)

Explanation: compute the mean and standard deviation of Age, fill the missing Age entries with random integers drawn from [mean - std, mean + std), and cast Age to int. Finally, create the new feature CategoricalAge with pd.cut(train['Age'], 5), which splits Age into five equal-width bins. As with CategoricalFare, the bin edges (16, 32, 48, 64) are the thresholds reused later when mapping Age to the integers 0-4.
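A quick illustration of both pieces, on made-up ages:

import numpy as np
import pandas as pd

# pd.cut uses equal-width bins (contrast with pd.qcut's equal-frequency bins)
ages = pd.Series([2, 18, 30, 45, 60, 70])
print(pd.cut(ages, 5))                    # five equal-width intervals
# np.random.randint draws integers in [low, high), as used for the Age fill
print(np.random.randint(20, 40, size=3))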

# Define function to extract titles from passenger names
def get_title(name):
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    # If the title exists, extract and return it.
    if title_search:
        return title_search.group(1)
    return ""

Explanation: define a function that extracts the title (the honorific, e.g. Mr, Mrs, Miss) from a passenger's name; if a title is found, extract and return it, otherwise return an empty string.
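For example, on names in the dataset's "Surname, Title. Given names" format:

import re

def get_title(name):
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    return title_search.group(1) if title_search else ""

print(get_title('Braund, Mr. Owen Harris'))   # Mr
print(get_title('Heikkinen, Miss. Laina'))    # Miss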

# Create a new feature Title, containing the titles of passenger names
for dataset in full_data:
    dataset['Title'] = dataset['Name'].apply(get_title)
# Group all non-common titles into one single grouping "Rare"
for dataset in full_data:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')

Explanation: first extract the title from each passenger's name, then replace the uncommon titles with 'Rare' and standardize the spelling variants: 'Mlle' and 'Ms' become 'Miss', and 'Mme' becomes 'Mrs'.
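A toy title column put through the same two-step replacement:

import pandas as pd

titles = pd.Series(['Mr', 'Mlle', 'Dr', 'Mme', 'Ms'])
titles = titles.replace(['Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dr',
                         'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
titles = titles.replace({'Mlle': 'Miss', 'Ms': 'Miss', 'Mme': 'Mrs'})
print(titles.tolist())   # ['Mr', 'Miss', 'Rare', 'Mrs', 'Miss']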

Next, replace the text values with numbers that are easier for the models to handle.

for dataset in full_data:
    # Mapping Sex
    dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)

    # Mapping titles (0 is left for any title missing from the mapping)
    title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
    dataset['Title'] = dataset['Title'].map(title_mapping)
    dataset['Title'] = dataset['Title'].fillna(0)

    # Mapping Embarked
    dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)

    # Mapping Fare (thresholds are the CategoricalFare quartile edges)
    dataset.loc[dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[dataset['Fare'] > 31, 'Fare'] = 3
    dataset['Fare'] = dataset['Fare'].astype(int)

    # Mapping Age (thresholds are the CategoricalAge bin edges)
    dataset.loc[dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[dataset['Age'] > 64, 'Age'] = 4

The value mappings are summarized in the table below:

Value | Sex    | Title  | Embarked | Fare                  | Age
0     | female |        | S        | Fare <= 7.91          | Age <= 16
1     | male   | Mr     | C        | 7.91 < Fare <= 14.454 | 16 < Age <= 32
2     |        | Miss   | Q        | 14.454 < Fare <= 31   | 32 < Age <= 48
3     |        | Mrs    |          | Fare > 31             | 48 < Age <= 64
4     |        | Master |          |                       | Age > 64
5     |        | Rare   |          |                       |
(A Title value of 0 is reserved for titles left unmapped, via fillna(0).)
# Feature selection
drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp']
train = train.drop(drop_elements, axis = 1)
train = train.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)
test  = test.drop(drop_elements, axis = 1)
print(train.head(3))

Explanation: feature selection.
Drop the 'PassengerId', 'Name', 'Ticket', 'Cabin', and 'SibSp' columns, plus the temporary 'CategoricalAge' and 'CategoricalFare' columns, from the training set; drop 'PassengerId', 'Name', 'Ticket', 'Cabin', and 'SibSp' from the test set.

So far we have engineered, cleaned, and filtered the features.
Next, some simple visualizations of the current data will help us push the analysis further.
First, let's look at the correlations between the features:

colormap = plt.cm.RdBu
plt.figure(figsize=(14,12))
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(train.astype(float).corr(),linewidths=0.1,vmax=1.0, 
            square=True, cmap=colormap, linecolor='white', annot=True)

The correlation heatmap is shown below. (What the plotting code does: train.astype(float).corr() computes the pairwise Pearson correlation matrix of the features, and sns.heatmap renders that matrix as an annotated, color-coded grid.)
The Pearson correlation coefficient measures the strength of the linear relationship between two variables. The closer its absolute value is to 1, the stronger the linear relationship between two features; the closer it is to 0, the weaker the relationship.
[Figure: Pearson correlation heatmap of the features]
One thing the plot tells us is that not many features are strongly correlated with one another. That is good news for feeding these features into a learning model: it means there is little redundant data in the training set, and each feature carries some unique information. The two most correlated features here are FamilySize and Parch.
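A tiny numeric illustration of the coefficient itself, on made-up arrays:

import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = 2 * x + 1                      # perfectly linear in x
z = np.array([2, 1, 4, 3, 5])      # only loosely increasing with x
print(np.corrcoef(x, y)[0, 1])     # 1.0
print(np.corrcoef(x, z)[0, 1])     # 0.8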

# Some useful parameters which will come in handy later on
ntrain = train.shape[0]
ntest = test.shape[0]
SEED = 0 # for reproducibility
NFOLDS = 5 # set folds for out-of-fold prediction
kf = KFold(n_splits=NFOLDS, shuffle=True, random_state=SEED)  # recent scikit-learn only allows random_state with shuffle=True

Explanation:
K-fold cross-validation: sklearn.model_selection.KFold(n_splits=3, shuffle=False, random_state=None)
Idea: partition the training data into n_splits mutually exclusive subsets. Each round uses one subset as the validation set and the remaining n_splits - 1 as the training set, giving n_splits rounds of training/testing and n_splits results.
Note: if the dataset cannot be split evenly, the first n_samples % n_splits folds get n_samples // n_splits + 1 samples each, and the remaining folds get n_samples // n_splits samples.
Parameters:
n_splits: the number of folds.
shuffle: whether to shuffle the data before splitting.
  - If False, every call produces the same split (equivalent to fixing random_state to an integer).
  - If True, the data are shuffled, so the samples in each fold are drawn randomly.
random_state: the random seed (with the same seed, the same "random" splits are reproduced on every call).
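A small illustration of the uneven-split rule above, on 10 toy samples:

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10)   # 10 toy samples
kf_demo = KFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf_demo.split(X)):
    print(fold, len(train_idx), len(val_idx))
# 10 % 3 = 1, so one fold holds 10 // 3 + 1 = 4 validation samples
# and the other two hold 3 each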

Next, we wrap the sklearn classifiers in a small helper class so that we can call them uniformly later:

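The code snippet itself was lost in this reprint (a CSDN placeholder stood here). As a minimal sketch of what such a wrapper might look like, assuming the post follows the well-known stacking kernel it is adapted from, the class name SklearnHelper, its methods, and the example parameters below are illustrative rather than the author's exact code:

from sklearn.ensemble import RandomForestClassifier

class SklearnHelper(object):
    # Uniform fit/predict interface around different sklearn classifiers
    # (sketch only; not necessarily the original post's exact code)
    def __init__(self, clf, seed=0, params=None):
        params = dict(params or {})
        params['random_state'] = seed
        self.clf = clf(**params)

    def fit(self, x, y):
        return self.clf.fit(x, y)

    def predict(self, x):
        return self.clf.predict(x)

    def feature_importances(self, x, y):
        return self.clf.fit(x, y).feature_importances_

# Usage sketch: wrap a random forest with hypothetical parameters
rf = SklearnHelper(clf=RandomForestClassifier, seed=SEED, params={'n_estimators': 100})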


Reprinted from blog.csdn.net/weixin_40848065/article/details/86771544