DeepChem Tutorial 9: Advanced Model Training

So far, our model training has followed a simple procedure: load a dataset, create a model, call fit(), evaluate the model, done. That is fine for an example, but in real machine learning projects the process is usually more complicated. In this tutorial we will look at a more realistic workflow for training a model.

Hyperparameter Optimization

Let's start by loading the HIV dataset. It classifies over 40,000 molecules based on their ability to inhibit HIV replication.

In [1]:

import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_hiv(featurizer='ECFP', split='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
Now let's train a model on it. We will use a MultitaskClassifier, which is just a stack of fully connected layers. That still leaves a lot of options: How many layers should there be, and how wide should each one be? What dropout rate should we use? What learning rate?

These are called hyperparameters. The standard way to choose them is to try lots of values, train each model on the training set, and evaluate it on the validation set to see which one works best. You could do this by hand, but it is easier to let the computer do it for you. DeepChem provides a selection of hyperparameter optimization algorithms, which can be found in the dc.hyper package. For this example we will use GridHyperparamOpt, which is the most basic method: we simply give it a list of options for each hyperparameter, and it exhaustively tries every combination of them.

The list of options is defined by a dict that we provide. For each of the model's arguments, we give a list of values to try. In this example we consider three possible sets of hidden layers: a single layer of width 500, a single layer of width 1000, or two layers each of width 1000. We also consider two dropout rates (20% and 50%) and two learning rates (0.001 and 0.0001).

In [2]:

params_dict = {
    'n_tasks': [len(tasks)],
    'n_features': [1024],
    'layer_sizes': [[500], [1000], [1000, 1000]],
    'dropouts': [0.2, 0.5],
    'learning_rate': [0.001, 0.0001]
}
optimizer = dc.hyper.GridHyperparamOpt(dc.models.MultitaskClassifier)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
best_model, best_hyperparams, all_results = optimizer.hyperparam_search(
        params_dict, train_dataset, valid_dataset, metric, transformers)

hyperparam_search() returns three values: the best model it found, the hyperparameters for that model, and a full listing of the validation score for every model. Let's take a look at the last one.

In [3]:

all_results

Out[3]:

{'_dropouts_0.200000_layer_sizes[500]_learning_rate_0.001000_n_features_1024_n_tasks_1': 0.759624393738977,
 '_dropouts_0.200000_layer_sizes[500]_learning_rate_0.000100_n_features_1024_n_tasks_1': 0.7680791323731138,
 '_dropouts_0.500000_layer_sizes[500]_learning_rate_0.001000_n_features_1024_n_tasks_1': 0.7623870149911817,
 '_dropouts_0.500000_layer_sizes[500]_learning_rate_0.000100_n_features_1024_n_tasks_1': 0.7552282358416618,
 '_dropouts_0.200000_layer_sizes[1000]_learning_rate_0.001000_n_features_1024_n_tasks_1': 0.7689915858318636,
 '_dropouts_0.200000_layer_sizes[1000]_learning_rate_0.000100_n_features_1024_n_tasks_1': 0.7619292572996277,
 '_dropouts_0.500000_layer_sizes[1000]_learning_rate_0.001000_n_features_1024_n_tasks_1': 0.7641491524593376,
 '_dropouts_0.500000_layer_sizes[1000]_learning_rate_0.000100_n_features_1024_n_tasks_1': 0.7609877155594749,
 '_dropouts_0.200000_layer_sizes[1000, 1000]_learning_rate_0.001000_n_features_1024_n_tasks_1': 0.770716980207721,
 '_dropouts_0.200000_layer_sizes[1000, 1000]_learning_rate_0.000100_n_features_1024_n_tasks_1': 0.7750327625906329,
 '_dropouts_0.500000_layer_sizes[1000, 1000]_learning_rate_0.001000_n_features_1024_n_tasks_1': 0.725972314079953,
 '_dropouts_0.500000_layer_sizes[1000, 1000]_learning_rate_0.000100_n_features_1024_n_tasks_1': 0.7546280986674505}
We see a few general patterns. Using two layers with the larger learning rate does not work very well; it seems the deeper model requires a smaller learning rate. We also see that 20% dropout usually works better than 50%. Once we narrow down the list of models based on these observations, all the remaining validation scores are very close to each other, probably close enough that the remaining variation is mainly noise. It doesn't seem to make much difference which of the remaining hyperparameter sets we use, so let's arbitrarily pick one with a single hidden layer of width 1000 and a learning rate of 0.0001.
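
If you want to pick the winner programmatically rather than by eye, here is a minimal sketch that uses only the values returned by hyperparam_search() above:

# best_hyperparams holds the argument values of the winning model.
print(best_hyperparams)

# all_results maps a string describing each hyperparameter combination to its
# validation ROC AUC, so the top-scoring entry can be found with max().
best_key = max(all_results, key=all_results.get)
print(best_key, all_results[best_key])
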
Early Stopping

There is one other important hyperparameter we haven't considered yet: how long do we train the model for? GridHyperparamOpt trains each model for a fixed, fairly small number of epochs. That is not necessarily the best number.

You might expect that the longer you train, the better the model will get, but that is usually not true. If you train too long, the model starts overfitting to irrelevant details of the training set. You can tell when this happens because the validation score stops increasing and may even decrease, while the score on the training set continues to improve. Fortunately, we don't need to train lots of different models for different numbers of steps to identify the optimal number. We just train once, monitor the validation score, and keep whichever parameters maximize it. This is called "early stopping".

DeepChem's ValidationCallback class can do this for us automatically. In the example below, we have it compute the validation set's ROC AUC every 1000 training steps. If you add the save_dir argument, it will also save a copy of the best model parameters to disk; a short sketch of this appears after the training output below.

In [4]:

model = dc.models.MultitaskClassifier(n_tasks=len(tasks),
                                      n_features=1024,
                                      layer_sizes=[1000],
                                      dropouts=0.2,
                                      learning_rate=0.0001)
callback = dc.models.ValidationCallback(valid_dataset, 1000, metric)
model.fit(train_dataset, nb_epoch=50, callbacks=callback)

Step 1000 validation: roc_auc_score=0.759757
Step 2000 validation: roc_auc_score=0.770685
Step 3000 validation: roc_auc_score=0.771588
Step 4000 validation: roc_auc_score=0.777862
Step 5000 validation: roc_auc_score=0.773894
Step 6000 validation: roc_auc_score=0.763762
Step 7000 validation: roc_auc_score=0.766361
Step 8000 validation: roc_auc_score=0.767026
Step 9000 validation: roc_auc_score=0.761239
Step 10000 validation: roc_auc_score=0.761279
Step 11000 validation: roc_auc_score=0.765363
Step 12000 validation: roc_auc_score=0.769481
Step 13000 validation: roc_auc_score=0.768523
Step 14000 validation: roc_auc_score=0.761306
Step 15000 validation: roc_auc_score=0.77397
Step 16000 validation: roc_auc_score=0.764848

Out[4]:

0.8040038299560547
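
The callback in this run only printed the scores. If you also want to keep the best parameters, pass the save_dir argument and reload the saved checkpoint after training. A minimal sketch (the directory name 'best_model' is just an illustration, not part of the API):

# Save the parameters that achieve the best validation score during training.
callback = dc.models.ValidationCallback(valid_dataset, 1000, metric,
                                        save_dir='best_model')
model.fit(train_dataset, nb_epoch=50, callbacks=callback)

# Reload the best checkpoint from that directory before further evaluation.
model.restore(model_dir='best_model')
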
Learning Rate Schedules

In the examples above, we used a fixed learning rate throughout training. In some cases it works better to vary the learning rate during training. To do this in DeepChem, instead of specifying a number for the learning_rate argument, we provide a LearningRateSchedule object. In the following example we use a learning rate that decreases exponentially: it starts at 0.0002, then gets multiplied by 0.9 after every 1000 steps.

In [5]:

learning_rate = dc.models.optimizers.ExponentialDecay(0.0002, 0.9, 1000)
model = dc.models.MultitaskClassifier(n_tasks=len(tasks),
                                      n_features=1024,
                                      layer_sizes=[1000],
                                      dropouts=0.2,
                                      learning_rate=learning_rate)
model.fit(train_dataset, nb_epoch=50, callbacks=callback)
Step 1000 validation: roc_auc_score=0.736547
Step 2000 validation: roc_auc_score=0.758979
Step 3000 validation: roc_auc_score=0.768361
Step 4000 validation: roc_auc_score=0.764898
Step 5000 validation: roc_auc_score=0.775253
Step 6000 validation: roc_auc_score=0.779898
Step 7000 validation: roc_auc_score=0.76991
Step 8000 validation: roc_auc_score=0.771515
Step 9000 validation: roc_auc_score=0.773796
Step 10000 validation: roc_auc_score=0.776977
Step 11000 validation: roc_auc_score=0.778866
Step 12000 validation: roc_auc_score=0.777066
Step 13000 validation: roc_auc_score=0.77616
Step 14000 validation: roc_auc_score=0.775646
Step 15000 validation: roc_auc_score=0.772785
Step 16000 validation: roc_auc_score=0.769975

Out[5]:

0.22854619979858398
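
As a quick sanity check on what this schedule does, here is a small sketch of the learning rate implied by "multiplied by 0.9 every 1000 steps", assuming staircase-style decay (the exact interpolation DeepChem applies between steps may differ, but the values at multiples of 1000 are the same):

# Approximate effective learning rate at a few points during training.
for step in [0, 1000, 5000, 16000]:
    print(step, 0.0002 * 0.9 ** (step // 1000))

By the last validation check at step 16000 the rate has fallen to roughly 3.7e-5, less than a fifth of its starting value.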

