Hyperas - Automatic Hyperparameter Selection in Keras

Toward the end of any deep learning project, what remains is mostly hyperparameter tuning. Hyperparameters are not easy to tune by hand; it is laborious and time-consuming work. This article introduces Hyperas, a package that automatically selects the hyperparameters that best fit your model.

Installing Hyperas

Install it with pip:

$ pip install hyperas

Importing Hyperas

from hyperopt import Trials, STATUS_OK, tpe
from hyperas import optim
from hyperas.distributions import choice, uniform



Importing Keras

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop

from keras.datasets import mnist
from keras.utils import np_utils

We will then proceed, in order, to:

  1. Define the data
  2. Define the model
  3. Optimize the model's hyperparameters

Defining the Data

We use the MNIST dataset:

def data():
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
    X_train = X_train.reshape(60000, 784)
    X_test = X_test.reshape(10000, 784)
    X_train = X_train.astype('float32')
    X_test = X_test.astype('float32')
    X_train /= 255
    X_test /= 255
    nb_classes = 10
    Y_train = np_utils.to_categorical(y_train, nb_classes)
    Y_test = np_utils.to_categorical(y_test, nb_classes)
    return X_train, Y_train, X_test, Y_test


Defining the Model

Here, besides defining the model itself, we also need to carry out training and testing, so the data must be passed in.

At the end, the function must return a dictionary containing:

  • loss: Hyperas will pick the model with the smallest value
  • status: simply return STATUS_OK
  • model: optional; it does not have to be returned

def create_model(X_train, Y_train, X_test, Y_test):
    model = Sequential()
    model.add(Dense(512, input_shape=(784,)))
    model.add(Activation('relu'))
    model.add(Dropout({{uniform(0, 1)}}))
    model.add(Dense({{choice([256, 512, 1024])}}))
    model.add(Activation('relu'))
    model.add(Dropout({{uniform(0, 1)}}))
    model.add(Dense(10))
    model.add(Activation('softmax'))

    rms = RMSprop()
    model.compile(loss='categorical_crossentropy', optimizer=rms, metrics=['accuracy'])
    
    model.fit(X_train, Y_train,
              batch_size={{choice([64, 128])}},
              nb_epoch=1,
              verbose=2,
              validation_data=(X_test, Y_test))
    score, acc = model.evaluate(X_test, Y_test, verbose=0)
    print('Test accuracy:', acc)
    return {'loss': -acc, 'status': STATUS_OK, 'model': model}


Normally Dropout takes a probability between 0 and 1. Here, instead of specifying a fixed number, we let uniform generate a value between 0 and 1 for us:

model.add(Dropout({{uniform(0, 1)}}))

For Dense we use choice instead, passing in the candidate values we want to try:

model.add(Dense({{choice([256, 512, 1024])}}))
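
Under the hood, Hyperas translates these double-brace templates into a hyperopt search space. As a rough, illustrative sketch (the parameter labels Hyperas actually generates will differ), the templates used above correspond to something like:

from hyperopt import hp

# Hand-written hyperopt equivalent of the {{...}} templates above
# (illustrative only; Hyperas generates the labels automatically).
space = {
    'dropout_1': hp.uniform('dropout_1', 0, 1),
    'dense_units': hp.choice('dense_units', [256, 512, 1024]),
    'dropout_2': hp.uniform('dropout_2', 0, 1),
    'batch_size': hp.choice('batch_size', [64, 128]),
}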

In the dictionary returned at the end, our goal is to select the model with the highest accuracy. However, Hyperas picks the model whose loss value is smallest, so we simply negate the accuracy and pass it in as the loss:

return {'loss': -acc, 'status': STATUS_OK, 'model': model}

Optimizing the Model's Hyperparameters

Finally, use optim.minimize() to find the best model:

  • model: the model function we defined
  • data: the data function we defined
  • algo: the TPE algorithm
  • max_evals: the number of evaluations

X_train, Y_train, X_test, Y_test = data()

best_run, best_model = optim.minimize(model=create_model,
                                      data=data,
                                      algo=tpe.suggest,
                                      max_evals=5,
                                      trials=Trials())

print("Evalutation of best performing model:")
print(best_model.evaluate(X_test, Y_test))
print(best_run)

optim.minimize() returns:

  • best_run: the best combination of hyperparameters
  • best_model: the best model
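
As a small follow-up sketch (assuming the best_run and best_model variables from the run above), the winning model can be saved and reloaded with the standard Keras API. Note that, by default, hyperparameters drawn via choice appear in best_run as indices into the option list rather than as the values themselves:

from keras.models import load_model

# Save the best model found by optim.minimize() and reload it later.
best_model.save('best_mnist_model.h5')
restored_model = load_model('best_mnist_model.h5')
print(restored_model.evaluate(X_test, Y_test, verbose=0))

# best_run is a plain dict, e.g. {'Dropout': 0.43, 'Dense': 1, 'batch_size': 0},
# where 'Dense': 1 means the second option of choice([256, 512, 1024]) was chosen.
print(best_run)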

Final Notes

  1. Hyperas does not seem to get along with comments; remove comments from the code before running it to avoid errors.
  2. If you are working in a Jupyter notebook, you must pass the extra notebook_name argument to optim.minimize(), set to the notebook's .ipynb file name. For example, if the notebook is Untitled.ipynb, set it as follows:

best_run, best_model = optim.minimize(model=create_model,
                                      data=data,
                                      algo=tpe.suggest,
                                      max_evals=5,
                                      trials=Trials(),
                                      notebook_name='Untitled')

Complete Example

# -*- coding: utf-8 -*-

from __future__ import print_function

from hyperopt import Trials, STATUS_OK, tpe, rand
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential
from keras.utils import np_utils
from sklearn.metrics import accuracy_score
from hyperas import optim
from hyperas.distributions import choice, uniform, conditional
from keras import optimizers


def data():
    import pandas as pd

    data = pd.read_csv(r'../input/data.csv', header=0, sep=',')
    data = data.drop(['id', 'Unnamed: 32'], 1)

    from sklearn import preprocessing
    le = preprocessing.LabelEncoder()
    le.fit(data['diagnosis'])
    y = le.transform(data['diagnosis'])

    data = data.drop('diagnosis', 1)

    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.25, random_state=777)

    X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=777)

    from sklearn.preprocessing import StandardScaler

    for i in X_train.columns:
        scaler = StandardScaler()
        scaler.fit(X_train[i].values.reshape(-1, 1))
        X_train[i] = scaler.transform(X_train[i].values.reshape(-1, 1))
        X_val[i] = scaler.transform(X_val[i].values.reshape(-1, 1))
        X_test[i] = scaler.transform(X_test[i].values.reshape(-1, 1))

    return X_train, X_val, X_test, y_train, y_val, y_test


def create_model(X_train, y_train, X_val, y_val):
    from keras import models
    from keras import layers
    import numpy as np

    model = models.Sequential()
    model.add(layers.Dense({{choice([np.power(2, 5), np.power(2, 6), np.power(2, 7)])}}, input_shape=(X_train.shape[1],)))
    model.add(LeakyReLU(alpha={{uniform(0.5, 1)}}))
    model.add(Dropout({{uniform(0.5, 1)}}))
    model.add(layers.Dense({{choice([np.power(2, 3), np.power(2, 4), np.power(2, 5)])}}))
    model.add(LeakyReLU(alpha={{uniform(0.5, 1)}}))
    model.add(Dropout({{uniform(0.5, 1)}}))
    model.add(layers.Dense(1, activation='sigmoid'))

    from keras import callbacks
        
    reduce_lr = callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                            patience=5, min_lr=0.001)


    model.compile(optimizer={{choice(['rmsprop', 'adam', 'sgd'])}},
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    model.fit(X_train,
              y_train,
              epochs={{choice([25, 50, 75, 100])}},
              batch_size={{choice([16, 32, 64])}},
              validation_data=(X_val, y_val),
              callbacks=[reduce_lr])

    score, acc = model.evaluate(X_val, y_val, verbose=0)
    print('Validation accuracy:', acc)
    return {'loss': -acc, 'status': STATUS_OK, 'model': model}


if __name__ == '__main__':

    best_run, best_model = optim.minimize(model=create_model,
                                          data=data,
                                          algo=tpe.suggest,
                                          max_evals=15,
                                          trials=Trials())
    X_train, X_val, X_test, y_train, y_val, y_test = data()
    print("Evalutation of best performing model:")
    print(best_model.evaluate(X_test, y_test))
    print("Best performing model chosen hyper-parameters:")
    print(best_run)

    best_model.save('breast_cancer_model.h5')

References

https://github.com/maxpumperla/hyperas
