Training Keras on large-scale data

Copyright notice: this is the author's original article; reproduction without permission is prohibited. https://blog.csdn.net/luoganttcc/article/details/83150866


When the dataset is too large to fit in memory, you can read it in chunks with a generator and train one batch at a time with `train_on_batch`:
import numpy as np

n_epoch = 12
batch_size = 16
for e in range(n_epoch):
    print("epoch", e)
    batch_num = 0
    loss_sum = np.array([0.0, 0.0])  # running [loss, accuracy] over the last 200 batches
    for X_train, y_train in GET_DATASET_SHUFFLE(train_X, batch_size, True):  # yields one batch of data at a time
        for X_batch, y_batch in train_datagen.flow(X_train, y_train, batch_size=batch_size):
            loss = model.train_on_batch(X_batch, y_batch)  # returns [loss, acc]
            loss_sum += loss
            batch_num += 1
            break  # manual break: flow() loops forever, so take one augmented batch and move on
        if batch_num % 200 == 0:
            print("epoch %s, batch %s: train_loss = %.4f, train_acc = %.4f" % (e, batch_num, loss_sum[0] / 200, loss_sum[1] / 200))
            loss_sum = np.array([0.0, 0.0])
    res = model.evaluate_generator(GET_DATASET_SHUFFLE(val_X, batch_size, False), int(len(val_X) / batch_size))
    print("val_loss = %.4f, val_acc = %.4f" % (res[0], res[1]))

    model.save("weight.h5")
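The post does not show `GET_DATASET_SHUFFLE` itself. A minimal sketch of what such a generator might look like, assuming `train_X` is a list of `(sample, label)` pairs (in a real pipeline each sample would be an image path decoded inside the loop, so only one batch is ever held in memory):

```python
import numpy as np

def GET_DATASET_SHUFFLE(samples, batch_size, shuffle):
    """Yield (X, y) batches from a list of (x, y) sample pairs.

    Hypothetical reconstruction of the helper used in the training loop:
    optionally shuffle the index order each epoch, then emit full batches;
    a trailing partial batch is dropped.
    """
    idx = np.arange(len(samples))
    if shuffle:
        np.random.shuffle(idx)
    for start in range(0, len(samples) - batch_size + 1, batch_size):
        batch = [samples[i] for i in idx[start:start + batch_size]]
        X = np.array([x for x, _ in batch])
        y = np.array([label for _, label in batch])
        yield X, y
```

Because it is a generator, a fresh call per epoch (as in the loop above) re-shuffles the data, and memory usage stays bounded by one batch.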
