[tf.keras.utils.Sequence] Build your own dataset generator

Every blog's motto: You can do more than you think.

0. Preface

When training a model, we usually do not load the entire dataset into memory at once; instead, we load the data in batches.


  • One approach is to loop over the data with `while True` and produce batches with `yield`; for details, see the semantic segmentation code walkthrough.
  • The other is the `tf.keras.utils.Sequence` class, which this article explains.
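The first approach can be sketched as follows (the function and variable names here are my own, not from the referenced walkthrough):

```python
def batch_generator(x, y, batch_size):
    """Endlessly yield (x_batch, y_batch) pairs, one batch per call."""
    n = len(x)
    i = 0
    while True:
        if i + batch_size > n:
            i = 0  # one epoch finished: wrap around and start over
        yield x[i:i + batch_size], y[i:i + batch_size]
        i += batch_size
```

Because the generator never terminates, `model.fit` needs `steps_per_epoch` to know where one epoch ends.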

1. Text

`__len__` returns the number of iterations in one epoch, that is:
total number of samples / batch_size (rounded up, so the final partial batch is kept)

`__getitem__` generates one batch of data for a given index; it is called `__len__()` times per epoch.


Note: both `__len__` and `__getitem__` must be implemented.
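As a quick check of the `__len__` arithmetic (the sample count and batch size below are illustrative):

```python
import math

total_samples = 1000
batch_size = 32

# __len__ should return the number of batches per epoch; math.ceil keeps
# the final, smaller batch when total_samples is not divisible by batch_size.
steps_per_epoch = math.ceil(total_samples / batch_size)
print(steps_per_epoch)  # 32 (31 full batches plus one batch of 8)
```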

"""
测试
__getitem__
"""
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf


class Date(tf.keras.utils.Sequence):

    def __init__(self):
        print('初始化相关参数')

    def __len__(self):
        """
        此方法要实现,否则会报错
        正常程序中返回1个epoch迭代的次数
        :return:
        """
        return 5

    def __getitem__(self, index):
        """生成一个batch的数据"""
        print('index:', index)
        x_batch = ['x1', 'x2', 'x3', 'x4']
        y_batch = ['y1', 'y2', 'y3', 'y4']
        print('-'*20)
        return x_batch, y_batch


# 实例化数据
date = Date()

for batch_number, (x, y) in enumerate(date):
    print('正在进行第{} batch'.format(batch_number))
    print('x_batch:', x)
    print('y_batcxh:', y)

Result: the loop runs `__len__()` = 5 times, printing index 0 through 4, each with the same x_batch and y_batch. (Screenshot omitted.)
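For real training data, `__getitem__` slices arrays by index and `on_epoch_end` (an optional hook that `model.fit` calls after each epoch) can reshuffle. The class below is a sketch under those assumptions; `ArrayDataset` is my own name, not from the original post:

```python
import math

import numpy as np
import tensorflow as tf


class ArrayDataset(tf.keras.utils.Sequence):
    """Slice numpy arrays into batches; reshuffle after every epoch."""

    def __init__(self, x, y, batch_size):
        super().__init__()
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.indices = np.arange(len(x))

    def __len__(self):
        # Number of batches per epoch; the last batch may be smaller.
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, index):
        # Pick the sample indices belonging to batch `index`.
        idx = self.indices[index * self.batch_size:(index + 1) * self.batch_size]
        return self.x[idx], self.y[idx]

    def on_epoch_end(self):
        # Called by model.fit after each epoch; reshuffle for the next one.
        np.random.shuffle(self.indices)
```

An instance can be passed straight to `model.fit(dataset, epochs=...)`; no `steps_per_epoch` is needed because `__len__` already defines it.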


Origin blog.csdn.net/weixin_39190382/article/details/109195031