Building your own dataset in Keras with .flow_from_directory()

0 Introduction

In practice, the dataset you encounter is usually not like MNIST, which ships pre-packaged with a loader (e.g. a load_data() function). Instead, it is a folder of image files, with no ready-made function to load it directly. Below I describe two ways to build a dataset in this situation.

Method 1

.flow_from_directory() — this function can also apply data augmentation as it loads the data. Typical usage:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        'data/train',
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        'data/validation',
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')

model.fit_generator(
        train_generator,
        steps_per_epoch=2000,
        epochs=50,
        validation_data=validation_generator,
        validation_steps=800)
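Note that steps_per_epoch counts batches, not images. A common way to choose it (a sketch, using assumed numbers rather than anything stated in this post) is to divide the number of training images by the batch size; a Keras directory iterator also exposes this count as train_generator.samples.

```python
# Sketch: choosing steps_per_epoch so one epoch covers the dataset once.
# The numbers here are assumptions for illustration only.
num_train_samples = 2000   # e.g. total images found under 'data/train'
batch_size = 32            # matches the batch_size passed above

# steps_per_epoch counts batches, not images:
steps_per_epoch = num_train_samples // batch_size
print(steps_per_epoch)  # 62
```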
ImageDataGenerator() augments images, and is typically used when the dataset is small. Its full signature:
keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
    samplewise_center=False,
    featurewise_std_normalization=False,
    samplewise_std_normalization=False,
    zca_whitening=False,
    zca_epsilon=1e-6,
    rotation_range=0.,
    width_shift_range=0.,
    height_shift_range=0.,
    shear_range=0.,
    zoom_range=0.,
    channel_shift_range=0.,
    fill_mode='nearest',
    cval=0.,
    horizontal_flip=False,
    vertical_flip=False,
    rescale=None,
    preprocessing_function=None,
    data_format=K.image_data_format())
 

Interpretation of the relevant parameters:

If no augmentation is wanted, call the constructor with no arguments. The generator produces batches of image data with real-time augmentation, looping over the data indefinitely during training until the specified number of epochs has been reached.

Parameters

  • featurewise_center: Boolean. Center the input data to mean 0, feature-wise (computed over the whole dataset).

  • samplewise_center: Boolean. Set the mean of each input sample to 0.

  • featurewise_std_normalization: Boolean. Divide inputs by the standard deviation of the dataset, feature-wise.

  • samplewise_std_normalization: Boolean. Divide each input sample by its own standard deviation.

  • zca_whitening: Boolean. Apply ZCA whitening to the input data.

  • zca_epsilon: epsilon for ZCA whitening; default 1e-6.

  • rotation_range: integer. Degree range for random image rotations.

  • width_shift_range: float, fraction of total image width. Range for random horizontal shifts.

  • height_shift_range: float, fraction of total image height. Range for random vertical shifts.

  • shear_range: float. Shear intensity (shear angle in the counterclockwise direction).

  • zoom_range: float, or a list like [lower, upper]. Range for random zoom; a single float is equivalent to [lower, upper] = [1 - zoom_range, 1 + zoom_range].

  • channel_shift_range: float. Range for random channel shifts.

  • fill_mode: one of 'constant', 'nearest', 'reflect' or 'wrap'. Points outside the boundaries of the input are filled according to the given mode.

  • cval: float or int. Value used for points outside the boundaries when fill_mode='constant'.

  • horizontal_flip: Boolean. Randomly flip inputs horizontally.

  • vertical_flip: Boolean. Randomly flip inputs vertically.

  • rescale: rescaling factor, default None. If None or 0, no rescaling is applied; otherwise the data is multiplied by this value (before any other transformation).

  • preprocessing_function: function applied to each input. It runs after the image is resized and augmented. It takes one argument, an image (a rank-3 NumPy array), and outputs a NumPy array of the same shape.

  • data_format: string, one of "channels_first" or "channels_last", giving the position of the channel dimension in image data. This parameter is the image_dim_ordering of Keras 1.x: "channels_last" corresponds to the old "tf", "channels_first" to the old "th". For a 128x128 RGB image, "channels_first" data is organized as (3, 128, 128), while "channels_last" data is organized as (128, 128, 3). The default is the value set in ~/.keras/keras.json; if it has never been set, it is "channels_last".
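The feature-wise options above need statistics computed over the whole dataset, which is why datagen.fit(x) must be called before using them. A NumPy-only sketch of the per-channel statistics involved (toy random data, not the Keras implementation itself):

```python
import numpy as np

# Toy stand-in for a dataset: 8 "images" of shape 4x4x3, channels_last.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(8, 4, 4, 3)).astype("float32")

# featurewise_center / featurewise_std_normalization subtract the mean
# and divide by the std computed over samples, rows and cols, per channel.
mean = x.mean(axis=(0, 1, 2))
std = x.std(axis=(0, 1, 2))
x_norm = (x - mean) / (std + 1e-7)

# After normalization each channel has (approximately) mean 0.
print(np.allclose(x_norm.mean(axis=(0, 1, 2)), 0.0, atol=1e-4))  # True
```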

 

flow_from_directory(directory): takes a folder path as its argument and generates augmented and/or normalized data from it, yielding batches indefinitely in an infinite loop.

 
  • directory: path to the target folder. For each class, the folder must contain one subfolder; any JPG, PNG, BMP or PPM images inside the subfolders will be used by the generator.
  • target_size: integer tuple, default (256, 256). Images will be resized to this size.
  • color_mode: one of "grayscale" or "rgb", default "rgb". Determines whether the images are converted to single-channel or three-channel images.
  • classes: optional list of class subfolders, e.g. ['dogs', 'cats']; default None. If not provided, the class list is inferred automatically from the subfolder names/structure under directory; each subfolder is treated as a new class (classes are mapped to label indices in alphabetical order). The class_indices attribute gives the mapping from folder name to class index.
  • class_mode: one of "categorical", "binary", "sparse" or None; default "categorical". Determines the form of the returned label arrays: "categorical" returns 2D one-hot encoded labels, "binary" returns 1D binary labels, "sparse" returns 1D integer labels; if None, no labels are returned and the generator yields only batches of data, which is useful with functions such as model.predict_generator() and model.evaluate_generator().
  • batch_size: size of each batch of data, default 32.
  • shuffle: whether to shuffle the data, default True.
  • seed: optional random seed for shuffling and transformations.
  • save_to_dir: None or string. Lets you save the augmented images, for visualization.
  • save_prefix: string, prefix for the filenames of saved augmented images; only relevant when save_to_dir is set.
  • save_format: one of "png" or "jpeg"; format for saved images, default "jpeg".
  • follow_links: whether to follow symlinks inside the class subfolders.
  • Example: transforming images and masks together
    data_gen_args = dict(featurewise_center=True,
                         featurewise_std_normalization=True,
                         rotation_range=90.,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         zoom_range=0.2)
    image_datagen = ImageDataGenerator(**data_gen_args)
    mask_datagen = ImageDataGenerator(**data_gen_args)
    
    # Provide the same seed and keyword arguments to the fit and flow methods
    seed = 1
    image_datagen.fit(images, augment=True, seed=seed)
    mask_datagen.fit(masks, augment=True, seed=seed)
    
    image_generator = image_datagen.flow_from_directory(
        'data/images',
        class_mode=None,
        seed=seed)
    
    mask_generator = mask_datagen.flow_from_directory(
        'data/masks',
        class_mode=None,
        seed=seed)
    
    # combine generators into one which yields image and masks
    train_generator = zip(image_generator, mask_generator)
    
    model.fit_generator(
        train_generator,
        steps_per_epoch=2000,
        epochs=50)
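The zip(image_generator, mask_generator) trick works because both iterators are infinite and share the same seed, so the i-th image batch stays aligned with the i-th mask batch. The pairing itself is plain Python, as this minimal sketch with stand-in generators shows:

```python
import itertools

def fake_batches(tag):
    # Stand-in for a Keras directory iterator: yields "batches" forever.
    i = 0
    while True:
        yield f"{tag}-batch-{i}"
        i += 1

# zip pairs the i-th image batch with the i-th mask batch.
train_generator = zip(fake_batches("img"), fake_batches("mask"))
print(list(itertools.islice(train_generator, 2)))
# [('img-batch-0', 'mask-batch-0'), ('img-batch-1', 'mask-batch-1')]
```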

    Reference link:

https://keras-cn.readthedocs.io/en/latest/preprocessing/image/


Origin www.cnblogs.com/hujinzhou/p/12368926.html