google-inceptionV4 training and deployment process

1. Source download and dependency installation

(1) Install git (search online for git installation instructions).

(2) Download the TensorFlow-based training framework, which encapsulates the google-inceptionV4 image classification algorithm along with several others. Download it with the following command:

git clone https://github.com/MachineLP/train_arch

2. Parameter configuration, installation, and demo verification

(1) Enter the train_arch/train_cnn_v1 directory; the command is as follows:

cd train_arch/train_cnn_v1

In this directory, the test subdirectory is used for model testing; all other directories and files are used for model training.

The following is a brief description of the role of the various directories and files:

gender: the directory containing the training set images. This is an example training set; to train your own model, simply create your own directory and store images of different classes in separate subdirectories.

lib: the directory of core libraries that the framework depends on.

1) model: the network model modules.

2) data_aug: the image augmentation module, containing two augmentation methods.

3) grad_cam: the visualization module.

4) data_load: the training-data loading module.

5) train: the model training module.

6) utils: the utilities module.

7) loss: the loss function module.

8) optimizer: the optimizer module.

model: the directory where models are saved during training.

pretrain: the directory storing pre-trained models for transfer learning.

config.py: the script for configuring model training parameters.

main.py: the script that starts training (run with python main.py).

vis_cam.py: the visualization script (run with python vis_cam.py).

ckpt_pb.py: converts the ckpt model files generated by training into the pb format supported by TensorFlow.

test: the directory for model testing, including scripts for single-image and batch testing.

(2) Run the training script to verify that a classification model can be generated

In config.py, the model parameters are already configured for the images in the gender classification directory, so you can train the model directly by executing:

python3 main.py

After 2,000 iterations, you can see the model generated in the model directory.

3. Preparing the training data and configuring training parameters

(1) Prepare the dataset:

Create a new directory to serve as the root of the training set. Inside it, create one subdirectory per image category (there must be at least two categories), naming each subdirectory after its category. Then place images of the same category into the corresponding subdirectory.
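The layout described above can be sketched with a short script. The root directory name and the class names here (cat, dog) are hypothetical examples, not part of the framework:

```python
import os

# Hypothetical training-set root and class names; substitute your own.
sample_dir = "my_dataset"
classes = ["cat", "dog"]  # at least two categories are required

for cls in classes:
    # One subdirectory per category; the directory name is the class label.
    os.makedirs(os.path.join(sample_dir, cls), exist_ok=True)

# Images of each class then go into the matching subdirectory, e.g.:
#   my_dataset/cat/001.jpg
#   my_dataset/dog/002.jpg
```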

(2) Modify the configuration file

The config.py script contains the parameters that need to be configured for google-inceptionV4 training. These include:

# Root directory of the training set

sample_dir = "gender"

# Number of classes to be classified

num_classes = 4

# Training mini-batch size

batch_size = 2

# Select the model to use; depending on your preference you can also select arch_multi_alexnet_v2 or arch_multi_vgg16_conv

arch_model="arch_inception_v4"

# Network layers selected for training (excluded when restoring the pre-trained checkpoint)

checkpoint_exclude_scopes = "Logits_out"

# Dropout probability

dropout_prob = 0.8

# Proportion of the samples used for training

train_rate = 0.9

# Number of epochs (full passes over the training set)

epoch = 2000

# Whether to stop training early

early_stop = True

EARLY_STOP_PATIENCE = 1000

# Whether to use learning rate decay

learning_r_decay = True

learning_rate_base = 0.0001

decay_rate = 0.95

height, width = 299, 299

# Path where trained models are saved

train_dir = 'model'

# Whether to fine-tune

fine_tune = False

# Whether to train the parameters of all layers

train_all_layers = True

# Pre-trained model parameters for transfer learning

checkpoint_path = 'pretrain/inception_v4/inception_v4_2016_09_09.ckpt'

Note that arch_model and checkpoint_path must correspond: if arch_model is set to arch_inception_v4, then checkpoint_path must point to inception_v4 pre-trained model parameters, and cannot use inception_resnet_v2 pre-trained model parameters.

If model training stops halfway and you want to resume it, you need to modify checkpoint_path as follows:

Move the latest generated model files from the model directory to a new directory. For example, suppose the latest generated model files are:

model.ckpt-808.index

model.ckpt-808.data-00000-of-00001

Note that the model files generated at each checkpoint come in pairs, named like the two files above. Move these two files to a new directory model_continue, then modify checkpoint_path = 'model_continue/model.ckpt-808'.
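For the example step number 808 above, the move can be sketched as follows (a minimal sketch; in practice the checkpoint pair already exists in the model directory, and is only created here so the script runs standalone):

```python
import os
import shutil

step = 808  # step number of the most recent checkpoint (example value)
src_dir = "model"
dst_dir = "model_continue"

# In practice these files already exist in src_dir; they are created
# here only so the sketch is self-contained and runnable.
os.makedirs(src_dir, exist_ok=True)
for suffix in ("index", "data-00000-of-00001"):
    open(os.path.join(src_dir, "model.ckpt-%d.%s" % (step, suffix)), "w").close()

os.makedirs(dst_dir, exist_ok=True)

# Checkpoint files come in pairs: .index and .data-00000-of-00001.
for suffix in ("index", "data-00000-of-00001"):
    fname = "model.ckpt-%d.%s" % (step, suffix)
    shutil.move(os.path.join(src_dir, fname), os.path.join(dst_dir, fname))

# config.py is then updated to point at the moved pair:
#   checkpoint_path = 'model_continue/model.ckpt-808'
```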

4. Model training and testing

(1) Model training:

Training here uses transfer learning, so you must first download the inception_v4 pre-trained model parameters inception_v4_2016_09_09.ckpt (that is, starting from a model pre-trained by others, you train on your own data set, using the already-trained network parameters to initialize your own network before training it). The download address is:

http://download.tensorflow.org/models/inception_v4_2016_09_09.tar.gz

After the download completes, extract the archive and move the pre-trained model into the pretrain directory, as follows:

tar -zxvf inception_v4_2016_09_09.tar.gz

cd pretrain

mkdir inception_v4

mv ../inception_v4_2016_09_09.ckpt inception_v4

Start training with the following command:

python3 main.py

During training, a model is generated at each checkpoint stage in the model directory. In the end, only the models from the last few checkpoints are kept.

(2) Model format conversion

The model produced by training cannot be used directly; it must first be converted to the pb format supported by TensorFlow. Use ckpt_pb.py to perform the conversion, with the following command:

python3 ckpt_pb.py

After the script finishes, frozen_model.pb is generated in the model directory; this is the model file in the converted format. The file name can be modified in ckpt_pb.py.

If you change the model file name, the file name must be changed accordingly wherever it appears in the script.
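For reference, the core of such a ckpt-to-pb conversion in TensorFlow 1.x typically looks like the sketch below. This is not the actual contents of ckpt_pb.py, only an illustration of the mechanism; the checkpoint path and output node name in the usage comment are hypothetical and must match your own graph:

```python
def freeze_checkpoint_to_pb(ckpt_path, pb_path, output_node_names):
    """Restore a ckpt and write a frozen GraphDef (.pb) file.

    Sketch of the standard TF 1.x freezing procedure; the real
    ckpt_pb.py in the repository may differ in detail.
    """
    import tensorflow as tf  # TF 1.x API assumed

    with tf.Session() as sess:
        # Recreate the graph from the checkpoint's .meta file.
        saver = tf.train.import_meta_graph(ckpt_path + ".meta")
        saver.restore(sess, ckpt_path)
        # Replace variables with constants so the graph is self-contained.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile(pb_path, "wb") as f:
        f.write(frozen.SerializeToString())

# Example usage (hypothetical paths and output node name):
#   freeze_checkpoint_to_pb("model/model.ckpt-808",
#                           "model/frozen_model.pb",
#                           ["Logits_out/predictions"])
```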

(3) model test

Select a test image test.png (belonging to one of the categories in the training set) to test the model; the command is as follows:

python3 predict.py -m model/frozen_model.pb test.png

Here, -m model/frozen_model.pb specifies the model path, and the final argument is the path of the image to be tested.

You will then see the model's prediction for the image on the command line.
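Internally, a prediction script like predict.py loads the frozen graph and feeds the preprocessed image through it. Below is a minimal sketch of that mechanism (TF 1.x assumed; the input/output tensor names are hypothetical and must match the actual graph, and the real predict.py may preprocess differently):

```python
def predict_image(pb_path, image_path,
                  input_name="inputs:0", output_name="predictions:0"):
    """Load a frozen .pb graph and classify one image.

    Sketch only; tensor names and preprocessing are assumptions.
    """
    import numpy as np
    import tensorflow as tf  # TF 1.x API assumed
    from PIL import Image

    # Load the serialized GraphDef.
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Resize to the training input size (height, width = 299, 299
    # in config.py) and scale pixels to [0, 1].
    img = Image.open(image_path).convert("RGB").resize((299, 299))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
        with tf.Session(graph=graph) as sess:
            probs = sess.run(
                graph.get_tensor_by_name(output_name),
                feed_dict={graph.get_tensor_by_name(input_name): batch})
    return int(np.argmax(probs))  # index of the predicted class
```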


Origin www.cnblogs.com/shenggang/p/12144887.html