Model creation for CNN face recognition

1. Create a model

  • Import packages
import numpy as np
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, Flatten
from keras.optimizers import SGD, Adam
from keras.regularizers import l2
from keras import optimizers
import cv2
import os
import random
from sklearn.model_selection import train_test_split
  • Get the pictures
    Each person has 500 face pictures (a larger set makes the model more robust). The os module is used to walk the image folders and append every picture to the images list and its label to the labels list, so the two stay in one-to-one correspondence. The pixel values are then normalised and the data is shuffled; the shuffle must keep images and labels paired, otherwise every label would be wrong. Finally the data is split into training and test sets so the model can be evaluated.
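The post does not show how the 500 pictures per person were collected. Below is a minimal sketch of one way to do it with OpenCV's bundled Haar cascade; the output folder './pic/1/', the webcam index and the 100×100 crop size are assumptions, not part of the original code.

import cv2
import os

def capture_faces(out_dir='./pic/1/', target=500):
    os.makedirs(out_dir, exist_ok=True)
    # OpenCV ships this cascade file; it detects frontal faces in each webcam frame
    detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    cap = cv2.VideoCapture(0)  # default webcam (assumed)
    count = 0
    while count < target:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(frame[y:y+h, x:x+w], (100, 100))  # same size the model expects
            cv2.imwrite(os.path.join(out_dir, '%d.jpg' % count), face)
            count += 1
    cap.release()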
# Read the training data into memory; at this stage the data structure is a plain Python list

# path_name is the folder that holds one sub-folder of face pictures per person
def read_path(path_name):
    images = []
    labels = []

    for dir_item in os.listdir(path_name): # os.listdir() returns the names of the files/folders inside the given folder
        # each sub-folder under path_name holds one person's training pictures
        full_path = os.listdir(path_name+dir_item)
        print(dir_item)
        for file in full_path:
            path = path_name+dir_item+'/'+file
            img = cv2.imread(path)
            img = cv2.resize(img, (100, 100))  # make every image match the model input shape (100, 100, 3)
            images.append(img)
            labels.append(path_name+dir_item)  # the label is the sub-folder path; it is mapped to 0/1/2 below

    return images,labels

images,labels = read_path('./pic/')
# convert the lists to numpy arrays and scale pixel values into [0, 1)
images = np.asarray(images, dtype='float64')/256
labels = np.asarray([0 if label.endswith('1') else 1 if label.endswith('2') else 2 for label in labels])
print(labels)
# shuffle images and labels with the same index permutation so the pairing stays intact
index = [i for i in range(len(images))]
random.shuffle(index)
data = images[index]
label = labels[index]
# one-hot encode the labels for the 3 classes
label = np_utils.to_categorical(label, num_classes=3)

X_train, X_test, Y_train, Y_test = train_test_split(data, label, test_size=0.30, random_state=42)
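A quick optional sanity check (not in the original code) to confirm the shuffle kept images and labels aligned and that the split shapes look right; it assumes the variables defined above and 3 people with 500 pictures each:

print(X_train.shape, Y_train.shape)   # e.g. (1050, 100, 100, 3) and (1050, 3) for 1500 images split 70/30
print(X_test.shape, Y_test.shape)     # e.g. (450, 100, 100, 3) and (450, 3)
print('class counts in the training set:', Y_train.sum(axis=0))  # the one-hot columns sum to per-class counts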
2.3, create a model

This uses a classic five-layer model; its parameters are listed below.

# Build an empty network model. It is a linear stack: the layers are added one after another,
# which is why it is formally called a Sequential (linearly stacked) model.
# model = Sequential()
# # The following code adds the layers the CNN needs in order; each add() call is one layer
# model.add(Conv2D(
#     input_shape= (47,57,3),
#     filters = 32,
#     kernel_size = 5,
#     strides = 1,
#     padding = 'same',
#     activation = 'relu'
# ))
model=Sequential()
model.add(Conv2D(filters=36, kernel_size=5, padding='valid',kernel_regularizer=l2(0.003), input_shape=(100,100,3), activation='relu'))
model.add(Dropout(0.2))
model.add(MaxPooling2D(pool_size=(2,2)))
 
model.add(Conv2D(filters=16, kernel_size=(5,5), padding='valid', activation='relu'))
 
model.add(MaxPooling2D(pool_size=(2,2)))
 

model.add(Flatten())
 
# the fully connected layers follow
model.add(Dense(520, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(128, activation='relu'))
 
model.add(Dense(3, activation='softmax'))
#compile model
 
# In practice, cross entropy works better as the loss function for classification problems
model.compile(
    loss='categorical_crossentropy',
    optimizer=optimizers.Adadelta(lr=0.01, rho=0.95, epsilon=1e-06),
    metrics=['accuracy']
)
# print a summary of the model
model.summary()

(Figure: output of model.summary(), listing each layer's output shape and parameter count.)
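As a cross-check on that summary, the layer shapes and parameter counts can be worked out by hand from the code above:

  • Conv2D(36, 5×5, valid) on the 100×100×3 input gives a 96×96×36 output and 36 × (5 × 5 × 3 + 1) = 2,736 parameters; after 2×2 max-pooling the feature map is 48×48×36.
  • Conv2D(16, 5×5, valid) gives 44×44×16 and 16 × (5 × 5 × 36 + 1) = 14,416 parameters; after pooling it is 22×22×16, which flattens to 7,744 values.
  • Dense(520) has 7,744 × 520 + 520 = 4,027,400 parameters, Dense(128) has 66,688, and the final Dense(3) softmax layer has 387.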

2.4, configure the model

The model uses the cross-entropy loss function. The fully connected layers before the output use the 'relu' activation function, and the last layer uses 'softmax'.

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)  # create an SGD + momentum optimizer object (defined here, but the compile call below actually uses Adadelta)
model.compile(loss='categorical_crossentropy',
                    optimizer=optimizers.Adadelta(lr=0.01, rho=0.95, epsilon=1e-06),
                    metrics=['accuracy'])  # complete the actual model configuration
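For intuition, categorical cross-entropy on a single one-hot-labelled sample is just the negative log of the probability the network assigns to the true class. A tiny illustrative calculation (the probabilities are made-up numbers, not model output):

import numpy as np

y_true = np.array([0., 1., 0.])          # one-hot label: the sample belongs to class 1
y_pred = np.array([0.1, 0.7, 0.2])       # hypothetical softmax output
loss = -np.sum(y_true * np.log(y_pred))  # = -log(0.7) ≈ 0.357
print(loss)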
2.5, train the model

Training is then run and the model is saved as 'my_model.h5'.

model.fit(X_train, Y_train, epochs = 10,batch_size=128)  # train the network
loss_, accuracy_ = model.evaluate(X_test,Y_test)    # performance on the test split
loss, accuracy = model.evaluate(X_train,Y_train)    # performance on the training split
result = model.predict(X_test)
print(loss_)
print(accuracy_)
print(loss)
print(accuracy)
model.save('my_model.h5')  # save the trained model to disk
#model.load_weights('my_model.h5')

However, this only creates and trains the model. To actually perform face recognition you still need to write code that reads an image (or a camera frame), feeds it to the model and interprets the prediction; a minimal sketch of that step is given below, and the full recognition script will be covered in a later update.
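As a preview (not from the original post), here is a minimal sketch of loading 'my_model.h5' and classifying one image. The file name 'test.jpg' is a placeholder, and the preprocessing simply mirrors what was done for the training data:

import cv2
import numpy as np
from keras.models import load_model

model = load_model('my_model.h5')                   # the model saved above
img = cv2.imread('test.jpg')                        # placeholder path to an image to classify
img = cv2.resize(img, (100, 100))                   # same size as the training images
img = np.asarray(img, dtype='float64') / 256        # same normalisation as training
probs = model.predict(img.reshape(1, 100, 100, 3))  # batch of one
print('predicted class:', np.argmax(probs[0]))      # 0, 1 or 2 - one index per person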

Origin blog.csdn.net/qq_45125250/article/details/107035013