Deep Learning in Practice: Facial Expression Recognition [Source Code + Model + PyQt5 Interface]


AI facial expression recognition


Research background and significance

  Facial expression recognition is an important research direction in computer vision: it infers a person's emotional state by analyzing the facial expression in an image. The technology has a broad research background and significant practical applications:

Research Background:

  1. Foundations in psychology: Human emotions are conveyed and understood through facial expressions. The basic emotion model proposed by researchers such as Paul Ekman comprises six basic expressions (happiness, anger, fear, sadness, surprise, and disgust), providing the theoretical basis for facial expression recognition research.

  2. Social interaction: In interactions between people, the transmission and understanding of emotions is very important. Facial expression recognition can be used to improve the interaction experience between computers and humans, for example, by identifying the user's emotional state to adaptively adjust the system's behavior.

  3. Entertainment and games: In entertainment and gaming, facial expression recognition can be used to create more lifelike virtual characters that respond to the player's emotional state, enhancing immersion.

Research Significance and Application:

  1. Emotional analysis: Facial expression recognition technology can be applied to emotional analysis to help analyze people's emotional states in specific situations. This is of great value in fields such as market research and advertising evaluation.

  2. Mental health: Facial expression recognition can assist the field of mental health, help identify signs of depression, anxiety and other emotional disorders, and provide reference for clinical diagnosis.

  3. User experience improvement: In human-computer interaction, by analyzing the user's facial expressions, the system can understand the user's emotional state in real time, thereby adjusting the interface design, recommended content, etc., to provide a better user experience.

  4. Virtual reality and augmented reality: In virtual reality and augmented reality applications, facial expression recognition can make virtual characters more realistically simulate real emotional expressions and improve immersion.

  5. Security and surveillance: Facial expression recognition technology can be applied in the security field to help detect people's emotional changes in surveillance images, thereby detecting potential threats or abnormal behaviors early.

In short, facial expression recognition is a comprehensive research and application field with both a deep theoretical foundation and a wide range of practical uses. It matters for improving human-computer interaction, advancing emotion analysis, and enhancing virtual reality experiences.


If you found this helpful, thank you for your likes, follows, and bookmarks! More useful content will be updated continuously...

Code download link

Follow the blogger's WeChat Official Account [ Little Bee Vision ] and reply [ Expression Recognition ] to get the download link

  If you want all of the complete program files involved in this post (test images, test videos, .py files, model weight files, debugging instructions, etc.), along with code access and technical guidance, see the blog post and video for details. All of the files involved are packaged together, and specific instructions for software installation and debugging are included; professional technicians can remotely assist customers with debugging. For details, see 安装调试说明.txt (installation and debugging instructions). A screenshot of the complete file set is shown below:

[Screenshot of the packaged project files]

1. Effect demonstration

  The AI facial expression recognition system built in this article supports three input modes: image, video, and camera.

1.1 Image recognition

[Demo: image recognition result]

1.2 Video recognition

[Demo: video recognition result]

1.3 Camera recognition

[Demo: camera recognition result]

2. Technical principle

2.1 Overall technical process

  The overall pipeline of a facial expression recognition system can usually be divided into the following steps: face detection (localization), feature extraction, classifier construction, and emotion classification. A brief description of each step follows, with a minimal end-to-end sketch after the list:

  1. Face detection (localization): The goal of this step is to locate faces in the image. Commonly used methods include feature-based detectors (such as Haar or HOG features) and deep learning-based detectors (such as convolutional neural networks). Once a face is detected, the face region can be cropped for subsequent processing.

  2. Feature extraction: Expression-related features are extracted from the face image using image processing and computer vision techniques. Commonly used methods include local binary patterns (LBP), histograms of oriented gradients (HOG), and facial landmarks. These features capture the texture and structure of the face and help distinguish different expressions.

  3. Classifier construction: After feature extraction, a classifier is built to map the extracted features to expression categories. This article uses a VGG classifier: VGG is a classic convolutional neural network architecture well suited to image classification, and after training on face crops it serves as the expression classifier.

  4. Emotion classification: The trained classifier is then applied to new face images: each image is fed in, and the predicted emotion category is output. Each category corresponds to a specific facial expression, such as happiness, anger, or sadness.
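
  A minimal end-to-end sketch of this pipeline is shown below. The classifier file name model/emotion_model.hdf5 and the 48×48 grayscale input shape are assumptions (any FER2013-style Keras model would fit); the detector is OpenCV's stock Haar cascade.

import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ['angry', 'disgusted', 'fearful', 'happy', 'sad', 'surprised', 'neutral']
detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
model = load_model('model/emotion_model.hdf5')  # hypothetical trained weights

def recognize_expressions(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    # Step 1: face detection (localization)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Step 2: feature extraction -- here the normalized face crop itself is
        # the network input; the CNN learns the features
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype('float32') / 255.0
        # Steps 3-4: the trained classifier maps the crop to an emotion label
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results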

2.2 Seven common facial expressions

[Figure: the seven basic facial expressions]
  The following terms are commonly used in facial expression recognition to describe the emotion categories, each of which corresponds to distinct facial characteristics. A brief introduction to each category:

  1. Surprise: Surprise is a sudden, unexpected emotional experience, usually caused by something surprising. A surprised expression on a human face is usually characterized by wide eyes, raised eyebrows, and an open mouth.

  2. Fear: Fear is a reaction to a possible threat, danger, or unsafe situation. A fearful expression typically includes widened eyes, raised and drawn-together eyebrows, and a slightly open mouth.

  3. Disgust: Disgust is a strong aversion to something repulsive or offensive. A disgusted expression is usually characterized by a wrinkled nose, narrowed eyes, and a slightly curled mouth.

  4. Happy: Happiness is an emotional state of pleasure and joy. A happy expression usually includes eyes narrowed into a line and the mouth curved upward, possibly accompanied by laughter.

  5. Sadness: Sadness is an emotional experience caused by loss, sadness, or disappointment. Sad expressions on human faces usually include drooping eyes, downturned corners of the mouth, and the overall expression appears depressed.

  6. Angry: Anger is a strong emotional response to injustice, conflict, or harm. An angry expression may include furrowed eyebrows, pursed lips, and visible facial tension.

  7. Neutral: Neutral refers to a state without obvious emotional expression. In this case, the face displays a calm, unemotional expression.

2.3 Traditional face localization

# encoding:utf-8
import cv2
import numpy as np



# Read an image whose path may contain Chinese characters via numpy
def image_read_from_chinese_path(image_file_name):
    image_numpy_data = cv2.imdecode(np.fromfile(image_file_name, dtype=np.uint8), 1)
    # returns a numpy ndarray
    return image_numpy_data


# Before running, check that the cascade files exist at these paths
face_cascade = cv2.CascadeClassifier('model/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('model/haarcascade_eye.xml')

# Read the image
img = image_read_from_chinese_path('./images/test2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale


# Detect faces
faces = face_cascade.detectMultiScale(gray,
                            scaleFactor=1.1,
                            minNeighbors=5,
                            minSize=(100, 100),
                            flags=cv2.CASCADE_SCALE_IMAGE)


# Mark the detections
for (x, y, w, h) in faces:
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

    roi_gray = gray[y: y + h, x: x + w]
    roi_color = img[y: y + h, x: x + w]

    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 0, 255), 2)


label = f'OpenCV Haar Detected {len(faces)} faces'
cv2.putText(img, label, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 1)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

  If you are new and don't know how to set up the environment, you can refer to the blogger's [ Anaconda3 and PyCharm Installation and Configuration Nanny Tutorial ]

  The image_read_from_chinese_path function above works around the problem that cv2 cannot read images from paths containing Chinese characters. For details, refer to the blogger's article [ opencv-python[cv2] reads Chinese path images ]

2.4 Deep learning face localization

  The general pipeline of a deep learning face localization algorithm is: first preprocess the input image (scaling, cropping, etc.), then extract features with a convolutional neural network (CNN), then analyze those features with a regressor or classifier, and finally output the position and size of each face.
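
  As a concrete instance of this flow, here is a minimal sketch using OpenCV's DNN module with the widely used ResNet-10 SSD face detector; the deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel files are the standard ones distributed with OpenCV, and the paths are assumptions.

import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe('model/deploy.prototxt',
                               'model/res10_300x300_ssd_iter_140000.caffemodel')
img = cv2.imread('./images/test2.jpg')
h, w = img.shape[:2]
# preprocessing: resize to 300x300 and subtract the training-set channel means
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()  # CNN features -> per-box class score + regressed box
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:  # keep confident detections only
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        print(confidence, (x1, y1, x2, y2))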

2.4.1 MTCNN

  MTCNN (Multi-task Cascaded Convolutional Networks) is a multi-task cascaded convolutional network proposed by researchers at the Chinese Academy of Sciences. It performs face detection and facial landmark localization jointly, and it combines high accuracy and fast speed with the ability to handle faces at multiple scales.

  For a detailed explanation and code implementation, refer to the tutorial written by the blogger, MTCNN Face Detection Algorithm Implementation (Python).
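
  For a quick feel of the API, here is a minimal sketch using the third-party mtcnn package (pip install mtcnn); this is one implementation of the algorithm and not necessarily the one used in the tutorial.

import cv2
from mtcnn import MTCNN

detector = MTCNN()
img = cv2.cvtColor(cv2.imread('./images/test2.jpg'), cv2.COLOR_BGR2RGB)  # MTCNN expects RGB
for face in detector.detect_faces(img):
    x, y, w, h = face['box']          # bounding box
    keypoints = face['keypoints']     # eyes, nose, and mouth corners
    print(face['confidence'], (x, y, w, h), keypoints)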

2.4.2 RetinaFace

  RetinaFace is a highly accurate face detection and landmark localization algorithm. It uses deformable convolutions to achieve more precise localization and is especially effective at localizing small-scale faces.

2.4.3 CenterFace

  CenterFace is a lightweight face detection and landmark localization algorithm proposed by Huawei. The model is only about 1.5 MB and runs in real time on mobile devices. CenterFace combines an Hourglass-style backbone with a Feature Pyramid Network (FPN) to achieve high-precision face localization.

2.4.4 BlazeFace

  BlazeFace is an extremely lightweight face detection algorithm proposed by Google. Its model is only about 2 MB and runs in real time on mobile devices. BlazeFace uses a streamlined SSD-style detector tailored to mobile GPUs, enabling very fast face localization.

2.4.5 YOLO

  YOLO is an end-to-end real-time object detection algorithm that detects and localizes multiple targets simultaneously. It divides the image into a grid and predicts object classes and bounding boxes for every cell in a single pass, so it is generally faster than region-proposal-based detectors.

2.4.6 SSD

  SSD is a single-stage object detection algorithm based on convolutional neural networks that detects multiple targets in one forward pass. Compared with region-proposal-based detectors such as Faster R-CNN, SSD is simpler and more efficient.

2.4.7 CascadeCNN

  CascadeCNN is a cascaded convolutional neural network proposed by Microsoft Research Asia that significantly reduces model size and computational cost without sacrificing performance. It is composed of multiple cascade stages, each containing convolution and pooling layers, which together improve the accuracy and stability of face localization.

2.5 Facial expression classification

2.5.1 Introduction to RAF-DB dataset

  The full name of RAF-DB is Real-world Affective Faces, which is a large-scale facial expression dataset. The dataset consists of 29,672 diverse facial images annotated with basic or compound expressions by 40 annotators.

  In addition, each image also includes 5 precise landmark locations, 37 automatic landmark locations, bounding box, race, age range and gender attribute annotations.

  The faces in this dataset vary greatly in age, gender, ethnicity, head pose, lighting conditions, occlusions (such as glasses, facial hair, or self-occlusion), and post-processing operations (such as filters and special effects).

2.5.2 FER2013 Dataset Introduction

  The full name of FER2013 is the Facial Expression Recognition 2013 dataset. It contains 35,887 grayscale face images of different expressions, each 48×48 pixels.

  The annotations fall into 7 classes: 0 = Angry, 1 = Disgust, 2 = Fear, 3 = Happy, 4 = Sad, 5 = Surprised, 6 = Neutral. Disgust has the fewest images, only about 600, while each of the other classes has close to 5,000 samples.
  FER2013 is distributed as a CSV file by default; the following Python script converts the CSV into PNG images:

import numpy as np
import pandas as pd
from PIL import Image
from tqdm import tqdm
import os

# FER2013 label order -> folder name
emotions = {0: 'angry', 1: 'disgusted', 2: 'fearful', 3: 'happy',
            4: 'sad', 5: 'surprised', 6: 'neutral'}

# make the output folders
for outer_name in ['train', 'test']:
    for inner_name in emotions.values():
        os.makedirs(os.path.join('data', outer_name, inner_name), exist_ok=True)

# per-split, per-category counters used to number the saved images
counts = {'train': {name: 0 for name in emotions.values()},
          'test': {name: 0 for name in emotions.values()}}

df = pd.read_csv('./fer2013.csv')
print("Saving images...")

# read the csv file row by row
for i in tqdm(range(len(df))):
    # each row stores a 48x48 image as space-separated pixel values
    pixels = np.asarray([int(p) for p in df['pixels'][i].split()],
                        dtype=np.uint8).reshape(48, 48)
    img = Image.fromarray(pixels)

    # the first 28,709 rows are the training split; the rest are test
    split = 'train' if i < 28709 else 'test'
    name = emotions[df['emotion'][i]]
    img.save(os.path.join('data', split, name, f'im{counts[split][name]}.png'))
    counts[split][name] += 1

print("Done!")

2.5.3 VGG-16 facial expression classification

  VGG-16 (Visual Geometry Group 16) is a deep convolutional neural network architecture developed by the Visual Geometry Group of the University of Oxford. VGG-16 achieved great success in the 2014 ImageNet image classification competition, laying an important foundation for deep learning in the field of image classification.

Network structure:
  VGG-16 consists of 13 convolutional layers and 3 fully connected layers; the "16" refers to its 16 weight layers in total. The defining characteristic of this architecture is that small 3×3 convolution kernels are stacked repeatedly to increase the depth of the network. This made VGG-16 deeper than earlier convolutional networks and allowed it to learn richer image features.

Convolutional layer settings:
  The convolutional layers of VGG-16 are organized into stages; each stage contains one or more convolutional layers followed by a pooling layer that reduces the size of the feature map. The final part of the network is three fully connected layers that map the convolutional features to class predictions.

Features:

  1. Small convolution kernels: VGG-16 uses small 3×3 kernels, which increase the depth of the network while keeping the parameter count in check and help capture features at different scales (see the quick check after this list).

  2. Relatively simple architecture: The architecture of VGG-16 is relatively simple, using only convolution and pooling layers and no complex network structure modules, which makes the understanding and implementation of the network relatively easy.

  3. Stacked convolutional layers: VGG-16 gives the network deeper layers by stacking convolutional layers multiple times, which helps learn more complex image features.
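
  A quick back-of-the-envelope check of the first point: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution but use fewer weights. The channel count C below is an arbitrary example, and biases are ignored.

C = 64
params_two_3x3 = 2 * (3 * 3 * C * C)    # two stacked 3x3 layers: 73,728
params_one_5x5 = 5 * 5 * C * C          # one 5x5 layer: 102,400
print(params_two_3x3 < params_one_5x5)  # True: same receptive field, ~28% fewer weights

  The full VGG-16 construction used in this project is shown below.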

from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense, Dropout,
                                     GlobalAveragePooling2D, GlobalMaxPooling2D)


def vgg16(input_shape, num_classes, weights_path=None, pooling='avg'):
    # build the VGG16 backbone
    model = Sequential()

    # Block 1
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', input_shape=input_shape))
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2'))
    # model.add(BatchNormalization(name='bn_1'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool'))

    # Block 2
    model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1'))
    model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2'))
    # model.add(BatchNormalization(name='bn_2'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool'))

    # Block 3
    model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1'))
    model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2'))
    model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3'))
    # model.add(BatchNormalization(name='bn_3'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool'))

    # Block 4
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1'))
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2'))
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3'))
    # model.add(BatchNormalization(name='bn_4'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool'))

    # Block 5
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
    model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
    # model.add(BatchNormalization(name='bn_5'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool'))

    if weights_path:
        model.load_weights(weights_path)

    out = model.get_layer('block5_pool').output

    if pooling is None:
        out = Flatten(name='flatten')(out)
        out = Dense(512, activation='relu', kernel_initializer='he_normal', name='fc')(out)
        out = Dropout(0.5)(out)
        # out = Dense(512, activation='relu', kernel_initializer='he_normal', name='fc2')(out)
        # out = Dropout(0.5)(out)
    elif pooling == 'avg':
        out = GlobalAveragePooling2D(name='global_avg_pool')(out)
    elif pooling == 'max':
        out = GlobalMaxPooling2D(name='global_max_pool')(out)

    out = Dense(num_classes, activation='softmax', kernel_initializer='he_normal', name='predict')(out)

    model = Model(model.input, out)

    return model
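
  A minimal usage sketch: the input shape and class count below are assumptions matching FER2013-style data (48×48 grayscale crops, 7 expression classes), not values fixed by the function itself.

model = vgg16(input_shape=(48, 48, 1), num_classes=7, pooling='avg')
model.summary()  # 5 conv blocks, global average pooling, 7-way softmax head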

2.5.4 Expression Classification Network Model Training

from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import CSVLogger, ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.contrib import lite
from nets.choose_net import choose_net
from utils.data_manager import DataManager
from utils.data_manager import split_raf_data
from utils.preprocessor import process_img
from utils.plot import plot_log, plot_emotion_matrix
from config.train_cfg import *
from .evaluate import evaluate

# data generator
data_generator = ImageDataGenerator(
    rotation_range=30, horizontal_flip=True,
    width_shift_range=0.1, height_shift_range=0.1,
    zoom_range=0.2, shear_range=0.1,
    channel_shift_range=0.5)
#  channel_shift_range=50,
emotion_model = choose_net(USE_EMOTION_MODEL, INPUT_SHAPE, EMOTION_NUM_CLS)
sgd = optimizers.SGD(lr=LEARNING_RATE, decay=LEARNING_RATE/BATCH_SIZE, momentum=0.9, nesterov=True)
emotion_model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

# callbacks
csv_logger = CSVLogger(EMOTION_LOG_NAME, append=False)
early_stop = EarlyStopping('val_loss', patience=PATIENCE)
reduce_lr = ReduceLROnPlateau('val_loss', factor=0.1, patience=int(PATIENCE/4), verbose=1)
# model_names = trained_models_path + '.{epoch:02d}-{val_acc:.2f}.hdf5'
model_checkpoint = ModelCheckpoint(EMOTION_MODEL_NAME, 'val_loss', verbose=1,
                                   save_weights_only=False, save_best_only=True)
callbacks = [model_checkpoint, csv_logger, reduce_lr, early_stop]

# loading dataset
data_loader = DataManager(USE_EMOTION_DATASET, image_size=INPUT_SHAPE[:2])
faces, emotions, usages = data_loader.get_data()
faces = process_img(faces)
num_samples, num_classes = emotions.shape
train_data, val_data = split_raf_data(faces, emotions, usages)
train_faces, train_emotions = train_data

# if os.path.exists(EMOTION_MODEL_NAME):
#     emotion_net = load_model(EMOTION_MODEL_NAME)

emotion_model.fit_generator(data_generator.flow(train_faces, train_emotions, BATCH_SIZE),
                            steps_per_epoch=len(train_faces) // BATCH_SIZE, epochs=EPOCHS,
                            verbose=1, callbacks=callbacks, validation_data=val_data)


if IS_CONVERT2TFLITE:
    converter = lite.TFLiteConverter.from_keras_model_file(EMOTION_MODEL_NAME)
    tflite_model = converter.convert()
    with open(TFLITE_NAME, "wb") as f:
        f.write(tflite_model)

truth, prediction, accuracy = \
        evaluate(USE_EMOTION_DATASET, INPUT_SHAPE, EMOTION_MODEL_NAME)
plot_log(EMOTION_LOG_NAME)
plot_emotion_matrix(USE_EMOTION_DATASET, USE_EMOTION_MODEL, truth, prediction, accuracy)

[Figures: training log curves and the emotion confusion matrix]

If you found this helpful, thank you for your likes, follows, and bookmarks! More useful content will be updated continuously...

Code download link

Follow the blogger's WeChat Official Account [ Little Bee Vision ] and reply [ Expression Recognition ] to get the download link

