Gesture recognition + face recognition + pose estimation (keypoint detection + tutorial + code)

Gesture recognition and gesture keypoint detection are important research directions in computer vision. They involve detecting the position and pose of human hands in images or videos and inferring the meaning of the gestures. The following methods and techniques are commonly used:

Gesture Recognition

  1. Deep learning-based gesture recognition
    Deep learning-based gesture recognition is currently one of the most popular approaches. A deep learning model such as a convolutional neural network (CNN) or recurrent neural network (RNN) is trained to learn feature representations and patterns of gestures.

In the training phase, large-scale gesture datasets (such as MSR Action3D or NTU RGB+D) are used to train the deep learning model. At test time, an image or video is fed into the trained model, and the meaning of the gesture is inferred from the model's output.
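The last step above, turning the model's raw output into a gesture label, can be sketched as follows. This is a minimal illustration in numpy, not the article's code: the gesture vocabulary and the logit values are made up, and in practice the logits would come from the final layer of a trained CNN or RNN.

```python
import numpy as np

# Hypothetical gesture vocabulary (not from the article).
GESTURES = ["fist", "open_palm", "thumbs_up", "ok_sign"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model outputs (logits) into class probabilities."""
    shifted = logits - logits.max()   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

def infer_gesture(logits: np.ndarray):
    """Map the final-layer output of a trained model to a gesture label."""
    probs = softmax(logits)
    idx = int(np.argmax(probs))
    return GESTURES[idx], float(probs[idx])

# Example: logits as they might come from a CNN's final layer.
label, confidence = infer_gesture(np.array([0.2, 2.5, 0.1, -1.0]))
print(label)
```

Here the second logit dominates, so the inferred gesture is `open_palm` with roughly 0.82 probability.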

  2. Gesture recognition based on traditional machine learning
    Besides deep learning, traditional machine learning methods can also be used for gesture recognition. These methods typically use hand-designed feature extractors (such as HOG or SIFT) to extract gesture features, and then feed the features into a classifier (such as an SVM or a random forest) for classification.
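The feature-then-classifier pipeline can be sketched with a deliberately simplified HOG-style feature and a linear decision function. This is an illustration under stated assumptions, not the article's implementation: real HOG divides the image into cells with block normalization (as in OpenCV or scikit-image), and the classifier weights here are hand-set rather than learned by an SVM.

```python
import numpy as np

def orientation_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Simplified HOG-style feature: a magnitude-weighted histogram of
    gradient orientations over the whole image (real HOG uses cells and
    block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def linear_classify(feature: np.ndarray, w: np.ndarray, b: float) -> int:
    """Linear decision function, as a trained linear SVM would apply it."""
    return int(feature @ w + b > 0)

# Two synthetic "gestures": vertical stripes vs horizontal stripes.
v = np.tile([0.0, 1.0], (8, 4))   # vertical stripes -> horizontal gradients
h = v.T                           # horizontal stripes -> vertical gradients
fv, fh = orientation_histogram(v), orientation_histogram(h)
w = fv - fh                       # hand-set weights; an SVM would learn these
print(linear_classify(fv, w, 0.0), linear_classify(fh, w, 0.0))
```

The two stripe patterns concentrate their gradient energy in different orientation bins, so even this crude feature separates them with a linear boundary.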

  3. Gesture recognition based on keypoint detection
    Gesture recognition often begins with hand keypoint detection, that is, locating the position and pose of the hand in an image or video. This can be achieved with deep learning models such as OpenPose or HandNet.

Detecting key points in gestures
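Once keypoints are available, a gesture can be inferred from their geometry. The sketch below assumes the common 21-point hand layout shared by OpenPose and MediaPipe Hands (index 0 = wrist, tips at 4, 8, 12, 16, 20); the extended-finger heuristic and the label names are illustrative assumptions, not the article's method.

```python
# 21-point hand layout (OpenPose / MediaPipe convention assumed):
# tip and PIP-joint indices for the four non-thumb fingers. The thumb is
# excluded because its extension is judged along x, not y.
FINGER_TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_PIPS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def count_extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) points in image coordinates, where y
    grows downward. A finger counts as extended when its tip lies above
    its PIP joint (a simple heuristic)."""
    count = 0
    for name in FINGER_TIPS:
        tip_y = landmarks[FINGER_TIPS[name]][1]
        pip_y = landmarks[FINGER_PIPS[name]][1]
        if tip_y < pip_y:   # tip above joint -> finger extended
            count += 1
    return count

def classify_gesture(landmarks):
    n = count_extended_fingers(landmarks)
    if n == 0:
        return "fist"
    if n == 4:
        return "open_palm"
    return f"{n}_fingers"
```

For example, synthetic landmarks with all four fingertips above their PIP joints classify as `open_palm`, and with all tips below as `fist`.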


Source: blog.csdn.net/ALiLiLiYa/article/details/135433545