Human key point detection 1: Human pose estimation data set
Table of contents

1. Human body pose estimation
2. Human pose estimation data set
   - (1) COCO data set
   - (2) MPII data set
   - (3) Human3.6M
   - (4) Key point schematic diagrams

1. Human body pose estimation
Human keypoint detection, also known as 2D human pose estimation, is a fundamental task in computer vision and a prerequisite for human action recognition, behavior analysis, human-computer interaction, and more. Broadly, human keypoint detection can be subdivided into single-person vs. multi-person detection and 2D vs. 3D detection. Some algorithms also track keypoints across frames after detecting them, which is known as human pose tracking.
This article is part of the "Human Key Point Detection (Human Pose Estimation)" project series, covering human pose estimation data sets; it mainly introduces the COCO data set and the MPII data set.
[Respect the original work; please indicate the source when reprinting] https://blog.csdn.net/guyuealian/article/details/134703548
For more articles in the "Human Key Point Detection (Human Pose Estimation)" series, see:
- Human key point detection 1: Human pose estimation data set (including download links) https://blog.csdn.net/guyuealian/article/details/134703548
- Human key point detection 2: Pytorch implements human key point detection (human pose estimation) with training code and data set https://blog.csdn.net/guyuealian/article/details/134837816
- Human key point detection 3: Android implements human key point detection (human pose estimation) with source code for real-time detection https://blog.csdn.net/guyuealian/article/details/134881797
- Human key point detection 4: C/C++ implements human key point detection (human pose estimation) with source code for real-time detection https://blog.csdn.net/guyuealian/article/details/134881797
- Hand key point detection 1: Hand key point (hand pose estimation) data set (including download link) https://blog.csdn.net/guyuealian/article/details/133277630
- Hand key point detection 2: YOLOv5 implements hand detection (including training code and data set) https://blog.csdn.net/guyuealian/article/details/133279222
- Hand key point detection 3: Pytorch implements hand key point detection (hand pose estimation) with training code and data set https://blog.csdn.net/guyuealian/article/details/133277726
- Hand key point detection 4: Android implements hand key point detection (hand pose estimation) with source code for real-time detection https://blog.csdn.net/guyuealian/article/details/133931698
- Hand key point detection 5: C++ implements hand key point detection (hand pose estimation) with source code for real-time detection https://blog.csdn.net/guyuealian/article/details/133277748
2. Human pose estimation data set
(1) COCO data set
Download address: https://cocodataset.org/#download
COCO annotates up to 17 full-body keypoints per person, with an average of 2 people per image and a maximum of 13. Looking at the distribution of the number of labeled keypoints per person, annotations with 11-15 keypoints are the most common (nearly 70,000 people), followed by 6-10 (over 40,000 people), then 16-17, 2-5, and 1.
The COCO data set is fairly large, so please be patient when downloading it.
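In COCO annotation files, each person's keypoints are stored as a flat list of 51 numbers: 17 triplets of (x, y, v), where v = 0 means not labeled, v = 1 labeled but not visible, and v = 2 labeled and visible. A minimal sketch of parsing such an annotation (the `ann` dict below is a hypothetical example in the real COCO format, not taken from the data set):

```python
# Hypothetical COCO person-keypoints annotation: 15 unlabeled keypoints
# followed by a visible wrist-like point and an occluded one.
ann = {
    "num_keypoints": 2,
    "keypoints": [0, 0, 0] * 15 + [120, 200, 2] + [150, 200, 1],
}

def parse_keypoints(ann):
    """Split the flat 51-number list into 17 (x, y, v) triplets and
    return them together with the subset that is actually labeled."""
    kps = ann["keypoints"]
    triplets = [tuple(kps[i:i + 3]) for i in range(0, len(kps), 3)]
    labeled = [t for t in triplets if t[2] > 0]
    return triplets, labeled

triplets, labeled = parse_keypoints(ann)
print(len(triplets))  # 17 keypoints per person
print(len(labeled))   # 2, matching num_keypoints
```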
Data set | Download link
---|---
2017 Train images | http://images.cocodataset.org/zips/train2017.zip
2017 Val images | http://images.cocodataset.org/zips/val2017.zip
2017 Test images | http://images.cocodataset.org/zips/test2017.zip
2017 Train/Val annotations | http://images.cocodataset.org/annotations/annotations_trainval2017.zip
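Assuming `wget` is available, the archives in the table can be fetched with a short script. This is a dry-run sketch: remove the leading `echo` to actually download (the archives total tens of GB):

```shell
# Dry-run download helper for the COCO 2017 archives listed above.
# Remove the leading "echo" to actually download.
BASE=http://images.cocodataset.org
for f in zips/train2017.zip zips/val2017.zip zips/test2017.zip \
         annotations/annotations_trainval2017.zip; do
  echo wget -c "$BASE/$f"
done
```

The `-c` flag lets `wget` resume an interrupted download, which is useful for archives this large.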
(2) MPII data set
Download address: http://human-pose.mpi-inf.mpg.de/#download
MPII annotates 16 full-body keypoints per person, along with a visibility flag for each. It contains 28,821 people in the training set and 11,701 in the test set, covering 409 kinds of human activities. Annotations are provided as a MATLAB .mat struct. Each person box is annotated with a center and a scale, where the scale is normalized so that a person about 200 pixels tall has scale 1 (i.e., person height divided by 200).
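Given the center/scale convention above, a person's bounding box can be recovered by multiplying the scale back by 200. The sketch below assumes a square box centered at `center` with side `scale * 200`, which is a common convention in MPII training pipelines (an assumption for illustration, not part of the official annotation spec):

```python
# Sketch: recover an approximate person bounding box from MPII's
# center/scale annotation, assuming scale * 200 gives the person
# height in pixels and the box is square.
def mpii_center_scale_to_bbox(center, scale, pixel_std=200.0):
    """center: (x, y) of the person; scale: float.
    Returns (x1, y1, x2, y2)."""
    half = scale * pixel_std / 2.0
    cx, cy = center
    return (cx - half, cy - half, cx + half, cy + half)

bbox = mpii_center_scale_to_bbox((300.0, 400.0), 1.5)
print(bbox)  # (150.0, 250.0, 450.0, 550.0) -- a 300x300 box
```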
(3) Human3.6M
Download address: Human3.6M Dataset
Human3.6M is a large public data set for 3D human pose estimation research. Many state-of-the-art algorithms and models (see paperswithcode) are benchmarked on it, and it is currently the most important data set for multi-view 3D human pose research.
(4) Key point schematic diagrams
Each data set defines its own keypoint numbering, flip pairs, and skeleton connections (the schematic diagram images from the original are not reproduced here).

COCO (17 keypoints):

```python
# Pairs of keypoints swapped when the image is flipped left/right
# (used for data augmentation during training)
flip_pairs = [[1, 2], [3, 4], [5, 6], [7, 8],
              [9, 10], [11, 12], [13, 14], [15, 16]]
# Keypoint connections (used for drawing the skeleton)
skeleton = [[15, 13], [13, 11], [16, 14], [14, 12], [11, 12],
            [5, 11], [6, 12], [5, 6], [5, 7], [6, 8], [7, 9],
            [8, 10], [0, 1], [0, 2], [1, 3], [2, 4]]
# Meaning of each keypoint index on the human body
keypoints = {
    0: "nose", 1: "left_eye", 2: "right_eye", 3: "left_ear", 4: "right_ear",
    5: "left_shoulder", 6: "right_shoulder", 7: "left_elbow", 8: "right_elbow",
    9: "left_wrist", 10: "right_wrist", 11: "left_hip", 12: "right_hip",
    13: "left_knee", 14: "right_knee", 15: "left_ankle", 16: "right_ankle",
}
```

MPII (16 keypoints):

```python
# Pairs of keypoints swapped when the image is flipped left/right
# (the flip_pairs list itself is not given in the original)
# Keypoint connections (used for drawing the skeleton)
skeleton = [[0, 1], [1, 2], [3, 4], [4, 5], [2, 6], [6, 3],
            [12, 11], [7, 12], [11, 10], [13, 14], [14, 15],
            [8, 9], [8, 7], [6, 7], [7, 13]]
# The meaning of each keypoint index is not listed in the original
```

Schematic diagrams for the Human3.6M and Kinect keypoint layouts appeared here in the original but are not reproduced.
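The `flip_pairs` list is what makes horizontal-flip augmentation correct: when the image is mirrored, left and right keypoints must swap roles and x coordinates must be reflected. A minimal sketch using the COCO pairs (the dummy keypoints below are illustrative, not real annotations):

```python
# Sketch: horizontal-flip augmentation for COCO-style keypoints.
# FLIP_PAIRS is the COCO flip_pairs list from above; keypoints are
# plain (x, y) tuples here for simplicity.
FLIP_PAIRS = [[1, 2], [3, 4], [5, 6], [7, 8],
              [9, 10], [11, 12], [13, 14], [15, 16]]

def flip_keypoints(kps, image_width, flip_pairs=FLIP_PAIRS):
    """Mirror (x, y) keypoints across the vertical axis, then swap
    left/right pairs so that e.g. left_eye still labels the left eye."""
    flipped = [(image_width - 1 - x, y) for (x, y) in kps]
    for a, b in flip_pairs:
        flipped[a], flipped[b] = flipped[b], flipped[a]
    return flipped

# 17 dummy keypoints where keypoint i sits at x = i * 10
kps = [(i * 10, 50) for i in range(17)]
out = flip_keypoints(kps, image_width=640)
print(out[1])  # slot 1 (left_eye) now holds the mirrored right_eye: (619, 50)
```

Without the pair swap, a flipped image would train the model to call a person's right eye their left eye, which is why the pairs are stored alongside the keypoint definitions.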