MediaPipe-based pose recognition and synchronization to a Unity human model

As the title states, this comes from a commercial project, so the source code cannot be disclosed; this post focuses on the ideas behind implementing the feature.
Human-body joint recognition is built on the MediaPipe Unity plug-in. Note that machines with relatively weak CPUs cannot run MediaPipe smoothly.
The diagram of MediaPipe's 33 human-body landmarks is as follows:
(figure: MediaPipe 33-landmark pose diagram)
Mapping MediaPipe joint points to Unity human bones
This is the difficult part of the feature: it involves mapping landmark positions to bone positions, and then driving the bones' movement from those positions. There are two common approaches:
1. Use a Unity IK plug-in such as Animation Rigging or Final IK.
2. Refer to ThreeDPoseUnityBarracuda and map the joint points yourself.
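For the second approach, the core is a lookup table from MediaPipe's 33 pose-landmark indices to the avatar's bones. Below is a minimal sketch in Python; the actual project would do this in C# against Unity's HumanBodyBones enum, and the subset of landmarks shown here (and which bone each one drives) is an illustrative assumption, not the project's exact table. The index numbers follow MediaPipe Pose's documented landmark order.

```python
# Map MediaPipe Pose landmark indices to Unity HumanBodyBones names.
# Indices follow the MediaPipe Pose landmark model (11/12 = shoulders,
# 23/24 = hips, ...); bone names follow Unity's HumanBodyBones enum.
LANDMARK_TO_BONE = {
    11: "LeftUpperArm",   # left shoulder anchors the left upper arm
    12: "RightUpperArm",  # right shoulder
    13: "LeftLowerArm",   # left elbow
    14: "RightLowerArm",  # right elbow
    15: "LeftHand",       # left wrist
    16: "RightHand",      # right wrist
    23: "LeftUpperLeg",   # left hip
    24: "RightUpperLeg",  # right hip
    25: "LeftLowerLeg",   # left knee
    26: "RightLowerLeg",  # right knee
}

def bone_for_landmark(index):
    """Return the mapped bone name, or None for landmarks that do not
    drive a bone directly (face, fingers, ...)."""
    return LANDMARK_TO_BONE.get(index)
```

In the C# version, the values would be `HumanBodyBones` enum members and the positions would be fed to the avatar's transforms each frame.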

What I use here is the second method, mainly referring to the VNectModel.cs script in the ThreeDPoseUnityBarracuda project. The position of the abdomenUpper joint has to be calculated manually from the positions MediaPipe passes in; the other joint positions can be taken directly from MediaPipe's output.
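MediaPipe does not output an abdomenUpper landmark, so it must be derived from nearby landmarks. A minimal sketch of one plausible derivation is below; the interpolation factor of 0.5 is an assumption for illustration, not necessarily the exact weighting used in VNectModel.cs.

```python
def midpoint(a, b):
    """Component-wise midpoint of two 3D points given as (x, y, z) tuples."""
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def abdomen_upper(left_hip, right_hip, left_shoulder, right_shoulder, t=0.5):
    """Approximate the abdomenUpper joint as the point a fraction t of the
    way from the hip center toward the shoulder center.
    t=0.5 is an assumed default, not the value taken from VNectModel.cs."""
    hip_center = midpoint(left_hip, right_hip)
    shoulder_center = midpoint(left_shoulder, right_shoulder)
    return tuple(h + t * (s - h) for h, s in zip(hip_center, shoulder_center))
```

In the Unity project the same computation would be a `Vector3.Lerp` between the averaged hip and shoulder landmark positions received from the plug-in.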

In addition, when building the project, remember to change the resource-loading method to StreamingAssets.

The final effect is as follows:

MediaPipe Pose Tracking

Origin: blog.csdn.net/weixin_41743629/article/details/132121370