(Project) Taijiquan Learning System Innovation Training: Week 2

This week's work summary

Speech recognition: Learned and configured the sphinx-ue4 plug-in. The basic voice commands are working; the next step is to configure a different command set for each scene. A rough sketch of per-scene command dispatch is shown below.
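
As a rough illustration of how per-scene commands might be organized (this is not the sphinx-ue4 API; the phrase strings and handlers below are hypothetical placeholders), a minimal sketch in C++:

```cpp
// Hypothetical sketch: mapping recognized phrases to per-scene commands.
// The phrase strings and handlers are placeholders, not the sphinx-ue4 API.
#include <functional>
#include <iostream>
#include <map>
#include <string>

using CommandTable = std::map<std::string, std::function<void()>>;

// Each scene gets its own table of voice commands.
CommandTable MakeLearningModeCommands() {
    return {
        {"start",  [] { std::cout << "Begin playback of the reference motion\n"; }},
        {"pause",  [] { std::cout << "Pause playback\n"; }},
        {"repeat", [] { std::cout << "Replay the current move\n"; }},
    };
}

// Called whenever the recognizer reports a phrase.
void OnPhraseRecognized(const CommandTable& commands, const std::string& phrase) {
    auto it = commands.find(phrase);
    if (it != commands.end()) {
        it->second();                       // run the command bound to this phrase
    } else {
        std::cout << "Unrecognized command: " << phrase << "\n";
    }
}

int main() {
    CommandTable learning = MakeLearningModeCommands();
    OnPhraseRecognized(learning, "start");  // simulated recognizer callback
    OnPhraseRecognized(learning, "jump");   // falls through to the unknown branch
}
```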

Learning mode: Started with a pure viewing mode: built a simple scene, plus a panel with buttons and camera controls used to enter this mode.

DTW algorithm: Implemented in C++. At present the input data is only a one-dimensional array of doubles; the next step is to extend it to a three-dimensional double array or a one-dimensional array of vectors. A minimal sketch of the approach follows.
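
A minimal DTW sketch over one-dimensional double sequences, written so the distance function can later be swapped for a 3-D joint distance (the generalization mentioned above). The names and the example data are illustrative, not the project's actual code:

```cpp
// Minimal DTW sketch over 1-D double sequences; the distance function is a
// template parameter so it can later take 3-D joint positions instead.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

template <typename T, typename Dist>
double Dtw(const std::vector<T>& a, const std::vector<T>& b, Dist dist) {
    const size_t n = a.size(), m = b.size();
    const double inf = std::numeric_limits<double>::infinity();
    // cost[i][j] = best alignment cost of a[0..i) and b[0..j)
    std::vector<std::vector<double>> cost(n + 1, std::vector<double>(m + 1, inf));
    cost[0][0] = 0.0;
    for (size_t i = 1; i <= n; ++i) {
        for (size_t j = 1; j <= m; ++j) {
            double d = dist(a[i - 1], b[j - 1]);
            cost[i][j] = d + std::min({cost[i - 1][j],        // insertion
                                       cost[i][j - 1],        // deletion
                                       cost[i - 1][j - 1]});  // match
        }
    }
    return cost[n][m];
}

int main() {
    std::vector<double> reference = {0.0, 1.0, 2.0, 3.0, 2.0};
    std::vector<double> learner   = {0.0, 1.2, 2.1, 2.9, 2.0, 1.9};
    double score = Dtw(reference, learner,
                       [](double x, double y) { return std::abs(x - y); });
    std::cout << "DTW distance: " << score << "\n";  // smaller = more similar
}
```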

Motion capture equipment: While the motion capture equipment is in use, the feet of the captured skeleton become twisted. Because reference material is scarce, there is no fix yet; we will keep searching for a solution.

UI: Because the official documentation targets a different version, we ran into some minor UI difficulties, but a solution has been found and the UI is being built for next week's demo.

Real-time action matching: The Euclidean-distance calculation has been replaced by collision detection, and incorrectly placed body parts are highlighted in red. The real-time connection between Noitom and UE4 has been configured. A rough sketch of the per-joint matching idea follows.
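
The sketch below is plain C++ illustrating the per-joint matching idea: a learner's joint "collides" with the reference pose if it lies inside a tolerance sphere around the corresponding reference joint, which is conceptually what the collision-detection approach achieves. It is not the UE4/Noitom integration code, and all names, joints, and values are illustrative:

```cpp
// Illustrative sketch: a learner's joint matches the reference pose if it
// lies within a tolerance sphere around the corresponding reference joint;
// joints outside the sphere would be highlighted in red.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Joint {
    std::string name;
    double x, y, z;
};

bool WithinTolerance(const Joint& ref, const Joint& cur, double radius) {
    double dx = ref.x - cur.x, dy = ref.y - cur.y, dz = ref.z - cur.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) <= radius;
}

int main() {
    std::vector<Joint> reference = {{"LeftHand", 0.30, 1.20, 0.10},
                                    {"RightFoot", 0.20, 0.00, 0.40}};
    std::vector<Joint> learner   = {{"LeftHand", 0.32, 1.18, 0.12},
                                    {"RightFoot", 0.60, 0.10, 0.90}};
    const double tolerance = 0.15;  // metres; would be tuned per joint in practice

    for (size_t i = 0; i < reference.size(); ++i) {
        bool ok = WithinTolerance(reference[i], learner[i], tolerance);
        std::cout << reference[i].name << ": "
                  << (ok ? "matched" : "mismatched (show red)") << "\n";
    }
}
```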

The planned demo is expected to be completed next week.


