(Personal) VR Taijiquan Learning System - The First Week of Innovative Training (1)

Project Overview

We plan to develop a real-time system based on Unreal Engine 4 to help users learn Tai Chi. The system will use popular human-computer interaction technologies such as virtual reality, motion capture, and voice control. Once complete, it should be able to score the user's practice, visually highlight their mistakes, and let them focus extra training on the movements they are weakest at.

Individual Division of Labor

In this project I am mainly responsible for the interaction between the user and the system, and for managing the git repository.

Specific Situation

Our system uses the HTC Vive virtual reality kit. In the virtual scene, the kit normally relies on the controllers held in the left and right hands to drive the virtual character and interact with the system. In the Taijiquan learning system we want to build, however, the user obviously cannot hold a controller at all times, so the controllers cannot be used to operate the system and we need to find another solution.
The natural next thought is to interact with the system through our motion capture device, but the device we currently have does not track the user's fingers, so there is no way to point at a virtual menu directly. The only way to interact through motion capture is therefore Kinect-style: recognizing the user's body posture, placing buttons in the scene, and so on. We have not settled on a concrete scheme yet. A Leap Motion may be added later to capture the user's hand movements and allow a more flexible interaction scheme, so this part will probably be discussed again after voice control is implemented.
The last interaction method is voice control. This option fits our system well, but its shortcomings are also obvious: speech recognition accuracy may not be high, and the control it offers is not very flexible. In my opinion it can only serve as an auxiliary means.
I think the final interaction scheme will be a combination of voice and gestures.

Current Progress

My current plan is to integrate a speech recognition module into UE4 first, which breaks down into three steps:
1. Use the microphone to obtain the user's voice input in UE4
2. Convert the microphone input into a format the speech recognition API can consume (see the sketch after this list)
3. Call the API and get the recognition result
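To make step 2 concrete, here is a minimal sketch, assuming the captured audio is 16-bit, 16 kHz, mono PCM (the format reported on the forums) and that whichever speech recognition API we end up using accepts WAV data. The function name WrapPcmAsWav and its parameters are my own placeholders, not part of any engine or service API.

```cpp
// Minimal sketch: wrap raw PCM samples in a standard 44-byte RIFF/WAVE header
// so they can be sent to a speech recognition API that expects WAV input.
// Assumes little-endian layout (true on Windows).
#include "Containers/Array.h"

static TArray<uint8> WrapPcmAsWav(const TArray<uint8>& Pcm,
                                  uint32 SampleRate = 16000,
                                  uint16 NumChannels = 1,
                                  uint16 BitsPerSample = 16)
{
    const uint32 DataSize = Pcm.Num();
    const uint32 ByteRate = SampleRate * NumChannels * BitsPerSample / 8;
    const uint16 BlockAlign = uint16(NumChannels * BitsPerSample / 8);

    TArray<uint8> Wav;
    Wav.Reserve(44 + DataSize);

    // Small helpers to append raw bytes / integers to the output buffer.
    auto Append = [&Wav](const void* Src, int32 Size)
    {
        Wav.Append(reinterpret_cast<const uint8*>(Src), Size);
    };
    auto AppendU32 = [&Append](uint32 V) { Append(&V, 4); };
    auto AppendU16 = [&Append](uint16 V) { Append(&V, 2); };

    Append("RIFF", 4);  AppendU32(36 + DataSize);  Append("WAVE", 4);
    Append("fmt ", 4);  AppendU32(16);             AppendU16(1 /* PCM */);
    AppendU16(NumChannels); AppendU32(SampleRate); AppendU32(ByteRate);
    AppendU16(BlockAlign);  AppendU16(BitsPerSample);
    Append("data", 4);  AppendU32(DataSize);
    Wav.Append(Pcm);

    return Wav;
}
```

If the chosen API accepts raw PCM directly, this wrapping step can simply be skipped.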

From the information I have reviewed over the past few days, there are roughly three ways to obtain microphone input in UE4:
1. In versions before UE4.19, use the IVoiceCapture interface provided by the OnlineSubsystem module. This method actually relies on the engine's networking functionality, so an online subsystem session has to be created before it can work properly. I am not familiar with Unreal C++, so I need to follow the forum discussions to figure out the implementation (a rough sketch pieced together from those discussions follows this list). According to the Unreal forums, the data obtained this way is 16-bit PCM at 16 kHz, single channel.
2. The Audio Capture plugin officially added in UE4.19. According to the forum discussion, it provides microphone-related components for obtaining input. The data obtained this way should also be PCM, most likely in the same format as the first method. However, UE4.19 has only just been released and there is very little discussion of it online, so I will have to experiment with it myself; I have not found any relevant documentation.
3. Access the microphone through the Windows API inside UE4. A blog post takes this approach, but it is unclear whether there are hidden issues in doing so (a second sketch after this list shows the basic Win32 calls involved).
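For method 1, here is a rough, untested sketch of what reading raw microphone data through IVoiceCapture might look like, pieced together from the forum threads. It assumes the Voice module is enabled (bEnabled=true under [Voice] in DefaultEngine.ini) and that this engine version exposes FVoiceModule::Get().CreateVoiceCapture(); exact header paths and signatures may differ between versions, and whatever session setup the forums mention still has to be done elsewhere.

```cpp
#include "VoiceModule.h"
#include "Interfaces/VoiceCapture.h"

// Small helper that starts a voice capture device and drains its PCM data.
class FMicCaptureHelper
{
public:
    bool StartCapture()
    {
        // CreateVoiceCapture() returns an invalid pointer if the Voice module
        // is disabled or no capture device is available.
        VoiceCapture = FVoiceModule::Get().CreateVoiceCapture();
        return VoiceCapture.IsValid() && VoiceCapture->Start();
    }

    // Call regularly (e.g. from Tick); appends newly captured bytes to OutPcm.
    // According to the forum threads this is 16 kHz, 16-bit, mono PCM.
    void PollCapture(TArray<uint8>& OutPcm)
    {
        if (!VoiceCapture.IsValid())
        {
            return;
        }

        uint32 AvailableBytes = 0;
        if (VoiceCapture->GetCaptureState(AvailableBytes) != EVoiceCaptureState::Ok || AvailableBytes == 0)
        {
            return;
        }

        const int32 Offset = OutPcm.AddUninitialized(AvailableBytes);
        uint32 BytesWritten = 0;
        VoiceCapture->GetVoiceData(OutPcm.GetData() + Offset, AvailableBytes, BytesWritten);
        OutPcm.SetNum(Offset + static_cast<int32>(BytesWritten));
    }

    void StopCapture()
    {
        if (VoiceCapture.IsValid())
        {
            VoiceCapture->Stop();
            VoiceCapture.Reset();
        }
    }

private:
    TSharedPtr<IVoiceCapture> VoiceCapture;
};
```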
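For method 3, the Windows side would most likely use the classic winmm waveIn functions. The sketch below is a standalone console-style example (not yet integrated into UE4) showing the basic open/record/close flow with the same 16 kHz, 16-bit, mono format; buffer handling and error checking are stripped to the minimum.

```cpp
// Minimal Win32 waveIn capture sketch: records 3 seconds of PCM synchronously.
#include <windows.h>
#include <mmsystem.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "winmm.lib")

int main()
{
    // Request the same format the voice capture path reportedly delivers:
    // 16 kHz, 16-bit, mono PCM.
    WAVEFORMATEX Format = {};
    Format.wFormatTag = WAVE_FORMAT_PCM;
    Format.nChannels = 1;
    Format.nSamplesPerSec = 16000;
    Format.wBitsPerSample = 16;
    Format.nBlockAlign = Format.nChannels * Format.wBitsPerSample / 8;
    Format.nAvgBytesPerSec = Format.nSamplesPerSec * Format.nBlockAlign;

    HWAVEIN Device = nullptr;
    if (waveInOpen(&Device, WAVE_MAPPER, &Format, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
    {
        printf("waveInOpen failed\n");
        return 1;
    }

    // One buffer large enough for 3 seconds of audio.
    std::vector<char> Buffer(Format.nAvgBytesPerSec * 3);
    WAVEHDR Header = {};
    Header.lpData = Buffer.data();
    Header.dwBufferLength = (DWORD)Buffer.size();

    waveInPrepareHeader(Device, &Header, sizeof(Header));
    waveInAddBuffer(Device, &Header, sizeof(Header));
    waveInStart(Device);

    // Poll until the driver marks the buffer as filled.
    while ((Header.dwFlags & WHDR_DONE) == 0)
    {
        Sleep(50);
    }

    waveInStop(Device);
    waveInUnprepareHeader(Device, &Header, sizeof(Header));
    waveInClose(Device);

    printf("captured %lu bytes of PCM\n", Header.dwBytesRecorded);
    return 0;
}
```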

Next Steps

I will try the three methods above in turn, starting with the official API, and attempt to obtain microphone data.
