Configure the body recognition environment under Anaconda
Foreword
Recently I started working on body-recognition projects. The environment configuration is quite unfriendly to complete beginners; after several failed attempts I worked out a reliable installation procedure. This article records the process of setting up a MediaPipe environment under Anaconda, along with the problems my classmates and I ran into at each step. I hope it helps.
1. What is Anaconda
Anaconda is a tool for installing and managing Python-related packages. It ships with Python, Jupyter Notebook, Spyder, and the conda package-and-environment manager, which is very convenient. Anaconda is an open-source Python distribution that bundles conda, Python, and more than 180 scientific packages together with their dependencies.
Download Anaconda
Download it from the official website: https://www.anaconda.com
Install Anaconda
Choose a custom installation path, tick the options shown in the figure, and wait for the installation to complete.
2. Configure the virtual environment
Open the Anaconda Prompt
In the Anaconda Prompt, enter the following command to create the environment. The environment name here is mediapipe (you can pick your own), based on Python 3.7:
conda create -n mediapipe python=3.7
A Proceed ([y]/[n])? prompt will appear partway through; type y and wait for the environment to be created.
Activate the virtual environment
After the environment is created, enter the following command to activate it
conda activate mediapipe
(on older Anaconda installs, plain "activate mediapipe" also works in the Anaconda Prompt)
The prompt prefix then changes to (mediapipe), i.e. the environment name
Test the virtual environment
Enter python; if the interface shown below appears, the Python setup succeeded
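Beyond eyeballing the startup banner, you can confirm from inside the interpreter which Python the environment is actually running. A minimal sketch (the exact version string and path depend on your install):

```python
import sys

def interpreter_info():
    """Return (version string, interpreter path) for the running Python."""
    return sys.version.split()[0], sys.executable

version, path = interpreter_info()
print(version)  # inside the mediapipe environment this should start with 3.7
print(path)     # the path should point into the mediapipe environment's folder
```

If the path points at a different Python (e.g. the base environment), the activation step did not take effect.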
3. MediaPipe
What is MediaPipe
MediaPipe is an open-source Google project that provides cross-platform, ready-to-use ML solutions. It supports many commonly used AI capabilities; here are a few common examples:
Face detection / FaceMesh: reconstructs a 3D mesh of the human face from an image/video, which can be used for AR rendering
Hand tracking: marks the 3D coordinates of 21 hand keypoints
Human pose estimation: gives the 3D coordinates of 33 body keypoints
Hair segmentation: detects hair in an image so it can be recolored
Install MediaPipe
Close the Anaconda Prompt, reopen it and activate the mediapipe environment again, then enter one of the following commands and press Enter to install
pip install mediapipe
or
pip install mediapipe -i https://pypi.douban.com/simple
pip install mediapipe uses the default overseas index, so downloads can be slow.
pip install mediapipe -i https://pypi.douban.com/simple uses a domestic mirror, so downloads are usually faster.
Note that the mirror and the official index may offer different versions.
If a confirmation prompt appears, type y and wait for the installation to complete
If the installation fails or there is a problem, please see the solution at the end of the article
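To check whether the install actually landed in the current environment, you can probe for the package from Python without importing it. A small sketch using only the standard library:

```python
import importlib.util

def is_installed(package_name):
    """True if the package can be found on the current interpreter's path."""
    return importlib.util.find_spec(package_name) is not None

print("mediapipe installed:", is_installed("mediapipe"))
```

If this prints False, the package was likely installed into a different environment (e.g. base) than the one you are running.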
4. OpenCV
What is OpenCV
OpenCV is an open-source computer-vision library initiated and maintained by Intel (originally by its Russian team). It implements many algorithms related to computer vision and machine learning, and the collection keeps growing. OpenCV is written in C++ and provides interfaces for languages such as Python, Ruby, and MATLAB. OpenCV-Python is OpenCV's Python API, combining the strengths of the OpenCV C++ API with the Python language.
Install OpenCV
Enter the following command in the virtual environment to install OpenCV
pip install opencv-python
If a confirmation prompt appears, type y and wait for the installation to complete
Install the OpenCV extension package (some feature-extraction algorithms are not in the main OpenCV package; they live in the contrib extension)
Enter the following command in the virtual environment to install the extension
pip install opencv-contrib-python
Test OpenCV
Enter python in the virtual environment, press Enter, and then follow the figure below; if the output matches the figure, the installation succeeded
5. Install Jupyter Notebook in the virtual environment
Enter the following command in the virtual environment
pip install requests -i https://pypi.douban.com/simple
Once that finishes, enter the following command
conda install nb_conda
A Proceed ([y]/[n])? prompt will appear partway through; type y and wait for the installation to complete
Open Jupyter Notebook
Enter the following command in the virtual environment
jupyter notebook
After you enter it, the browser opens the interface below
Press Ctrl+C in this terminal window to exit Jupyter Notebook
6. Possible problems during installation
1. Version problem
The following error appears after importing the library (typically a protobuf version conflict)
Solution:
Enter the following command in the virtual environment
pip uninstall protobuf
A Proceed ([y]/[n])? prompt will appear; type y and wait for the uninstall to finish, then install a protobuf version compatible with your mediapipe release (a 3.20.x pin usually resolves this).
2. How to solve mediapipe installation failure or a missing module
If installation fails, or a module is reported missing during use, uninstall mediapipe and reinstall it.
Enter the following code in the virtual environment to uninstall mediapipe
pip uninstall mediapipe
A Proceed ([y]/[n])? prompt will appear; type y and wait until it finishes, then
enter the following command in the virtual environment to reinstall
pip install mediapipe -i https://pypi.douban.com/simple
If a confirmation prompt appears, type y and wait for the installation to complete.
As noted earlier, the mirror and the official index may offer different versions.
7. Call the camera to detect hand keypoints (code)
Run the following code in Jupyter Notebook
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

# For webcam input:
cap = cv2.VideoCapture(0)
with mp_hands.Hands(
        min_detection_confidence=0.9,
        min_tracking_confidence=0.9) as hands:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            print("Ignoring empty camera frame.")
            # If loading a video, use 'break' instead of 'continue'.
            continue

        # Flip the image horizontally for a selfie-view display, and
        # convert the BGR image to RGB.
        image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
        # To improve performance, optionally mark the image as not
        # writeable to pass by reference.
        image.flags.writeable = False
        results = hands.process(image)

        # Draw the hand annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(
                    image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow('MediaPipe Hands', image)
        if cv2.waitKey(5) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
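MediaPipe returns hand landmarks with x and y normalized to [0, 1]; to draw your own overlays or measure distances you usually convert them to pixel coordinates first. A hedged sketch of that conversion (the helper name is my own, not part of the MediaPipe API):

```python
def normalized_to_pixel(x_norm, y_norm, frame_width, frame_height):
    """Convert a normalized MediaPipe landmark (x, y) to integer pixel coords,
    clamped to stay inside the frame."""
    x_px = min(int(x_norm * frame_width), frame_width - 1)
    y_px = min(int(y_norm * frame_height), frame_height - 1)
    return x_px, y_px

# e.g. a landmark at the center of a 640x480 frame:
print(normalized_to_pixel(0.5, 0.5, 640, 480))  # (320, 240)
```

Inside the loop above you could apply it per landmark, e.g. to hand_landmarks.landmark[0] (the wrist), using the frame's shape for the width and height.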
Summary
The problems encountered while configuring the environment, along with their solutions, are all recorded above. My own first attempts also failed many times before succeeding, and I learned a great deal in the process.