Foreword
Face recognition is a common topic, and there are many algorithms and many ways to implement it, so I won't go through them all here. What I want to reproduce today is YOLO's face detection, using a pre-trained ONNX model and connecting it to our ROS system at the same time.
First, a look at the result
In the picture, the face region is boxed and 5 key points are marked. Five key points may not be many compared with other networks, but the advantage is that the ONNX model runs fast, supports both CPU and GPU, and can be conveniently deployed and run on Xavier. In further tests, it still recognized faces accurately even when a mask was worn.
YOLO-FACE reproduction
Clone the repositories
Repository addresses
https://github.com/derronqi/yolov7-face
https://github.com/hpc203/yolov7-detect-face-onnxrun-cpp-py
Command Line
git clone https://github.com/derronqi/yolov7-face
git clone https://github.com/hpc203/yolov7-detect-face-onnxrun-cpp-py
The first repository is for training and testing. For simplicity, we use the requirements file from the first repository to configure the environment. The second repository is for demonstration; so far I have only reproduced results from the second one.
Create a Python virtual environment with conda
conda create -n rosface python=3.7
Configure the environment
conda activate rosface
cd yolov7-face
pip install -r requirements.txt
Python 3.7 is used here. With Python 3.8 you may run into some minor problems with .so file errors and need to rebuild the soft links.
Download the pre-trained models
Download links for the pre-trained models are provided in https://github.com/hpc203/yolov7-detect-face-onnxrun-cpp-py via Baidu Netdisk. Thanks to the author for such careful work.
Test a single image
cd yolov7-detect-face-onnxrun-cpp-py
python main.py --imgpath selfie.jpg --modelpath ../onnx_havepost_models/yolov7-lite-e.onnx
A brief description of the parameters:
--imgpath: path of the test image
--modelpath: path of the pre-trained model above
Original image
Detection result
ROS
Subscriber
With the single-image test passing, hooking it into ROS is mostly interface work: we need a subscriber to receive the video stream, and then publish the detections (or the annotated image) to finish the job.
Based on main.py, a new Subscriber is added here; the subscribed topic is /image:
import argparse

import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# YOLOv7_face and callback are defined in main.py

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--modelpath', type=str, default='onnx_havepost_models/yolov7-lite-e.onnx',
                        help="onnx filepath")
    parser.add_argument('--confThreshold', default=0.45, type=float, help='class confidence')
    parser.add_argument('--nmsThreshold', default=0.5, type=float, help='nms iou thresh')
    args = parser.parse_args()

    # Initialize the YOLOv7_face detector
    YOLOv7_face_detector = YOLOv7_face(args.modelpath, conf_thres=args.confThreshold, iou_thres=args.nmsThreshold)

    rospy.init_node('yolo_face_dete_node', anonymous=True)
    bridge = CvBridge()
    rospy.Subscriber('/image', Image, callback)
    rospy.spin()
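To illustrate what --nmsThreshold controls, here is a minimal single-class non-maximum suppression sketch in numpy. This is not the repository's implementation, just a sketch under the assumption that boxes are in [x1, y1, x2, y2] pixel format:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression (illustrative sketch).
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns the indices of the boxes to keep."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept one more than the threshold
        order = order[1:][iou <= iou_thresh]
    return keep
```

A lower --nmsThreshold suppresses overlapping detections more aggressively; a higher one keeps more of them.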
Callback & Publish
In the callback function, you can reuse the data-processing code from main.py almost directly, so I won't go into details here.
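The callback essentially does three things: convert the ROS Image message to an OpenCV-style numpy frame, run the detector, and publish the annotated result. The sketch below shows the idea; `imgmsg_to_bgr` is a hypothetical minimal stand-in for CvBridge written only for illustration, the `detect` method name and return value should be verified against main.py, and the `/image_face` output topic is an assumed name:

```python
import numpy as np

def imgmsg_to_bgr(height, width, data, encoding="bgr8"):
    """Hypothetical minimal stand-in for CvBridge.imgmsg_to_cv2:
    turns the raw byte buffer of a sensor_msgs/Image into an HxWx3 array."""
    img = np.frombuffer(bytes(data), dtype=np.uint8).reshape(height, width, 3)
    if encoding == "rgb8":
        img = img[:, :, ::-1]  # reorder channels to BGR
    return img

# Sketch of the ROS wiring (assumes rospy / cv_bridge are available at runtime):
# pub = rospy.Publisher('/image_face', Image, queue_size=1)  # assumed topic name
# def callback(msg):
#     frame = bridge.imgmsg_to_cv2(msg, 'bgr8')      # or imgmsg_to_bgr(...)
#     drawn = YOLOv7_face_detector.detect(frame)     # detection + drawing, per main.py
#     pub.publish(bridge.cv2_to_imgmsg(drawn, 'bgr8'))
```

In practice you would keep CvBridge for the conversion; the helper above only shows what the conversion amounts to.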
rviz demo
The left side is the original video, the middle is the feed with the face-detection annotations added, and the right side is the cropped face region.
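Cropping the face region shown in the right panel is just array slicing on the detected box. A sketch, assuming the box comes back as [x1, y1, x2, y2] in pixels and clipping it to the image bounds:

```python
import numpy as np

def crop_face(frame, box):
    """Return the face region of frame for box = [x1, y1, x2, y2],
    with coordinates clipped to the image bounds."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    x1, x2 = max(0, x1), min(w, x2)
    y1, y2 = max(0, y1), min(h, y2)
    return frame[y1:y2, x1:x2]
```

The resulting crop can be published on its own topic in the same way as the annotated image.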