Real-time yawn detection in Python

Result


The basic idea

  1. Initialize the video capture object with OpenCV's VideoCapture
  2. Convert each frame to grayscale
  3. Load pre-trained models to detect faces and facial landmarks
  4. Calculate the distance between the upper and lower lips (other landmark distances can be computed the same way)
  5. Compare the lip distance against a threshold: above it, classify the frame as a yawn; otherwise the mouth is merely open
  6. Display the frame
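
Step 4 relies on a helper (called `cal_yawn` in the code below) that measures the vertical gap between the lips. A minimal sketch, assuming dlib's standard 68-point landmark layout (points 50–52 and 61–63 for the upper lip, 56–58 and 65–67 for the lower lip); the exact point selection in the downloadable project may differ:

```python
import numpy as np

def cal_yawn(shape):
    """Vertical distance between the mean upper-lip and mean
    lower-lip landmark positions (68-point dlib layout)."""
    # Outer + inner upper-lip points
    top_lip = np.concatenate((shape[50:53], shape[61:64]))
    # Outer + inner lower-lip points
    low_lip = np.concatenate((shape[56:59], shape[65:68]))
    # Compare the mean y-coordinates of the two lip curves
    top_mean = np.mean(top_lip, axis=0)
    low_mean = np.mean(low_lip, axis=0)
    return abs(top_mean[1] - low_mean[1])
```

The main loop then compares this value against `yawn_thresh`.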

Part of the source code

    # Main capture loop: process frames until the user quits
    while cam.isOpened():
        suc, frame = cam.read()
        # Exit if no frame could be read
        if not suc:
            break

        # ---------FPS------------#
        ctime = time.time()
        fps = int(1 / (ctime - ptime))
        ptime = ctime
        cv2.putText(frame, f'FPS:{fps}', (frame.shape[1] - 120, frame.shape[0] - 20), cv2.FONT_HERSHEY_PLAIN, 2,
                    (0, 200, 0), 3)

        # ------ Face detection ------#
        # Convert to grayscale
        img_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_model(img_gray)
        for face in faces:
            # Draw a bounding box around the detected face
            x1 = face.left()
            y1 = face.top()
            x2 = face.right()
            y2 = face.bottom()
            # print(face.top())
            cv2.rectangle(frame, (x1, y1), (x2, y2), (200, 0, 0), 2)

            # ---------- Facial landmark detection -----------#
            shapes = landmark_model(img_gray, face)
            shape = face_utils.shape_to_np(shapes)

            # ------- Outline the lips --------#
            lip = shape[48:60]
            cv2.drawContours(frame, [lip], -1, (0, 165, 255), thickness=3)

            # ------- Compute the upper/lower lip distance -----#
            lip_dist = cal_yawn(shape)
            # print(lip_dist)  # uncomment to inspect the distance
            # Above the threshold -> treat as a yawn
            if lip_dist > yawn_thresh:
                cv2.putText(frame, 'User Yawning!', (frame.shape[1] // 2 - 170, frame.shape[0] // 2),
                            cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 200), 2)

        # Press 'q' to quit
        cv2.imshow('Webcam', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
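
One caveat about the FPS overlay above: it divides by the gap between consecutive timestamps, which raises ZeroDivisionError if two reads land on the same timer tick. A guarded variant of that calculation (a sketch; the function name is mine, not from the project):

```python
def instantaneous_fps(ptime, ctime):
    """FPS implied by two consecutive frame timestamps (in seconds)."""
    dt = ctime - ptime
    # Guard against a zero interval when the timer resolution is coarse
    return int(1 / dt) if dt > 0 else 0
```

For example, a 62.5 ms gap between frames maps to 16 FPS.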

Complete project

click to download


Origin blog.csdn.net/weixin_46211269/article/details/124105198