Python + OpenCV image processing: face detection

Using the cascade XML files that ship with OpenCV, the camera can detect faces in real time.
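The XML files referred to above are distributed with OpenCV itself. A minimal sketch of locating and loading one of them, assuming the opencv-python pip package is installed (it exposes the bundled data folder as cv2.data.haarcascades):

import cv2

# folder holding the Haar cascade XML files bundled with the opencv-python wheel (assumed install)
print(cv2.data.haarcascades)
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)
# empty() returns True if the XML file failed to load, so False means the cascade is ready to use
print(face_detector.empty())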

Haar-like features and LBP features are the two commonly used feature types; both describe local information of an image.

Haar features describe the light-dark transition (pixel-value contrast) information of an image at a local scale.

LBP features describe the texture information of the corresponding local region.

Differences between Haar and LBP (both are used through the same OpenCV interface; see the sketch after this list):
① Haar features use floating-point computation, while LBP features use integer computation;
② LBP needs a larger number of training samples than Haar;
③ LBP is usually faster than Haar;
④ trained on the same samples, a Haar detector is more accurate than an LBP detector;
⑤ by expanding the sample data, LBP can reach the same training effect as Haar.
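In OpenCV, both feature types are used through the exact same cv2.CascadeClassifier / detectMultiScale calls; only the XML file changes. A minimal comparison sketch, assuming a local test image test.jpg and local copies of haarcascade_frontalface_default.xml and lbpcascade_frontalface.xml (both distributed with OpenCV) in a ./data/ folder:

import cv2

img = cv2.imread("test.jpg")                    # assumed local test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# same API, different feature type: Haar (floating-point features) vs LBP (integer features)
haar_detector = cv2.CascadeClassifier("./data/haarcascade_frontalface_default.xml")
lbp_detector = cv2.CascadeClassifier("./data/lbpcascade_frontalface.xml")

print("Haar faces:", haar_detector.detectMultiScale(gray, 1.1, 5))
print("LBP faces: ", lbp_detector.detectMultiScale(gray, 1.1, 5))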

Face detection in still images with Python

import cv2

def face_detect_demo(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    face_detector = cv2.CascadeClassifier("./data/haarcascade_frontalface_default.xml")  # create the face detection object
    faces = face_detector.detectMultiScale(gray, 1.02, 5)  # run face detection
    """
    faces = face_detector.detectMultiScale(img, scaleFactor, minNeighbors)
    parameters: img: the original image to detect in
                scaleFactor: the ratio by which the image is shrunk at each detection scale
                minNeighbors: the minimum number of neighboring detections each face rectangle must keep
    returns: a list in which each element is a face bounding rectangle (x, y, w, h)
    """
    for x, y, w, h in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("face_detect_demo", image)

Result:

Face detection in video with Python

import cv2

def detect():
    # create the face detection object
    face_cascade = cv2.CascadeClassifier("../data/haarcascade_frontalface_default.xml")
    # create the eye detection object
    eye_cascade = cv2.CascadeClassifier("../data/haarcascade_eye.xml")
    # create the smile detection object
    smile_cascade = cv2.CascadeClassifier("../data/haarcascade_smile.xml")
    # open the camera; device number 0 is the default camera
    camera = cv2.VideoCapture(0)

    while True:
        # read the current frame
        ret, frame = camera.read()
        # convert to a grayscale image
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # face detection returns a list; each element (x, y, w, h) is the top-left corner plus the width and height of the rectangle
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        # draw a rectangle around each face
        for (x, y, w, h) in faces:
            # draw a rectangle on the frame: pass the top-left and bottom-right corner coordinates, the color and the line width
            img = cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            # crop the face region out of the grayscale image
            roi_gray = gray[y:y + h, x:x + w]
            # detect eyes inside the face; (40, 40) sets a minimum size, so anything smaller is not detected
            eyes = eye_cascade.detectMultiScale(roi_gray, 1.03, 5, 0, (40, 40))
            # draw the eyes
            for (ex, ey, ew, eh) in eyes:
                cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

        cv2.imshow("camera", frame)
        if cv2.waitKey(5) & 0xff == ord("q"):
            break

    camera.release()
    cv2.destroyAllWindows()
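A minimal entry point to run the video detector (the loop exits when "q" is pressed in the window):

if __name__ == "__main__":
    detect()

Note that smile_cascade is loaded but not used above; if smile detection is wanted, it could be applied to roi_gray in the same way as eye_cascade.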

 


Origin: www.cnblogs.com/qianxia/p/11112645.html