[Diao Ye Learns Programming] MicroPython Manual: OpenMV Cam Face Recognition

MicroPython is a lean implementation of the Python 3 programming language designed to run on embedded systems. Compared with regular Python, the MicroPython interpreter is small (roughly 100 KB) and is compiled into a binary executable, which gives it higher execution efficiency. It uses a lightweight garbage-collection mechanism and strips out most of the Python standard library to fit resource-constrained microcontrollers.

The main features of MicroPython include:
1. Its syntax and functions are compatible with standard Python, so it is easy to learn and use, and it supports most of Python's core syntax.
2. It can directly access and control hardware, driving GPIO, I2C, SPI and so on much like an Arduino (a minimal sketch follows this list).
3. A capable module system provides file-system, networking, graphical-interface and other functionality.
4. It supports cross-compilation to efficient native code, which can run 10-100 times faster than interpreted bytecode.
5. The code footprint and memory usage are small, making it suitable for MCUs and development boards with little memory.
6. It is released under an open-source license and is free to use, and its interactive shell (REPL) makes development and testing convenient.
7. Built-in I/O drivers support a large number of microcontroller platforms, such as the ESP8266, ESP32, STM32, micro:bit, control boards and the PyBoard, and there is an active community.
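
As a quick illustration of point 2, here is a minimal MicroPython sketch of direct hardware access through the generic machine module. The pin assignments (GPIO 2 for an LED, pins 21 and 22 for the I2C bus) are assumptions for an ESP32-style board and will differ on other hardware.

import time
from machine import Pin, I2C

led = Pin(2, Pin.OUT)                    # assumed: GPIO 2 drives an on-board LED
i2c = I2C(0, scl=Pin(22), sda=Pin(21))   # assumed: hardware I2C bus on pins 22/21

print("I2C devices found:", i2c.scan())  # list the addresses of attached I2C devices

while True:
    led.value(1)                         # LED on
    time.sleep_ms(500)
    led.value(0)                         # LED off
    time.sleep_ms(500)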

MicroPython application scenarios include:
1. Rapidly build prototypes and user interactions for embedded products.
2. Make some small programmable hardware projects.
3. As an educational tool, it helps beginners learn Python and IoT programming.
4. Build smart device firmware to achieve advanced control and cloud connectivity.
5. Various microcontroller applications such as Internet of Things, embedded intelligence, robots, etc.

Things to note when using MicroPython:
1. Memory and flash space are limited.
2. Interpreted execution is slower than compiled C code.
3. Some library functions differ from their standard-Python counterparts.
4. Syntax and APIs are tuned for each platform, so be aware of the differences from standard Python.
5. Use memory sensibly and avoid frequently allocating large blocks.
6. Use native code to speed up performance-critical sections (see the sketch after this list).
7. Use abstraction judiciously to encapsulate low-level hardware operations.
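
As a small illustration of point 6, the sketch below uses the @micropython.native decorator, one of MicroPython's native code emitters; the checksum function is just a placeholder workload, and the actual speed-up depends on the port and the code.

import micropython

@micropython.native                      # compile this function to machine code instead of bytecode
def checksum(data):
    total = 0
    for b in data:
        total = (total + b) & 0xFF
    return total

print(checksum(b"hello world"))          # same result as the bytecode version, just faster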

Generally speaking, MicroPython brings Python into the field of microcontrollers, which is an important innovation that not only lowers the programming threshold but also provides good hardware control capabilities. It is very suitable for the development of various types of Internet of Things and intelligent hardware.
OpenMV Cam is a small, low-power microcontroller board that makes it easy to build real-world machine-vision applications. You program the OpenMV Cam with high-level Python scripts (running on the MicroPython operating system) instead of C/C++. Its main technical parameters are as follows:

1. Processor: OpenMV Cam H7 Plus uses STM32H743II ARM Cortex M7 processor, running at 480 MHz, with 32MB SDRAM + 1MB SRAM and 32 MB external flash memory + 2 MB internal flash memory. OpenMV Cam M4 V2 uses STM32F427VG ARM Cortex M4 processor running at 180 MHz with 256KB RAM and 1 MB flash memory.
2. Image sensor: Both OpenMV Cam H7 Plus and OpenMV Cam M4 V2 use the OV7725 image sensor, which captures 8-bit grayscale or 16-bit RGB565 images at 75 FPS for resolutions above 320x240, and at up to 150 FPS at 320x240 and below.
3. I/O interface: OpenMV Cam H7 Plus and OpenMV Cam M4 V2 both have the following I/O interfaces:
(1) A full-speed USB (12 Mbps) interface for connecting to a computer. When the OpenMV Cam is plugged in, a virtual COM port and a "USB flash drive" appear on your computer.
(2) A μSD card slot capable of 100 Mbps reads/writes, which allows the OpenMV Cam to record video and pull machine-vision assets off the μSD card.
(3) An SPI bus running at up to 54 Mbps, which lets you stream image data to an LCD expansion board, a WiFi expansion board, or another controller.
(4) An I2C bus (up to 1 Mbps), a CAN bus (up to 1 Mbps) and an asynchronous serial bus (TX/RX, up to 7.5 Mbps) for connecting to other controllers or sensors.
(5) A 12-bit ADC and a 12-bit DAC.
(6) Interrupts and PWM on all I/O pins (the board has 9 or 10 I/O pins).
4. LED: Both OpenMV Cam H7 Plus and OpenMV Cam M4 V2 carry one tri-color RGB LED and two bright 850 nm IR (infrared) LEDs (a short LED-control sketch follows this list).
5. Lens: Both OpenMV Cam H7 Plus and OpenMV Cam M4 V2 are equipped with a standard M12 lens interface and a default 2.8 mm lens. If you want to use a more professional lens with your OpenMV Cam, you can easily purchase it and install it yourself.
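
As a quick way to exercise the on-board LEDs mentioned in point 4, here is a minimal MicroPython sketch using the pyb.LED class; the numbering (1 = red, 2 = green, 3 = blue, 4 = IR) follows the usual OpenMV convention.

import time
from pyb import LED

red, green, blue, ir = LED(1), LED(2), LED(3), LED(4)

for led in (red, green, blue):           # cycle through the channels of the RGB LED
    led.on()
    time.sleep_ms(300)
    led.off()

ir.on()                                  # briefly turn on the 850 nm IR LEDs
time.sleep_ms(300)
ir.off()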

The OpenMV Cam supports face recognition: you can detect and recognize faces by writing MicroPython code. Its main features, application scenarios and points to note are explained in detail below.

Main features:

Face Detection: OpenMV Cam’s face recognition feature can detect faces in images in real time. It uses image processing algorithms and machine learning technology to detect and locate faces, thereby realizing face recognition and tracking.

Facial feature extraction: In addition to detecting faces, OpenMV Cam can also extract feature information of faces. Through feature extraction, the face can be represented as a digital vector for subsequent tasks such as recognition, comparison, and verification.
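
As a rough sketch of how a face region can be turned into a numeric descriptor on the OpenMV Cam, the code below uses the LBP (local binary pattern) functions of the image module. The region of interest is a placeholder rectangle rather than a real detection, and the distance returned by match_descriptor() has to be interpreted against a threshold tuned for your own setup.

import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # LBP descriptors are computed on grayscale images
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

face_roi = (80, 40, 160, 160)            # placeholder (x, y, w, h) for a detected face

img1 = sensor.snapshot()
descriptor1 = img1.find_lbp(face_roi)    # numeric descriptor of the face region

img2 = sensor.snapshot()
descriptor2 = img2.find_lbp(face_roi)

distance = image.match_descriptor(descriptor1, descriptor2)  # smaller distance = more similar
print("LBP distance:", distance)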

Real-time performance: OpenMV Cam has high real-time performance and can capture images and perform face recognition in real time. This makes it suitable for application scenarios that require fast response and real-time processing, such as face access control systems, face payment, etc.

Simplified development: The MicroPython programming language is easy to learn and suitable for beginners and educational fields. OpenMV Cam provides a friendly programming interface and sample code, making the development and debugging of face recognition easier and more convenient.

Application scenarios:

Face access control system: Face recognition can be applied to access control, authorizing and restricting entry to specific areas or devices by identifying and verifying face information. For example, a door or a piece of equipment is unlocked only when the captured face successfully matches pre-registered facial information.
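
A minimal sketch of the door-unlock idea, reduced to detection only: an output pin (assumed here to be "P0", switching a relay or lock driver) is raised while a face is in view. A real access-control system would also match the face against registered descriptors, as in the recognition cases later in this article.

import sensor, image, time
from pyb import Pin

lock = Pin("P0", Pin.OUT_PP)             # assumed output pin driving a door-lock relay
lock.low()                               # start locked

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    if faces:
        lock.high()                      # unlock while a face is in view
    else:
        lock.low()                       # otherwise stay locked
    time.sleep_ms(100)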

Face payment: In payment systems, face recognition can be used for identity verification and transaction authorization. Users can confirm their identity through face scanning, thereby enabling payment operations without passwords or cards, improving the convenience and security of payment.

Face recognition research and education: OpenMV Cam’s face recognition function can be used in teaching and scientific research projects. Students and researchers can use OpenMV Cam to conduct experiments and research, exploring knowledge in face recognition algorithms, facial expression analysis and other related fields.

Things to note:

Facial lighting and angle: Face recognition is sensitive to lighting and to the angle of the face. To obtain accurate results, provide sufficient illumination and try to keep the face roughly frontal within the camera's field of view.
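
The sensor settings below are one way to make detection more robust to lighting; the specific values are assumptions that should be tuned for each installation.

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_contrast(3)                   # raise contrast, which helps Haar-cascade detection
sensor.set_gainceiling(16)               # allow more analog gain in dim scenes
sensor.set_auto_gain(True)               # let the sensor adapt to the ambient light level
sensor.set_auto_exposure(True)
sensor.skip_frames(time = 2000)          # give the settings time to settle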

Facial occlusions and changes: Facial occlusions (such as masks, sunglasses) or changes in facial expressions (such as smiling, opening mouth) may affect the recognition effect. In practical applications, it is necessary to select appropriate face recognition algorithms and parameter configurations based on specific scenarios and needs to cope with the challenges caused by facial occlusion and changes.

Data privacy and security: Facial recognition involves personal privacy and data security issues. When applying facial recognition technology, you need to comply with relevant laws, regulations and privacy policies to ensure the security and legal use of personal data.

In summary, the face recognition function of MicroPython's OpenMV Cam can realize face detection, recognition and feature extraction, with high real-time performance and simplified development features. It is suitable for face access control systems, face payment, education and research and other application scenarios. When using, you need to pay attention to factors such as facial lighting, angles, and occlusion, and comply with relevant privacy and security regulations.

Case 1: Using OpenMV Cam for face detection

import sensor, image, time

sensor.reset()                          # Initialize the camera
sensor.set_pixformat(sensor.RGB565)     # Set the pixel format to RGB565
sensor.set_framesize(sensor.QVGA)       # Set the frame size to 320x240
sensor.skip_frames(time = 2000)         # Wait 2 seconds for the settings to take effect

face_cascade = image.HaarCascade("frontalface", stages=25)  # Load the built-in frontal-face Haar cascade

while True:
    img = sensor.snapshot()             # Capture one frame
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # Detect faces with the Haar cascade
    for face in faces:                  # Each detection is a bounding-box tuple (x, y, w, h)
        img.draw_rectangle(face, color=(255, 0, 0))  # Draw a red rectangle around the face
    print("Found", len(faces), "faces") # Print the number of detected faces
    time.sleep_ms(100)                  # Delay 100 ms

Interpretation: This program imports the sensor, image and time modules, initializes the camera, sets the pixel format to RGB565 and the frame size to 320x240, and waits 2 seconds for the settings to take effect. The built-in frontal-face Haar cascade is loaded once before the loop. In an infinite loop, a frame is captured, faces are detected with the Haar cascade, a red rectangle is drawn around each detection, the number of detected faces is printed, and the loop delays 100 milliseconds.

Case 2: Using OpenMV Cam for face recognition

import sensor, image, time

sensor.reset()                          # Initialize the camera
sensor.set_pixformat(sensor.GRAYSCALE)  # LBP descriptors are computed on grayscale images
sensor.set_framesize(sensor.QVGA)       # Set the frame size to 320x240
sensor.skip_frames(time = 2000)         # Wait 2 seconds for the settings to take effect

face_cascade = image.HaarCascade("frontalface", stages=25)  # Load the built-in frontal-face Haar cascade
match_threshold = 20000                 # LBP distance below this counts as the same face (tune for your setup)
known_faces = []                        # LBP descriptors of faces seen so far

while True:
    img = sensor.snapshot()             # Capture one frame
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # Detect faces
    if faces:
        face = faces[0]                 # Use the first detected face (x, y, w, h)
        img.draw_rectangle(face, color=(255, 0, 0))        # Draw a rectangle around the face
        descriptor = img.find_lbp(face)                    # LBP descriptor of the face region
        # Check whether this face matches any face already in the list
        matched = any(image.match_descriptor(known, descriptor) < match_threshold
                      for known in known_faces)
        if not matched:
            known_faces.append(descriptor)                 # Remember the new face
    else:
        time.sleep_ms(100)              # Delay 100 ms when no face is in view
    print("Found", len(known_faces), "known faces")        # Print the number of known faces
    time.sleep_ms(100)                  # Delay 100 ms

Interpretation: This program builds on the first case by adding a simple face recognition step. It imports the sensor, image and time modules, initializes the camera in grayscale QVGA mode (LBP descriptors are computed on grayscale images), waits 2 seconds for the settings to take effect, then loads the built-in frontal-face Haar cascade. In an infinite loop it captures a frame and detects faces. For the first detected face it draws a rectangle, extracts an LBP feature descriptor of the face region and compares it against the descriptors already stored in the known-face list; if no stored descriptor is close enough, the new descriptor is appended to the list. Finally, the number of known faces is printed and the loop delays 100 milliseconds.

Case 3: Using OpenMV Cam for face recognition (upgraded version)

import sensor, image, time

sensor.reset()                          # Initialize the camera
sensor.set_pixformat(sensor.GRAYSCALE)  # LBP descriptors are computed on grayscale images
sensor.set_framesize(sensor.QVGA)       # Set the frame size to 320x240
sensor.skip_frames(time = 2000)         # Wait 2 seconds for the settings to take effect

face_cascade = image.HaarCascade("frontalface", stages=25)  # Load the built-in frontal-face Haar cascade
match_threshold = 20000                 # LBP distance below this counts as the same face (tune for your setup)
brightness_threshold = 0.6              # Minimum average brightness (0..1) required for the face region

known_faces = []                        # LBP descriptors of faces seen so far

while True:
    img = sensor.snapshot()             # Capture one frame
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # Detect faces
    if faces:
        face = faces[0]                 # Use the first detected face (x, y, w, h)
        img.draw_rectangle(face, color=(255, 0, 0))        # Draw a rectangle around the face
        # Average brightness of the face region, scaled to 0..1
        avg_brightness = img.get_statistics(roi=face).mean() / 255
        if avg_brightness > brightness_threshold:          # Only register well-lit faces
            descriptor = img.find_lbp(face)                # LBP descriptor of the face region
            matched = any(image.match_descriptor(known, descriptor) < match_threshold
                          for known in known_faces)
            if not matched:
                known_faces.append(descriptor)             # Remember the new face
    else:
        time.sleep_ms(100)              # Delay 100 ms when no face is in view
    print("Found", len(known_faces), "known faces")        # Print the number of known faces
    time.sleep_ms(100)                  # Delay 100 ms

Interpretation: This program is similar to the second case but adds a brightness check before registering a face. After initializing the camera in grayscale QVGA mode and loading the built-in frontal-face Haar cascade, it captures frames and detects faces in an infinite loop. For the first detected face it draws a rectangle and uses get_statistics() to compute the average brightness of the face region; only when that brightness exceeds the configured threshold is the LBP descriptor extracted and compared against the known-face list, with new faces appended to the list. Finally, the number of known faces is printed and the loop delays 100 milliseconds.

Case 4: Simple face recognition program

import sensor, image, time

sensor.reset()                          # Initialize the camera
sensor.set_pixformat(sensor.RGB565)     # Set the pixel format
sensor.set_framesize(sensor.QVGA)       # Set the frame size
sensor.skip_frames(time = 2000)         # Wait for the camera to stabilize

face_cascade = image.HaarCascade("frontalface", stages=25)  # Load the built-in frontal-face Haar cascade

while True:
    img = sensor.snapshot()             # Take a picture
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # Find faces
    if len(faces) > 0:                  # If a face was found
        print("Face detected")          # Print a message

Interpretation of key points:

First initialize the camera and set the pixel format and frame size.
Load the face cascade classifier; the frontal-face cascade built into the OpenMV firmware is used here.
In an infinite loop, take a picture and look for faces with the find_features() function. If a face is found, a message is printed.

Case 5: Display face recognition results on LCD screen

import sensor, image, lcd

lcd.init()                              # Initialize the LCD shield
sensor.reset()                          # Initialize the camera
sensor.set_pixformat(sensor.RGB565)     # Set the pixel format
sensor.set_framesize(sensor.QQVGA2)     # 128x160 frame size that matches the LCD shield
sensor.skip_frames(time = 2000)         # Wait for the camera to stabilize

face_cascade = image.HaarCascade("frontalface", stages=25)  # Load the built-in frontal-face Haar cascade

while True:
    img = sensor.snapshot()             # Take a picture
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # Find faces
    for face in faces:
        img.draw_rectangle(face, color=(255, 0, 0))  # Draw a red rectangle around each face
    lcd.display(img)                    # Show the annotated frame on the LCD shield

Interpretation of key points:
First initialize the LCD shield and the camera, and set the pixel format and a 128x160 frame size that matches the LCD.
Load the face cascade classifier; the frontal-face cascade built into the OpenMV firmware is used here.
In an infinite loop, take a picture and find faces with find_features(). A rectangle is drawn around each detected face with draw_rectangle(), and the annotated frame is shown on the LCD with lcd.display().

Case 6: Using facial recognition to control robot movement

import sensor, image, time
from pyb import Pin, Timer

# Motor driver PWM outputs: Timer 4 channels 1 and 2 on pins P7 and P8, 100 Hz
tim = Timer(4, freq=100)
motor1 = tim.channel(1, Timer.PWM, pin=Pin("P7"), pulse_width_percent=0)  # Left motor
motor2 = tim.channel(2, Timer.PWM, pin=Pin("P8"), pulse_width_percent=0)  # Right motor

sensor.reset()                          # Initialize the camera
sensor.set_pixformat(sensor.RGB565)     # Set the pixel format
sensor.set_framesize(sensor.QVGA)       # Set the frame size
sensor.skip_frames(time = 2000)         # Wait for the camera to stabilize

face_cascade = image.HaarCascade("frontalface", stages=25)  # Load the built-in frontal-face Haar cascade
speed = 50                              # Motor speed as a PWM duty cycle in percent

while True:
    img = sensor.snapshot()             # Take a picture
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # Find faces
    if len(faces) > 0:                  # If a face was found
        x, y, w, h = faces[0]           # Bounding box of the first face
        face_center_x = x + w // 2      # Horizontal center of the face
        if face_center_x < img.width() // 2:      # Face in the left half of the image
            motor1.pulse_width_percent(speed)     # Left motor forward
            motor2.pulse_width_percent(0)         # Right motor stop
        else:                                     # Face in the right half of the image
            motor1.pulse_width_percent(0)         # Left motor stop
            motor2.pulse_width_percent(speed)     # Right motor forward
    else:                                         # No face found
        motor1.pulse_width_percent(0)             # Left motor stop
        motor2.pulse_width_percent(0)             # Right motor stop
    time.sleep_ms(100)                  # Delay 100 ms to reduce CPU usage

Interpretation of key points:
First configure Timer 4 channels 1 and 2 as PWM outputs on pins P7 and P8 at 100 Hz to drive the two motors. Initialize the camera and set the pixel format and frame size. Load the face cascade classifier; the frontal-face cascade built into the OpenMV firmware is used here. Set the motor speed as a PWM duty cycle in percent (50 here).
In an infinite loop, take a picture and find faces. If a face is found, the motors are driven forward or stopped depending on which half of the image the face is in; if no face is found, both motors stop. The pulse_width_percent() method sets each motor's duty cycle.
time.sleep_ms(100) delays 100 ms at the end of each loop to reduce CPU usage.

Case 7: Detect and draw face bounding boxes:

import sensor
import image
import time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

face_cascade = image.HaarCascade("frontalface", stages=25)  # built-in frontal-face cascade

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.5, scale_factor=1.5)

    for face in faces:
        img.draw_rectangle(face)

    time.sleep_ms(100)

Interpretation of key points:
This program performs face detection with the OpenMV Cam and draws a bounding box around each detected face.
Use sensor.reset() to reset sensor settings.
Set image format, frame size, and disable auto gain and auto white balance.
Use image.HaarCascade("frontalface", stages=25) to instantiate a HaarCascade object for face detection; this frontal-face cascade is built into the OpenMV firmware, so no cascade file needs to be copied to the camera.
In an infinite loop, have the program continue to do the following:
Use sensor.snapshot() to take a snapshot of the image.
Use img.find_features(face_cascade, threshold=0.5, scale_factor=1.5) to detect faces in the image. The threshold and scale_factor parameters can be adjusted to get better detection results.
For each detected face, use img.draw_rectangle(face) to draw a rectangular bounding box on the image.
Use time.sleep_ms(100) to delay 100 milliseconds and control the processing rate.

Case 8: Face recognition and output recognition results:

import sensor
import image
import time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)      # LBP descriptors are computed on grayscale images
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

face_cascade = image.HaarCascade("frontalface", stages=25)  # built-in frontal-face cascade

# Reference face: a small grayscale image stored on the camera's flash or SD card
known_face_image = image.Image("/known_face.pgm")
known_face_descriptor = known_face_image.find_lbp((0, 0, known_face_image.width(), known_face_image.height()))

match_threshold = 20000                     # LBP distance below this counts as a match (tune for your setup)

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.5, scale_factor=1.5)

    for face in faces:
        descriptor = img.find_lbp(face)     # LBP descriptor of the detected face region
        distance = image.match_descriptor(known_face_descriptor, descriptor)
        if distance < match_threshold:
            print("Recognized face!")
        else:
            print("Unknown face!")

    time.sleep_ms(100)

Interpretation of key points:
This program performs face recognition through OpenMV Cam and outputs the recognition results based on known face images.
Use sensor.reset() to reset sensor settings.
Set image format, frame size, and disable auto gain and auto white balance.
Use image.HaarCascade("frontalface", stages=25) to instantiate a HaarCascade object for face detection; this frontal-face cascade is built into the OpenMV firmware.
Use image.Image("/known_face.pgm") to load a known face image (a small grayscale image stored on the camera) and compute its LBP feature descriptor with find_lbp().
In an infinite loop, have the program continue to do the following:
Use sensor.snapshot() to take a snapshot of the image.
Use img.find_features(face_cascade, threshold=0.5, scale_factor=1.5) to detect faces in the image.
For each detected face, use img.find_lbp(face) to compute the feature descriptor of the face region.
Use image.match_descriptor(known_face_descriptor, descriptor) to compare the detected face's descriptor with the known face's descriptor; it returns a distance, and a distance below the match threshold counts as a match.
Depending on the result, use print() to output "Recognized face!" or "Unknown face!".

Case 9: Face recognition and identification results:

import sensor
import image
import time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)      # LBP descriptors are computed on grayscale images
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

face_cascade = image.HaarCascade("frontalface", stages=25)  # built-in frontal-face cascade

# Reference face: a small grayscale image stored on the camera's flash or SD card
known_face_image = image.Image("/known_face.pgm")
known_face_descriptor = known_face_image.find_lbp((0, 0, known_face_image.width(), known_face_image.height()))

match_threshold = 20000                     # LBP distance below this counts as a match (tune for your setup)
tag_color = (0, 255, 0)                     # Color of the result label

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.5, scale_factor=1.5)

    for face in faces:
        descriptor = img.find_lbp(face)     # LBP descriptor of the detected face region
        distance = image.match_descriptor(known_face_descriptor, descriptor)
        if distance < match_threshold:
            img.draw_string(face[0], face[1], "Recognized", color=tag_color)
        else:
            img.draw_string(face[0], face[1], "Unknown", color=tag_color)

    time.sleep_ms(100)

Interpretation of key points:
This program performs face recognition through OpenMV Cam and marks the recognition results on the image.
Use sensor.reset() to reset sensor settings.
Set image format, frame size, and disable auto gain and auto white balance.
Use image.HaarCascade("frontalface", stages=25) to instantiate a HaarCascade object for face detection; this frontal-face cascade is built into the OpenMV firmware.
Use image.Image("/known_face.pgm") to load a known face image (a small grayscale image stored on the camera) and compute its LBP feature descriptor with find_lbp().
Define the color of the label.
In an infinite loop, have the program continue to do the following:
Use sensor.snapshot() to take a snapshot of the image.
Use img.find_features(face_cascade, threshold=0.5, scale_factor=1.5) to detect faces in the image.
For each detected face, use img.find_lbp(face) to compute the feature descriptor of the face region.
Use image.match_descriptor(known_face_descriptor, descriptor) to compare the detected face's descriptor with the known face's descriptor; a distance below the match threshold counts as a match.
Depending on the result, use img.draw_string() to label the face with "Recognized" or "Unknown" on the image.
Use time.sleep_ms(100) to delay 100 milliseconds and control the processing rate.

These practical application examples show how to use the OpenMV Cam for face recognition. The first example demonstrates face detection: a HaarCascade object finds faces and bounding boxes are drawn on the image. The second example performs face recognition and prints the result: a known face image is converted into a feature descriptor, each detected face's descriptor is matched against it, and the corresponding result is output. The third example goes one step further and labels the recognition result directly on the image with a text string.

Please note that the above cases are only meant to spark ideas and may contain errors or may not apply to your setup. Different hardware platforms, usage scenarios and MicroPython versions may require different code. In actual programming, adjust the examples to your hardware configuration and specific needs, and test them repeatedly on the real device. Make sure the hardware is connected correctly and understand the specifications and characteristics of the sensors and devices you use.


Origin blog.csdn.net/weixin_41659040/article/details/133577849