[Diaoye Learns Programming] A MicroPython manual for real-time capture on the OpenMV Cam

MicroPython is a lightweight implementation of the Python 3 programming language designed to run on embedded systems. Compared with regular Python, the MicroPython interpreter is small (only about 100 KB) and is compiled into a binary executable, which yields higher execution efficiency. It uses a lightweight garbage-collection mechanism and omits most of the Python standard library to fit resource-constrained microcontrollers.

The main features of MicroPython include:
1. Syntax and semantics are compatible with standard Python, so it is easy to learn and use; most of Python's core syntax is supported.
2. Direct hardware access: GPIO, I2C, SPI, and other peripherals can be controlled much like on an Arduino.
3. A capable module system provides a file system, networking, graphics, and other facilities.
4. Cross-compilation to efficient native code is supported, running 10-100 times faster than interpreted bytecode.
5. The code size and memory footprint are small, suiting MCUs and development boards with little RAM.
6. It is released under an open-source license and is free to use; the interactive REPL makes development and testing convenient.
7. Built-in I/O drivers support a large number of microcontroller platforms, such as ESP8266, ESP32, STM32, micro:bit, control boards, and PyBoard, and there is an active community.

MicroPython application scenarios include:
1. Rapidly build prototypes and user interactions for embedded products.
2. Make some small programmable hardware projects.
3. As an educational tool, it helps beginners learn Python and IoT programming.
4. Build smart device firmware to achieve advanced control and cloud connectivity.
5. Various microcontroller applications such as Internet of Things, embedded intelligence, robots, etc.

Things to note when using MicroPython:
1. Memory and Flash space are limited.
2. Interpreted execution is slower than compiled C code.
3. Some library functions are different from the standard version.
4. Be aware of platform-specific syntax optimizations and of differences from standard Python.
5. Use memory resources rationally and avoid frequently allocating large memory blocks.
6. Use native code to improve the performance of speed-critical parts.
7. Use abstraction appropriately to encapsulate underlying hardware operations.

Generally speaking, MicroPython brings Python into the field of microcontrollers, which is an important innovation that not only lowers the programming threshold but also provides good hardware control capabilities. It is very suitable for the development of various types of Internet of Things and intelligent hardware.
OpenMV Cam is a small, low-power microcontroller board that lets you easily bring machine vision to real-world applications. You program the OpenMV Cam with high-level Python scripts (running on MicroPython) instead of C/C++. Its technical parameters include the following:

1. Processor: OpenMV Cam H7 Plus uses STM32H743II ARM Cortex M7 processor, running at 480 MHz, with 32MB SDRAM + 1MB SRAM and 32 MB external flash memory + 2 MB internal flash memory. OpenMV Cam M4 V2 uses STM32F427VG ARM Cortex M4 processor running at 180 MHz with 256KB RAM and 1 MB flash memory.
2. Image sensor: OpenMV Cam M4 V2 uses the OV7725 image sensor, which can capture 640x480 8-bit grayscale or 640x480 16-bit RGB565 images at 75 FPS at resolutions above 320x240, and at up to 150 FPS at 320x240 and below. (The OpenMV Cam H7 Plus ships with the higher-resolution 5 MP OV5640 sensor.)
3. I/O interface: OpenMV Cam H7 Plus and OpenMV Cam M4 V2 both have the following I/O interfaces:
(1) A full-speed USB (12 Mbps) interface for connecting to a computer. When the OpenMV Cam is plugged in, a virtual COM port and a "USB flash drive" appear on your computer.
(2) A μSD card slot capable of 100 Mbps reads/writes, letting your OpenMV Cam record video and pull machine-vision assets off the μSD card.
(3) An SPI bus running at up to 54 Mbps, letting you stream image data to an LCD expansion board, a WiFi expansion board, or another controller.
(4) An I2C bus (up to 1 Mbps), a CAN bus (up to 1 Mbps), and an asynchronous serial bus (TX/RX, up to 7.5 Mbps) for connecting to other controllers or sensors.
(5) A 12-bit ADC and a 12-bit DAC.
(6) There are interrupts and PWM on all I/O pins (there are 9 or 10 I/O pins on the board).
4. LED: Both OpenMV Cam H7 Plus and OpenMV Cam M4 V2 are equipped with one RGB LED (tri-color) and two bright 850nm IR LEDs (infrared).
5. Lens: Both OpenMV Cam H7 Plus and OpenMV Cam M4 V2 are equipped with a standard M12 lens interface and a default 2.8 mm lens. If you want to use a more professional lens with your OpenMV Cam, you can easily purchase it and install it yourself.
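To put these numbers in perspective, a quick back-of-the-envelope calculation (plain Python, using the figures from the list above) shows why raw QVGA video is a tight fit even for the 54 Mbps SPI bus:

```python
# Raw size of one QVGA RGB565 frame and the bandwidth needed at 75 FPS.
WIDTH, HEIGHT = 320, 240
BYTES_PER_PIXEL = 2          # RGB565 packs one pixel into 16 bits

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_bytes)           # 153600 bytes per frame (150 KB)

fps = 75
stream_mbps = frame_bytes * 8 * fps / 1e6
print(stream_mbps)           # 92.16 Mbps of raw pixel data
```

Raw RGB565 QVGA at 75 FPS needs roughly 92 Mbps, which is why image streams sent over SPI or WiFi are typically JPEG-compressed or transmitted at a reduced frame rate.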

The OpenMV Cam supports real-time capture: by writing MicroPython code, it can grab images and record video in real time.

Main features:

Fast real-time performance: the OpenMV Cam pairs a high-performance image processor with its sensor, so it can capture and process images at a stable frame rate, enabling true real-time shooting.

Multiple resolutions and formats: OpenMV Cam supports real-time shooting in multiple resolutions and image formats. You can choose the appropriate resolution and format according to your application needs to meet the shooting requirements in different scenarios.

Flexible image processing: OpenMV Cam has built-in rich image processing functions and algorithms, including filtering, edge detection, color recognition and other functions. This allows real-time shooting to go beyond simple image capture to enable real-time image processing and analysis.

Simplified development: The MicroPython programming language is easy to learn and suitable for beginners and educational fields. OpenMV Cam provides a friendly programming interface and sample code, making the development and debugging of real-time shooting functions easier and more convenient.

Application scenarios:

Visual monitoring and tracking: OpenMV Cam’s real-time shooting capabilities can be applied to visual monitoring and tracking systems. For example, it can be used for visual navigation of drones, capturing images of the surrounding environment in real time and performing real-time analysis and processing to achieve precise positioning and obstacle avoidance.

Teaching and scientific research: The real-time shooting function can be used for teaching and scientific research projects. Students and researchers can use OpenMV Cam to conduct experiments and research on real-time image acquisition and processing, and explore knowledge in computer vision, image processing and other related fields.

Video surveillance: OpenMV Cam can be used in real-time video surveillance systems, such as indoor security systems or industrial production line monitoring. Through real-time shooting and analysis, abnormal situations can be discovered in time and corresponding measures can be taken.

Things to note:

Storage: real-time capture may require a large amount of space to store the captured images or videos. When using the OpenMV Cam for live shooting, make sure enough storage is available, or add an external storage device.
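To make the storage point concrete, here is a rough sketch (plain Python; the frame size follows the QVGA RGB565 format used in the examples below, and uncompressed storage is assumed):

```python
# Estimate how many uncompressed QVGA RGB565 frames fit in 1 GiB.
frame_bytes = 320 * 240 * 2      # 153600 bytes per frame
card_bytes = 1 * 1024**3         # 1 GiB of free space

frames = card_bytes // frame_bytes
print(frames)                    # 6990 frames

seconds_at_30fps = frames / 30
print(round(seconds_at_30fps))   # 233 seconds, i.e. under 4 minutes of video
```

Compressed formats such as JPEG or MJPEG stretch this dramatically, which is one reason the OpenMV firmware favors them for recording.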

Power supply: The real-time shooting function has higher requirements on power supply, especially when shooting for a long time or continuously. When using OpenMV Cam for real-time shooting, you need to ensure a stable power supply to avoid problems caused by insufficient battery or power interruption.

Processing performance: Capturing and processing images in real time may require certain computing resources. When performing real-time shooting, the processing performance of the OpenMV Cam needs to be evaluated to ensure that it can meet the required real-time processing requirements.

In summary, the real-time shooting function of MicroPython's OpenMV Cam has the characteristics of fast real-time performance, flexible image processing, and simplified development. It is suitable for scenarios such as visual monitoring and tracking, teaching and scientific research, and video surveillance. When using it, you need to pay attention to factors such as storage space, power supply, and processing performance, and configure and optimize parameters according to specific needs and scenarios.

Case 1: Simple real-time shooting program

import sensor, image, time

sensor.reset()                      # initialize the camera
sensor.set_pixformat(sensor.RGB565) # set the pixel format
sensor.set_framesize(sensor.QVGA)   # set the frame size
sensor.skip_frames(time = 2000)     # let the camera settle

while(True):
    img = sensor.snapshot()         # capture a frame
    print(img)                      # print the frame's information

Interpretation of key points:
Initialize the camera, set the pixel format and frame size. In an infinite loop, use the snapshot() function to take photos in real time and print the photos.

Case 2: Shoot in real time and display the image on the LCD screen

import sensor, image, time, lcd

lcd.init()                          # initialize the LCD screen
sensor.reset()                      # initialize the camera
sensor.set_pixformat(sensor.RGB565) # set the pixel format
sensor.set_framesize(sensor.QVGA)   # set the frame size
sensor.skip_frames(time = 2000)     # let the camera settle

while(True):
    img = sensor.snapshot()         # capture a frame
    lcd.display(img)                # show it on the LCD screen

Interpretation of key points:
First initialize the LCD screen and camera, set the pixel format and frame size. In an infinite loop, a photo is taken in real time using the snapshot() function and the image is displayed on the LCD screen using the display() function.

Case 3: Shoot and detect faces in real time

import sensor, image, time

sensor.reset()                          # initialize the camera
sensor.set_pixformat(sensor.GRAYSCALE)  # Haar detection runs on grayscale images
sensor.set_framesize(sensor.QVGA)       # set the frame size
sensor.skip_frames(time = 2000)         # let the camera settle
face_cascade = image.HaarCascade("frontalface", stages=25)  # load the built-in face detector

while(True):
    img = sensor.snapshot()             # capture a frame
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)  # detect faces
    for r in faces:                     # draw a box around each detected face
        img.draw_rectangle(r, color=255)
    print(img)                          # print the annotated frame's information

Interpretation of key points:
First initialize the camera; Haar-based detection works on grayscale images, so the pixel format is set to GRAYSCALE. image.HaarCascade("frontalface") loads the frontal-face Haar cascade built into the OpenMV firmware. In an infinite loop, snapshot() captures a frame and find_features() scans it with the cascade, returning a list of bounding rectangles. draw_rectangle() marks each detected face on the image, and finally the annotated frame's information is printed.
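Haar-style detectors frequently report several overlapping rectangles for the same face. The sketch below (plain Python; the helper names are hypothetical, not part of the OpenMV API) shows one simple way to keep only the largest box of each overlapping group, representing each detection as an (x, y, w, h) tuple:

```python
def intersection_area(a, b):
    """Overlap area of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return w * h if w > 0 and h > 0 else 0

def merge_detections(rects, max_overlap=0.5):
    """Greedily drop rectangles that mostly overlap an already-kept, larger one."""
    kept = []
    for r in sorted(rects, key=lambda r: r[2] * r[3], reverse=True):
        area = r[2] * r[3]
        if all(intersection_area(r, k) / area <= max_overlap for k in kept):
            kept.append(r)
    return kept

boxes = [(10, 10, 50, 50), (12, 12, 50, 50), (100, 100, 40, 40)]
print(merge_detections(boxes))  # the two overlapping boxes collapse into one
```

This greedy suppression is a simplified cousin of the non-maximum suppression used in most object-detection pipelines.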

Case 4: Real-time display of camera images

import sensor, image, time, lcd

lcd.init()                          # initialize the LCD
sensor.reset()                      # initialize the sensor
sensor.set_pixformat(sensor.RGB565) # set the pixel format to RGB565
sensor.set_framesize(sensor.QVGA)   # set the frame size to QVGA
sensor.skip_frames(time = 2000)     # wait 2 s for the camera to settle

while(True):
    img = sensor.snapshot()         # grab one frame
    lcd.display(img)                # show it on the LCD
    time.sleep_ms(30)               # refresh the LCD roughly every 30 ms

Interpretation of key points: First import the sensor, image, time, and lcd modules. Initialize the LCD and the sensor, set the pixel format and frame size, and wait 2 seconds for the camera to stabilize. In the while loop, the program keeps grabbing one frame, displaying it on the LCD, and pausing 30 milliseconds between updates.

Case 5: Take photos in real time and save them to SD card

import sensor, image, time, os

sensor.reset()                      # initialize the sensor
sensor.set_pixformat(sensor.RGB565) # set the pixel format to RGB565
sensor.set_framesize(sensor.QVGA)   # set the frame size to QVGA
sensor.skip_frames(time = 2000)     # wait 2 s for the camera to settle

while(True):
    img = sensor.snapshot()                                # grab one frame
    img.save("/sd/photo_{}.bmp".format(int(time.time())))  # save it as a BMP on the SD card
    time.sleep_ms(1000)             # pause 1 s so each timestamped filename is unique
    print(os.listdir("/sd"))        # list the files on the SD card

Interpretation of key points: First import the sensor, image, time, and os modules. Initialize the sensor, set the pixel format and frame size, and wait 2 seconds for the camera to stabilize. In the while loop, the program keeps grabbing one frame and saving it as a BMP named after the current Unix time; it pauses one second between shots because time.time() has one-second resolution, so faster shooting would reuse the same filename. After each save, os.listdir() prints the SD card's file list.
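One caveat with timestamp-based names: if the board's real-time clock is not set, time.time() can restart from the same value after every reboot and silently overwrite earlier photos. A small sketch (plain Python; make_namer is a hypothetical helper, not an OpenMV API) of combining a session ID with a per-session counter to keep names unique:

```python
def make_namer(prefix="photo", session=0):
    """Return a function that yields unique BMP filenames for one session."""
    counter = 0
    def next_name():
        nonlocal counter
        counter += 1
        return "{}_{}_{:04d}.bmp".format(prefix, session, counter)
    return next_name

next_name = make_namer(session=1)
print(next_name())  # photo_1_0001.bmp
print(next_name())  # photo_1_0002.bmp
```

On the board, the session ID could be derived by counting existing files on the SD card at boot.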

Case 6: Real-time display of camera footage and recording of video

import sensor, image, time, mjpeg, lcd

lcd.init()                          # initialize the LCD
sensor.reset()                      # initialize the sensor
sensor.set_pixformat(sensor.RGB565) # set the pixel format to RGB565
sensor.set_framesize(sensor.QVGA)   # set the frame size to QVGA
sensor.skip_frames(time = 2000)     # wait 2 s for the camera to settle

m = mjpeg.Mjpeg("output.mjpeg")     # create an MJPEG video file on the SD card
clock = time.clock()                # FPS clock

for i in range(300):                # record 300 frames
    clock.tick()
    img = sensor.snapshot()         # grab one frame
    m.add_frame(img)                # append it to the video file
    lcd.display(img)                # and show it on the LCD

m.close(clock.fps())                # finalize the file with the measured FPS
print("Recording finished")

Interpretation of key points: First import the sensor, image, time, mjpeg, and lcd modules; the board records video with its built-in mjpeg module rather than with a desktop library such as OpenCV. Initialize the LCD and the sensor, set the pixel format and frame size, and wait 2 seconds for the camera to stabilize. Create an MJPEG file on the SD card, then for 300 frames grab an image, append it to the video file, and display it on the LCD. Finally, close the file, passing the measured frame rate so the recording plays back at the correct speed.

Case 7: Real-time display of camera images:

import sensor
import image
import lcd

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

while True:
    img = sensor.snapshot()
    lcd.display(img)

Interpretation of key points:
This program uses the sensor module and lcd module of OpenMV Cam to display the camera image in real time.
Use lcd.init() to initialize the LCD display.
Use sensor.reset() to reset the camera sensor.
Use sensor.set_pixformat(sensor.RGB565) to set the image pixel format to RGB565.
Use sensor.set_framesize(sensor.QVGA) to set the image frame size to QVGA (320x240).
Use sensor.run(1) to start camera image streaming.
In an infinite loop, make the program continue to do the following:
capture the camera image using sensor.snapshot() and store it in the variable img.
Use lcd.display(img) to display an image on the LCD display.
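The RGB565 format used throughout these examples packs one pixel into 16 bits: 5 bits of red, 6 bits of green, and 5 bits of blue. A plain-Python sketch of the packing, for intuition:

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(rgb888_to_rgb565(255, 0, 0)))      # 0xf800 (pure red)
print(hex(rgb888_to_rgb565(0, 255, 0)))      # 0x7e0  (pure green)
print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff (white)
```

Green gets the extra bit because the eye is most sensitive to it; the discarded low-order channel bits are why RGB565 images can show slight banding in smooth gradients.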

Case 8: Detect and display faces in images in real time:

import sensor
import image
import lcd
import time
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

# Load the face detection model from the SD card
model_path = "/sd/models/face_detection/face_detection.kmodel"
task = kpu.load(model_path)
# Anchor boxes for the YOLO2 face model (values from the MaixPy face-detect example)
anchors = (1.889, 2.5245, 2.9465, 3.94056, 3.99987,
           5.3658, 5.155437, 6.92275, 6.718375, 9.01025)
kpu.init_yolo2(task, 0.5, 0.3, 5, anchors)

while True:
    img = sensor.snapshot()
    # Run face detection
    faces = kpu.run_yolo2(task, img)
    if faces:
        for face in faces:
            # Draw a box around each detected face
            img.draw_rectangle(face.rect(), color=(255, 0, 0))
    lcd.display(img)

Interpretation of key points:
This program uses the sensor, lcd, and KPU modules to detect and display faces in the image in real time. (The KPU is the neural-network accelerator on K210-based boards running MaixPy, which share this MicroPython camera API; the STM32 OpenMV Cams load ML models through a different interface.)
Use lcd.init() to initialize the LCD display.
Use sensor.reset() to reset the camera sensor.
Use sensor.set_pixformat(sensor.RGB565) to set the image pixel format to RGB565.
Use sensor.set_framesize(sensor.QVGA) to set the image frame size to QVGA (320x240).
Use sensor.run(1) to start camera image streaming.
Load the face detection model from the SD card (/sd/models/face_detection/face_detection.kmodel) with kpu.load(), then configure the YOLO2 detector with kpu.init_yolo2().
In an infinite loop, make the program continue to do the following:
capture the camera image using sensor.snapshot() and store it in the variable img.
Run kpu.run_yolo2(task, img) to detect faces, which returns a list of detections (or None).
For each detected face, use img.draw_rectangle(face.rect(), color=(255, 0, 0)) to draw a red face box on the image.
Use lcd.display(img) to display the image with its face boxes on the LCD display.

Case 9: Real-time calculation of image frame rate:

import sensor
import image
import time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

frame_count = 0
start_time = time.ticks_ms()  # wrap-safe millisecond counter

while True:
    img = sensor.snapshot()
    frame_count += 1
    if frame_count % 100 == 0:
        elapsed_ms = time.ticks_diff(time.ticks_ms(), start_time)
        fps = frame_count * 1000 / elapsed_ms
        print("FPS:", fps)

Interpretation of key points:
This program uses the sensor module of OpenMV Cam to calculate the image frame rate in real time.
Use sensor.reset() to reset the camera sensor.
Use sensor.set_pixformat(sensor.RGB565) to set the image pixel format to RGB565.
Use sensor.set_framesize(sensor.QVGA) to set the image frame size to QVGA (320x240).
Use sensor.run(1) to start camera image streaming.
Initialize the frame count frame_count to 0 and record the start time with time.ticks_ms(), which is preferred over time.time() on microcontrollers because it has millisecond resolution and time.ticks_diff() handles counter wrap-around.
In an infinite loop, capture the camera image using sensor.snapshot() and increment the frame counter.

Every time 100 frames are captured, perform the following operations:
Compute the elapsed milliseconds with time.ticks_diff() against the recorded start time.
Calculate the frame rate fps as the number of frames divided by the elapsed time in seconds.
Use print() to output the frame rate.
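The counter above averages over the entire run, so the printed FPS reacts slowly once many frames have accumulated. A rolling-window variant (plain Python; RollingFPS is a hypothetical helper, and on the board you would feed it time.ticks_ms() readings) responds to recent changes instead:

```python
from collections import deque

class RollingFPS:
    """FPS over the last `window` frames, given millisecond timestamps."""
    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)

    def tick(self, now_ms):
        """Record a frame timestamp and return the current windowed FPS."""
        self.stamps.append(now_ms)
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) * 1000.0 / span if span > 0 else 0.0

fps = RollingFPS(window=5)
for t in range(0, 200, 40):   # simulate a frame every 40 ms -> 25 FPS
    value = fps.tick(t)
print(value)                  # 25.0
```

Because old timestamps fall out of the deque, a momentary stall shows up in the reading within a few frames rather than being diluted by the whole history.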

These sample codes provide the basic framework for live capture using MicroPython and OpenMV Cam. You can modify and extend these codes according to your needs, such as adding image processing algorithms, interacting with other sensors or modules, etc.

Please note that the above cases are only for expanding ideas and may contain errors or inapplicability. Different hardware platforms, usage scenarios and MicroPython versions may lead to different usage methods. In actual programming, you need to adjust it according to your hardware configuration and specific needs, and conduct multiple actual tests. It is important to ensure that the hardware is connected correctly and to understand the specifications and characteristics of the sensors and devices used.


Origin blog.csdn.net/weixin_41659040/article/details/133578636