Detailed Code Analysis of an OpenMV Line-Following Program for the October 2022 Electronics Design Contest (Part 1)

Foreword

(1) The electronics design contest is about to start, and machine vision will almost certainly be needed. To avoid a repeat of last October's last-minute scramble for a line-following program, I am sharing my OpenMV line-following scheme here and explaining how to adapt it.
(2) Before reading this article you should already have studied: OpenMV serial communication in detail; sending OpenMV image-processing results to an MCU; and OpenMV single-color recognition.
(3) Copyright statement: this code is adapted, with permission, from the code of the Taobao vendor Nameless Innovation. Commercial use without permission is prohibited!
(4) Note: the original code defines a fairly complex custom communication protocol. To make it easier for beginners to get started, I have made some light adjustments to it.
(5) To put your mind at ease first: this article can be mastered in 20 minutes at most, provided you can already use OpenMV color recognition, regions of interest, and serial communication proficiently.
(6) The main-controller code will be given in the next article: the CSDN editor starts to lag badly once a post exceeds 10,000 words, so there is no choice but to split this into two parts.

OpenMV Full Code

(1) The full code comes first, because many readers will only find this post after the competition has started and will not have much time left.
(2) OpenMV's entry point is main.py, which executes from the first line down, so this file must be named main.py!

#main.py -- put your code here!
import cpufreq
import pyb
import sensor,image, time,math
from pyb import LED,Timer,UART

sensor.reset()                      # Reset the image sensor (camera)
sensor.set_pixformat(sensor.RGB565) # Pixel format RGB565: color, 16 bits per pixel
sensor.set_framesize(sensor.QQQVGA)  # Frame size QQQVGA, 80x60
sensor.skip_frames(time = 2000)     # Skip frames for a while so the sensor can settle
sensor.set_auto_gain(False)          # Set this to False if the black line is hard to recognize
sensor.set_auto_whitebal(False)     # White balance must be off for color recognition, otherwise the color thresholds drift
clock = time.clock()                # Create a clock object to track the FPS
#sensor.set_auto_exposure(True, exposure_us=5000) # Auto exposure; see sensor.get_exposure_us()

red_led = pyb.LED(1)    # These three lines initialize the LEDs on the OpenMV board
green_led = pyb.LED(2)
blue_led = pyb.LED(3)
uart=UART(3,115200)   # Init UART 3 at 115200 baud; P4 is TX (to the MCU's RX), P5 is RX (to the MCU's TX)

class target_check(object):
    x=0          # int16_t: flags for the horizontal ROIs that saw the black line; bit weight decreases from left to right
    y=0          # int8_t: flags for the vertical (right-hand) ROIs that saw the black line; bit weight decreases from top to bottom

target=target_check()


# Draw a horizontal line
def draw_hori_line(img, x0, x1, y, color):
    for x in range(x0, x1):
        img.set_pixel(x, y, color)
# Draw a vertical line
def draw_vec_line(img, x, y0, y1, color):
    for y in range(y0, y1):
        img.set_pixel(x, y, color)
# Draw a rectangle
def draw_rect(img, x, y, w, h, color):
    draw_hori_line(img, x, x+w, y, color)
    draw_hori_line(img, x, x+w, y+h, color)
    draw_vec_line(img, x, y, y+h, color)
    draw_vec_line(img, x+w, y, y+h, color)


#Frame size is QQQVGA, 80x60
#The ROI format is (x, y, w, h)
track_roi=[(0,25,5,10),
           (5,25,5,10),
           (10,25,5,10),
           (15,25,5,10),
           (20,25,5,10),
           (25,25,5,10),
           (30,25,5,10),
           (35,25,5,10),
           (40,25,5,10),
           (45,25,5,10),
           (50,25,5,10),
           (55,25,5,10),
           (60,25,5,10),
           (65,25,5,10),
           (70,25,5,10),
           (75,25,5,10)]

target_roi=[(70,0,10,12),
           (70,12,10,12),
           (70,24,10,12),
           (70,36,10,12),
           (70,48,10,12)]


thresholds =(0, 30, -30, 30, -30, 30)  # LAB color threshold for black: (L_min, L_max, A_min, A_max, B_min, B_max)

hor_bits=['0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0'] # Records whether each of the 16 horizontal ROIs contains the black line
ver_bits=['0','0','0','0','0',]  # Records whether each of the 5 right-hand ROIs contains the black line
#__________________________________________________________________
def findtrack():
    target.x=0
    target.y=0
    img=sensor.snapshot()

    #Detect the black line in the 16 horizontal ROIs
    for i in range(0,16):
        hor_bits[i]=0
        '''
        thresholds is the black-line color threshold, roi is the region of interest
        merge=True merges all overlapping blobs into one
        margin: with margin=10, two blobs within 10 pixels of each other are also merged
        '''
        blobs=img.find_blobs([thresholds],roi=track_roi[i],merge=True,margin=10)
        #If the black line is detected, set the corresponding hor_bits entry to 1
        for b in blobs:
            hor_bits[i]=1

    #Detect the black line in the 5 right-hand ROIs
    for i in range(0,5):
        ver_bits[i]=0
        blobs=img.find_blobs([thresholds],roi=target_roi[i],merge=True,margin=10)
        for b in blobs:
            ver_bits[i]=1

    #For the 16 horizontal ROIs: pack hits into target.x and mark them with a red dot
    for k in range(0,16):
        if  hor_bits[k]:
            target.x=target.x|(0x01<<(15-k))
            img.draw_circle(int(track_roi[k][0]+track_roi[k][2]*0.5),int(track_roi[k][1]+track_roi[k][3]*0.5),1,(255,0,0))
    #For the 5 right-hand ROIs: pack hits into target.y and mark them with a green dot
    for k in range(0,5):
        if  ver_bits[k]:
            target.y=target.y|(0x01<<(4-k))
            img.draw_circle(int(target_roi[k][0]+target_roi[k][2]*0.5),int(target_roi[k][1]+target_roi[k][3]*0.5),3,(0,255,0))
    #Draw the 16 horizontal ROI rectangles
    for rec in track_roi:
        img.draw_rectangle(rec, color=(0,0,255))#Outline the ROI
    #Draw the 5 right-hand ROI rectangles
    for rec in target_roi:
        img.draw_rectangle(rec, color=(0,255,255))#Outline the ROI
           #high bits to low bits, left to right         top to bottom
    print((target.x & 0xff00)>>8,target.x & 0xff,target.y)
    uart.write(str((target.x & 0xff00)>>8))  # Debug output: also sends the high byte as ASCII text (the binary packet goes out in the main loop)
    #uart.write(str(target.x & 0xff))
    #uart.write(str(target.y))
    #uart.write("\r\n")

#__________________________________________________________________
def package_blobs_data():
    # Mask each entry to 0..255 so that bytearray() always receives valid byte values
    return bytearray([(target.x >> 8) & 0xff,
                      target.x & 0xff,
                      target.y & 0xff])
#__________________________________________________________________
i = 0
while True:
    findtrack()
    uart.write(package_blobs_data())
    uart.write("\r\n")
    i = i + 1
    if i == 50:
        i = 0
        green_led.toggle()
    pyb.delay(10)
    #uart.write("Hello World!\r")
    #uart.write(1+'\n')
    #Compute the FPS
    #print(clock.fps())
#__________________________________________________________________

Code Analysis

Focus 1: Image size setting

(1) In the code below, I will only explain the parts you may need to change.
(2) sensor.set_framesize(sensor.QQQVGA)
<1> This sets the image size. First, we need to understand what an image is to a computer.
<2> Anyone who has tinkered with embedded electronics has probably used a common OLED or dot-matrix display. How do we make such a display show a picture? Very simply: pixel by pixel.
<3> For example, the common 4-pin 0.96-inch OLED is 128*64 pixels, which means it has 128 pixels horizontally and 64 pixels vertically. Drawing an image is just a matter of lighting these pixels one by one.
<4> The same is true for OpenMV, except that its resolution is configurable. Here I use the QQQVGA frame size, which is 80x60 pixels: 80 pixels along the X axis and 60 along the Y axis.
<5> Why does this matter? Very simply, because it determines how the regions of interest are set up later.
<6> Reference for OpenMV frame size settings: https://book.openmv.cc/image/sensor.html


<7> Reference for regions of interest: https://book.openmv.cc/image/statistics.html
<8> Some readers may not have paid much attention to regions of interest when learning color recognition, so I will briefly explain them here.
<9> As noted above, the OpenMV image size is configurable. In the figure below, the image size is set to 16*16 pixels.
<10> The whole captured image is a flower, but if we only want to recognize the center of the flower (the area outlined in blue in the figure), then during color recognition we pass in roi=(5,2,6,4); see the sketch after the figure.

[Image: a 16*16 pixel image of a flower, with the ROI roi=(5,2,6,4) outlined in blue]
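As a minimal sketch of how a region of interest is passed to the color-tracking call: the threshold below is only a placeholder, and the roi value is taken from the 16*16 example above purely to show how the argument is used (on the real 80x60 frame you would scale it accordingly).

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQQVGA)              # 80x60
sensor.skip_frames(time=2000)

example_threshold = (0, 30, -30, 30, -30, 30)    # placeholder LAB threshold
img = sensor.snapshot()
# Only pixels inside roi=(x, y, w, h) are searched by find_blobs()
for b in img.find_blobs([example_threshold], roi=(5, 2, 6, 4), merge=True):
    print(b.cx(), b.cy())                        # blob centre, in full-image coordinates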


import cpufreq
import pyb
import sensor,image, time,math
from pyb import LED,Timer,UART

sensor.reset()                      # Reset the image sensor (camera)
sensor.set_pixformat(sensor.RGB565) # Pixel format RGB565: color, 16 bits per pixel
sensor.set_framesize(sensor.QQQVGA)  # Frame size QQQVGA, 80x60
sensor.skip_frames(time = 2000)     # Skip frames for a while so the sensor can settle
sensor.set_auto_gain(False)          # Set this to False if the black line is hard to recognize
sensor.set_auto_whitebal(False)     # White balance must be off for color recognition, otherwise the color thresholds drift
clock = time.clock()                # Create a clock object to track the FPS
#sensor.set_auto_exposure(True, exposure_us=5000) # Auto exposure; see sensor.get_exposure_us()

Focus 2: Serial port baud rate setting

(1) First, what is pyb.LED?
<1> It is the initialization function for the RGB LED on the OpenMV board.
<2> Why initialize the LED at all? If OpenMV suddenly stops for some reason and no longer feeds data back to the main controller, the fault is hard to locate during troubleshooting. So I make the LED blink while the program runs: if OpenMV is working normally, the LED keeps blinking, which makes problems much easier to track down.
(2) The UART() call is the serial port initialization.
<1> Note: the serial port broken out on P4/P5 is UART 3, so do not get clever and change the first parameter!
<2> The second parameter sets the baud rate, which must match the baud rate configured on the main controller.
<3> Wiring: OpenMV TX (P4) to MCU RX, OpenMV RX (P5) to MCU TX, GND to GND, 3.3V to 3.3V.

red_led = pyb.LED(1)    # These three lines initialize the LEDs on the OpenMV board
green_led = pyb.LED(2)
blue_led = pyb.LED(3)
uart=UART(3,115200)   # Init UART 3 at 115200 baud; P4 is TX (to the MCU's RX), P5 is RX (to the MCU's TX)
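If the link proves unreliable, the UART can also be opened with its settings spelled out. This is only a sketch of the pyb.UART options, not part of the original program; apart from the baud rate and timeout_char, these values are the defaults anyway.

from pyb import UART

# UART 3: TX on P4, RX on P5
uart = UART(3, 115200, bits=8, parity=None, stop=1, timeout_char=10)
uart.write("openmv ready\r\n")   # simple test string the MCU side can look for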

Focus 3: Line-following data storage

(1) First, we need to know what the October 2022 contest course looks like: it is a reverse-parking (backing into a garage) task.
(2) For this task, photoelectric (reflective-sensor) tracking is clearly not an option: the sensor bar would stick out beyond the car's footprint and violate the problem's size requirement.
(3) So the OpenMV camera is mounted at an angle.


(4) What OpenMV actually detects is shown below. As long as the Y-axis ROIs on the right detect black, the car can be reversed into the garage.
(5) When the X-axis reading is at the center position and the 5 identification areas on the right report detections, the car is about to back into the garage.

[Image: what OpenMV actually detects, with the 16 horizontal ROIs and the 5 right-hand ROIs]

(6) Therefore a class is created here; if you have no Python background, think of it as a C struct. It stores the data for the X and Y axes.
(7) If fine-tuning is needed, see the region-of-interest setup below for how to adjust it.

class target_check(object):
    x=0          # int16_t: flags for the horizontal ROIs that saw the black line; bit weight decreases from left to right
    y=0          # int8_t: flags for the vertical (right-hand) ROIs that saw the black line; bit weight decreases from top to bottom

target=target_check()

Code you do not need to change: a brief explanation

(1) The recognition code above draws some markers on the image. These only appear in the OpenMV IDE's frame buffer view and exist purely to make debugging easier. (The three helper functions below are defined but never actually called; the drawing in findtrack() uses the built-in img.draw_rectangle() and img.draw_circle() instead.)

# Draw a horizontal line
def draw_hori_line(img, x0, x1, y, color):
    for x in range(x0, x1):
        img.set_pixel(x, y, color)
# Draw a vertical line
def draw_vec_line(img, x, y0, y1, color):
    for y in range(y0, y1):
        img.set_pixel(x, y, color)
# Draw a rectangle
def draw_rect(img, x, y, w, h, color):
    draw_hori_line(img, x, x+w, y, color)
    draw_hori_line(img, x, x+w, y+h, color)
    draw_vec_line(img, x, y, y+h, color)
    draw_vec_line(img, x+w, y, y+h, color)
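These helpers write pixels straight into the frame buffer with img.set_pixel(). A hypothetical usage sketch, assuming the sensor setup and the track_roi/draw_rect definitions from the listing above (nothing in the original program actually calls them):

# Outline the first horizontal ROI in red
img = sensor.snapshot()
x, y, w, h = track_roi[0]          # (0, 25, 5, 10)
draw_rect(img, x, y, w, h, (255, 0, 0))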

Focus 4: Region-of-interest setup

(1) We set the OpenMV image size to QQQVGA, i.e. 80x60 pixels.
(2) Next we restrict recognition for tracking to specified parts of the image only.
(3) First, the 16 identification areas across the middle, track_roi.
<1> We want 16 identification areas along the X axis, and the X axis has 80 pixels in total, so w is 80/16 = 5 and x increases by 5 from one ROI to the next.
<2> The whole image has 60 pixels on the Y axis. If h is too small, the main controller gets too little time to react; if it is too large, it interferes with the 5 identification areas on the Y axis, so h is fixed at 10. Since the Y axis has 60 pixels its center is at 30, and with h = 10 the starting y is 30 - (10/2) = 25.
(4) Now the five recognition areas on the right, target_roi.
<1> After the explanation above this is easy to follow. If the width w is too small, recognition suffers; if it is too large, it interferes with the 16 recognition areas across the center, so w is set to 10. The X axis has only 80 pixels, so x is set to 80 - 10 = 70.
<2> There are 5 recognition areas on the right and 60 pixels on the Y axis, so h is set to 60/5 = 12 and y increases by 12 from one ROI to the next. (The two lists below can also be generated from this arithmetic; see the sketch after the arrays.)

#Frame size is QQQVGA, 80x60
#The ROI format is (x, y, w, h)
track_roi=[(0,25,5,10),
           (5,25,5,10),
           (10,25,5,10),
           (15,25,5,10),
           (20,25,5,10),
           (25,25,5,10),
           (30,25,5,10),
           (35,25,5,10),
           (40,25,5,10),
           (45,25,5,10),
           (50,25,5,10),
           (55,25,5,10),
           (60,25,5,10),
           (65,25,5,10),
           (70,25,5,10),
           (75,25,5,10)]

target_roi=[(70,0,10,12),
           (70,12,10,12),
           (70,24,10,12),
           (70,36,10,12),
           (70,48,10,12)]
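The same two lists can be produced from the arithmetic above, which makes it easier to experiment with a different frame size or ROI count. This generator is my own sketch, not part of the original code; it reproduces exactly the arrays listed above.

IMG_W, IMG_H = 80, 60        # QQQVGA
N_TRACK, TRACK_H = 16, 10    # 16 ROIs across the middle, each 10 px tall
N_TARGET, TARGET_W = 5, 10   # 5 ROIs down the right edge, each 10 px wide

track_w = IMG_W // N_TRACK                    # 80 / 16 = 5
track_y = IMG_H // 2 - TRACK_H // 2           # 30 - 5 = 25
track_roi = [(i * track_w, track_y, track_w, TRACK_H) for i in range(N_TRACK)]

target_h = IMG_H // N_TARGET                  # 60 / 5 = 12
target_x = IMG_W - TARGET_W                   # 80 - 10 = 70
target_roi = [(target_x, i * target_h, TARGET_W, target_h) for i in range(N_TARGET)]

print(track_roi)
print(target_roi)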

Focus 5: Black-line color threshold

(1) You only need to determine the values with the color-threshold tool and assign the result to thresholds below.
(2) Tutorial for the color-threshold tool: OpenMV color threshold setting.

thresholds =(0, 30, -30, 30, -30, 30)  # LAB color threshold for black: (L_min, L_max, A_min, A_max, B_min, B_max)
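To check a candidate threshold quickly in the OpenMV IDE, you can run it against the whole frame and watch which regions get boxed. A minimal sketch, using the same sensor setup as the full listing; the threshold value is only a starting point, not one measured for your lighting.

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQQVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

thresholds = (0, 30, -30, 30, -30, 30)   # candidate LAB threshold for black
while True:
    img = sensor.snapshot()
    # No roi argument: search the whole frame and box everything that matches
    for b in img.find_blobs([thresholds], merge=True, pixels_threshold=4):
        img.draw_rectangle(b.rect())     # boxes are visible in the IDE frame buffer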

Focus 6: The ROI flag arrays

(1) If you have several ROI groups, create one flag array for each.
(2) If a group contains N areas, write N '0' entries in its array.

hor_bits=['0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0'] # Records whether each of the 16 horizontal ROIs contains the black line
ver_bits=['0','0','0','0','0',]  # Records whether each of the 5 right-hand ROIs contains the black line
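The original lists are filled with the string '0', which only works because findtrack() overwrites every entry with an integer before it is read. A slightly cleaner way to size these flag arrays, assuming the ROI lists above (my own preference, not required):

hor_bits = [0] * len(track_roi)   # one flag per horizontal ROI (16 here)
ver_bits = [0] * len(target_roi)  # one flag per right-hand ROI (5 here)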

Focus 7: The tracking routine

(1) Only the parts that may need fine-tuning are explained here.
(2) target.x and target.y
<1> target.x and target.y record the results of the 16 regions of interest along the X axis and the 5 regions of interest on the right (the Y axis). They must be cleared on every call to this function, otherwise the data would be contaminated by the previous result.
<2> This ties back to Focus 3 above: however many quantities you defined there, that is how many you must clear here.
(3) for i in range(0,16): and for i in range(0,5):
<1> range(0,16) feeds the 16 values 0 through 15 into i. The bodies of the two for loops are essentially identical; if you have several ROI groups, write one such loop per group and just change the x in range(0,x).
<2> hor_bits[i]=0 clears the flag arrays first so that previous data cannot interfere. Do not change this.
<3> In the find_blobs() call, the only thing you need to change is the roi argument: set it to the region of interest you want.
(4) for k in range(0,16): and for k in range(0,5):
<1> If the black line is recognized, these two loops store the result into target.x and target.y. In the expressions (0x01<<(15-k)) and (0x01<<(4-k)), the constant in (0x01<<(x-k)) is determined by how many ROIs are in that group: 16 on the X axis gives x = 15, and 5 on the Y axis gives x = 4. (A worked example of this bit packing is given after point (5) below.)
<2> The draw_circle() call is only a visual cue shown in the OpenMV IDE when the black line is recognized. If you need to change it, pass in the four values from your own target_roi[] (or track_roi[]) entries, as follows:

[Image]

(5) for rec in track_roi: and for rec in target_roi:
<1> These loops draw the areas in which color recognition is performed. To change them, simply change the list after in in the for statement to the ROI list you defined.
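To make the bit packing concrete, here is a small stand-alone sketch with made-up detection flags rather than live camera data:

# Hypothetical frame: only the two centre ROIs (k = 7 and k = 8) saw the line
hor_bits = [0] * 16
hor_bits[7] = 1
hor_bits[8] = 1

x = 0
for k in range(16):
    if hor_bits[k]:
        x |= 0x01 << (15 - k)   # leftmost ROI -> bit 15, rightmost ROI -> bit 0

print(bin(x), hex(x))           # 0b110000000 0x180: bits 8 and 7 are set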

def findtrack():
    target.x=0
    target.y=0
    img=sensor.snapshot() # This must be kept; it captures the image frame

    #Detect the black line in the 16 horizontal ROIs
    for i in range(0,16):
        hor_bits[i]=0
        '''
        thresholds is the black-line color threshold, roi is the region of interest
        merge=True merges all overlapping blobs into one
        margin: with margin=10, two blobs within 10 pixels of each other are also merged
        '''
        blobs=img.find_blobs([thresholds],roi=track_roi[i],merge=True,margin=10)
        #If the black line is detected, set the corresponding hor_bits entry to 1
        for b in blobs:
            hor_bits[i]=1

    #Detect the black line in the 5 right-hand ROIs
    for i in range(0,5):
        ver_bits[i]=0
        blobs=img.find_blobs([thresholds],roi=target_roi[i],merge=True,margin=10)
        for b in blobs:
            ver_bits[i]=1

    #For the 16 horizontal ROIs: pack hits into target.x and mark them with a red dot
    for k in range(0,16):
        if  hor_bits[k]:
            target.x=target.x|(0x01<<(15-k))
            img.draw_circle(int(track_roi[k][0]+track_roi[k][2]*0.5),int(track_roi[k][1]+track_roi[k][3]*0.5),1,(255,0,0))
    #For the 5 right-hand ROIs: pack hits into target.y and mark them with a green dot
    for k in range(0,5):
        if  ver_bits[k]:
            target.y=target.y|(0x01<<(4-k))
            img.draw_circle(int(target_roi[k][0]+target_roi[k][2]*0.5),int(target_roi[k][1]+target_roi[k][3]*0.5),3,(0,255,0))
    #Draw the 16 horizontal ROI rectangles
    for rec in track_roi:
        img.draw_rectangle(rec, color=(0,0,255))#Outline the ROI
    #Draw the 5 right-hand ROI rectangles
    for rec in target_roi:
        img.draw_rectangle(rec, color=(0,255,255))#Outline the ROI
           #high bits to low bits, left to right         top to bottom
    print((target.x & 0xff00)>>8,target.x & 0xff,target.y)
    uart.write(str((target.x & 0xff00)>>8))  # Debug output: also sends the high byte as ASCII text (the binary packet goes out in the main loop)

Focus 8: OpenMV sends data over the serial port

(1) This function passes the black-line information recognized by OpenMV to the main controller, 8 bits at a time.
(2) If you need to change it, just adjust the list inside bytearray() to suit your needs (each entry passed to bytearray() must fit in a single byte, hence the & 0xff masks).
(3) Pay attention to what is actually transmitted: the bits of target.x run from high to low, left to right; the bits of target.y run from top to bottom.
(4) Some people may ask why. The reason is simple: it follows the order in which the ROIs were defined.

def package_blobs_data():
    # Mask each entry to 0..255 so that bytearray() always receives valid byte values
    return bytearray([(target.x >> 8) & 0xff,
                      target.x & 0xff,
                      target.y & 0xff])
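On the receiving side, the three bytes are simply recombined. The real main-controller code comes in the next article; this Python sketch only illustrates the byte order with hypothetical values.

def unpack_blobs_data(frame):
    # frame holds the 3 payload bytes received before "\r\n"
    x = (frame[0] << 8) | frame[1]   # 16 horizontal ROI flags, leftmost ROI = bit 15
    y = frame[2]                     # 5 right-hand ROI flags, topmost ROI = bit 4
    return x, y

# Example with made-up values:
print(unpack_blobs_data(bytearray([0x01, 0x80, 0x04])))   # (384, 4)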

The main loop: no changes needed, just a brief explanation

(1) Finally, the infinite loop. I toggle the green LED on OpenMV roughly every 500 ms (50 iterations x a 10 ms delay) so that you can check whether OpenMV is still running normally.
(2) uart.write() transfers the tracking data to the main controller. Since I adopted the ALIENTEK (正点原子) style communication protocol, each data frame ends with "\r\n"; the main-controller code must follow the same convention.

i = 0
while True:
    findtrack()
    uart.write(package_blobs_data())
    uart.write("\r\n")
    i = i + 1
    if i == 50:
        i = 0
        green_led.toggle()
    pyb.delay(10)   # 10 ms delay
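The clock object created at the top of the listing is only meaningful if clock.tick() is called once per iteration. A minimal sketch of measuring the real loop rate, assuming the definitions from the full listing (for tuning only, since printing also costs time):

while True:
    clock.tick()            # mark the start of this iteration
    findtrack()
    uart.write(package_blobs_data())
    uart.write("\r\n")
    print(clock.fps())      # frames per second actually achieved, shown in the IDE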
