2022 NUEDC Problem C Car-Following System: The OpenMV Part


Foreword

OpenMV is an open-source, powerful machine vision module. Many basic functions, such as color-blob recognition, can be implemented simply by calling built-in functions, and by using these functions skillfully OpenMV can replace several other peripherals. The peripherals on our provincial-contest car are very simple: only OpenMV, a keypad for setting the mode, a buzzer for sound prompts, and the Bluetooth module for communication between the leading car and the following car. At first I worried the car would run too fast for OpenMV's update rate to keep up, but testing showed OpenMV reaches 50~60 fps, which is entirely sufficient.
In other words, OpenMV handles three functions at the same time: line tracking, stop-line recognition, and distance measurement.


1. Review of Problem C

1. Task

Design a car-following driving system using a TI MCU, consisting of a leading car and a following car. The cars must have a line-tracking function, with speed adjustable from 0.3 to 1 m/s, and must be able to drive along the specified path, shown in Figure 1. Point A on the path is the starting point and end point of each run of the leading car; when a run is completed and the end point is reached, both the leading car and the following car give a sound prompt. The cars can travel either along the ABFDE rounded rectangle ("inner circle" for short) or along the ABCDE rounded rectangle ("outer circle" for short). When driving on the BFD section of the inner circle, a car should turn on a light indicator. In addition, during the test, the test expert may place a "wait-and-stop" sign (see the left side of Figure 1) on the straight section of the path where point E is located, indicating that the leading car must stop there and wait 5 seconds before continuing.
(Figure 1: the driving path)

2. Requirements

  1. Place the leading car at point A, the starting position of the path, and place the following car 20 cm behind it. Set the leading car's speed to 0.3 m/s and drive one lap along the outer-circle path, then stop. Requirements: (20 points)
    (1) The average speed error of the leading car is not greater than 10%;
    (2) The following car can follow the leading car, and no collision occurs during the whole process;
    (3) After the leading car completes a lap and stops at point A, the following car should stop in time; the difference between the two stopping times is not more than 1 s, and the distance to the leading car is 20 cm with an error of not more than 6 cm.
  2. Place the leading car at point A, the starting position of the path, and place the following car on the straight section of the path where point E is located, at a position designated by the test expert. Set the leading car's speed to 0.5 m/s and drive two laps along the outer-circle path, then stop. Requirements: (20 points)
    (1) The average speed error of the leading car is not greater than 10%;
    (2) The following car can quickly catch up with the leading car and then follow it at a distance of 20 cm, with no collision during the whole process;
    (3) After completing two laps, the leading car reaches point A and stops; the following car should stop in time, the difference between the two stopping times is not more than 1 s, and the distance to the leading car is 20 cm with an error of not more than 6 cm.
  3. Place the leading car at point A, the starting position of the path, and place the following car 20 cm behind it. The two cars complete three laps of the path continuously. On the first lap, both cars travel along the outer-circle path. On the second lap, the leading car drives along the outer-circle path while the following car takes the inner-circle path, overtaking and taking the lead. On the third lap, the following car drives along the outer-circle path while the leading car takes the inner-circle path, overtaking and taking the lead again. Requirements: (30 points)
    (1) The two cars run smoothly throughout, successfully complete the two overtakings, and no collision occurs;
    (2) After completing three laps, the leading car stops at point A, and the following car should stop in time; the difference between the two stopping times is not more than 1 s, and the distance to the leading car is 20 cm with an error of not more than 6 cm;
    (3) The driving speed can be set freely but must not be lower than 0.3 m/s; the shorter the time needed to complete the three laps, the better.
  4. The test expert places the "wait-and-stop" sign at a designated position on the straight section where point E is located. Then place the leading car at point A, the starting position of the path, and place the following car 20 cm behind it. Set the leading car's speed to 1 m/s and drive one lap along the outer-circle path; the two cars must not collide while driving. Requirements: (20 points)
    (1) The average speed error of the leading car is not greater than 10%;
    (2) The leading car stops at the "wait-and-stop" sign; the stopping position is accurate, with an error of not more than 5 cm;
    (3) The stopping time at the "wait-and-stop" sign is 5 s, with an error of not more than 1 s.

2. OpenMV Implementation Ideas and Code

1. Tracking

The code OpenMV uses for line tracking has been shared by many bloggers online. The general idea:
1. Divide the field of view into three horizontal regions (top, middle, bottom) and look in each for the largest black blob (or whatever color the required line is), i.e. call OpenMV's built-in img.find_blobs and then filter out the largest blob.

2. From the cx values of the centers of the three blobs, compute the offset angle of the current track relative to the car, which is then fed to the PID computation.

OpenMV's color-blob recognition has many details that can be exploited more cleverly; if I find time I may write a separate blog post about them.

def car_run():

    centroid_sum = 0

    # Use color recognition to find the line segment in each of the
    # three rectangular regions.
    for r in ROIS:
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=r[0:4], merge=True)
        # r[0:4] is the roi tuple.
        # merge=True merges the found regions into one blob.

        if blobs:
            # More than one blob (line segment) may be found in the region;
            # take the largest one as the target line for this region.
            most_pixels = 0
            largest_blob = 0
            for i in range(len(blobs)):
                if blobs[i].pixels() > most_pixels:
                    # blobs[i].pixels() is the blob's pixel count; if it is
                    # larger than most_pixels, update most_pixels and
                    # largest_blob.
                    most_pixels = blobs[i].pixels()
                    largest_blob = i

            # Mark the region's largest blob with a rectangle and a cross.
            img.draw_rectangle(blobs[largest_blob].rect())
            img.draw_cross(blobs[largest_blob].cx(),
                           blobs[largest_blob].cy())

            # centroid_sum accumulates each region's largest blob's center
            # x coordinate times the region's weight.
            centroid_sum += blobs[largest_blob].cx() * r[4] # r[4] is the roi weight.

    center_pos = (centroid_sum / weight_sum) # Determine center of line.

    # Convert center_pos to a deflection angle. A non-linear operation is
    # used so the response gets stronger the farther off the line we are.
    deflection_angle = 0
    # the angle the robot should turn

    # The 80 is half the X resolution and the 60 is half the Y resolution
    # (the image is QQVGA, 160x120). The equation computes the angle of a
    # triangle whose opposite side is the deviation of the center position
    # from the image center and whose adjacent side is half the Y
    # resolution. This limits the output to roughly -45..45 degrees.
    deflection_angle = -math.atan((center_pos-80)/60)
    # Note the result is in radians.

    # Convert radians to degrees.
    deflection_angle = math.degrees(deflection_angle)

    # The data is sent back to the MCU, and transmitting negative numbers
    # is inconvenient, so add 90 to shift the range.
    A = deflection_angle + 90
    return int(A)

However, this problem has an inner circle and an outer circle, so recognizing a single blob at the fork is clearly inadequate. Instead, find the two largest blobs in each of the top, middle, and bottom regions, and filter out small blobs to avoid background interference. This leaves two situations:
1. No fork: there is only one path, each region contains only one blob, and the computation is the same as in the code above.

2. At a fork, two paths appear. The fork may show up in only the front region, in two regions, or in all three. In this case compute the offset angles of the left and right paths separately: in regions where two blobs appear, assign them to the left and right sides; in regions with only one blob, use that blob for both sides.
Entering the fork:
Exiting the fork (seen from the outer circle; the inner circle is analogous):
The code is as follows:

def car_run():
    centroid_sum = [0,0]
    left_center=[-1,-1,-1]
    right_center=[-1,-1,-1]
    for r in range(3):
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=ROIS[r][0:4], merge=True,area_threshold=100,margin=3)
        if blobs:
            max_ID=[-1,-1]  # IDs of the two largest blobs
            max_ID=find_max(blobs)
            img.draw_rectangle(blobs[max_ID[0]].rect())
            img.draw_cross(blobs[max_ID[0]].cx(),
                           blobs[max_ID[0]].cy())
            if max_ID[1]!=-1:  # two blobs found: a fork
                img.draw_rectangle(blobs[max_ID[1]].rect())
                img.draw_cross(blobs[max_ID[1]].cx(),
                               blobs[max_ID[1]].cy())
                # Tell the left path from the right path.
                if blobs[max_ID[0]].cx()<blobs[max_ID[1]].cx():
                    left_center[r]=blobs[max_ID[0]].cx()
                    right_center[r]=blobs[max_ID[1]].cx()
                else:
                    left_center[r]=blobs[max_ID[1]].cx()
                    right_center[r]=blobs[max_ID[0]].cx()
            else:  # only one blob: left and right coincide
                left_center[r]=right_center[r]=blobs[max_ID[0]].cx()
            centroid_sum[0] += left_center[r] * ROIS[r][4]
            centroid_sum[1] += right_center[r] * ROIS[r][4]
    center_pos =[0,0]
    center_pos[0] = (centroid_sum[0] / weight_sum)
    center_pos[1] = (centroid_sum[1] / weight_sum)
    deflection_angle = [0,0]
    # QQVGA is 160x120, so the image center is (80, 60).
    deflection_angle[0] = -math.atan((center_pos[0]-80)/60)
    deflection_angle[1] = -math.atan((center_pos[1]-80)/60)
    deflection_angle[0] = math.degrees(deflection_angle[0])
    deflection_angle[1] = math.degrees(deflection_angle[1])
    if center_pos[0]==center_pos[1]==0:
        deflection_angle[1]=deflection_angle[0]=0
    A=[int(deflection_angle[0])+90,int(deflection_angle[1])+90]
    return A

After processing, the data is sent to the MCU, and the rest is handled there. A brief note on the MCU-side idea: driving both lanes from the car's point of view, you find that the outer lane should follow the right-hand data and the inner lane the left-hand data. On a straight section the left and right values in the returned data are equal, so the MCU only needs to select either the left or the right offset angle according to the inner/outer lane mode, and driving on either lane is achieved.
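As a sanity check of the idea above, the MCU-side lane selection can be sketched in a few lines (shown in Python for illustration; the actual firmware runs in C on the TI MCU, and the names `select_angle`, `OUTER`, `INNER` are my own, not from the original code):

```python
# Hypothetical sketch of the MCU-side lane selection described above.
# left_angle / right_angle are the two values returned by car_run(),
# already offset by +90 so they are always non-negative.

OUTER = 0
INNER = 1

def select_angle(left_angle, right_angle, lane_mode):
    """Pick which deflection angle to steer by, based on the lane mode.

    On a straight section both angles are equal, so the choice is
    harmless; at a fork, outer-lane mode follows the right-hand path
    and inner-lane mode follows the left-hand path.
    """
    if lane_mode == OUTER:
        return right_angle
    return left_angle

# Example: at a fork the two paths diverge.
print(select_angle(70, 110, OUTER))  # -> 110 (steer by the right path)
print(select_angle(70, 110, INNER))  # -> 70  (steer by the left path)
```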

2. Identify the stop line

The stop line is 5 cm wide. This feature lets the stop line be recognized during the same pass that computes the offset angle. Later, the wait-and-stop line was handled with the same idea.
Simply put: if a blob's pixel count exceeds a certain threshold, it is considered a stop line.

1. Stop-line recognition is performed only when the left offset angle equals the right offset angle, i.e. when there is no fork, to avoid interference from forks.
2. Do not use the blob's bounding-box area, which causes misjudgments; use the pixel count instead, via the blob.pixels() function. (Reading the manual carefully yields many unexpected gains.)
3. The camera's tilt angle and height above the ground affect the stop line's measured pixel count, which differs between the three regions, so three separate thresholds are set.
4. Recognizing the stop line this way alone is a bit crude and can misjudge, so cooperation from the MCU is also needed; the MCU-side idea is described after the code.

def car_run():
    centroid_sum = [0,0]
    left_center=[-1,-1,-1]
    right_center=[-1,-1,-1]
    flag_cross=0
    flag_Stop=0     # stop-line flag
    flag_Wait=[0,0] # wait-and-stop-line flags
    for r in range(3):
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=ROIS[r][0:4], merge=True,area_threshold=100,margin=3)
        if blobs:
            max_ID=[-1,-1]
            max_ID=find_max(blobs)
            img.draw_rectangle(blobs[max_ID[0]].rect())
            img.draw_cross(blobs[max_ID[0]].cx(),
                           blobs[max_ID[0]].cy())
            if max_ID[1]!=-1:
                img.draw_rectangle(blobs[max_ID[1]].rect())
                flag_cross=1
                img.draw_cross(blobs[max_ID[1]].cx(),
                               blobs[max_ID[1]].cy())
                if blobs[max_ID[0]].cx()<blobs[max_ID[1]].cx():
                    left_center[r]=blobs[max_ID[0]].cx()
                    right_center[r]=blobs[max_ID[1]].cx()
                else:
                    left_center[r]=blobs[max_ID[1]].cx()
                    right_center[r]=blobs[max_ID[0]].cx()
            else:
                #print(blobs[max_ID[0]].pixels(),blobs[max_ID[0]].w())
                if flag_cross==0:
                    if blobs[max_ID[0]].pixels()>range_stop[r]: # range_stop holds the stop-line pixel thresholds for the three regions
                        flag_Stop=r+1   # flag value 3, 2 or 1 indicates the region
                    if blobs[max_ID[0]].w()>range_wait[r]:
                        flag_Wait[0]=flag_Wait[0]+1 # same idea for the wait-and-stop line
                left_center[r]=right_center[r]=blobs[max_ID[0]].cx()
            centroid_sum[0] += left_center[r] * ROIS[r][4]
            centroid_sum[1] += right_center[r] * ROIS[r][4]
    center_pos =[0,0]
    center_pos[0] = (centroid_sum[0] / weight_sum)
    center_pos[1] = (centroid_sum[1] / weight_sum)
    if flag_Wait[0]==2:
        flag_Wait[1]=1
    deflection_angle = [0,0]
    deflection_angle[0] = -math.atan((center_pos[0]-80)/60)
    deflection_angle[1] = -math.atan((center_pos[1]-80)/60)
    deflection_angle[0] = math.degrees(deflection_angle[0])
    deflection_angle[1] = math.degrees(deflection_angle[1])
    if center_pos[0]==center_pos[1]==0:
        deflection_angle[1]=deflection_angle[0]=0
    A=[int(deflection_angle[0])+90,int(deflection_angle[1])+90,flag_Stop,flag_Wait[1]]
    return A

MCU-side idea: the car approaches the stop line gradually, so the line moves from far to near and the returned region value goes 3 -> 2 -> 1. When the MCU receives 3, then 2, then 1 along the way, it concludes a stop line has been reached and stops immediately. "Along the way" does not mean consecutively: some frames recognize no blob in a region (blind zones of the field of view), and while the stop line passes through a blind zone the flag returns 0. This method also avoids misjudgment at forks.
The wait-and-stop line works on the same principle. When the wait-and-stop line is recognized, the stop line is necessarily recognized as well, so for requirement 4 the rule is: the first time both are recognized together, handle it as the wait-and-stop line, and ignore the wait-and-stop flag afterwards even if it appears again. In other words, the first stop is the wait-and-stop, and the second stop is the final stop.
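The 3 -> 2 -> 1 detection with blind-zone tolerance can be sketched as follows (a Python illustration of the idea, not the actual MCU firmware; `StopLineDetector` and its interface are assumptions of mine):

```python
class StopLineDetector:
    """Detect the 3 -> 2 -> 1 sequence of flag_Stop values sent by OpenMV.

    Zeros in between are tolerated, because the stop line passes through
    blind zones of the camera's field of view and the flag returns 0 there.
    """
    def __init__(self):
        self.expected = 3  # the next region value we are waiting for

    def feed(self, flag_stop):
        """Feed one received flag_Stop value; return True once the full
        3, 2, 1 sequence has been seen (possibly with 0s in between)."""
        if flag_stop == 0:
            return False           # blind zone or no line: keep waiting
        if flag_stop == self.expected:
            self.expected -= 1     # advance through 3 -> 2 -> 1
        return self.expected == 0

det = StopLineDetector()
frames = [0, 3, 0, 0, 2, 0, 1]     # typical stream as the line approaches
hits = [det.feed(f) for f in frames]
print(hits[-1])  # -> True: stop line confirmed on the final frame
```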

3. Distance measurement

From the beginning I actually wanted to use OpenMV's built-in AprilTag marker tracking for distance measurement, which amounts to recognizing a marker to obtain the distance. At the time we glued a piece of cardboard to the back of the front car and pasted the tag on it.
Experiments showed this was not feasible. On our car the camera sits on top (with multiple cameras mounted in different places it might work, but we had no money), and the viewing angle is very awkward: even with a small AprilTag, at a 20 cm gap the tag is only barely fully visible. By the time the tag is seen, the gap is already about 20 cm and the cars are quite close; if it is not seen, they would simply collide.
Building on that observation (if it can be seen at all, the cars are already close), we switched to plain blob recognition. OpenMV's blob recognition is a love-hate thing: tuning the threshold is painful, but once tuned the recognition itself is easy. We attached a whole blue board to the back of the front car, which also covered its wheels, preventing the rear car from mistaking the front car's wheels for the track.
I considered many ways to judge distance from the blob: bounding-box area, pixel count, blob width, blob center point, and so on. In the end I simply used the y value of the blob's lower edge and converted it to a distance with a scale factor k tuned by actual measurement. In the LAB color space, negative b* means blue; we opened the blue side of the b* range wide and relaxed the L and a* ranges as well, so changes in lighting did not affect the color detection.
A quick note on LAB: L* is lightness (0 to 100), a* is the green-to-red component (-128 to 127), and b* is the blue-to-yellow component (-128 to 127).
There is probably only so much distance measurement, and the rest of the speed adjustment is left to the single-chip microcomputer.
The code:

# Threshold for the blue board; k is a scale factor tuned by measurement.
blobs=img.find_blobs([(30,60,-30,-10,-25,-12)],pixels_threshold=300,area_threshold=300,merge=False)
max_size=0
if blobs:
    for blob in blobs:
        # Track the lowest bottom edge (cy + h/2) among the blue blobs.
        if blob.cy()+0.5*blob.h() > max_size:
            img.draw_rectangle(blob.rect(),(255,0,0))
            max_size = blob.cy()+0.5*blob.h()
    row_data[2]=int(k*(120-max_size))  # convert the bottom-edge y value to a distance
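The scale factor k comes from actual measurement. One simple way to calibrate it (a sketch assuming the distance is roughly linear in the bottom-edge y value near the working range, which is only an approximation; `calibrate_k` is a made-up helper name): place the front car at a known distance, read the board's bottom-edge y value, and solve distance = k * (120 - y) for k:

```python
def calibrate_k(known_distance_cm, measured_bottom_y, img_height=120):
    """Solve distance = k * (img_height - y) for k, given one reference
    measurement at a known distance (e.g. the required 20 cm gap)."""
    return known_distance_cm / (img_height - measured_bottom_y)

# Example: with the front car placed exactly 20 cm away, suppose the blue
# board's bottom edge is reported at y = 100 (hypothetical reading).
k = calibrate_k(20, 100)
print(k)                    # -> 1.0
print(int(k * (120 - 90)))  # -> 30: a board edge at y = 90 reads as 30 cm
```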

Summary

Read OpenMV's reference documentation often, use it often, and call the member functions flexibly; many problems are actually not that complicated. The other point is to make good use of the MCU's computing power to process the data OpenMV transmits.
The MSP code: https://download.csdn.net/download/weixin_52385589/86393773?spm=1001.2014.3001.5503

import sensor, image, time, math
from pyb import UART,LED
LED(3).on()
uart = UART(3, 115200, timeout_char=1000)
u_start=bytearray([0xb3,0xb3])
u_over=bytearray([0x0d,0x0a])
GRAYSCALE_THRESHOLD = [(-125, 20, -21, 13, -28, 14)]  # line-tracking threshold
ROIS = [
        (0, 90, 160, 20, 0.7),
        (0, 50, 160, 20, 0.4),
        (0,  0, 160, 20, 0.05)
       ]  # the three regions
weight_sum = 0
range_stop=[390,190,100]  # minimum pixel counts for the stop line, per region
range_wait=[60,40,0]      # minimum blob widths for the wait-and-stop line, per region
for r in ROIS: weight_sum += r[4]
# camera setup
sensor.reset()
sensor.set_contrast(1)
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(30)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)
clock = time.clock()
sensor.set_vflip(True)
sensor.set_hmirror(True)
thresholds=[(-100, 72, -128, -16, -128, 127)]
# this threshold is used for distance measurement

# Find the two largest blobs; their IDs are stored in max_ID for later use.
def find_max(blobs):
    max_size=[0,0]
    max_ID=[-1,-1]
    for i in range(len(blobs)):
        if blobs[i].pixels()>max_size[0]:
            max_ID[1]=max_ID[0]
            max_size[1]=max_size[0]
            max_ID[0]=i
            max_size[0]=blobs[i].pixels()
        elif blobs[i].pixels()>max_size[1]:
            max_ID[1]=i
            max_size[1]=blobs[i].pixels()
    return max_ID


def car_run():
    centroid_sum = [0,0]
    left_center=[-1,-1,-1]  # center cx of the left-side blob per region, for the left offset angle
    right_center=[-1,-1,-1] # center cx of the right-side blob per region, for the right offset angle
    flag_cross=0    # fork present?
    flag_Stop=0     # stop flag
    flag_Wait=[0,0] # wait-and-stop flags
    for r in range(3):  # search for blobs in each of the three regions
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=ROIS[r][0:4], merge=True,area_threshold=100,margin=3)
        if blobs:
            max_ID=[-1,-1]
            max_ID=find_max(blobs)  # find the largest blobs
            img.draw_rectangle(blobs[max_ID[0]].rect())
            img.draw_cross(blobs[max_ID[0]].cx(),
                           blobs[max_ID[0]].cy())
            if max_ID[1]!=-1:   # two blobs, i.e. a fork: store them as left and right
                img.draw_rectangle(blobs[max_ID[1]].rect())
                flag_cross=1
                img.draw_cross(blobs[max_ID[1]].cx(),
                               blobs[max_ID[1]].cy())
                if blobs[max_ID[0]].cx()<blobs[max_ID[1]].cx():
                    left_center[r]=blobs[max_ID[0]].cx()
                    right_center[r]=blobs[max_ID[1]].cx()
                else:
                    left_center[r]=blobs[max_ID[1]].cx()
                    right_center[r]=blobs[max_ID[0]].cx()
            else:   # only one blob
                #print(blobs[max_ID[0]].pixels(),blobs[max_ID[0]].w())
                if flag_cross==0:   # no fork, so check for the stop line
                    if blobs[max_ID[0]].pixels()>range_stop[r]:
                        flag_Stop=r+1
                    if blobs[max_ID[0]].w()>range_wait[r]:
                        flag_Wait[0]=flag_Wait[0]+1
                left_center[r]=right_center[r]=blobs[max_ID[0]].cx()
            centroid_sum[0] += left_center[r] * ROIS[r][4]  # apply the region weight
            centroid_sum[1] += right_center[r] * ROIS[r][4]
    center_pos =[0,0]
    center_pos[0] = (centroid_sum[0] / weight_sum)
    center_pos[1] = (centroid_sum[1] / weight_sum)
    if flag_Wait[0]==2:
        flag_Wait[1]=1
    deflection_angle = [0,0]
    deflection_angle[0] = -math.atan((center_pos[0]-80)/60)  # compute the angle
    deflection_angle[1] = -math.atan((center_pos[1]-80)/60)
    deflection_angle[0] = math.degrees(deflection_angle[0])  # radians to degrees
    deflection_angle[1] = math.degrees(deflection_angle[1])
    if center_pos[0]==center_pos[1]==0:
        deflection_angle[1]=deflection_angle[0]=0
    A=[int(deflection_angle[0])+90,int(deflection_angle[1])+90,flag_Stop,flag_Wait[1]]
    return A

def degrees(radians):  # unused helper; math.degrees is used instead
    return (180 * radians) / math.pi
k=1
while(True):
    clock.tick()
    img = sensor.snapshot().lens_corr(strength = 1.8, zoom = 1.0)  # capture a frame and correct fisheye distortion
    row_data=[0,0,0,0,0]
    row_data[0],row_data[1],row_data[3],row_data[4]=car_run()
    # distance measurement on the blue board
    blobs=img.find_blobs([(30,60,-30,-10,-25,-12)],pixels_threshold=300,area_threshold=300,merge=False)
    max_size=0
    if blobs:
        for blob in blobs:
            if blob.cy()+0.5*blob.h() > max_size:
                img.draw_rectangle(blob.rect(),(255,0,0))
                max_size = blob.cy()+0.5*blob.h()
        row_data[2]=int(k*(120-max_size))  # compute the distance; k is tunable
    print(row_data)
    # send the data to the MCU
    uart_buf = bytearray(row_data)
    uart.write(u_start)
    uart.write(uart_buf)
    uart.write(u_over)

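For completeness, the receiving side has to locate the 0xB3 0xB3 header, take the five payload bytes, and verify the 0x0D 0x0A tail. A minimal sketch of such a parser (Python for illustration; the real receiving code is C on the MCU, and `parse_frame` is an assumed name):

```python
HEADER = b'\xb3\xb3'
TAIL = b'\x0d\x0a'
PAYLOAD_LEN = 5  # left angle, right angle, distance, flag_Stop, flag_Wait

def parse_frame(buf):
    """Scan buf for one complete frame; return the 5 payload bytes as a
    list, or None if no complete, well-formed frame is present."""
    start = buf.find(HEADER)
    if start < 0:
        return None                       # no header yet
    frame_end = start + len(HEADER) + PAYLOAD_LEN + len(TAIL)
    if len(buf) < frame_end:
        return None                       # frame not fully received yet
    payload = buf[start + 2 : start + 2 + PAYLOAD_LEN]
    if buf[start + 2 + PAYLOAD_LEN : frame_end] != TAIL:
        return None                       # corrupted frame
    return list(payload)

# Example: angles 95/95, distance 18, no stop flags.
raw = b'\xb3\xb3' + bytes([95, 95, 18, 0, 0]) + b'\x0d\x0a'
print(parse_frame(raw))  # -> [95, 95, 18, 0, 0]
```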

Original post: blog.csdn.net/weixin_52385589/article/details/126329933