2023 Electronics Design Competition: Moving Target Control and Automatic Tracking System (Question E)

Foreword

(1) Because this post exceeds 10,000 words, the MD editor becomes very laggy, so I split the write-up into the extended questions and the basic questions.
(2) Update log:
<1> August 4, 2023, 9:20. Split the extended questions from the basic questions, added blogger Huiyeee's complete code, and added a debugging method for the problem of the red dot being absorbed by the black tape.
<2> August 4, 2023, 15:55. Explained what the target value in the ball-chasing code is.

About the red dot being absorbed by the black tape

Making the red dot visible in the IDE

(1) Many people reported that the red dot gets "absorbed" by the black rectangle and cannot be recognized.
(2) The debugging approach recommended here is as follows:
<1> The first goal is to make the red laser visible to the naked eye in the IDE on the PC side. So start by adjusting the camera-initialization part of the code below.
<2> Change RGB565 to a GRAYSCALE image.
<3> Increase the image resolution appropriately, i.e. change QQVGA to a larger frame size.
<4> Set the brightness. Because this code was ported from OpenART, a brightness of 3000 does not apply here. Students using OpenMV should note that set_brightness() takes a value from -3 to +3.


<5> Exposure can follow another blogger's approach. Adjusting the exposure controls how light or dark the image is, producing different visual effects. We did not call sensor.set_auto_exposure() here, so auto exposure stays on by default. You can refer to that blogger's ideas and adjustments.
<6> For image recognition we normally turn off white balance and auto gain. But if tuning them off gives no result, you can try turning them back on to see whether that helps.

# Initialize the camera
sensor.reset()
sensor.set_pixformat(sensor.RGB565) # set the pixel format to RGB565
sensor.set_framesize(sensor.QQVGA)  # set the frame size to 160x120
sensor.set_auto_whitebal(True)      # enable auto white balance
sensor.set_brightness(3000)         # brightness 3000 (OpenART value; use -3 to +3 on OpenMV)
sensor.skip_frames(time = 20)       # skip frames while the settings take effect

(3) Note that at this stage only the camera initialization needs to be adjusted! Once the red dot is visible in the PC-side IDE, go back to the processing code.

The role of white balance and auto gain

(1) Turning off white balance and auto gain can be useful in the following scenarios:
<1> Maintaining color consistency: in image-recognition applications where color information matters, turning off white balance keeps colors consistent. White balance adjusts image colors according to the light source, so the same object can look different under different lighting conditions, hurting recognition accuracy.
<2> Optimization under specific lighting: in special environments such as low light or strong glare, turning off auto gain avoids over- or under-exposing the image. This preserves more detail, which helps the recognition algorithm process the image.
<3> Raw data analysis: some image-analysis applications want the raw sensor data, unaffected by the camera's color processing and brightness adjustment. Turning both off provides purer, more raw data, which can help optimize specific processing and recognition algorithms.
(2) Note that turning them off also brings challenges. For example, the colors in the image may vary widely, and some scenes may then need more complex image-processing algorithms to cope with different lighting conditions. Whether to turn off white balance and auto gain should be weighed against the specific application scenario.
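To see concretely why auto white balance hurts color consistency, here is a toy gray-world white-balance sketch in plain Python. This is an illustrative model only, not OpenMV's actual AWB algorithm, and the function names are made up:

```python
def gray_world_gains(pixels):
    """Estimate the per-channel gains a gray-world AWB algorithm would apply.

    `pixels` is a list of (r, g, b) tuples. AWB scales each channel so the
    channel averages match, which is why the same red laser dot can come out
    with different RGB values from frame to frame under changing light.
    """
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    return tuple(gray / a if a else 1.0 for a in avg)

def apply_gains(pixel, gains):
    """Apply the gains to one pixel, clamping to 8 bits."""
    return tuple(min(255, int(v * g)) for v, g in zip(pixel, gains))
```

On a red-dominated scene the red channel gets scaled down, washing the red dot toward gray, which is exactly the effect turning AWB off avoids.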

Ideas for the extended questions

Question 1 - Tracking

(1) I personally think this has to rely on PID, and that this task is similar to OpenMV's ball-chasing gimbal code.
(2) I think the basic questions can be completed with one OpenMV. The extended questions need two OpenMVs: one carries the red laser pointer and the other carries the green laser pointer.
(3) You can combine the ball-chasing gimbal code with what I discuss below: the notes on the CSDN blogger's code.
(4) Recommendations for tracking with the green laser pointer:
<1> Switch to a grayscale image, which removes the interference of other colors so that only the laser spot stands out, then track it.
<2> Raise the resolution; higher resolution gives a better result.
<3> Use multiple groups of color thresholds, as in the detailed OpenMV multi-color recognition code: make the recognized color threshold a list of several groups, and raise the resolution.
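A sketch of what "multiple groups of color thresholds" means in practice: a pixel counts as the target if it falls inside any one of the LAB threshold tuples. Only the first tuple below comes from the official routine above; the second is a made-up example:

```python
# OpenMV threshold order: (L_min, L_max, A_min, A_max, B_min, B_max)
RED_THRESHOLDS = [
    (13, 49, 18, 61, 6, 47),   # the red threshold from the official routine
    (20, 60, 25, 70, 10, 50),  # a second, looser group (made-up values)
]

def matches_any(lab, thresholds):
    """Return True if an (L, A, B) value falls inside any threshold group."""
    l, a, b = lab
    return any(lo_l <= l <= hi_l and lo_a <= a <= hi_a and lo_b <= b <= hi_b
               for (lo_l, hi_l, lo_a, hi_a, lo_b, hi_b) in thresholds)
```

find_blobs() does the same thing internally when you pass it a list of tuples: a blob matches if it satisfies any group.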

Question 2 - Tracking while doing basic questions 3 and 4

If the basic questions are done and the tracking from Question 1 works, then just have the red laser pointer run the basic-question path while the other gimbal tracks it with the Question 1 code. This should be similar to the code above.

Question 3 - Pause button

For this I still suggest soldering a button yourself and wiring a 104 (100 nF) capacitor in parallel to debounce it in hardware, then writing the logic code yourself.
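The capacitor smooths the contact bounce in hardware; the matching logic code can be as simple as accepting a level only after several identical readings in a row. A minimal sketch in plain Python (on the MCU a sampling loop would feed it raw pin readings; `stable_count` is an assumed tuning knob):

```python
def debounce(samples, stable_count=5):
    """Return the debounced pin level, or None if the input is not yet stable.

    `samples` is a list of raw readings (0 or 1), newest last. A level is
    accepted only after `stable_count` identical readings in a row, doing in
    software what the parallel 100 nF capacitor does in hardware.
    """
    if len(samples) < stable_count:
        return None
    tail = samples[-stable_count:]
    return tail[0] if all(s == tail[0] for s in tail) else None
```

The pause logic then only toggles state when debounce() returns a clean 0 (pressed, with the pull-up wiring used later in this post).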


main.py

(1) The following is the official ball-chasing gimbal code, to which I added comments.
(2) Because GitHub is on the external network, I copied the code directly. For the netizens who insist on the official GitHub link, I also posted it in the foreword.

The code

import sensor, image, time

from pid import PID
from pyb import Servo  # import the Servo class (servo control) from the built-in pyb module

pan_servo=Servo(1)  # pan servo, on pin P7
tilt_servo=Servo(2) # tilt servo, on pin P8

pan_servo.calibration(500,2500,500)
tilt_servo.calibration(500,2500,500)

red_threshold  = (13, 49, 18, 61, 6, 47)  # red color threshold

pan_pid = PID(p=0.07, i=0, imax=90) # PID parameters for P7; only P needs tuning
tilt_pid = PID(p=0.05, i=0, imax=90) # PID parameters for P8; only P needs tuning
#pan_pid = PID(p=0.1, i=0, imax=90)# use this PID when debugging online
#tilt_pid = PID(p=0.1, i=0, imax=90)# use this PID when debugging online

sensor.reset() # initialize the camera sensor
sensor.set_pixformat(sensor.RGB565) # use RGB565 color images
sensor.set_framesize(sensor.QQVGA) # use QQVGA resolution
sensor.skip_frames(10) # skip a few frames so the new settings take effect
sensor.set_auto_whitebal(False) # color recognition, so white balance must be off
clock = time.clock() # track the frame rate; not critical

#__________________________________________________________________
# Find the largest blob: the image may contain several blobs, so track the biggest one
def find_max(blobs):
    max_size=0
    for blob in blobs:
        if blob[2]*blob[3] > max_size:
            max_blob=blob
            max_size = blob[2]*blob[3]
    return max_blob

#__________________________________________________________________
while(True):
    clock.tick() # track the milliseconds elapsed between snapshots
    img = sensor.snapshot() # capture one frame

    blobs = img.find_blobs([red_threshold]) # find blobs matching the red threshold
    if blobs:   # if a red blob is found
        max_blob = find_max(blobs)  # call the helper above to find the largest blob
        pan_error = max_blob.cx()-img.width()/2
        tilt_error = max_blob.cy()-img.height()/2

        print("pan_error: ", pan_error)

        img.draw_rectangle(max_blob.rect()) # draw a rectangle around the largest blob
        img.draw_cross(max_blob.cx(), max_blob.cy()) # cx, cy

        pan_output=pan_pid.get_pid(pan_error,1)/2
        tilt_output=tilt_pid.get_pid(tilt_error,1) # both lines above run the PID computation
        print("pan_output",pan_output)
        pan_servo.angle(pan_servo.angle()+pan_output) # feed the outputs to the two servos to track the target
        tilt_servo.angle(tilt_servo.angle()-tilt_output)# one is + and the other -, because the two servos face and are mounted differently

Servo control

Servo selection

(1) Servo is the servo-control class. Because we imported it directly with from pyb import Servo, it can be written simply as Servo().
(2) Pin correspondence: Servo(1) - P7, Servo(2) - P8, Servo(3) - P9.
(3)
<1> So pan_servo.calibration configures P7, because pan_servo=Servo(1).
<2> tilt_servo.calibration configures P8, because tilt_servo=Servo(2).


Servo pulse setting

(1) pan_servo.calibration(500,2500,500) sets the servo's pulse widths. A common servo period is 20 ms, with the pulse width between 500 us and 2500 us.
(2) So the first parameter is 500 and the second is 2500. The third parameter is the pulse width at the servo's 0° position, which is also 500 according to the figure below. The last two parameters are optional, and I personally recommend leaving them out.
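Under such calibration values the servo driver maps an angle linearly onto a pulse width. A rough sketch of that relationship in plain Python, assuming a standard servo with a 1500 us centre and a ±90° range (the real mapping lives in the OpenMV firmware, and the blog's calibration uses a 500 us centre instead):

```python
def angle_to_pulse_us(angle_deg, pulse_min=500, pulse_max=2500, pulse_centre=1500):
    """Linear angle-to-pulse-width mapping for a typical 20 ms (50 Hz) hobby servo.

    Assumes -90..+90 degrees spans pulse_min..pulse_max around pulse_centre,
    and clamps to the calibrated limits so the servo is never over-driven.
    """
    us_per_degree = (pulse_max - pulse_min) / 180.0   # ~11.1 us per degree
    pulse = pulse_centre + angle_deg * us_per_degree
    return max(pulse_min, min(pulse_max, int(pulse)))
```

The clamp is what calibration() buys you: no matter what angle the PID asks for, the pulse stays inside the 500 to 2500 us window.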


What the PID target value is

pan_servo.angle() is the servo's current angle, and pan_output is the correction angle calculated by the PID. The implicit target is the centre of the image, since the error is measured from cx to width/2.
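Put together, one pan update looks like this sketch (a P-only approximation of get_pid(); the error is the blob's cx minus the image half-width, so zero error means the dot sits at the image centre):

```python
def pan_step(current_angle, blob_cx, img_width, kp=0.07):
    """One pan update, mirroring the main loop:
    pan_error  = max_blob.cx() - img.width()/2
    pan_servo.angle(pan_servo.angle() + pan_output)
    """
    pan_error = blob_cx - img_width / 2   # positive: blob is right of centre
    pan_output = kp * pan_error           # P-only stand-in for get_pid()
    return current_angle + pan_output
```

With the blob at the centre the servo holds still; the further the blob drifts, the larger the corrective step.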

Color Threshold Settings

(1) Because we want to track red, we set the threshold with red_threshold = (13, 49, 18, 61, 6, 47).
(2) Color threshold setting tutorial: OpenMV color threshold setting

P and I parameter setting of PID

(1) Note, because the gimbal is relatively stable and does not require high response speed, so I guess only PI is used instead of PID. So we can see that pan_pid and tilt_pid only have two parameters, P and I.
(2)The parameters of I do not need to be adjusted! imax is used for integral limit, don't change this!, if your gimbal shakes badly, it means that P is too large and needs to be adjusted down.
(3) If you feel that the response of your gimbal is too slow, you need to increase the P value.
(4) The best P value is the previous value when your gimbal has jitter. This is the optimal value of P. But in my opinion, there is no need for P to be too large, because the gimbal is relatively stable.
(5)Therefore, only the P value needs to be adjusted, and the I value and imax value do not need to be adjusted! ! !
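The "reduce P when it jitters, raise P when it is sluggish" rule shows up clearly in a toy closed-loop simulation. The plant model below (how strongly an angle change moves the blob in the image) is made up purely for illustration:

```python
def simulate_p(kp, target=100.0, angle=0.0, gain=2.0, steps=30):
    """Toy closed loop: a servo angle chases a target with a pure P controller.

    `gain` stands in for how much an angle change shifts the blob position.
    Returns the sequence of absolute errors, one per control step.
    """
    errors = []
    for _ in range(steps):
        error = target - angle * gain   # what the camera would measure
        angle += kp * error             # P-only servo update
        errors.append(abs(error))
    return errors
```

With this plant the error shrinks by a factor of (1 - 2*kp) each step: a small kp converges smoothly, while an oversized kp makes the error alternate sign and grow, which is exactly the jitter described above.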

Disabling the IDE preview (frame rate)

(1) When OpenMV is connected to the computer IDE, the running frame rate differs from standalone operation, because OpenMV streams image data to the IDE, which lowers the frame rate.
(2) The frame-rate drop means the PID parameters behave differently when running offline versus connected to the IDE.
(3) So click Disable in the upper-right corner of the IDE. Once the preview is disabled, what you see is the real offline running behavior.


pid.py

(1) We don't need to touch anything here, not even the PID code.
(2) Again, please don't make any changes here; any problems caused by changes are your own responsibility.

from pyb import millis
from math import pi, isnan
 
class PID:
    _kp = _ki = _kd = _integrator = _imax = 0
    _last_error = _last_derivative = _last_t = 0
    _RC = 1/(2 * pi * 20)
    def __init__(self, p=0, i=0, d=0, imax=0):
        self._kp = float(p)
        self._ki = float(i)
        self._kd = float(d)
        self._imax = abs(imax)
        self._last_derivative = float('nan')
 
    def get_pid(self, error, scaler):
        tnow = millis()
        dt = tnow - self._last_t
        output = 0
        if self._last_t == 0 or dt > 1000:
            dt = 0
            self.reset_I()
        self._last_t = tnow
        delta_time = float(dt) / float(1000)
        output += error * self._kp
        if abs(self._kd) > 0 and dt > 0:
            if isnan(self._last_derivative):
                derivative = 0
                self._last_derivative = 0
            else:
                derivative = (error - self._last_error) / delta_time
            derivative = self._last_derivative + \
                                     ((delta_time / (self._RC + delta_time)) * \
                                        (derivative - self._last_derivative))
            self._last_error = error
            self._last_derivative = derivative
            output += self._kd * derivative
        output *= scaler
        if abs(self._ki) > 0 and dt > 0:
            self._integrator += (error * self._ki) * scaler * delta_time
            if self._integrator < -self._imax: self._integrator = -self._imax
            elif self._integrator > self._imax: self._integrator = self._imax
            output += self._integrator
        return output
    def reset_I(self):
        self._integrator = 0
        self._last_derivative = float('nan')
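If you want to experiment with get_pid() off-hardware, here is a trimmed re-implementation with pyb.millis() replaced by an injected timestamp, keeping only the P and I paths the gimbal code actually uses. This is a sketch for experimentation, not a drop-in replacement for pid.py:

```python
class MiniPID:
    """Trimmed PI version of pid.py with an injectable clock (now_ms)."""
    def __init__(self, p=0.0, i=0.0, imax=0.0):
        self._kp, self._ki, self._imax = float(p), float(i), abs(imax)
        self._integrator = 0.0
        self._last_t = 0

    def get_pid(self, error, scaler, now_ms):
        dt = now_ms - self._last_t
        if self._last_t == 0 or dt > 1000:   # first call or stale data: reset I
            dt = 0
            self._integrator = 0.0
        self._last_t = now_ms
        output = error * self._kp * scaler   # P term, scaled as in pid.py
        if abs(self._ki) > 0 and dt > 0:
            # accumulate the integral and clamp it to +/- imax, as pid.py does
            self._integrator += error * self._ki * scaler * (dt / 1000.0)
            self._integrator = max(-self._imax, min(self._imax, self._integrator))
            output += self._integrator
        return output
```

With p=0.07 an error of 40 pixels yields an output of 2.8, matching the pan_output you would see printed in the main loop before the /2.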

Notes on the CSDN blogger's code

The code

Original link: https://blog.csdn.net/weixin_52385589/article/details/126334744

Sensor initialization code

sensor.reset()
sensor.set_auto_gain(False)
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QVGA) # or others
sensor.skip_frames(time=900) # let the new settings take effect
sensor.set_auto_exposure(False, 1000) # adjust the exposure here; once tuned, the laser dot is clearly visible
sensor.set_auto_whitebal(False) # turn this off
sensor.set_auto_gain(False) # turn off gain (must be off for blob detection)

Laser-point detection code

def color_blob(threshold):

    blobs = img.find_blobs(threshold,x_stride=1, y_stride=1, area_threshold=0, pixels_threshold=0,merge=False,margin=1)

    if len(blobs)>=1 : # at least one blob found
        cx = 0
        cy = 0
        for b in blobs: # average the centroids of all blobs
            #img.draw_rectangle(b[0:4]) # rect
            cx = cx + b[5]
            cy = cy + b[6]
        cx=int(cx/len(blobs))
        cy=int(cy/len(blobs))
        #img.draw_cross(cx, cy) # cx, cy
        print(cx,cy)
        return int(cx), int(cy)
    return -1, -1 # means nothing was found
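The idea of the snippet above, averaging the centroids of all detected blobs into one stable laser point, can be isolated like this. Blobs are modelled as OpenMV-style tuples where index 5 is cx and index 6 is cy:

```python
def average_centroid(blobs):
    """Average the (cx, cy) centroids of every detected blob.

    Averaging all the tiny laser reflections gives one stable point instead
    of a flickering one. Returns (-1, -1) when no blob was found, using the
    same "not found" convention as the snippet above.
    """
    if not blobs:
        return -1, -1
    cx = sum(b[5] for b in blobs) / len(blobs)
    cy = sum(b[6] for b in blobs) / len(blobs)
    return int(cx), int(cy)
```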

Color threshold

threshold=[(60, 255, -20, 20, -20, 20)]

Overall code

(1) This is the blogger's complete code; you can use it if you are interested.
(2) The copyright belongs to Huiyeee, the CSDN blogger! Students who use the code should go thank blogger Huiyeee!

def find_max(blobs):
    max_size=0
    for blob in blobs:
        if blob[2]*blob[3] > max_size :
            max_blob=blob
            max_size = blob[2]*blob[3]
    return max_blob


def FindCR():

    counts=0
    w_avg=0
    x_avg=0
    y_avg=0
    judge=0
    shape=0
    c_times=0
    r_times=0
    max_size=0
    cx=0
    cy=0
    while(counts<30):
        if(pin1.value()==0):
            break
        counts=counts+1
        clock.tick()# start timing this loop iteration
        img = sensor.snapshot().lens_corr(strength = 1.8, zoom = 1.0)# grab one frame and return an image object
        #img.find_blobs() finds all blobs in the image and returns a list of blob objects

        flag=0
        blobs=img.find_blobs(thresholds, pixels_threshold=200, area_threshold=300, merge=True)
        if blobs:
            max_blob=find_max(blobs)
            if max_blob:
                if judge==0:
                    for c in img.find_circles(max_blob.rect(),threshold = 5000, x_margin = 5, y_margin = 5,
                     r_margin = 5, r_min = 2, r_max = 200 , r_step = 2):
                        print(c)
                        if c.magnitude()>5000:
                            c_times=c_times+1
                            cx=cx+c.x()
                            cy=cy+c.y()
                            #img.draw_circle(c.x(),c.y(),c.r(),(0,255,0))
                        if c_times>5:
                            judge=1
                            shape=1
                            print("circle")
                            cx=int(cx/c_times)
                            cy=int(cy/c_times)
                            break
                    for r in img.find_rects(max_blob.rect(),threshold = 10):
                        img.draw_rectangle(r.x(),r.y(),r.w(),r.h(),(0,255,0))
                        print(r)
                        if r.magnitude()>50000 and judge==0:
                            r_times=r_times+1
                        if r_times:
                            judge=1
                            shape=0
                            print("rect")
                            break
                if abs(max_blob.w()-max_blob.h())<4:
                    if shape==1:
                        x_avg=cx
                        y_avg=cy
                    else:
                        x_avg=(max_blob.x()+0.5*max_blob.w()+x_avg)+max_blob.cx()
                        y_avg=(max_blob.y()+0.5*max_blob.h()+y_avg)+max_blob.cy()
                        max_size=max_size+2
                    flag=1
                #if(max_blob.cx()-max_blob.x()>w_avg):
                    #w_avg=max_blob.cx()-max_blob.x()
                #if(max_blob.w()-max_blob.cx()+max_blob.x())>x_avg:
                    #x_avg=max_blob.w()-max_blob.cx()+max_blob.x()

                #if(max_blob.cy()-max_blob.y()>w_avg):
                    #w_avg=max_blob.cy()-max_blob.y()
                #if(max_blob.w()-max_blob.cy()+max_blob.y())>x_avg:
                    #x_avg=max_blob.w()-max_blob.cy()+max_blob.y()
                if max_blob.w()>w_avg:
                    w_avg=max_blob.w()
                if max_blob.h()>w_avg:
                    w_avg=max_blob.h()
                #a=math.sqrt((max_blob.cx()-max_blob.x())**2-(max_blob.cy()-max_blob.y())**2)
                #if a>w_avg:
                    #w_avg=a
                img.draw_rectangle(max_blob.rect())# draw a rectangle; blob.rect() returns a rect tuple (usable as an roi)

                img.draw_cross(max_blob.cx(),max_blob.cy())
        if flag==0:
            counts=counts-1
        else:
            if shape==0:
                img.draw_cross(int(x_avg/max_size),int(y_avg/max_size))# draw a cross; blob.cx(), blob.cy() return the centre x and y
            else:
                img.draw_cross(x_avg,y_avg)
           #print(clock.fps())# clock.fps() stops timing and returns the current FPS (tick must be called first)
    if shape==0:
        print(int(x_avg/max_size),int(y_avg/max_size),int(w_avg))
        data=[int(x_avg/max_size)+1,int(y_avg/max_size)-2 ,int(w_avg),shape]
    elif shape==1:
        data=[x_avg,y_avg,int(w_avg),shape]

    return data




def detect():
    sensor.reset() # initialize settings
    sensor.set_pixformat(sensor.RGB565) # color mode
    sensor.set_framesize(sensor.QVGA) # resolution
    sensor.skip_frames(time = 2000) # skip the first 2000 ms of images
    sensor.set_auto_gain(False) # turn off auto gain (on by default)
    sensor.set_auto_whitebal(False) # turn off white balance; must be off for color recognition
    clock = time.clock() # create a clock to measure FPS and see whether the loop is slow

    judge=0
    r_time=0
    a_r=0
    a_c=0
    a_rt=0
    c_time=0
    rt_time=0
    row_data=[-1,-1]

    while(judge==0): # keep capturing frames
        clock.tick()
        img = sensor.snapshot().lens_corr(strength = 1.8, zoom = 1.0)

        blobs=img.find_blobs(thresholds,pixels_threshold=10,area_threshold=10)
        # OpenMV's built-in blob finder.
        # pixels_threshold: blobs with fewer pixels than this are ignored
        # roi: region of interest; blobs are only searched within it
        # area_threshold: blobs whose bounding-box area is smaller than this are filtered out
            #print('duty ratio of this shape', blob.density())
        if blobs:
            max_blob=find_max(blobs)
            if max_blob:


            #print('area of this shape', area)
             # density() returns blob area / bounding-box area automatically; the official docs are worth reading!
                if max_blob.density()>0.78:# in theory a rectangle should exactly fill its bounding box,
            # but tests always show some deviation; this value came from repeated trials. Same for the circle and triangle below

                    r_time=r_time+1
                    #a_r=a_r+area
                    #area=int(max_blob.x()*max_blob.y()*max_blob.density()*0.0185)
                    img.draw_rectangle(max_blob.rect())
                    print('rectangle density',max_blob.density())
                    if r_time>1:
                        #area=int(a_r/r_time)
                        #print(area)
                        judge=1
                        row_data[0]=1
                        print("detected: rectangle  ",end='')
                elif max_blob.density()>0.46:

                    #area=int(max_blob.x()*max_blob.y()*max_blob.density()*0.0575)
                    c_time=c_time+1
                    #a_c=a_c+area
                    img.draw_circle((max_blob.cx(), max_blob.cy(),int((max_blob.w()+max_blob.h())/4)))
                    print('circle density',max_blob.density())
                    if c_time>8:
                        #area=int(a_c/c_time)
                        #print(area)
                        judge=1
                        row_data[0]=2
                        print("detected: circle  ",end='')
                elif max_blob.density()>0.2:
                    #area=int(max_blob.x()*max_blob.y()*max_blob.density()*0.0207)
                    rt_time=rt_time+1
                    #a_rt=a_rt+area
                    img.draw_cross(max_blob.cx(), max_blob.cy())
                    print(max_blob.density(),)
                    if rt_time>20:
                        #area=int(a_rt/rt_time)
                        #if c_time>0:
                            #area=area-10*c_time
                        #print(area)
                        judge=1
                        row_data[0]=3
                        print("detected: triangle  ",end='')
                else: # basically anything with density below 0.4 is interference or a triangle, so simply ignore it all.
                    continue

                #row_data[1]=area

    area=0
    count=0
    while(1):
        clock.tick()
        img = sensor.snapshot().lens_corr(strength = 1.8, zoom = 1.0)
        img.binary(thresholds)
        img.draw_rectangle(80,40,150,120)
        x_init=80
        y_init=40

        for i in range(150):
            for j in range(100):
                if img.get_pixel(i+x_init,j+y_init)[0]==255:
                    count=count+1

        break
        #img.draw_rectangle(80,40,150,120)
        #sta=img.get_statistics(thresholds, invert=False, roi=(80,40,150,120))
        #area=area+(30-sta.mean())*1000
        #counts=counts+1
        #print(sta.mean())
        #if(counts>50):
            #area=int(area/counts*0.0031495)
            #print(area)
            #break
    row_data[1]=int(count*0.02329*1.14946236559)

    print(row_data[1])
    data=bytearray(row_data)
    uart.write(u_start)
    uart.write(data)
    uart.write(u_over)



#-------------------------------------------------------------------------

#threshold=[(90, 100, -101, 87, -94, 84)]
threshold=[(60, 255, -20, 20, -20, 20)]

def color_blob(threshold):

    blobs = img.find_blobs(threshold,x_stride=1, y_stride=1, area_threshold=0, pixels_threshold=0,merge=False,margin=1)

    if len(blobs)>=1 :
        cx = 0
        cy = 0
        for b in blobs: # average the centroids of all blobs
            #img.draw_rectangle(b[0:4]) # rect
            cx = cx + b[5]
            cy = cy + b[6]
        cx=int(cx/len(blobs))
        cy=int(cy/len(blobs))
        #img.draw_cross(cx, cy) # cx, cy
        print(cx,cy)
        return int(cx), int(cy)
    return -1, -1

import sensor, image, time ,math,utime
from pyb import UART
from pyb import Pin
from pyb import ExtInt
from pyb import LED



uart = UART(3, 115200, timeout_char=1000)  # initialize with the given baud rate
uart.init(115200, bits=8, parity=None, stop=1, timeout_char=1000)
u_start=bytearray([0xb3,0xb3])
u_over=bytearray([0x0d,0x0a])
thresholds = [(0, 15, -21,10, -18, 6),(0, 22, -28, 19, -4, 13),
(0, 25, -22, 10, -30, 10),(0, 25, -20, 15, -37, 12),(3,22,-4,40,-10,13)]# LAB thresholds
pin1 = Pin('P1', Pin.IN, Pin.PULL_UP)
pin2 = Pin('P2',Pin.IN,Pin.PULL_UP)


#extint = ExtInt(pin2, ExtInt.IRQ_FALLING, Pin.PULL_UP, callback_PIN2)
# threshold order: (L Min, L Max, A Min, A Max, B Min, B Max)

clock = time.clock()# create the clock object



while(1):
    while(pin1.value()==0):
        continue
    row_data = [-1,-1,-1,-1,-1,-1]
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time = 500)
    sensor.set_auto_gain(False) # turn off gain (must be off for blob detection)
    sensor.set_auto_whitebal(False) # turn off white balance
    LED(1).on()
    if(pin1.value()==0):
        continue
    if(pin2.value()==0):
        detect()

    row_data[0],row_data[1],row_data[2],row_data[3]=FindCR()
    if(pin1.value()==0):
        continue
    if(pin2.value()==0):
        detect()
    LED(1).off()
    LED(2).on()
    data=bytearray(row_data)
    uart.write(u_start)
    uart.write(data)
    uart.write(u_over)
    print(row_data)
    LED(2).off()
    if(pin1.value()==0):
        continue
    sensor.reset()
    sensor.set_auto_gain(False)
    sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
    sensor.set_framesize(sensor.QVGA) # or others
    sensor.skip_frames(time=900) # let the new settings take effect
    sensor.set_auto_exposure(False, 1000)
    sensor.set_auto_whitebal(False) # turn this off.
    sensor.set_auto_gain(False) # turn off gain (must be off for blob detection)
    times=0
    num=[-1,-1,-1]
    add=[-1,-1]
    while(True):
        if(pin1.value()==0):
            break
        if(pin2.value()==0):
            detect()
        #EXPOSURE_TIME_SCALE = 1.01
        #current_exposure_time_in_microseconds = sensor.get_exposure_us()

        # Auto exposure control (AEC) is enabled by default. Calling the function below disables it.
        # The "exposure_us" parameter then overrides the automatic exposure value.
        #sensor.set_auto_exposure(False, \
            #exposure_us = int(current_exposure_time_in_microseconds * EXPOSURE_TIME_SCALE))
        #roi=(int(row_data[0]-0.5*row_data[2]),int(row_data[1]-0.5*row_data[2]),row_data[2],row_data[2])
        clock.tick()
        img = sensor.snapshot().lens_corr(strength = 1.8, zoom = 1.0)# grab one frame and return an image object
        img.draw_circle(data[0],data[1],int(data[2]*0.5))
        img.draw_cross(data[0],data[1])
        img = sensor.snapshot().lens_corr(strength = 1.8, zoom = 1.0)# grab one frame and return an image object
        row_data[4],row_data[5]=color_blob(threshold)

        if row_data[4]!=-1:
            if times==0:
                num[times]=[row_data[4],row_data[5]]
            else:
                if abs(row_data[4]-num[0][0])>10 or abs(row_data[5]-num[0][1])>10:

                    continue # discard the data
            times=times+1
            num[0]=[row_data[4],row_data[5]]
            #img.draw_cross(row_data[4],row_data[5])
            data=bytearray(row_data)
            uart.write(u_start)
            uart.write(data)
            uart.write(u_over)
            print(row_data)

        print(pin2.value())

Code interpretation

Image type

(1) The image formats set by sensor.set_pixformat differ.
<1> Comparing the official code with the code posted above, I found one thing: his sensor.set_pixformat is set to GRAYSCALE, while ours is set to RGB565.
<2> The advantage of GRAYSCALE is that with only two tones the target is easier to distinguish, each pixel takes less space, and the resolution can be raised accordingly. The disadvantage is that multi-color recognition becomes problematic.
<3> There are only three colors in this problem: the white backboard, the green tracking dot, and the red tracked dot. Since you only need to track one color, you can try changing sensor.set_pixformat(sensor.RGB565) to sensor.set_pixformat(sensor.GRAYSCALE).
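What the switch to GRAYSCALE does, in effect, is collapse each RGB565 pixel to a single luminance value. A sketch of that conversion in plain Python, using BT.601 weights (the camera's own conversion may differ):

```python
def rgb565_to_gray(pixel565):
    """Convert one 16-bit RGB565 pixel to an 8-bit grey level."""
    r = (pixel565 >> 11) & 0x1F   # 5 bits of red
    g = (pixel565 >> 5) & 0x3F    # 6 bits of green
    b = pixel565 & 0x1F           # 5 bits of blue
    # expand each channel to 8 bits
    r, g, b = (r * 255) // 31, (g * 255) // 63, (b * 255) // 31
    # BT.601 luminance weights
    return (299 * r + 587 * g + 114 * b) // 1000
```

A bright laser spot stays bright after the collapse, while the color of the surroundings is lost, which is why grayscale works for tracking a single spot but not for multi-color recognition.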


Image Resolution

(2) Image resolution: sensor.set_framesize(sensor.QQVGA)
<1> The official routine uses QQVGA (160x120), while that blog uses QVGA (320x240).
<2> Higher resolution gives better recognition, but it should not be raised blindly, otherwise the color thresholds become difficult to set.


Skip frames

(3) Skipping frames: sensor.skip_frames(10)
<1> This gives OpenMV a settling time at initialization. The blog's code skips frames for 900 ms, while the official routine skips 10 frames.
<2> The choice is yours; I don't think 900 ms is necessary, it's too long.


Exposure setting

(4) sensor.set_auto_exposure(False, 1000)
<1> This sets the exposure: overexposed photos look too bright, underexposed ones too dark.
<2> The official routine does not call this function, so auto exposure stays enabled.
<3> The CSDN blog fixes the exposure time at 1000 us (1 ms). Set it according to your own test results.


White Balance and Auto Gain

(5) sensor.set_auto_whitebal(False) and sensor.set_auto_gain(False)
<1> Logically, for color recognition both of these should be off. The CSDN blog turns both off, but the official routine only turns off white balance with set_auto_whitebal().
<2> I personally suggest turning both off first.



Origin blog.csdn.net/qq_63922192/article/details/132096752