OpenMV Introductory Tutorial (Very Detailed): From Zero to Proficiency in One Article

1. What is OpenMV

OpenMV is an open-source, low-cost, and capable machine vision module.

The machine vision algorithms on OpenMV include color blob tracking, face detection, eye tracking, edge detection, marker tracking, and more.

It is built around an STM32F427 CPU with an integrated OV7725 camera chip. On this small hardware module, the core machine vision algorithms are implemented efficiently in C, and a Python programming interface is provided.

(This also means we can program it in Python, so we need to learn a little basic Python.)

————————————————

2. About OpenMV and OpenCV

OpenMV is an open-source machine vision framework, while OpenCV is an open-source computer vision library. Both are tools for building vision applications. The difference is that OpenMV runs on MCUs, while OpenCV runs on CPUs under various frameworks. OpenMV's advantage is that it is lightweight, but it is clearly weaker than OpenCV at processing highly complex image information.

————————————————

3. OpenMV Tutorial

Preface · OpenMV Chinese Introductory Tutorial

The link above is the official hands-on tutorial from Xingtong, and the content that follows is essentially my notes on those videos.

Home - Liao Xuefeng's official website

The link above is a site for learning Python syntax, suitable for readers who already have some background in another language.

————————————————

4. OpenMV IDE interface introduction

Above is the interface we see after downloading and opening the IDE.

The middle area is the code editor, where we write our code.

The upper right corner is the Frame Buffer, which shows the live image from the OpenMV camera.

The lower right shows the image histogram, where you can inspect the color values of the image in different color spaces when tuning thresholds.

Once OpenMV is connected, click Connect, and the live image is displayed.

Under the File menu in the upper left corner is the Examples submenu, which contains the official example scripts:

————————————

Controlling basic peripherals

————————————————

Drawing: images, crosses, lines, and rectangles
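For a taste of what these look like, here is a minimal sketch using the standard image.draw_* calls (the coordinates and colors are arbitrary placeholders):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

while(True):
    img = sensor.snapshot()
    img.draw_line((0, 0, 100, 100), color=(255, 0, 0))      # line from (0,0) to (100,100)
    img.draw_rectangle((50, 50, 80, 60), color=(0, 255, 0)) # frame: x, y, w, h
    img.draw_cross(160, 120, color=(0, 0, 255))             # cross at the image center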

————————————————

Image filtering

————————————————

Saving image snapshots, etc.

————————————————

Recording video, etc.

————————————————

Face detection and face tracking

————————————————

Feature point matching:

Line segment and circle detection, edge detection, template matching, etc.

————————————————

Pupil detection and human eye detection

————————————————

Color tracking: automatic grayscale color tracking, automatic RGB color tracking, infrared color tracking, etc.

————————————————

LCD examples, used when an external LCD shield is attached
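For reference, a minimal sketch along the lines of the official LCD example (assumes the OpenMV LCD shield; QQVGA2 is the 128x160 frame size that matches it):

import sensor, lcd

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA2) # 128x160, sized for the LCD shield
lcd.init()

while(True):
    lcd.display(sensor.snapshot())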

————————————————

An example for infrared thermal imaging

————————————————

Bluetooth, Wi-Fi, and servo expansion board examples

——————————————

Barcode and QR code scanning and recognition
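As a quick taste, a minimal QR-code sketch (img.find_qrcodes() is the standard call; the lens_corr(1.8) strength is what the official example uses for the stock lens):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

while(True):
    img = sensor.snapshot()
    img.lens_corr(1.8) # undo the lens distortion so the code is readable
    for code in img.find_qrcodes():
        print(code.payload()) # the decoded string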

————————————————

Then the Edit menu holds our most frequently used commands: copy, paste, and so on.

———————————————————————————

5. Explanation of basic procedures

# Hello World Example
#
# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time = 2000)     # Wait for settings take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())              # Note: OpenMV Cam runs about half as fast when connected
                                    # to the IDE. The FPS should increase once disconnected.

This is the code the IDE presents when we first open it. Let's analyze it.

· First of all, import loads the modules this code depends on. The Hello World script depends on:

sensor — the camera (photosensitive element) module
image — the image processing module
time — the clock/timing module

sensor.reset() — resets the camera
sensor.set_pixformat(sensor.RGB565) — sets the pixel format of the image (RGB565 is a 16-bit color format: 5 bits red, 6 bits green, 5 bits blue)
sensor.set_framesize(sensor.QVGA) — sets the camera resolution (QVGA is 320x240)
sensor.skip_frames(time = 2000) — skips frames for 2000 ms so the settings take effect
clock = time.clock() — creates the clock used to measure FPS

Then we enter the main while loop, where sensor.snapshot() continuously captures frames; the captured image can be viewed in the Frame Buffer in the upper right corner.

The last statement prints the frame rate, which we can see in the Serial Terminal pane at the bottom.

——————————————————————————————————

6. How to run OpenMV offline

If we want our code to run when not connected to the computer, we need offline operation.

Save the code in OpenMV's built-in flash memory (when OpenMV is connected to the computer, it mounts as a USB flash drive; we store our code or images on it).

Then use the one-click download option in the Tools menu (it saves the open script to the OpenMV Cam).

When the write succeeds, the onboard LED flashes once, meaning the transfer is complete. After that we can power OpenMV on again (i.e. restart it), and the saved code runs automatically. We can also see the script we just saved on OpenMV's USB drive.

OpenMV saves our code as main.py by default; we can also save it under another name ending in .py.

Note, however, that on power-up it automatically executes main.py, not any other script.

To check whether the code was saved correctly, save the LED-blink example code as main.py as a test, then check after power-up whether the LED blinks as expected, as in the sketch below.
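For example, a minimal blink main.py (a sketch using the standard pyb.LED API; LED(1) is the red channel of the onboard LED):

import time
from pyb import LED

led = LED(1) # 1 = red, 2 = green, 3 = blue

while(True):
    led.on()
    time.sleep_ms(500)
    led.off()
    time.sleep_ms(500)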

If we find that the code is sometimes not saved correctly, we can format the USB drive and save it again.

——————————————————————————————————

7. Color recognition

We saw earlier that OpenMV ships with many color tracking examples; let's look at single-color tracking first.

# Single Color RGB565 Blob Tracking Example
#
# This example shows off single color RGB565 tracking using the OpenMV Cam.

import sensor, image, time, math

threshold_index = 0 # 0 for red, 1 for green, 2 for blue

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green/blue things. You may wish to tune them...
thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
              (0, 30, 0, 64, -128, 0)] # generic_blue_thresholds

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs that with more pixels than "pixel_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. "merge=True" merges all overlapping blobs in the image.

while(True):
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds[threshold_index]], pixels_threshold=200, area_threshold=200, merge=True):
        # These values depend on the blob not being circular - otherwise they will be shaky.
        if blob.elongation() > 0.5:
            img.draw_edges(blob.min_corners(), color=(255,0,0))
            img.draw_line(blob.major_axis_line(), color=(0,255,0))
            img.draw_line(blob.minor_axis_line(), color=(0,0,255))
        # These values are stable all the time.
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        # Note - the blob rotation is unique to 0-180 only.
        img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
    print(clock.fps())


As before, the script starts by importing the modules it depends on:


import sensor, image, time, math

Next, choose which color threshold to track:

threshold_index = 0 # 0 for red, 1 for green, 2 for blue

Then reset the camera; set the pixel format to RGB565, the frame size to QVGA, skip frames while the settings settle, and create the FPS clock.

For color tracking, auto gain and auto white balance must be turned off (leaving them on may shift the color thresholds).

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

Inside the while loop:

First, take a snapshot from the camera.

 for blob in img.find_blobs([thresholds[threshold_index]], pixels_threshold=200, area_threshold=200, merge=True):

This is a Python for loop over the results of the color search: the find_blobs function performs the color recognition and returns a list. Its keyword parameters are as follows.

roi is the "region of interest".

left_roi = [0,0,160,240]
blobs = img.find_blobs([red],roi=left_roi)

x_stride is the minimum width, in pixels, of a blob in the x direction; the default is 2. If you only want blobs at least 10 pixels wide, set this parameter to 10:

blobs = img.find_blobs([red],x_stride=10)

y_stride is the minimum width, in pixels, of a blob in the y direction; the default is 1. If you only want blobs at least 5 pixels tall, set this parameter to 5:

blobs = img.find_blobs([red],y_stride=5)

invert inverts the thresholds, searching for the colors outside the threshold range instead.

area_threshold filters out any blob whose bounding-box area is smaller than this value.

pixels_threshold filters out any blob whose pixel count is smaller than this value.

merge, if set to True, merges all overlapping blobs into one.
Note: this merges all blobs regardless of color. If you want to track several colors separately, call find_blobs once per color threshold. A combined call with all of these parameters is sketched below.
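Putting these parameters together, a fully specified call might look like this (a sketch; the values are illustrative, not tuned):

blobs = img.find_blobs([red],
                       roi=(0, 0, 160, 240),   # search only the left half of a QVGA frame
                       x_stride=2, y_stride=1, # the default scan strides
                       pixels_threshold=200,   # drop blobs with fewer than 200 pixels
                       area_threshold=200,     # drop blobs whose bounding box is smaller than 200
                       merge=True)             # merge overlapping blobs into one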

Here, we are looking for red by default.

The matching red blobs get framed in the image.

find_blobs returns a list of multiple blobs. (Note the distinction between blobs and blob: the names simply distinguish the collection of color patches from a single color patch.)
The list is similar to a C array: a blobs list contains many blob objects, and each blob object holds the information of one color patch.

A blob has several methods:

blob.rect() returns the blob's bounding box as a rectangle tuple (x, y, w, h), which can be passed directly to image.draw_rectangle.

blob.x() returns the x coordinate of the bounding box (int); also available as blob[0].

blob.y() returns the y coordinate of the bounding box (int); also available as blob[1].

blob.w() returns the width w of the bounding box (int); also available as blob[2].

blob.h() returns the height h of the bounding box (int); also available as blob[3].

blob.pixels() returns the blob's pixel count (int); also available as blob[4].

blob.cx() returns the center x coordinate of the bounding box (int); also available as blob[5].

blob.cy() returns the center y coordinate of the bounding box (int); also available as blob[6].

blob.rotation() returns the blob's rotation angle in radians (float). If the blob is shaped like a pencil, the value ranges over 0~180°. If the blob is a circle, the value is meaningless. Only if the blob has no symmetry at all do you get 0~360°. Also available as blob[7].

blob.code() returns a 16-bit number with one bit per threshold. For example:

blobs = img.find_blobs([red, blue, yellow], merge=True)

If a blob matches red, its code is 0001; if blue, 0010. Note that a blob may be merged: a blob that is both red and blue has code 0011. This can be used to find color codes. Also available as blob[8].

blob.count() — if merge=True, several blobs may be merged into one; this returns how many were merged. With merge=False the value is always 1. Also available as blob[9].

blob.area() returns the area of the blob's bounding box; it should equal w * h.

blob.density() returns the blob's density: the blob's pixel count divided by the bounding-box area. A low density means the target is not locked on well.
For example, when detecting a red circle, blob.pixels() is the pixel count of the circle itself, while blob.area() is the area of the square circumscribing the circle. A small sketch using these methods follows below.
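As a small sketch of these methods in use (thresholds as in the example above):

for blob in img.find_blobs([red], pixels_threshold=200, area_threshold=200, merge=True):
    img.draw_rectangle(blob.rect())      # bounding box (x, y, w, h)
    img.draw_cross(blob.cx(), blob.cy()) # center of the bounding box
    print("pos:", blob.x(), blob.y(),
          "size:", blob.w(), "x", blob.h(),
          "pixels:", blob.pixels(),
          "density: %.2f" % blob.density())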

————————

threshold

red = (minL, maxL, minA, maxA, minB, maxB)

The above is the structure of a color threshold.

The values in the tuple are the minimum and maximum of the L, A, and B channels respectively.

The OpenMV IDE includes a threshold selection tool, which greatly simplifies the tuning of color thresholds.

————————————————————————————

8. The vision car

To control a vision-guided car, the most important job of the OpenMV / K210 is to recognize the course elements and send them to the main (master) microcontroller, so the master can steer the car. This is what we need most.

So next, I revisit the problem from two angles: recognition and transmission.

(The following content is based on the medicine delivery car from Problem F of the 2021 National Undergraduate Electronic Design Contest, and draws on several bloggers' posts, which I will credit one by one later.)

1. Line tracking with OpenMV

#uart = UART(1, 115200)   # UART config:   P1   P0 (TX, RX)
#uart = UART(3, 115200)   #                P4   P5


THRESHOLD = (20, 47, 21, 57, 11, 47)
import sensor, image, time,ustruct
from pyb import UART,LED
import pyb

sensor.reset()
#sensor.set_vflip(True)
#sensor.set_hmirror(True)
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQQVGA)
#sensor.set_windowing([0,20,80,40])
sensor.skip_frames(time = 2000)
clock = time.clock()


# Communication with the STM32
uart = UART(1,115200)     # create the UART 1 object
uart.init(115200, bits=8, parity=None, stop=1) # init with given parameters

# Recognition/drawing regions: left, right, top, and stop
roi1 = [(0, 17, 15, 25),           #  left:  x y w h
            (65,17,15,25),         #  right
            (30,0,20,15),          #  top
            (0,0,80,60)]           #  stop (the whole QQQVGA frame)


def send_data_w(x,a,f_x,f_a):
    global uart
    data = ustruct.pack("<bbhhhhb",      # two header bytes, four 16-bit shorts, one tail byte
                   0x2C,                      # frame header 1: 00101100
                   0x12,                      # frame header 2: 00010010
                   # the four payload values below are packed in the order given above
                   int(x),   # rho: line offset (intercept)
                   int(a),   # theta: line angle
                   int(f_x), # sign flag for x
                   int(f_a), # sign flag for a
                   0x5B)     # frame tail: 01011011
    uart.write(data)


while(True):
    clock.tick()
    img = sensor.snapshot().binary([THRESHOLD])
    line = img.get_regression([(100,100)], robust = True)

    left_flag,right_flag,up_flag=(0,0,0) # flags for the three ROI blocks, used in the checks below
    for rec in roi1:
        img.draw_rectangle(rec, color=(255,0,0)) # draw the ROI regions

    if (line):
        rho_err = abs(line.rho())-img.width()/2
        if line.theta()>90:
            theta_err = line.theta()-180
        else:
            theta_err = line.theta()
        # adjust theta into a signed Cartesian-style angle
        img.draw_line(line.line(), color = 127)
        # draw the fitted line
        x=int(rho_err)
        a=int(theta_err)
        f_x=0
        f_a=0
        if x<0:
          x=-x
          f_x=1
        if a<0:
          a=-a
          f_a=1

        if line.magnitude()>8:
            outdata=[x,a,f_x,f_a]
            print(outdata)
            send_data_w(x,a,f_x,f_a)  # send x, a and the flags: the offset, the angle, and their signs
            if img.find_blobs([(96, 100, -13, 5, -11, 18)],roi=roi1[0]):  #left
                left_flag=1
            if img.find_blobs([(96, 100, -13, 5, -11, 18)],roi=roi1[1]):  #right
                right_flag=1
            if img.find_blobs([(96, 100, -13, 5, -11, 18)],roi=roi1[2]):  #up
                up_flag=1
            if left_flag==1 and right_flag==1:
                send_data_w(0,0,2,2)
                time.sleep_ms(100)
                send_data_w(0,0,2,2)
                print(0,0,2,2)
                print('shizi')  # 'shizi' means a cross intersection


                continue
        else:
            pass
    else:
        send_data_w(0,0,3,3)
        print('3')
        print('stop')

The most important part of this is our packing function:

def send_data_w(x,a,f_x,f_a):
    global uart
    data = ustruct.pack("<bbhhhhb",      # two header bytes, four 16-bit shorts, one tail byte
                   0x2C,                      # frame header 1: 00101100
                   0x12,                      # frame header 2: 00010010
                   # the four payload values below are packed in the order given above
                   int(x),   # rho: line offset (intercept)
                   int(a),   # theta: line angle
                   int(f_x), # sign flag for x
                   int(f_a), # sign flag for a
                   0x5B)     # frame tail: 01011011
    uart.write(data)

Let us interpret this function.

The format string "<bbhhhhb" we defined starts with two signed bytes (1 byte each), so the first two bytes we receive on the STM32 will be 2C 12.

The short integers that follow are 2 bytes each and little-endian (for example, 1 is transmitted as 01 00), so we need to pay attention to the byte order and the number of bytes read when parsing.
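To see the byte order concretely, here is a quick REPL-style check of the same format string (the payload values are made up for illustration):

import ustruct

frame = ustruct.pack("<bbhhhhb", 0x2C, 0x12, 1, 45, 0, 1, 0x5B)
print([hex(b) for b in frame])
# ['0x2c', '0x12', '0x1', '0x0', '0x2d', '0x0', '0x0', '0x0', '0x1', '0x0', '0x5b']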

We use the regions of interest to flag the cross and the stop condition (sending special frames for those cases, with a short delay added to give the master time to handle the recognized special situation).

2. Digit recognition with the K210


import sensor, image, lcd, time
import KPU as kpu
import gc, sys
import ustruct
from machine import Timer
from fpioa_manager import fm
from machine import UART


fm.register(7, fm.fpioa.UART1_TX, force=True)
fm.register(6, fm.fpioa.UART1_RX, force=True)
uart = UART(UART.UART1, 115200, 8, 1, 0, timeout=1000, read_buf_len=4096)



def lcd_show_except(e):
    import uio
    err_str = uio.StringIO()
    sys.print_exception(e, err_str)
    err_str = err_str.getvalue()
    img = image.Image(size=(224,224))
    img.draw_string(0, 10, err_str, scale=1, color=(0xff,0x00,0x00))
    lcd.display(img)


def main(anchors, labels = None, model_addr=0x500000, sensor_window=(224, 224), lcd_rotation=0, sensor_hmirror=False, sensor_vflip=False):
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.set_windowing(sensor_window)
    sensor.set_hmirror(sensor_hmirror)
    sensor.set_vflip(sensor_vflip)
    sensor.run(1)

    lcd.init(type=1)
    lcd.rotation(lcd_rotation)
    lcd.clear(lcd.WHITE)

    if not labels:
        with open('labels.txt','r') as f:
            exec(f.read())
    if not labels:
        print("no labels.txt")
        img = image.Image(size=(320, 240))
        img.draw_string(90, 110, "no labels.txt", color=(255, 0, 0), scale=2)
        lcd.display(img)
        return 1
    try:
        img = image.Image("startup.jpg")
        lcd.display(img)
    except Exception:
        img = image.Image(size=(320, 240))
        img.draw_string(90, 110, "loading model...", color=(255, 255, 255), scale=2)
        lcd.display(img)

    task = kpu.load(model_addr)
    kpu.init_yolo2(task, 0.5, 0.3, 5, anchors) # threshold:[0,1], nms_value: [0, 1]
    try:
        flag=1
        #num=0
        while flag:
            img = sensor.snapshot()
            t = time.ticks_ms()
            objects = kpu.run_yolo2(task, img)
            t = time.ticks_ms() - t

            if objects:
                for obj in objects:
                    pos = obj.rect()
                    img.draw_rectangle(pos)
                    img.draw_string(pos[0], pos[1], "%s : %.2f" %(labels[obj.classid()], obj.value()), scale=2, color=(255, 0, 0))
                    objx = int((obj.x()+obj.w())/2)
                if labels[obj.classid()] == "1" :
                   uart.write('s')
                   uart.write('z')
                   uart.write('1')
                   uart.write('e') # >95 turn right, <95 turn left
                   print(1)

                if labels[obj.classid()] == "2":
                   uart.write('s')
                   uart.write('z')
                   uart.write('2')
                   uart.write('e')
                   print(2)

                if labels[obj.classid()] == "3" and objx >= 92 and objx <= 98:
                  # num = 3
                   uart.write('s')
                   uart.write('z')
                   uart.write('3')
                   uart.write('T')
                   uart.write('e')
                   print('T')
                   print(3)
                  # time.sleep(3)



                if labels[obj.classid()] == "3" and objx >98:
                   num =0
                   uart.write('s')
                   uart.write('z')
                   uart.write('3')
                   uart.write('R')
                   uart.write('e')
                   print('R')
                   print(3)
                if labels[obj.classid()] == "3" and objx <92:

                   uart.write('s')
                   uart.write('z')
                   uart.write('3')
                   uart.write('L')
                   uart.write('e')
                   print('L')
                   print(3)


                if labels[obj.classid()] == "4" and objx >95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('4')
                   uart.write('R')
                   uart.write('e')
                   print(4)
                if labels[obj.classid()] == "4" and objx <95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('4')
                   uart.write('L')
                   uart.write('e')
                   print(4)


                if labels[obj.classid()] == "5" and objx >95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('5')
                   uart.write('R')
                   uart.write('e')
                   print(5)
                if labels[obj.classid()] == "5" and objx <95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('5')
                   uart.write('L')
                   uart.write('e')
                   print(5)

                if labels[obj.classid()] == "6" and objx >95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('6')
                   uart.write('R')
                   uart.write('e')
                   print(6)
                if labels[obj.classid()] == "6" and objx <95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('6')
                   uart.write('L')
                   uart.write('e')
                   print(6)

                if labels[obj.classid()] == "7" and objx >95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('7')
                   uart.write('R')
                   uart.write('e')
                   print(7)
                if labels[obj.classid()] == "7" and objx <95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('7')
                   uart.write('L')
                   uart.write('e')
                   print(7)

                if labels[obj.classid()] == "8" and objx >95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('8')
                   uart.write('R')
                   uart.write('e')
                   print(8)
                if labels[obj.classid()] == "8" and objx <95:
                   uart.write('s')
                   uart.write('z')
                   uart.write('8')
                   uart.write('L')
                   uart.write('e')
                   print(8)

            img.draw_string(0, 200, "t:%dms" %(t), scale=2, color=(255, 0, 0))
            lcd.display(img)
    except Exception as e:
        raise e
    finally:
        kpu.deinit(task)


if __name__ == "__main__":
    try:
        labels = ['1', '2', '3', '4', '5', '6', '7', '8']
        anchors = [1.40625, 1.8125000000000002, 5.09375, 5.28125, 3.46875, 3.8124999999999996, 2.0, 2.3125, 2.71875, 2.90625]
        #main(anchors = anchors, labels=labels, model_addr="/sd/m.kmodel", lcd_rotation=2, sensor_window=(224, 224))
        main(anchors = anchors, labels=labels, model_addr=0x500000, lcd_rotation=2, sensor_window=(224, 224))
    except Exception as e:
        sys.print_exception(e)
        lcd_show_except(e)
    finally:
        gc.collect()

3. The code below was revised by a senior classmate, but it still needs further tuning

# Current drawback: the first detection immediately sends out a direction
import sensor, image, lcd, time
import KPU as kpu
import gc, sys
import ustruct
from Maix import GPIO
from machine import Timer
from fpioa_manager import fm
from machine import UART

fm.register(12, fm.fpioa.GPIO0,force=True)
fm.register(7, fm.fpioa.UART1_TX, force=True)
fm.register(6, fm.fpioa.UART1_RX, force=True)
uart = UART(UART.UART1, 115200, 8, 1, 0, timeout=1000, read_buf_len=4096)

LED_B = GPIO(GPIO.GPIO0, GPIO.OUT) # construct the LED object

def lcd_show_except(e):
    import uio
    err_str = uio.StringIO()
    sys.print_exception(e, err_str)
    err_str = err_str.getvalue()
    img = image.Image(size=(224,224))
    img.draw_string(0, 10, err_str, scale=1, color=(0xff,0x00,0x00))
    lcd.display(img)


def await_num(num_await,num_input,objx):
   """
   Wait for the digit num_await to appear.
   #  No longer needed: if num_await appears, return None so a new digit can be accepted (via recognize_num)
   If num_await has not appeared, keep looping on it.
   """
   if num_input == num_await:
      if objx >110:
         uart.write('s')
         uart.write('z')
         uart.write(num_await)
         uart.write('R')
         uart.write('e')
         print('R')
         print(num_await)
      elif objx <80:
         uart.write('s')
         uart.write('z')
         uart.write(num_await)
         uart.write('L')
         uart.write('e')
         print('L')
         print(num_await)
     #return None   # next time there is no need to recognize this digit again
     # Change: without returning None, no new digit is ever accepted (we never wait for a
     # new digit), which keeps the program from refreshing onto a fresh digit

   return num_await
   # return the recognized value to the main function below

# To recognize a new digit, power-cycle the board and run again


def recognize_num(num,objx):
   """
   Recognize the input digit.
   Return value: the digit to wait for next.
   """
   # Whatever digit is recognized, the handling is similar,
   # so one function can cover all the cases instead of repeating the code
   if num == '1' or num == '2':
      # These two are handled alike, so a single if suffices; for these digits
      # there is no need to wait for the same digit to appear again
      uart.write('s')
      uart.write('z')
      uart.write(num)
      uart.write('G')
      uart.write('e') # >95 turn right, <95 turn left
      print(num)
      return None # no digit needs to be awaited

   elif objx >= 80 and objx <= 110:
      uart.write('s')
      uart.write('z')
      uart.write(num)
      uart.write('T')
      uart.write('e')
      #time.sleep(1)
      #time.sleep_ms(500)
      print(num)
      print('T')
      return num # the digit to wait for next time


def main(anchors, labels = None, model_addr=0x500000, sensor_window=(224, 224), lcd_rotation=0, sensor_hmirror=False, sensor_vflip=False):
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.set_windowing(sensor_window)
    sensor.set_hmirror(sensor_hmirror)
    sensor.set_vflip(sensor_vflip)
    sensor.run(1)

    lcd.init(type=1)
    lcd.rotation(lcd_rotation)
    lcd.clear(lcd.WHITE)

    if not labels:
        with open('labels.txt','r') as f:
            exec(f.read())
    if not labels:
        print("no labels.txt")
        img = image.Image(size=(320, 240))
        img.draw_string(90, 110, "no labels.txt", color=(255, 0, 0), scale=2)
        lcd.display(img)
        return 1
    try:
        img = image.Image("startup.jpg")
        lcd.display(img)
    except Exception:
        img = image.Image(size=(320, 240))
        img.draw_string(90, 110, "loading model...", color=(255, 255, 255), scale=2)
        lcd.display(img)

    task = kpu.load(model_addr)
    kpu.init_yolo2(task, 0.5, 0.3, 5, anchors) # threshold:[0,1], nms_value: [0, 1]
    try:
        flag=1
        num=None
        while flag:
            img = sensor.snapshot()
            t = time.ticks_ms()
            objects = kpu.run_yolo2(task, img)
            t = time.ticks_ms() - t

            if num != 0:
               LED_B.value(0)

            if objects:
                for obj in objects:
                    pos = obj.rect()
                    img.draw_rectangle(pos)
                    img.draw_string(pos[0], pos[1], "%s : %.2f" %(labels[obj.classid()], obj.value()), scale=2, color=(255, 0, 0))
                    objx = int((obj.x()+obj.w())/2)
                # recognize a new digit
                    if num is None:
                       num = recognize_num(labels[obj.classid()],objx)
                    else:
                       num = await_num(num,labels[obj.classid()],objx)

                img.draw_string(0, 200, "t:%dms" %(t), scale=2, color=(255, 0, 0))
                lcd.display(img)
    except Exception as e:
        raise e
    finally:
        kpu.deinit(task)


if __name__ == "__main__":
    try:
        labels = ['1', '2', '3', '4', '5', '6', '7', '8']
        anchors = [1.40625, 1.8125000000000002, 5.09375, 5.28125, 3.46875, 3.8124999999999996, 2.0, 2.3125, 2.71875, 2.90625]
        main(anchors = anchors, labels=labels, model_addr="/sd/m.kmodel", lcd_rotation=2, sensor_window=(224, 224))
       # main(anchors = anchors, labels=labels, model_addr=0x500000, lcd_rotation=2, sensor_window=(224, 224))
    except Exception as e:
        sys.print_exception(e)
        lcd_show_except(e)
    finally:
        gc.collect()

——————————————

9. Finding the largest color blob

Unlike the template matching above, finding the largest color blob distinguishes the target by color. If the thing you need to find is not a fixed object, you can use color to tell it apart.

And while searching, you can change which object is singled out by changing the threshold:

The test code is as follows:


import sensor,lcd,time
import gc,sys
import ustruct

from machine import UART,Timer
from fpioa_manager import fm

# map the UART pins
fm.register(6, fm.fpioa.UART1_RX, force=True)
fm.register(7, fm.fpioa.UART1_TX, force=True)
uart = UART(UART.UART1, 115200, read_buf_len=4096)


# camera init
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_vflip(1) # rear-facing mode: what you see is what you get
sensor.set_auto_whitebal(False) # white balance off


# LCD init
lcd.init()
# Color threshold (L Min, L Max, A Min, A Max, B Min, B Max), LAB model
# Tuned here for orange; in practice it matches everything red-ish
barries_red = (20, 100, -5, 106, 36, 123)

clock=time.clock()


# packing function
def send_data_wx(x,a):
    global uart
    data = ustruct.pack("<bbhhb",   # two header bytes, two 16-bit shorts, one tail byte
                  0x2c,             # frame header 1
                  0x12,             # frame header 2
                  int(x),           # center x of the largest blob
                  int(a),           # center y of the largest blob
                  0x5B)             # frame tail
    uart.write(data)

# function to find the largest blob
def find_max(blobs):
    max_size=0
    max_blob=None
    for blob in blobs:
        if blob.pixels() > max_size:
            max_blob=blob
            max_size=blob.pixels()
    return max_blob


while True:
    clock.tick()
    img=sensor.snapshot()
    # find blobs matching the threshold, filtered by stride and blob size
    blobs = img.find_blobs([barries_red], x_stride=50, y_stride=50,
                           pixels_threshold=100, area_threshold=60)
    cx=0;cy=0;
    if blobs:
       max_blob = find_max(blobs) # locate the largest blob
       cx=max_blob[5]  # center x
       cy=max_blob[6]  # center y
       cw=max_blob[2]  # width
       ch=max_blob[3]  # height
       img.draw_rectangle(max_blob[0:4])
       img.draw_cross(cx, cy)

    lcd.display(img)     # show the frame on the LCD
    print(cx, cy)        # cx/cy default to 0, so this is safe when no blob is found
    send_data_wx(cx, cy)




==========> (To be continued…)
