Training a rock-paper-scissors gesture-recognition model online with the new MaixHub and deploying it on the Sipeed MAIX BIT (K210)

Online training is done through MaixHub: https://maixhub.com
Then click Model Training.
Create a task. First create a dataset and collect the training set into it; I used my phone to collect the images, which is very convenient. When creating the task, select the dataset you just created.
Here we create an object detection task. Object detection returns coordinates and bounding boxes, while a classification task does not.
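For orientation, this is roughly what the detection API returns in MaixPy (a sketch reusing the names task, img and labels from the full script further down, so it is not runnable on its own):

objects = kpu.run_yolo2(task, img)  # detection returns a list of found objects
for obj in objects:
    x, y, w, h = obj.rect()         # each object carries bounding-box coordinates
    print(labels[obj.classid()], obj.value())  # plus a class id and a confidence score
# a classification model would instead give only per-class scores, with no boxes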
Start labeling the dataset once collection is complete.
Note: when collecting the dataset, you can include negative samples (pictures that contain none of the objects to be detected) or objects that look very similar to the targets; when labeling, you only need to label the actual targets. A single picture can also contain multiple detection targets.
First add the rock, paper, and scissors labels.
Then start labeling: press W to draw a box; after drawing, select the label, then press D to save and jump to the next image.
After everything is labeled, you can collect and label the validation set, which should be about 1/5 the size of the training set. If your training set is large enough, you can skip collecting a validation set; one will be split off from the training data automatically.
You can then create the training task.
Select the K210 model.
This is my training run. On the right you can see the curves for acc (accuracy) and loss (error). It is normal for loss to keep decreasing and acc to keep increasing. If acc does not meet your requirements, you can increase the number of training iterations or enlarge the dataset. (val_acc is the accuracy on the validation set.)
After the training is completed, the model can be deployed.
Then download and decompress the archive; you will get these three files.
Among them, the .kmodel file is the model; copy it to the K210's SD card.
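For reference, here is a minimal sketch of the two ways the generated script can load the model (both variants appear in main.py below; the flash-address line applies only if you flash the model to on-board flash instead of using the SD card):

import KPU as kpu

task = kpu.load("/sd/xxx.kmodel")  # load from the SD card; replace xxx with your model's file name
# task = kpu.load(0x300000)        # alternative: load from on-board flash at this address
kpu.deinit(task)                   # free KPU memory when finished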
Then open the MaixPy IDE and copy the decompressed main.py into it to run. (As the generated header comment notes, you can also just copy the files to the TF card and power the board on.)

# generated by maixhub, tested on maixpy3 v0.4.8
# copy files to TF card and plug into board and power on
import sensor, image, lcd, time
import KPU as kpu
import gc, sys

input_size = (224, 224)
labels = ['paper', 'rock', 'scissor']
anchors = [3.94, 4.22, 3.52, 3.38, 4.41, 4.97, 2.56, 3.0, 5.72, 5.97]
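# five (w, h) anchor pairs for YOLO2, generated by MaixHub together with the model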

def lcd_show_except(e):
    import uio
    err_str = uio.StringIO()
    sys.print_exception(e, err_str)
    err_str = err_str.getvalue()
    img = image.Image(size=input_size)
    img.draw_string(0, 10, err_str, scale=1, color=(0xff,0x00,0x00))
    lcd.display(img)

def main(anchors, labels = None, model_addr="/sd/m.kmodel", sensor_window=input_size, lcd_rotation=0, sensor_hmirror=False, sensor_vflip=False):
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.set_windowing(sensor_window)
    sensor.set_hmirror(sensor_hmirror)
    sensor.set_vflip(sensor_vflip)
    sensor.run(1)

    lcd.init(type=1)
    lcd.rotation(lcd_rotation)
    lcd.clear(lcd.WHITE)

    if not labels:
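        # the labels.txt shipped in the MaixHub download is expected to hold a Python
        # assignment such as: labels = ['paper', 'rock', 'scissor'], which exec() runs here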
        with open('labels.txt','r') as f:
            exec(f.read())
    if not labels:
        print("no labels.txt")
        img = image.Image(size=(320, 240))
        img.draw_string(90, 110, "no labels.txt", color=(255, 0, 0), scale=2)
        lcd.display(img)
        return 1
    try:
        img = image.Image("startup.jpg")
        lcd.display(img)
    except Exception:
        img = image.Image(size=(320, 240))
        img.draw_string(90, 110, "loading model...", color=(255, 255, 255), scale=2)
        lcd.display(img)

    try:
        task = None
        task = kpu.load(model_addr)
        kpu.init_yolo2(task, 0.5, 0.3, 5, anchors) # threshold:[0,1], nms_value: [0, 1]
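        # arguments: 0.5 = probability threshold, 0.3 = NMS threshold, 5 = number of anchor pairs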
        while(True):
            img = sensor.snapshot()
            t = time.ticks_ms()
            objects = kpu.run_yolo2(task, img)
            t = time.ticks_ms() - t
            if objects:
                for obj in objects:
                    pos = obj.rect()
                    img.draw_rectangle(pos)
                    img.draw_string(pos[0], pos[1], "%s : %.2f" %(labels[obj.classid()], obj.value()), scale=2, color=(255, 0, 0))
            img.draw_string(0, 200, "t:%dms" %(t), scale=2, color=(255, 0, 0))
            lcd.display(img)
    except Exception as e:
        raise e
    finally:
        if task is not None:
            kpu.deinit(task)


if __name__ == "__main__":
    try:
        # main(anchors = anchors, labels=labels, model_addr=0x300000, lcd_rotation=0)
        # change xxx.kmodel to your own model's file name
        main(anchors = anchors, labels=labels, model_addr="/sd/xxx.kmodel")
    except Exception as e:
        sys.print_exception(e)
        lcd_show_except(e)
    finally:
        gc.collect()

The effect is as follows:

(Screenshot of the live recognition result.)
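If detections look noisy when you run it, one knob worth experimenting with (my suggestion, not covered in the original post) is the probability threshold passed to kpu.init_yolo2 in main.py:

kpu.init_yolo2(task, 0.7, 0.3, 5, anchors)  # e.g. raise the threshold from 0.5 to 0.7 to suppress low-confidence boxes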

Source: blog.csdn.net/darlingqx/article/details/127613553