K210 usage record

This article serves as a record of my K210 usage, so that later students can quickly learn the basics and get started with the K210.

1. Sources of basic information

Official website (Canaan Kendryte)

https://canaan-creative.com/product/kendryteai

There are generally two commonly used development methods. From the standpoint of practicality, the first is preferred here: its documentation is complete, and getting started is much faster.

1. Use MicroPython (MaixPy) for development

Official documentation
https://wiki.sipeed.com/soft/maixpy/zh/index.html
Model Training Platform
https://www.maixhub.com/ModelTraining
Official Taobao Store
https://sipeed.taobao.com/shop/view_shop.htm?spm=a230r.1.14.4.681570a1T5WoCg&user_number_id=2200606237318

To understand the K210 (here referring specifically to the Sipeed family), the links above already give fairly detailed information. In general, the K210 is used like an OpenMV camera, with the K210's KPU computing part of the neural network. It can be regarded as a lightweight object-detection platform capable of quite a lot, although its limited computing power also makes it relatively constrained.

2. For bare-metal development, the following tutorials are recommended

https://blog.csdn.net/jwdeng1995/category_10302376.html

Canaan also provides an official Kendryte IDE, similar to VS Code, which can be used for bare-metal development and C programming, much like working with an STM32.
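As a flavour of what bare-metal C on the K210 looks like, here is a minimal LED-blink sketch against the Kendryte standalone SDK (the pin and GPIOHS numbers are illustrative and board-dependent; check your board's schematic):

```c
/* Minimal K210 bare-metal blink sketch (Kendryte standalone SDK).
   LED_PIN and LED_GPIOHS are illustrative; adjust to your board. */
#include "fpioa.h"
#include "gpiohs.h"
#include "sleep.h"

#define LED_PIN    12   /* physical pin number, board-dependent */
#define LED_GPIOHS 0    /* high-speed GPIO channel to map it onto */

int main(void)
{
    /* route the physical pin to the GPIOHS channel */
    fpioa_set_function(LED_PIN, FUNC_GPIOHS0 + LED_GPIOHS);
    gpiohs_set_drive_mode(LED_GPIOHS, GPIO_DM_OUTPUT);

    while (1) {
        gpiohs_set_pin(LED_GPIOHS, GPIO_PV_HIGH);
        msleep(500);
        gpiohs_set_pin(LED_GPIOHS, GPIO_PV_LOW);
        msleep(500);
    }
    return 0;
}
```

This builds inside an SDK project (it needs the SDK headers and toolchain), and the workflow is indeed very close to register-level STM32 development.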

2. Basic visual functions

1. Firmware customization

Developing with Sipeed's method requires preparing the firmware in advance; the firmware download link is as follows:

https://dl.sipeed.com/MAIX/MaixPy/release/master/

Open the link and you can see several versions of the firmware. It is recommended to use the build that supports both the OpenMV-style functions and loading kmodel files; of course, the largest (full) build should be the most complete, and you can try that as well.

For firmware flashing, you can watch the official video; Sipeed provides a serial-port tool for erasing and writing the firmware:

https://www.bilibili.com/video/BV144411J72P?spm_id_from=333.337.search-card.all.click

The video is hosted on Bilibili, and its author runs Sipeed's official user group.

2. Program porting

Once the K210 firmware is flashed, you can start developing.

On the reference sites given earlier, you can find the traditional image-processing solutions provided by Sipeed, which are basically the same as OpenMV's.

The K210 firmware ports OpenMV's function library, so the K210 can basically do what OpenMV can. (However, OpenMV keeps being updated, along with its libraries, so some of OpenMV's newer features are not available on the K210.) The basic vision functions are all there and can be used directly. I previously wrote an article on the basic use of OpenMV; following that article, almost everything can be reproduced on the K210.

OPENMV configuration record (1)

To be honest, OpenMV's hardware is really expensive, but its tutorials and example routines are updated very diligently. Sipeed has not ported these newer parts, so you can keep them for reference or as a source of ideas. Another very important point is that OpenMV's documentation is extremely detailed; whenever you have a question, you can look it up there while you work!

https://docs.singtown.com/micropython/zh/latest/openmvcam/index.html

For example, if you want to look up the most common function, blob finding, you can search for it there and click through to a much more detailed introduction than the Sipeed docs provide, which also makes it convenient for us to modify the code. I will not introduce the basic OpenMV part any further; refer directly to the OpenMV record article I wrote earlier. This article mainly covers the K210's object-detection side.
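As a taste of the ported API, here is a minimal MaixPy colour-blob sketch in the spirit of the OpenMV examples (it runs on the board, not on a PC; the LAB threshold values are illustrative and need tuning for your target):

```python
# MaixPy (MicroPython on K210) - runs on the board, not on a PC
import sensor, lcd

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

# LAB colour threshold (L_min, L_max, A_min, A_max, B_min, B_max) - tune it
red_threshold = (30, 100, 15, 127, 15, 127)

while True:
    img = sensor.snapshot()
    # find_blobs works just like in OpenMV; pixels_threshold filters noise
    for b in img.find_blobs([red_threshold], pixels_threshold=200):
        img.draw_rectangle(b.rect())
    lcd.display(img)
```

Paste it into the IDE or save it as boot.py on the board, and you get live blob tracking on the LCD.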

3. Setting up the K210 training environment

1. Installation and configuration of CUDA and cuDNN

Both must be configured before training can use the computer's GPU; otherwise the training speed is simply not on the same level, and the gap is large. If your computer has an NVIDIA GPU, it is well worth configuring.

TensorFlow publishes a version-compatibility table for these two packages. Open it to check the correspondence: the versions of CUDA and cuDNN must be installed to match your TensorFlow version. The table also shows the matching configurations for Linux, including the corresponding GCC version.

The cuDNN download address is below:

https://developer.nvidia.com/rdp/cudnn-archive

The CUDA download address is below:

https://developer.nvidia.com/cuda-10.1-download-archive-base?target_os=Windows&target_arch=x86_64&target_version=10
For version matching, download CUDA 10.0.1 together with cuDNN 7.6.4.

2. Start the installation

Install CUDA first. The installer is quite basic: double-click to start, click Next through the steps, and choose the Custom installation so you can pick the components you need, then let it install.

Next comes the environment-variable part: create the two CUDA variables, pointing at the CUDA installation's bin and libnvvp directories. After that, open a command line and run nvcc -V; if the version information is printed, the installation succeeded.

Then unzip the downloaded cuDNN package and open the extracted folder: copy the .dll files from its bin folder into the bin directory of the CUDA installation, copy the contents of its include folder into CUDA's include directory, and copy the contents of its lib folder into CUDA's lib directory. With that, CUDA and cuDNN are installed.
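Once the TensorFlow packages from the next section are installed, you can verify from Python that TensorFlow actually sees the GPU (a quick check with TensorFlow 1.15 as pinned below; it prints False / an empty string if CUDA or cuDNN is misconfigured):

```python
import tensorflow as tf

# True only if CUDA/cuDNN are installed correctly and a GPU is visible
print(tf.test.is_gpu_available())
print(tf.test.gpu_device_name())  # e.g. "/device:GPU:0" when found
```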

3. Anaconda environment configuration

I won't say much about conda environment configuration here; there are many online tutorials in many variants. You can refer to my previous blog post on configuring conda with PyCharm.

Open a command line and configure as follows. Since the official recommendation is Python 3.7, we create a 3.7 environment:

conda create -n Mx_yolov3 python=3.7

Then add the newly created environment's Python to the environment variables, and install the following packages with pip inside the environment. You can put them into a requirements file or install them one by one:

imgaug==0.2.6
opencv-python==4.0.0.21
Pillow==6.2.0
requests==2.24.0
tqdm==4.48.2
sklearn==0.0
pytest-cov==2.10.0
codecov==2.1.8
matplotlib==3.0.3
Tensorflow==1.15.0
Tensorflow_gpu==1.15.0
pascal_voc_writer==0.1.4
PyQt5==5.15.0
numpy==1.16.2
keras==2.3.1
scikit-learn==0.22.2
seaborn==0.11.0
alive-progress==1.6.1
h5py==2.10.0
pyecharts==1.9.0
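If you prefer the file route, the pinned list above can be saved as requirements.txt and installed in one go inside the activated environment (a sketch; the file name is your choice):

```shell
# activate the environment created earlier, then install the pinned list
conda activate Mx_yolov3
pip install -r requirements.txt
```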

With this, the required environment is configured. Next, download the source code and run it.

4. Training the neural network model

1. Training with the official training scripts

The official repository address is below; just clone the project. The official recommendation is to use Linux. I have Ubuntu installed as a second system on my computer; there are plenty of dual-boot tutorials on Bilibili to refer to.

https://github.com/sipeed/maix_train.git

First clone the project and open it. Create a new conda environment here, and install the required dependencies with the following command:

pip install -r requirements.txt

After that, you need to download the ncc model-conversion tool; this is explained in the official README. Click the link, download the first package, put it in the specified folder, and unzip it there. I deleted the archive after unzipping, but that is up to you. (The unzipped directory is tools/ncc/ncc_v0.1; if it does not exist, create it yourself.)
After that, you can start training. First generate the training configuration:

python train.py init

This generates a folder containing the training parameters. You do not need to modify them; they work as they are. If you do want to change something, it will most likely be the number of training epochs or the minimum accuracy. The file is quite self-explanatory; check it yourself and revise according to your needs.
Now you can start training. The official README has already written out the steps; just follow it. The cloned repository already contains a sample dataset; if you want to use your own data, inspect the sample zip and organize your data in the same format. For the detection model, the command is as follows (the dataset here can be replaced with your own):
python3 train.py -t detector -z datasets/test_detector_xml_format.zip train
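The XML-format detection dataset pairs each image with a Pascal-VOC-style annotation file. A minimal sketch of generating one such annotation with the standard library (the file name and box values are illustrative):

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, boxes):
    """Build a Pascal-VOC-style annotation; boxes = [(label, xmin, ymin, xmax, ymax)]."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bnd = ET.SubElement(obj, "bndbox")
        ET.SubElement(bnd, "xmin").text = str(xmin)
        ET.SubElement(bnd, "ymin").text = str(ymin)
        ET.SubElement(bnd, "xmax").text = str(xmax)
        ET.SubElement(bnd, "ymax").text = str(ymax)
    return ET.tostring(root, encoding="unicode")

# one annotation for a hypothetical 224x224 image with a single labelled box
xml_text = make_voc_annotation("0.jpg", 224, 224, [("target", 30, 40, 120, 160)])
print(xml_text)
```

Writing one such .xml next to each image, then zipping images and annotations in the sample's layout, reproduces the expected format.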

Start training and wait for it to finish. The results appear under the out directory, where you can view the outputs and the loss curve. After that, the trained model can be copied to an SD card and run on the board; the official code provides a boot.py file for testing.
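The official boot.py follows the usual MaixPy KPU pattern; a sketch of that pattern is below (the model path, anchors, and thresholds are illustrative, so substitute the values from your own training output):

```python
# MaixPy KPU inference sketch - runs on the K210 board
import sensor, lcd
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

task = kpu.load("/sd/m.kmodel")  # trained model copied to the SD card
# anchors come from the training output; these values are placeholders
anchors = (1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52)
kpu.init_yolo2(task, 0.5, 0.3, 5, anchors)  # prob threshold, NMS, anchor count

while True:
    img = sensor.snapshot()
    objects = kpu.run_yolo2(task, img)
    if objects:
        for obj in objects:
            img.draw_rectangle(obj.rect())
    lcd.display(img)
```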

2. Use MaixHub for online training

First read the training documentation, which mainly covers the dataset preparation we need to pay attention to beforehand (take special care not to get the final packaged format wrong):

https://www.maixhub.com/ModelTrainingHelp_zh.html

The model-training site is below:

https://www.maixhub.com/ModelTraining

Pay attention here to obtaining the machine code. From my verification, even a board that is not made by Sipeed can still pass. Follow the site's instructions to flash the firmware and read the machine code from the serial-port output. After training completes, you can download the training files, which are basically the same as those produced by local training; the related files are also provided.

3. Use Mx-yolov3 for training

This is a visual training tool made by a community developer, and it is very convenient. It is distributed through the author's WeChat public account, named import Maker; after following it, reply mx3 to get the download link. Details here:

https://mc.dfrobot.com.cn/thread-307554-1-1.html

First comes the environment configuration. I do not recommend using the bundled installer here, because it often runs into problems and installs directly onto the C drive; it is better to install into a conda virtual environment instead. You can simply follow chapter 3 above on setting up the K210 training environment.
After the tool opens, you can see the commonly used utilities, such as the annotation tool (it is strongly recommended to learn the annotation tool's keyboard shortcuts; your efficiency will soar) and an integrated firmware-download tool. Two training modes are offered: a detection model and a classification model. After importing the dataset, the author also provides anchor calculation; you can run it several times to obtain the best anchors. Then click to start training and watch the training progress.
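The anchor step in such tools is typically a k-means clustering over the labelled boxes' widths and heights, using 1 − IoU as the distance (I have not verified how Mx-yolov3 computes it internally; this is the standard YOLOv2 recipe). A self-contained sketch:

```python
import random

def iou_wh(box, cluster):
    """IoU between two (w, h) pairs, both anchored at the origin."""
    inter = min(box[0], cluster[0]) * min(box[1], cluster[1])
    union = box[0] * box[1] + cluster[0] * cluster[1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    """Cluster (w, h) boxes into k anchors using 1 - IoU as the distance."""
    random.seed(seed)
    anchors = random.sample(boxes, k)
    for _ in range(iters):
        # assign each box to the anchor it overlaps most
        groups = [[] for _ in range(k)]
        for box in boxes:
            best = max(range(k), key=lambda i: iou_wh(box, anchors[i]))
            groups[best].append(box)
        # recompute each anchor as the mean (w, h) of its group
        new = []
        for i, g in enumerate(groups):
            if g:
                new.append((sum(b[0] for b in g) / len(g),
                            sum(b[1] for b in g) / len(g)))
            else:
                new.append(anchors[i])  # keep an empty cluster's old centre
        if new == anchors:
            break
        anchors = new
    return sorted(anchors)

# toy box sizes (w, h) in grid units - replace with your dataset's boxes
boxes = [(1.0, 1.2), (1.1, 1.0), (3.0, 3.5), (3.2, 3.1), (6.0, 7.0), (5.8, 6.5)]
anchors = kmeans_anchors(boxes, k=3)
print(anchors)
```

Running it a few times with different seeds and keeping the best clustering matches the "calculate several times" advice above.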

Afterwards, use the ncc tool to convert the model to the kmodel format; this generates the model files we need in the corresponding output folders. That completes one full training pass. I have to say, this tool really is convenient!


Origin blog.csdn.net/m0_51220742/article/details/124577532