Training models with mmdetection

Convert the dataset from labelme format to COCO format

First, import the data, named in a consistent format.
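The conversion step above can be sketched in plain Python. This is a minimal illustration only, assuming rectangle annotations with two corner points; real converters (e.g. the labelme2coco tool) also handle polygons, and the `defect` label and file names here are made up for the example:

```python
import json

# Minimal labelme -> COCO sketch: rectangle shapes only.
def labelme_to_coco(labelme_records, categories):
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": i, "name": n} for n, i in cat_ids.items()],
    }
    ann_id = 1
    for img_id, rec in enumerate(labelme_records, start=1):
        coco["images"].append({
            "id": img_id,
            "file_name": rec["imagePath"],
            "width": rec["imageWidth"],
            "height": rec["imageHeight"],
        })
        for shape in rec["shapes"]:
            (x1, y1), (x2, y2) = shape["points"]  # rectangle: two corners
            x, y = min(x1, x2), min(y1, y2)
            w, h = abs(x2 - x1), abs(y2 - y1)
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_ids[shape["label"]],
                "bbox": [x, y, w, h],       # COCO uses [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
            ann_id += 1
    return coco

# Tiny made-up example: one image, one box
sample = [{
    "imagePath": "0001.jpg", "imageWidth": 640, "imageHeight": 480,
    "shapes": [{"label": "defect", "points": [[10, 20], [110, 220]]}],
}]
result = labelme_to_coco(sample, ["defect"])
print(result["annotations"][0]["bbox"])  # [10, 20, 100, 200]
```

The resulting dict can be dumped with `json.dump` into the annotation file that the COCO dataset loader expects.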

Because mmdetection has now been updated to 3.x, the project structure has changed.

Modify coco.py.

Modify class_names.py.
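For reference, in mmdetection 3.x those two edits usually mean changing CocoDataset's METAINFO in mmdet/datasets/coco.py and the coco_classes() function in mmdet/evaluation/functional/class_names.py. A minimal sketch of what the edited values might look like, with 'defect' as a hypothetical class name standing in for your own:

```python
# Hypothetical single-class setup ('defect' is a placeholder).
# In mmdet/datasets/coco.py, CocoDataset.METAINFO would become:
METAINFO = {
    'classes': ('defect',),       # replaces the 80 COCO class names
    'palette': [(220, 20, 60)],   # one display colour per class
}

# In mmdet/evaluation/functional/class_names.py, coco_classes()
# returns the matching list so evaluation reports use the same names:
def coco_classes():
    return ['defect']

print(len(METAINFO['classes']), coco_classes())  # 1 ['defect']
```

The two lists must stay in sync, otherwise evaluation will label results with the wrong class names.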

After the model has run once, take a look at the generated config file.

Then run with this config.

Package versions

The main ones are these three packages (mmengine, mmcv, and mmdet).

mmdet is apparently installed with mim.

The packages are installed following the official website tutorial.
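For reference, the install flow in the official mmdetection 3.x tutorial goes through mim; roughly (check the docs for the exact version constraints current at the time):

```shell
# Install the OpenMMLab package manager, then the three packages with it
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
mim install mmdet
```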

Set up PYTHONPATH

$env:PYTHONPATH += ";F:\Include\include\CV\openlab\mmdetection-3.x"

Python import statements are used to import code from other modules into the current module. The Python interpreter looks for modules to import in a series of predefined directories called the Python path.

When you execute a Python script, the interpreter automatically adds the script's own directory to the Python path. In addition, Python also looks for modules in the directories listed in the PYTHONPATH environment variable and in some default locations (such as the Lib and site-packages directories under the Python installation directory).

In this case, the error message says that the interpreter cannot find a module named projects.example_project.dummy on the Python path. That is probably because the projects directory is not on the Python path.

To fix this, add the path of the directory containing projects to the PYTHONPATH environment variable. For example, if the projects directory is in F:\Include\include\CV\openlab\mmdetection-3.x, set the environment variable like this (in Windows PowerShell):

$env:PYTHONPATH += ";F:\Include\include\CV\openlab\mmdetection-3.x"

Then run the script again, and the interpreter should be able to find the projects.example_project.dummy module.

Note that an environment variable set this way is only valid in the current shell session. If you start a new session, you need to set it again. To make the setting persist, add it to your system environment variable settings.
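The mechanism can be demonstrated in miniature with the standard library: build the same projects/example_project/dummy.py layout in a temporary directory, then import it from a child interpreter whose PYTHONPATH points at that directory. The MESSAGE variable is invented for the demo:

```python
import os
import subprocess
import sys
import tempfile

# Recreate the package layout from the error message in a temp dir,
# then import it from a child interpreter via PYTHONPATH.
with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "projects", "example_project")
    os.makedirs(pkg)
    # __init__.py markers so Python treats the directories as packages
    open(os.path.join(root, "projects", "__init__.py"), "w").close()
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "dummy.py"), "w") as f:
        f.write("MESSAGE = 'found it'\n")

    # Same idea as $env:PYTHONPATH += ";F:\...": extend the search path
    env = dict(os.environ, PYTHONPATH=root)
    out = subprocess.run(
        [sys.executable, "-c",
         "from projects.example_project.dummy import MESSAGE; print(MESSAGE)"],
        env=env, capture_output=True, text=True,
    )
    print(out.stdout.strip())  # found it
```

On Windows, `setx PYTHONPATH "..."` (or the system environment settings dialog) is one way to make the value survive new sessions.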

DiffusionDet model

Model training

How long this takes depends on the number of iterations.

I ran 90,000 iterations with this model.

I don't know how well it performs.

Because the model's preset was far too high at first, 450,000 iterations, it crashed in less than a minute.

Finished running.

It actually terminated with an error saying an image does not exist, at exactly 15,000 iterations.

Fortunately, a checkpoint was saved, so the model can still be tested.

Draw a loss graph

python .\tools\analysis_tools\analyze_logs.py plot_curve .\Evinci\20230612_015115\vis_data\20230612_015115.json --keys loss loss_cls loss_bbox

You can add an output path with --out out.pdf, which writes the plot to the project's root directory.
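The log that analyze_logs reads is a series of one-JSON-dict-per-line records under vis_data. A small sketch of pulling one loss curve out by hand, with made-up log values standing in for a real run:

```python
import json

# Hypothetical excerpt of a vis_data log: one JSON dict per line.
log_lines = [
    '{"iter": 50, "loss": 2.31, "loss_cls": 1.10, "loss_bbox": 0.80}',
    '{"iter": 100, "loss": 1.95, "loss_cls": 0.92, "loss_bbox": 0.71}',
]

def extract_curve(lines, key):
    """Collect (iter, value) pairs for one loss key, skipping lines without it."""
    pairs = []
    for line in lines:
        rec = json.loads(line)
        if key in rec and "iter" in rec:
            pairs.append((rec["iter"], rec[key]))
    return pairs

print(extract_curve(log_lines, "loss"))  # [(50, 2.31), (100, 1.95)]
```

The resulting pairs are what plot_curve draws; the same extraction works for loss_cls and loss_bbox.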

Testing the model

python tools/test.py Evinci_config\diffusiondet_r50_fpn_500-proposals_1-step_crop-ms-480-800-450k_coco.py Evinci_diffusiondet\iter_15000.pth --show

--show can be omitted. This step runs inference on the validation set and outputs the results; if --show is added, the results are displayed one by one, which makes it very slow.

Without --show, the results come out in less than a minute.

Example output: 211029-019C_98_1_1129_312_0.717.png

In the logs you can see the information for both the training and test splits of my annotated dataset.

YOLO model

Running the model for the first time downloads some files.

After the first startup it has to be stopped; get the generated config file from the specified work directory, and then run from that.

After the first run, copy the config file generated by the model into your own config folder, modify the necessary properties, and run again; done this way, there are basically no problems or bugs.

You can see that once the model files are downloaded, training starts and runs normally.

There is another key point. You can see that the loss above is very large. Because no pre-trained model was loaded, the model was trained from scratch, and my dataset only has six or seven hundred images in total, which is quite small. So if you want reasonably good results early on a small dataset, loading a pre-trained model is critical.

That means filling in the path of the pre-trained checkpoint in load_from; I have downloaded all the checkpoint files so I can use the pre-trained models.
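In a 3.x config this is a single top-level field. A sketch of the relevant fragment, where the checkpoint path is hypothetical (point it at whatever .pth file you actually downloaded):

```python
# Fragment of a work-dir config: start from pre-trained weights.
# The file name below is a placeholder, not a real downloaded checkpoint.
load_from = 'checkpoints/yolov3_pretrained_coco.pth'
resume = False  # set True instead to continue an interrupted training run

print(load_from)
```

load_from initialises weights but starts iteration counting from zero, while resume also restores the optimizer state and iteration counter.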

But then it reported a strange bug.

YOLOF model

Sure enough, you need to start from your own config file.

So it worked.

Although it has to be run from the model's config file under configs at the beginning, once the config file has been generated in work-dir, subsequent runs should use that file.

But then it reported a strange bug.

Origin blog.csdn.net/ahahayaa/article/details/131363269