Calling model training in mmdetection
Convert the dataset format from labelme to COCO
First, import the data, named in the required form.
Because mmdetection has now been updated to 3.x, the project structure has changed.
Modify coco.py
Modify class_names.py
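For reference, in mmdet 3.x the class list lives in the METAINFO dict of the CocoDataset class in mmdet/datasets/coco.py, and class_names.py keeps a matching list of names. A minimal sketch of the replacement, with hypothetical class names ('defect' and 'scratch' are placeholders, not from my dataset):

```python
# Sketch of the edit in mmdet/datasets/coco.py (mmdet 3.x): replace the
# default 80-class COCO METAINFO with your own classes.
# 'defect' and 'scratch' are hypothetical placeholder names.
METAINFO = {
    'classes': ('defect', 'scratch'),           # must match your annotation labels
    'palette': [(220, 20, 60), (119, 11, 32)],  # one RGB color per class
}
```

The class list in class_names.py should then be changed to the same names, in the same order.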
After the model has run, check the generated files.
Then run with this config.
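The labelme-to-COCO conversion itself can be sketched in a few lines. This is a minimal stdlib-only sketch, not the script I actually used: it assumes each labelme JSON has imagePath, imageWidth, imageHeight, and rectangle shapes whose points are [[x1, y1], [x2, y2]]; polygon shapes are not handled.

```python
def labelme_to_coco(labelme_items, class_names):
    """Minimal labelme -> COCO converter sketch (rectangle shapes only)."""
    cat_id = {name: i + 1 for i, name in enumerate(class_names)}
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": i, "name": n} for n, i in cat_id.items()],
    }
    ann_id = 1
    for img_id, item in enumerate(labelme_items, start=1):
        coco["images"].append({
            "id": img_id,
            "file_name": item["imagePath"],
            "width": item["imageWidth"],
            "height": item["imageHeight"],
        })
        for shape in item["shapes"]:
            (x1, y1), (x2, y2) = shape["points"]
            x, y = min(x1, x2), min(y1, y2)
            w, h = abs(x2 - x1), abs(y2 - y1)
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_id[shape["label"]],
                "bbox": [x, y, w, h],  # COCO uses [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
            ann_id += 1
    return coco
```

The returned dict can be dumped with json.dump to produce the annotation file COCO-style datasets expect.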
Package versions
Mainly these three packages.
mmdet, as far as I remember, was installed with mim.
The packages were installed following the official website tutorial.
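To check which versions ended up installed, a small stdlib-only sketch:

```python
import importlib.metadata as md

def get_versions(pkgs=("mmengine", "mmcv", "mmdet")):
    """Report installed versions of the core OpenMMLab packages (None if absent)."""
    versions = {}
    for pkg in pkgs:
        try:
            versions[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            versions[pkg] = None
    return versions

if __name__ == "__main__":
    for pkg, ver in get_versions().items():
        print(pkg, ver or "not installed")
```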
Set up PYTHONPATH
$env:PYTHONPATH += ";F:\Include\include\CV\openlab\mmdetection-3.x"
Python import statements are used to bring code from other modules into the current module. The Python interpreter looks for modules to import in a series of predefined directories called the Python path.
When you execute a Python script, the interpreter automatically adds the script's own directory to the Python path. Python also searches the directories listed in the PYTHONPATH environment variable, as well as some default locations (such as the Lib and site-packages directories under the Python installation directory).
In this case, the error message says that the Python interpreter cannot find a module named projects.example_project.dummy on the Python path. That is probably because the projects directory is not on the Python path.
To fix this, add the directory containing projects to the PYTHONPATH environment variable. For example, if the projects directory is in F:\Include\include\CV\openlab\mmdetection-3.x, you can set the environment variable like this (in Windows PowerShell):
$env:PYTHONPATH += ";F:\Include\include\CV\openlab\mmdetection-3.x"
Then run the script again, and the Python interpreter should be able to find the projects.example_project.dummy module.
Note that an environment variable set this way is only valid in the current PowerShell session. If you start a new session, you need to set it again. To make the setting persistent, add it to your system environment variables.
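To confirm what the interpreter actually picked up from PYTHONPATH, a small stdlib-only sketch:

```python
import os
import sys

def pythonpath_entries():
    """Split the PYTHONPATH environment variable into its directories."""
    raw = os.environ.get("PYTHONPATH", "")
    return [p for p in raw.split(os.pathsep) if p]

if __name__ == "__main__":
    # PYTHONPATH entries are merged into sys.path at interpreter startup
    print("PYTHONPATH entries:", pythonpath_entries())
    print("first sys.path entries:", sys.path[:3])
```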
diffusiondet model
model training
How long this takes depends on the number of iterations.
I ran this model for 90,000 iterations.
I don't know how well it works.
The preset in the config was far too high at 450,000 iterations at first, and it blew up in less than a minute.
Finished running
It actually terminated with an error saying an image does not exist, at exactly 15,000 iterations.
Fortunately, a checkpoint was saved, so the model can still be tested.
Draw a loss graph
python .\tools\analysis_tools\analyze_logs.py plot_curve .\Evinci\20230612_015115\vis_data\20230612_015115.json --keys loss loss_cls loss_bbox
You can add an output path with --out out.pdf, which writes the plot to the root directory of the project.
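The curves can also be extracted by hand. As far as I can tell, the vis_data JSON log is newline-delimited JSON, one record per logged iteration, with keys like iter, loss, loss_cls, and loss_bbox; the sketch below parses it under that assumption.

```python
import json

def load_losses(path, keys=("loss", "loss_cls", "loss_bbox")):
    """Parse a newline-delimited JSON training log into loss curves."""
    iters, series = [], {k: [] for k in keys}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            if "loss" not in rec:  # skip records without training losses (e.g. eval)
                continue
            iters.append(rec.get("iter"))
            for k in keys:
                series[k].append(rec.get(k))
    return iters, series
```

The returned lists can be fed straight into any plotting library if you prefer not to use analyze_logs.py.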
Test the model
python tools/test.py Evinci_config\diffusiondet_r50_fpn_500-proposals_1-step_crop-ms-480-800-450k_coco.py Evinci_diffusiondet\iter_15000.pth --show
--show can be omitted. This step runs inference on the validation set and outputs the results. If --show is added, the results are displayed one by one, which makes it very slow.
If you don't add it, the results come out in less than a minute.
You can see the training and test set log information recorded for my model.
yolo model
Running the model for the first time will download some files
The first run must be stopped after startup; then take the config file generated in the specified work directory and run from that.
After the first run, copy the config file generated by the model into your own config folder, modify the necessary properties, and run again; after that there should basically be no problems or bugs.
It can be seen that after the model is downloaded, the training starts to run normally.
One more key point: the loss above is very large because no pre-trained model was loaded, so the model is trained from scratch. My own dataset has only six or seven hundred images in total, which is still very small. If you want reasonably good results on a small dataset early on, loading a pre-trained model is critical.
To do this, fill in the path of the pre-trained checkpoint in load_from; I downloaded all the checkpoint files in order to use the pre-trained models.
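The change is a single line in the config. A sketch in mmdet 3.x config style; the base config name and checkpoint path below are placeholders, not the exact files I used:

```python
# In your own config file (mmdet 3.x style).
# Base config and checkpoint path are placeholders -- substitute the
# config you trained from and the .pth file you downloaded.
_base_ = './yolov3_d53_8xb8-ms-608-273e_coco.py'
load_from = 'checkpoints/yolov3_pretrained_coco.pth'
```

When load_from is set, the runner initializes the model from that checkpoint instead of training from scratch.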
But then it reported a strange bug.
yolof model
Sure enough, you need to start from your own config file.
Then it worked.
Although the first run must start from the model's config file under configs, once the config file has been generated in the work-dir, subsequent runs need to use that file.
But then it reported a strange bug.