The final version: YOLOv8 code walkthrough and training on your own data

1. Summary

  1. In the beginning, to detect irregular sacks, we used object detection: YOLOv3, Faster R-CNN, SSD. The axis-aligned rectangular box was good enough there. Later the objects changed to regular cartons. We kept using object detection, but found there was no rotation angle even though the boxes are not placed squarely; the only option was OpenCV's minAreaRect to fit a minimum-area rectangle and recover the angle. The OpenCV approach, however, first has to separate the object from the background by color, and it is a two-step pipeline, so it is not elegant.
  2. Later I tried training an extra angle output, i.e. treating the angle as a 180-class classification problem. This was extremely unstable (perhaps my code was simply not well written), but it led me to discover that people were already doing rotated bounding-box detection.
  3. Rotated bounding-box detection: to be honest, I ran the framework released by Yang Xue and the results were very poor; the angle regression seemed useless. In any case it was very unsatisfactory.
  4. With no better option, we fell back on instance segmentation with Mask R-CNN. This really is easy to use: label about 50 samples, train for 100 epochs, and the results are great. We worked this way for roughly two years.
  5. But after testing rotated_rtmdet I felt truly enlightened. Using instance segmentation to detect boxes had always felt awkward to me; now there is finally a rotated object detector that can be deployed, and I think the company's 3D depalletizing is finally on the right track.
  6. Later YOLO kept releasing: v4, v5, v6, v7, plus PP-YOLO, YOLOX and YOLOE, up to the current v8, and I kept trying the newer algorithms. But to be honest, object detection is reaching a plateau, and this is my personal final version as well: for company reasons I will no longer be working on vision projects, so there will be no further research posts on object detection, instance segmentation or classification.

2. YOLOv8 code walkthrough

Reference (image omitted)

To be honest, I don't quite understand why they changed things; the code is not hard to read. It is basically the same as v5, and the decoupled detection head is the same idea as in YOLOX.
Generally speaking, the accuracy and speed of this model are very friendly to engineering and deployment. I trained on my own dataset for 50 epochs without modifying anything and the mAP reached 99.5, which is quite amazing.
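For anyone who wants to poke at the code the same way, here is a minimal sketch (assuming the ultralytics package is installed) that loads the pretrained nano model and prints a layer summary, which is enough to see the v5-style layout and the decoupled Detect head:

```python
from ultralytics import YOLO

# Load the pretrained nano model; ultralytics downloads yolov8n.pt
# automatically if it is not already in the working directory.
model = YOLO("yolov8n.pt")

# Print a per-layer summary (layer types, parameter counts) to compare
# the backbone/neck against v5 and inspect the decoupled Detect head.
model.info(verbose=True)

# The underlying torch.nn.Module is also directly accessible.
print(type(model.model))
```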

3. Training on your own dataset

  1. The dataset is still in YOLO format. If your annotations are in labelme JSON format, convert the JSON labels to txt files (a minimal conversion sketch is included after step 2 below), or simply annotate with labelImg, which generates label files in YOLO format directly.
  2. Modify the data configuration file: make a copy of coco.yaml and change the paths in it to your own dataset paths.

The file is placed at G:\sick\SH_visionary-s\ultralytics\ultralytics\yolo\v8\detect\coco.yaml (screenshot omitted).
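As a rough illustration of steps 1 and 2, here is a minimal Python sketch that converts labelme rectangle annotations to YOLO txt labels and writes a small dataset yaml in the same shape as coco.yaml. The directory layout, class list and file names (dataset/, box, my_boxes.yaml) are assumptions for illustration, not the ones used in this post.

```python
import json
from pathlib import Path

# Assumed layout: images and labelme JSON files side by side under dataset/train and dataset/val.
DATASET_ROOT = Path("dataset")   # hypothetical dataset root
CLASS_NAMES = ["box"]            # hypothetical single-class task

def labelme_to_yolo(json_path: Path) -> None:
    """Convert one labelme JSON file (rectangle shapes) to a YOLO txt label file."""
    data = json.loads(json_path.read_text(encoding="utf-8"))
    img_w, img_h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data["shapes"]:
        if shape.get("shape_type") != "rectangle":
            continue  # only axis-aligned rectangles are handled in this sketch
        (x1, y1), (x2, y2) = shape["points"]
        # YOLO format: class x_center y_center width height, all normalized to [0, 1].
        xc = (x1 + x2) / 2 / img_w
        yc = (y1 + y2) / 2 / img_h
        w = abs(x2 - x1) / img_w
        h = abs(y2 - y1) / img_h
        cls = CLASS_NAMES.index(shape["label"])
        lines.append(f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    # Write the label txt next to the JSON / image file.
    json_path.with_suffix(".txt").write_text("\n".join(lines), encoding="utf-8")

for split in ("train", "val"):
    for jf in (DATASET_ROOT / split).glob("*.json"):
        labelme_to_yolo(jf)

# Write a dataset config in the same shape as coco.yaml (path / train / val / names).
yaml_text = (
    f"path: {DATASET_ROOT.resolve()}\n"
    "train: train\n"
    "val: val\n"
    "names:\n"
    + "".join(f"  {i}: {name}\n" for i, name in enumerate(CLASS_NAMES))
)
Path("my_boxes.yaml").write_text(yaml_text, encoding="utf-8")
```

If you go this route, the generated my_boxes.yaml can be used as the data config in the later steps instead of the edited coco.yaml copy.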

  3. Download yolov8n.pt and put it in the same directory as train.py.

  4. Modify the training parameters; the file is at G:\sick\SH_visionary-s\ultralytics\ultralytics\yolo\cfg\default.yaml. I only changed a few of the fields (screenshot omitted).
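For reference, the handful of default.yaml fields that usually need touching for a run like this look roughly as follows; these are placeholder values, not necessarily the ones changed in the original screenshot:

```yaml
task: detect        # detection task
mode: train         # run training when train.py is executed
model: yolov8n.pt   # pretrained weights placed next to train.py (step 3)
data: coco.yaml     # the copied and edited data config from step 2
epochs: 50          # the post trained for 50 epochs
batch: 16           # reduce if GPU memory is tight
imgsz: 640          # input image size
device: 0           # GPU index, or 'cpu'
workers: 8          # dataloader workers
```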

  5. Just run the train.py file.
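Alternatively, if you prefer not to edit default.yaml, an equivalent run can be launched from a small script of your own through the ultralytics Python API. This is a sketch with placeholder names (my_boxes.yaml), not the train.py shipped in the repo; keyword arguments passed to train() override the corresponding default.yaml entries.

```python
from ultralytics import YOLO

# Start from the pretrained weights downloaded in step 3.
model = YOLO("yolov8n.pt")

# Train on the custom dataset; these kwargs override default.yaml.
model.train(data="my_boxes.yaml", epochs=50, imgsz=640, batch=16)

# Evaluate on the validation split defined in the data yaml.
metrics = model.val()
print(metrics.box.map)  # mAP50-95 on the val set
```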

Origin: blog.csdn.net/qq_33228039/article/details/129024245