Creating a dataset and training an object detection model with YOLOv5

Dataset collection

Use labelImg for data annotation

Install labelImg with pip, then launch it:

pip install labelimg
labelimg

First, click Open Dir and select the directory of images to be annotated; then click Change Save Dir and choose where the annotation labels will be saved.
Then select the label format; which one to use depends on how the model reads the data during training (for YOLOv5, choose the YOLO format).
It is also recommended to enable auto-save under the View menu.

Hotkeys

A: previous image
D: next image
W: draw a bounding box

Dataset format

Taking YOLOv5 as an example:

datasets
├── train
│   ├── images
│   └── labels
├── test
│   ├── images
│   └── labels
└── valid
    ├── images
    └── labels

It is recommended to split the dataset in the ratio train : test : valid = 7 : 2 : 1.
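As a minimal sketch of such a split (the all/images and all/labels source directories and the .jpg extension are assumptions; adjust them to your data):

import random
import shutil
from pathlib import Path

random.seed(0)  # reproducible split

src_images = Path('all/images')  # all annotated images (assumed location)
src_labels = Path('all/labels')  # matching YOLO .txt labels (assumed location)
dst = Path('datasets')

images = sorted(src_images.glob('*.jpg'))
random.shuffle(images)

n = len(images)
splits = {
    'train': images[:int(0.7 * n)],             # 70%
    'test': images[int(0.7 * n):int(0.9 * n)],  # 20%
    'valid': images[int(0.9 * n):],             # 10%
}

for split, files in splits.items():
    (dst / split / 'images').mkdir(parents=True, exist_ok=True)
    (dst / split / 'labels').mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, dst / split / 'images' / img.name)
        label = src_labels / (img.stem + '.txt')
        if label.exists():
            shutil.copy(label, dst / split / 'labels' / label.name)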

Training the model

Download and configure the YOLOv5 environment

This must be run with the GPU build of PyTorch installed.

git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt  # install the required packages
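A quick way to check that the GPU build of PyTorch is active before training:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # should print True if CUDA is usable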

Place the dataset

Put the dataset in the yolov5 directory, then write the .yaml configuration file:

train: ../train/images
val: ../valid/images
test: ../test/images

nc: <number of classes in your dataset>
names: ['name of class 1', 'name of class 2', 'name of class 3']

Name it data.yaml and place it in the yolov5/datasets directory.
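For example, a hypothetical two-class dataset (the class names here are made up for illustration) would look like:

train: ../train/images
val: ../valid/images
test: ../test/images

nc: 2
names: ['cat', 'dog']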

Train the model

Open train.py and set the following four parameters:

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
    parser.add_argument('--data', type=str, default=ROOT / 'datasets/data.yaml', help='dataset.yaml path')
    parser.add_argument('--batch-size', type=int, default=32, help='total batch size for all GPUs, -1 for autobatch')
    parser.add_argument('--epochs', type=int, default=20, help='total training epochs')

--weights is the initial weights path, i.e. the pre-trained checkpoint to start from.
--data is the dataset path; point it at the data.yaml file.
--batch-size is the number of images per training batch; increasing it speeds up training but also raises GPU memory usage.
--epochs is the number of training epochs.

The model uses yolov5s.pt by default. The specifications of each pre-trained model are as follows:

| Model | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
|---|---|---|---|---|---|---|---|---|
| YOLOv5n | 640 | 28.0 | 45.7 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
| YOLOv5s | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| YOLOv5m | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| YOLOv5l | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| YOLOv5n6 | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| YOLOv5s6 | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| YOLOv5m6 | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| YOLOv5l6 | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| YOLOv5x6 | 1280 | 55.0 | 72.7 | 3136 | 26.2 | 19.4 | 140.7 | 209.8 |
| YOLOv5x6 + TTA | 1536 | 55.8 | 72.7 | - | - | - | - | - |

Then run train.py to train the model.
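Equivalently, the same four parameters can be passed on the command line instead of editing the defaults:

python train.py --weights yolov5s.pt --data datasets/data.yaml --batch-size 32 --epochs 20

Results and weights are written under runs/train/exp by default.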

Using the model

PyTorch Hub

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

# Inference
results = model(img)

# Results
results.print()
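The returned results object also exposes the raw detections and can write annotated copies to disk:

results.save()          # save annotated images to runs/detect/exp
print(results.xyxy[0])  # detections tensor: x1, y1, x2, y2, confidence, class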

Using a non-hub model for single-image detection

from models.common import DetectMultiBackend
from utils.dataloaders import LoadImages
from utils.general import Profile, check_img_size, non_max_suppression, scale_boxes
from utils.torch_utils import select_device
from utils.plots import Annotator, colors
import cv2

import torch


def detect_img(model, img_path, save_path, size=640):

    imgsz = (size, size)
    bs = 1  # batch_size
    conf_thres = 0.25
    iou_thres = 0.45
    max_det = 1000
    classes = None
    agnostic_nms = True

    stride, names, pt = model.stride, model.names, model.pt
    imgsz = check_img_size(imgsz, s=stride)  # check image size


    dataset = LoadImages(img_path, img_size=imgsz, stride=stride, auto=pt)

    model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz))  # warmup
    seen, windows, dt = 0, [], (Profile(), Profile(), Profile())

    # read the data
    for path, im, im0s, vid_cap, s in dataset:
        with dt[0]:
            im = torch.from_numpy(im).to(model.device)
            im = im.half() if model.fp16 else im.float()  # uint8 to fp16/32
            im /= 255  # 0 - 255 to 0.0 - 1.0
            if len(im.shape) == 3:
                im = im[None]  # expand for batch dim

        # Inference
        with dt[1]:
            pred = model(im)
        # NMS
        with dt[2]:
            pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)

        det = pred[0]
        annotator = Annotator(im0s, line_width=3, example=str(names))
        if len(det):
            # rescale the annotated boxes back to the original image size
            det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0s.shape).round()

            for c in det[:, 5].unique():
                n = (det[:, 5] == c).sum()  # detections per class
                s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

            for *xyxy, conf, cls in det.tolist():
                c = int(cls)
                name = names[c]
                conf = f'{float(conf):.2f}'

                # print the detection result
                print(xyxy, "confidence:", conf, name)

                # annotate the image
                label = name + " " + conf
                annotator.box_label(xyxy, label, color=colors(c, True))

        # save the annotated image
        img = annotator.result()
        cv2.imwrite(save_path, img)

    return img

if __name__ == '__main__':
    model_path = ''  # model path
    img_path = ''  # path of the image to detect
    save_path = ''  # save path

    device = select_device('')

    model_detect = DetectMultiBackend(model_path, device=device)
    img = detect_img(model_detect, img_path, save_path=save_path)
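For reference, the placeholders might be filled in like this (the paths are illustrative; YOLOv5 saves trained weights under runs/train/exp/weights/ by default):

model_path = 'runs/train/exp/weights/best.pt'  # best weights from the training run
img_path = 'data/images/zidane.jpg'            # illustrative test image path
save_path = 'result.jpg'                       # where the annotated image is written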

Origin: blog.csdn.net/YierAnla/article/details/128198555