Training YOLOv5 on a Custom Dataset (PyTorch, Ubuntu)

Table of Contents

1. Set up the environment for YOLOv5

2. Prepare the dataset

3. Modify the configuration files/parameters

3.1 Modify ./yolov5/data/coco128.yaml

3.2 Modify yolov5s.yaml

3.3 Modify train.py

4. Testing


1. Set up the environment for YOLOv5

To avoid conflicts with other environments on the server, create a dedicated conda environment for yolov5:

conda create -n yolov5 python=3.8
conda activate yolov5

Then clone the yolov5 repository and install its dependencies:

git clone https://github.com/ultralytics/yolov5  # clone repo
cd yolov5
pip install -r requirements.txt  # install dependencies
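Before training, it is worth confirming that PyTorch installed correctly and can see the GPU. A quick sanity check (not from the original post):

# check_env.py - quick sanity check for the PyTorch install
import torch

print(torch.__version__)            # installed PyTorch version
print(torch.cuda.is_available())    # True if a CUDA GPU is visible
print(torch.cuda.device_count())    # number of visible GPUs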

2. Prepare the dataset

Here we use labelimg to annotate our dataset. For labelimg installation, see: https://blog.csdn.net/u013171226/article/details/115013514

When annotating with labelimg, make sure the generated txt labels use the YOLO format (select the YOLO format option in labelimg before saving).
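Each YOLO-format label file contains one line per object: the class index followed by the box center x, center y, width, and height, all normalized to [0, 1] by the image dimensions. For example (the values here are purely illustrative):

0 0.481250 0.633333 0.237500 0.466667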

After annotation we have the image files plus a matching label txt for each image. Put the images into an images folder split into train2017 and val2017 subfolders, and put the label txt files into a labels folder split the same way into train2017 and val2017.
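If the annotated images and labels are still sitting in flat folders, a small script can perform the split. This is only a sketch: the source folder names flat_images/ and flat_labels/, the .jpg extension, and the 10% validation ratio are assumptions, not from the original post.

# split_dataset.py - split a flat annotated dataset into train2017/val2017
import random
import shutil
from pathlib import Path

src_images = Path('flat_images')   # assumed: all annotated images
src_labels = Path('flat_labels')   # assumed: matching YOLO txt files
dst = Path('coco128')
val_ratio = 0.1                    # assumed: 10% of images go to validation

images = sorted(src_images.glob('*.jpg'))
random.seed(0)
random.shuffle(images)
n_val = int(len(images) * val_ratio)

for i, img in enumerate(images):
    split = 'val2017' if i < n_val else 'train2017'
    (dst / 'images' / split).mkdir(parents=True, exist_ok=True)
    (dst / 'labels' / split).mkdir(parents=True, exist_ok=True)
    shutil.copy(img, dst / 'images' / split / img.name)
    label = src_labels / (img.stem + '.txt')
    if label.exists():             # images with no objects may have no label file
        shutil.copy(label, dst / 'labels' / split / label.name)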

The yolov5 repository on GitHub has instructions for training on custom data: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

Here we also follow the coco128 data layout to prepare our own dataset. First download the coco128 dataset:

cd ./yolov5
wget https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip
unar coco128.zip

Then delete the original images and labels folders inside coco128, and replace them with the images and labels folders we produced above.
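The resulting layout should look like this:

yolov5/
└── coco128/
    ├── images/
    │   ├── train2017/   # training images
    │   └── val2017/     # validation images
    └── labels/
        ├── train2017/   # one txt per training image
        └── val2017/     # one txt per validation image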

3. Modify the configuration files/parameters

3.1 Modify ./yolov5/data/coco128.yaml

# COCO 2017 dataset http://cocodataset.org - first 128 training images
# Train command: python train.py --data coco128.yaml
# Default dataset location is next to /yolov5:
#   /parent_folder
#     /coco128
#     /yolov5


# download command/URL (optional)
#download: https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip

# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../yolov5/coco128/images/train2017/
val: ../yolov5/coco128/images/val2017/

# number of classes
nc: 1

# class names
names: ['logo']

1. Comment out the line download: https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip so the dataset is not re-downloaded.

2. Change the train and val paths to point at your own images.

3. Change the number of classes nc.

4. Change the class names in names.
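A quick way to verify these edits is to load the yaml and count images against labels. A minimal sketch, assuming PyYAML (already pulled in by yolov5's requirements.txt) and relying on yolov5's convention that the labels tree mirrors the images tree:

# check_data_yaml.py - verify paths and counts in the dataset yaml
from pathlib import Path
import yaml

cfg = yaml.safe_load(open('data/coco128.yaml'))
assert len(cfg['names']) == cfg['nc'], 'nc must match the number of names'

for split in ('train', 'val'):
    img_dir = Path(cfg[split])
    images = list(img_dir.glob('*.jpg'))
    # yolov5 derives label paths by replacing 'images' with 'labels'
    lbl_dir = Path(str(img_dir).replace('images', 'labels'))
    labels = list(lbl_dir.glob('*.txt'))
    print(f'{split}: {len(images)} images, {len(labels)} labels in {img_dir}')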

3.2 Modify yolov5s.yaml

yolov5 provides several model sizes to choose from (yolov5s, yolov5m, yolov5l, yolov5x, from smallest to largest).

Here we choose YOLOv5s, so we modify models/yolov5s.yaml accordingly. Only nc needs to be changed:

# parameters
nc: 1  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
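For reference, the four model sizes differ only in these two multiples; the values below are taken from the corresponding model yamls in the yolov5 repository:

# model     depth_multiple  width_multiple
# yolov5s   0.33            0.50
# yolov5m   0.67            0.75
# yolov5l   1.00            1.00
# yolov5x   1.33            1.25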

3.3 Modify train.py

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='./models/yolov5s.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')
    parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--notest', action='store_true', help='only test final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    parser.add_argument('--log-imgs', type=int, default=16, help='number of images for W&B logging, max 100')
    parser.add_argument('--log-artifacts', action='store_true', help='log artifacts, i.e. final trained model')
    parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
    parser.add_argument('--project', default='runs/train', help='save to project/name')
    parser.add_argument('--entity', default=None, help='W&B entity')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
    opt = parser.parse_args()

Since we chose yolov5s, change the default of --cfg to './models/yolov5s.yaml' as shown above, then run python train.py to start training.
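Alternatively, leave train.py untouched and pass the same options on the command line. An equivalent invocation using the flags defined above:

python train.py --cfg ./models/yolov5s.yaml --data data/coco128.yaml --weights yolov5s.pt --epochs 300 --batch-size 16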

4. Testing

Modify detect.py. Here I crop each detected object out of the frame as a sub-image and save it. In the per-detection loop below, everything between the label-format line and the txt write is the added cropping code; the surrounding lines come from the stock detect.py:

                for j, (*xyxy, conf, cls) in enumerate(reversed(det)):
                    if save_txt:  # Write to file
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)  # label format
                        # --- added: crop the detected object out of im0 and save it ---
                        height, width = im0.shape[:2]
                        # xywh holds normalized (center x, center y, w, h); convert back to pixels
                        x = int(xywh[0] * width)
                        y = int(xywh[1] * height)
                        w = int(xywh[2] * width)
                        h = int(xywh[3] * height)
                        x1 = max(int(x - w / 2), 0)   # clamp to the image so slicing never underflows
                        x2 = min(int(x + w / 2), width)
                        y1 = max(int(y - h / 2), 0)
                        y2 = min(int(y + h / 2), height)
                        imgSon = im0[y1:y2, x1:x2]
                        # one file per detection, so crops from the same image do not overwrite each other
                        crop_path = save_path.rsplit('.', 1)[0] + f'_crop{j}.jpg'
                        cv2.imwrite(crop_path, imgSon)
                        # --- end of added code ---
                        with open(txt_path + '.txt', 'a') as f:
                            f.write(('%g ' * len(line)).rstrip() % line + '\n')

                    if save_img or view_img:  # Add bbox to image
                        label = f'{names[int(cls)]} {conf:.2f}'
                        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
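Note that the cropping code sits inside the if save_txt: branch, so --save-txt must be passed for it to run. After training, the best weights are saved under runs/train/exp/weights/ by default; a typical invocation of the modified script (the --source path is a placeholder):

python detect.py --weights runs/train/exp/weights/best.pt --source path/to/test/images --save-txt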

Reposted from blog.csdn.net/u013171226/article/details/115064641