YOLOv7 Crack Detection

Notes from a Bilibili video tutorial.

1. Find the Roboflow notebooks repository on GitHub

The repository provides notebooks for classic models such as ResNet and YOLO, as well as some newer algorithms.

2. Open the notebook in Colab

  • The notebook can be opened directly in Colab; other ways of opening it are also supported (three are shown).
  • Click Authorize.

3. Algorithm steps

  • Preparation before starting

  • Install YOLOv7

  • Install requirements

  • Inference with a pre-trained COCO model

  • Required data format

  • Download the dataset from Roboflow Universe

  • Custom training


  • Prepare your own dataset for evaluation: in this tutorial we use one of the 90,000+ datasets available on Roboflow Universe. If you have your own images (or have already labeled them), you can use Roboflow, a set of tools developers use to quickly and accurately build better computer vision models, to convert your dataset. More than 100K developers use Roboflow for automatic labeling, dataset format conversion (e.g. to YOLOv7), training, deployment, and improving their datasets and models.

  • Follow the Getting Started guide to create or prepare your own dataset in Roboflow, making sure to select the Instance Segmentation option.

4. Getting started

First, make sure a GPU is available by running nvidia-smi:
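The same check can be done from Python; a minimal sketch, assuming nothing beyond the standard library (the helper name is my own):

```python
import shutil

def gpu_driver_present() -> bool:
    # nvidia-smi ships with the NVIDIA driver, so finding it on PATH
    # is a quick proxy for "a GPU runtime is attached"
    return shutil.which("nvidia-smi") is not None

print("NVIDIA driver found:", gpu_driver_present())
```

If this prints False in Colab, switch the runtime to GPU via Runtime > Change runtime type.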

5. Install YOLOv7

  • Get the current path.
import os
HOME = os.getcwd()
print(HOME)


  • Clone the YOLOv7 code
# clone YOLOv7 repository
%cd {HOME}
!git clone https://github.com/WongKinYiu/yolov7

# navigate to the yolov7 directory and check out the u7 branch - this is the hash of the latest commit on the u7 branch as of 12/21/2022
%cd {HOME}/yolov7
!git checkout 44f30af0daccb1a3baecc5d80eae22948516c579


6. Install dependencies

%cd {HOME}/yolov7/seg
!pip install --upgrade pip
!pip install -r requirements.txt


7. Use the pre-trained COCO model for inference

# download COCO starting checkpoint to yolov7/seg directory
%cd {HOME}/yolov7/seg
!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-seg.pt
WEIGHTS_PATH = f"{HOME}/yolov7/seg/yolov7-seg.pt"


# download example image to yolov7/seg directory
%cd {HOME}/yolov7/seg
!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1sPYHUcIW48sJ67kh5MHOI3GfoXlYNOfJ' -O dog.jpeg
IMAGE_PATH = f"{HOME}/yolov7/seg/dog.jpeg"

Start prediction:

%cd {HOME}/yolov7/seg
!python segment/predict.py --weights $WEIGHTS_PATH --source $IMAGE_PATH --name coco

Note that YOLOv7 creates a separate results folder for each run; by default these are named exp, exp2, exp3, and so on. The --name flag above overrides this with coco.
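If you lose track of which exp folder a run wrote to, the newest one can be found by modification time; a small helper sketch (runs/predict-seg is YOLOv7's default output path, the function name is my own):

```python
from pathlib import Path

def latest_run(runs_dir: str) -> str:
    """Return the most recently modified subfolder of a runs directory,
    e.g. the newest of exp, exp2, exp3, ..."""
    subdirs = [p for p in Path(runs_dir).iterdir() if p.is_dir()]
    return str(max(subdirs, key=lambda p: p.stat().st_mtime))

# e.g. latest_run(f"{HOME}/yolov7/seg/runs/predict-seg")
```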

RESULT_IMAGE_PATH = f"{HOME}/yolov7/seg/runs/predict-seg/coco/dog.jpeg"
from IPython.display import Image, display
display(Image(filename=RESULT_IMAGE_PATH))

Display the prediction result:

8. Required data format

The YOLOv7 segmentation model requires the YOLOv7 PyTorch format. Note: to learn about other annotation formats, see
Computer Vision Annotation Formats
1. Dataset directory structure
The dataset contains images and labels, split into three subsets: train, test, and valid. In addition, a data.yaml file should sit in the dataset root.
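A quick sanity check of that layout can be scripted; a sketch assuming the train/valid/test split names used by Roboflow's YOLOv7 export (the function name and split names are assumptions):

```python
from pathlib import Path

EXPECTED_SPLITS = ("train", "valid", "test")  # Roboflow's default split names (assumed)

def check_dataset_layout(root: str) -> list:
    """Return a list of missing pieces in a YOLOv7-style dataset directory."""
    base = Path(root)
    missing = []
    if not (base / "data.yaml").is_file():
        missing.append("data.yaml")
    for split in EXPECTED_SPLITS:
        for sub in ("images", "labels"):
            if not (base / split / sub).is_dir():
                missing.append(f"{split}/{sub}")
    return missing
```

An empty list means the dataset root looks complete.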

2. The structure of the annotation file

Each label file is a .txt file with the same name as its image; see the label file contents below.
Each line in a label file has the structure: class_index x1 y1 x2 y2 x3 y3 ...
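That line format, a class index followed by normalized polygon vertices, can be parsed with a few lines of Python; a minimal sketch (the helper name is my own):

```python
def parse_seg_label(line: str):
    """Parse one YOLO segmentation label line into (class_index, [(x, y), ...])."""
    parts = line.split()
    class_index = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Coordinates are normalized to [0, 1] and come in x, y pairs.
    points = list(zip(coords[0::2], coords[1::2]))
    return class_index, points

# e.g. a triangle-shaped mask for class 0
cls, pts = parse_seg_label("0 0.1 0.1 0.5 0.9 0.9 0.1")
```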

3. data.yaml file structure

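The screenshot of the file is missing here, but a typical data.yaml for a single-class crack dataset looks roughly like the sketch below (the paths and the class name are illustrative assumptions, not copied from the original):

```yaml
# relative paths to the image folders for each split (assumed layout)
train: train/images
val: valid/images
test: test/images

nc: 1             # number of classes (assuming a single "crack" class)
names: ['crack']  # class names, indexed by class_index in the label files
```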

9. Download the dataset from Roboflow Universe

You need your API_KEY. Find it via the account menu in the upper-right corner of Roboflow: open Settings, then under WORKSPACES on the left click Roboflow -> Roboflow API and copy the private API key. Run the cell below with Shift+Enter and paste the key into the prompt.
Note: if the network is slow, execution may appear stuck.
After entering the key, press Enter; even if the cell's running indicator keeps spinning, you can press Enter again and run the next cell.

%cd {HOME}/yolov7/seg

!pip install roboflow --quiet

from getpass import getpass
from roboflow import Roboflow

# prompt for the private API key so it is not stored in the notebook
api_key = getpass("Roboflow API key: ")
rf = Roboflow(api_key=api_key)
project = rf.workspace("university-bswxt").project("crack-bphdr")
dataset = project.version(2).download("yolov7")


10. Training

%cd {HOME}/yolov7/seg
!python segment/train.py --batch 16 \
 --epochs 10 \
 --data {dataset.location}/data.yaml \
 --weights $WEIGHTS_PATH \
 --device 0 \
 --name custom


from IPython.display import Image, display
display(Image(filename=f"{HOME}/yolov7/seg/runs/train-seg/custom/val_batch0_labels.jpg"))


11. Evaluation

After training, we can evaluate the model on the test images:

%cd {HOME}/yolov7/seg
!python segment/predict.py \
--weights {HOME}/yolov7/seg/runs/train-seg/custom/weights/best.pt \
--conf 0.25 \
--source {dataset.location}/test/images

We can display some results:

import glob
from IPython.display import Image, display

for imageName in glob.glob('/content/yolov7/seg/runs/predict-seg/exp/*.jpg')[:2]:
    display(Image(filename=imageName))
    print("\n")

Note: the dog image earlier was only a sanity check of the pre-trained model; the crack detection that follows is the actual task. They are two separate runs.


Origin blog.csdn.net/u013035197/article/details/131859904