1. First, find the Roboflow repository on GitHub
The repository provides tutorials for classic algorithms such as ResNet and YOLO, as well as some newer ones.
2. Open the notebook in Colab
- The notebook can be opened directly in Colab; other ways of opening it are also supported. Three methods are shown here.
- Tip: click Authorize when prompted.
3. Algorithm steps
- Preparation before starting
- Install YOLOv7
- Install requirements
- Run inference with a pre-trained COCO model
- Required data format
- Download the dataset from Roboflow Universe
- Custom training
- Prepare your own dataset for evaluation: in this tutorial we use one of the 90,000+ datasets available on Roboflow Universe. If you have your own images (or have already labeled them), you can convert your dataset with Roboflow, a set of tools developers use to quickly and accurately build better computer vision models. More than 100,000 developers use Roboflow for automatic labeling, dataset format conversion (e.g. to YOLOv7), training, deployment, and improving their datasets and models.
- Follow the Getting Started guide to create or prepare your own dataset, making sure to select the Instance Segmentation option if you are building your dataset in Roboflow.
4. Getting started
First, make sure a GPU is available; this can be checked by running nvidia-smi from the command prompt.
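As a sketch, the same GPU check can be scripted from Python so it degrades gracefully on machines without an NVIDIA driver (standard library only; nothing here is specific to this tutorial's setup):

```python
import shutil
import subprocess

# Look up nvidia-smi on PATH before calling it, so the check
# does not fail with an error on machines without an NVIDIA driver.
smi = shutil.which("nvidia-smi")
if smi:
    result = subprocess.run([smi], capture_output=True, text=True)
    print(result.stdout)
else:
    print("nvidia-smi not found - no NVIDIA GPU driver on PATH")
```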
5. Install YOLOv7
- Get the current path.
import os
HOME = os.getcwd()
print(HOME)
- Clone the YOLOv7 code.
# clone YOLOv7 repository
%cd {HOME}
!git clone https://github.com/WongKinYiu/yolov7
# navigate to yolov7 directory and check out the u7 branch of YOLOv7 - this is the hash of the latest commit on u7 as of 12/21/2022
%cd {HOME}/yolov7
!git checkout 44f30af0daccb1a3baecc5d80eae22948516c579
6. Install dependencies
%cd {HOME}/yolov7/seg
!pip install --upgrade pip
!pip install -r requirements.txt
The installation output is then displayed.
7. Use the pre-trained COCO model for inference
# download COCO starting checkpoint to yolov7/seg directory
%cd {HOME}/yolov7/seg
!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-seg.pt
WEIGHTS_PATH = f"{HOME}/yolov7/seg/yolov7-seg.pt"
# download example image to yolov7/seg directory
%cd {HOME}/yolov7/seg
!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1sPYHUcIW48sJ67kh5MHOI3GfoXlYNOfJ' -O dog.jpeg
IMAGE_PATH = f"{HOME}/yolov7/seg/dog.jpeg"
Start inference:
%cd {HOME}/yolov7/seg
!python segment/predict.py --weights $WEIGHTS_PATH --source $IMAGE_PATH --name coco
Note that YOLOv7 creates a separate results folder for each run; by default these are named exp, exp2, exp3, and so on. The --name flag above overrides this with coco.
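Because each run creates a new numbered folder, a small helper (hypothetical, not part of YOLOv7) can locate the most recent exp-style directory when --name is not used:

```python
import os
import re

def latest_run_dir(base, prefix="exp"):
    """Return the highest-numbered run folder (exp, exp2, exp3, ...)
    under base, or None if no run folders exist."""
    if not os.path.isdir(base):
        return None
    runs = []
    for name in os.listdir(base):
        m = re.fullmatch(prefix + r"(\d*)", name)
        if m:
            # a bare "exp" folder counts as run 1
            runs.append((int(m.group(1) or 1), name))
    if not runs:
        return None
    return os.path.join(base, max(runs)[1])
```

For example, `latest_run_dir(f"{HOME}/yolov7/seg/runs/predict-seg")` would return the newest prediction folder.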
RESULT_IMAGE_PATH = f"{HOME}/yolov7/seg/runs/predict-seg/coco/dog.jpeg"
from IPython.display import Image, display
display(Image(filename=RESULT_IMAGE_PATH))
The prediction result is displayed:
8. Required data format
For the YOLOv7 segmentation model, the data must be in YOLOv7 PyTorch format. Note: to learn about other annotation formats, see
Computer Vision Annotation Formats
1. Dataset directory structure
The dataset contains images and labels, split into three subsets: training, testing, and validation. Additionally, there should be a data.yaml file in the root directory of the dataset.
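Assuming the standard Roboflow export layout, the resulting directory tree looks roughly like this (folder names are illustrative):

```
dataset/
├── data.yaml
├── train/
│   ├── images/
│   └── labels/
├── valid/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/
```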
2. Structure of the annotation file
Each label file is a .txt file with the same name as its image; see the label file content below.
Each line in the label file has the structure class_index x1 y1 x2 y2 x3 y3 ..., where the coordinate pairs are the normalized vertices of the segmentation polygon.
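The line format above can be illustrated with a small parser (a sketch for illustration, not part of YOLOv7; the sample label line is hypothetical):

```python
def parse_seg_label(line):
    """Parse one YOLOv7 segmentation label line:
    a class index followed by normalized (x, y) polygon vertices."""
    parts = line.split()
    class_index = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    if len(coords) % 2 != 0 or len(coords) < 6:
        raise ValueError("expected at least 3 (x, y) vertex pairs")
    polygon = list(zip(coords[0::2], coords[1::2]))
    return class_index, polygon

# Hypothetical label line: class 0 with a triangular mask.
print(parse_seg_label("0 0.25 0.30 0.60 0.28 0.42 0.55"))
```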
3. data.yaml file structure
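A minimal data.yaml for a one-class crack dataset might look like this (the paths and class name are illustrative, following the usual Roboflow export convention):

```
train: ./train/images
val: ./valid/images
test: ./test/images

nc: 1            # number of classes
names: ['crack'] # class names, indexed by class_index in the label files
```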
9. Download the dataset from Roboflow Universe
You will need your API_KEY. You can find it via your account menu in the upper right corner of Roboflow: open the settings page, select your workspace under WORKSPACES on the left, click Roboflow -> Roboflow API, and copy the private API key. Run the cell below with Shift+Enter and paste the API key into the prompt window.
Note: sometimes the network is slow and execution appears stuck.
After entering the key, press Enter; if the cell's run indicator keeps spinning, press Enter again to run the next line.
%cd {HOME}/yolov7/seg
!pip install roboflow --quiet
from getpass import getpass
api_key = getpass("Enter your Roboflow private API key: ")  # pasted at the prompt
from roboflow import Roboflow
rf = Roboflow(api_key=api_key)
project = rf.workspace("university-bswxt").project("crack-bphdr")
dataset = project.version(2).download("yolov7")
10. Training
%cd {HOME}/yolov7/seg
!python segment/train.py --batch 16 \
 --epochs 10 \
 --data {dataset.location}/data.yaml \
 --weights $WEIGHTS_PATH \
 --device 0 \
 --name custom
from IPython.display import Image, display
display(Image(filename=f"{HOME}/yolov7/seg/runs/train-seg/custom/val_batch0_labels.jpg"))
11. Evaluation
After training, we can evaluate the model on the test images:
%cd {HOME}/yolov7/seg
!python segment/predict.py \
 --weights {HOME}/yolov7/seg/runs/train-seg/custom/weights/best.pt \
 --conf 0.25 \
 --source {dataset.location}/test/images
We can display some results:
import glob
from IPython.display import Image, display
for imageName in glob.glob('/content/yolov7/seg/runs/predict-seg/exp/*.jpg')[:2]:
display(Image(filename=imageName))
print("\n")
Note: the cat-and-dog images earlier and the crack detection dataset later are two separate tasks; the former was used only to test the pre-trained COCO model.