BiSeNetV2 (PyTorch): testing and training on Cityscapes

1. Source code:

github: https://github.com/CoinCheung/BiSeNet

git clone https://github.com/CoinCheung/BiSeNet.git

2. Pre-trained model:

After downloading, unzip the project and create a [model] folder inside it to store the pre-trained model.

3. Run the demo

conda create -n bisenet python=3.8
conda activate bisenet

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip3 install opencv-python
pip3 install tabulate tqdm

3.1 Test images with [bisenetv2_city]:

python tools/demo.py --config configs/bisenetv2_city.py --weight-path ./model/model_final_v2_city.pth --img-path ./example.png

The result will be saved as [res.jpg]
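Internally, the demo colorizes the predicted per-pixel class map before saving res.jpg. A minimal sketch of that idea (a hypothetical random palette, not the repo's exact colors):

```python
import numpy as np

def colorize(label_map, num_classes=19, seed=0):
    """Map each class id in an (H, W) label map to an RGB color."""
    rng = np.random.default_rng(seed)
    # one fixed random color per class, as in the demo's visualization
    palette = rng.integers(0, 256, size=(num_classes, 3), dtype=np.uint8)
    return palette[label_map]  # (H, W, 3) uint8 image

# toy 2x2 "prediction" with class ids 0..3
pred = np.array([[0, 1], [2, 3]])
color = colorize(pred)
print(color.shape)  # (2, 2, 3)
```

The fixed seed keeps colors stable across runs, so the same class always gets the same color.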

3.2 Test a video with [bisenetv2_coco]:

python tools/demo_video.py --config configs/bisenetv2_coco.py --weight-path ./model/model_final_v2_coco.pth --input ./video.mp4 --output res.mp4
The result will be saved as [res.mp4]. The screenshots shown here are grabbed from the input and output videos separately, so the color frame and the predicted frame are a few frames apart and do not correspond exactly.

4 Training cityscapes dataset

4.1 Download the dataset and decompress it

Official website: https://www.cityscapes-dataset.com/. Downloading the data requires registration, and accounts must meet certain requirements. Download the data after logging in.

The project reads the data from a fixed path, so we need to create soft links under [./datasets/cityscapes] in the project directory. Enter that path and run:

cd ./datasets/cityscapes
rm -rf gtFine leftImg8bit
ln -s /mnt/e/project/data/BiSeNetV2/gtFine gtFine
ln -s /mnt/e/project/data/BiSeNetV2/leftImg8bit leftImg8bit 
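Before training, it is worth confirming the links resolve to the expected layout (gtFine and leftImg8bit each containing train/ and val/). A small sketch (the check_cityscapes helper is my own, not part of the repo):

```python
from pathlib import Path

def check_cityscapes(root="./datasets/cityscapes"):
    """Return the split directories that are missing under the soft links."""
    missing = []
    for sub in ("gtFine", "leftImg8bit"):
        for split in ("train", "val"):
            d = Path(root) / sub / split
            if not d.is_dir():
                missing.append(str(d))
    return missing

print(check_cityscapes())  # an empty list means the links resolve correctly
```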

4.2 Training BiSeNetv2-cityscapes

The source code provides PyTorch distributed training, but in practice we often have a single machine with one GPU, or a single machine with several GPUs.

  • Single machine, multiple GPUs
export CUDA_VISIBLE_DEVICES=0,1
python -m torch.distributed.launch --nproc_per_node=2 tools/train_amp.py --config configs/bisenetv2_city.py
  • Single machine, single GPU
export CUDA_VISIBLE_DEVICES=0
python -m torch.distributed.launch --nproc_per_node=1 tools/train_amp.py --config configs/bisenetv2_city.py
  • Notice:
  1. If the error "train_amp.py: error: unrecognized arguments: --local-rank=0" is reported:
    replace python -m torch.distributed.launch in the command with torchrun (recommended);
    or change the torch version and reconfigure the environment;
  2. If a CUDA out-of-memory error is reported:
    reduce ims_per_gpu in BiSeNet/configs/bisenetv2_city.py, e.g. change it to 2;
    or add:
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

4.3 Model Evaluation

python tools/evaluate.py --config configs/bisenetv2_city.py --weight-path ./res/model_final.pth
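evaluate.py reports mIoU on the validation set. The metric itself is just per-class intersection over union averaged over the classes; a minimal numpy sketch of the computation (not the repo's implementation):

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean IoU from flat arrays of predicted and ground-truth class ids."""
    # confusion[i, j] = number of pixels with true class i predicted as class j
    confusion = np.bincount(
        gt * num_classes + pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)
    inter = np.diag(confusion)
    union = confusion.sum(0) + confusion.sum(1) - inter
    # classes absent from both pred and gt get IoU 0 here; a full
    # implementation would exclude them from the mean instead
    ious = inter / np.maximum(union, 1)
    return float(ious.mean())

gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
print(miou(pred, gt, 2))  # ≈ 0.583 (IoU 0.5 for class 0, 2/3 for class 1)
```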

Origin: blog.csdn.net/zfjBIT/article/details/131718168