Using a pre-trained model with FlowNetPytorch (Ubuntu 18.04, CUDA 10.1, cuDNN 7.6.4)

I will not cover the definition of optical flow here; if you are not familiar with it, it is easy to look up.

There are many traditional methods for optical flow extraction, and deep learning is not required; for example, OpenCV's built-in methods also work. Here I will talk about the FlowNet family of networks, which currently has v1, v2, and a v3. The original author's GitHub is still being updated and offers a Docker version, but I could not get the Docker image working on my machine, so I found a PyTorch implementation online. Below I share the whole process.

The first step is to set up your own PyTorch environment. There are many online tutorials for this, so I will only emphasize one thing: install everything in a sensible order, and do not install packages haphazardly.

Download the PyTorch version of the code; the link is here.

# 1 Enter the pytorch virtual environment. Check that CUDA, cuDNN, torch, etc. are installed, that their versions match each other, and that your GPU is usable. It can also be done without a GPU, but many places in the code would need changing.

conda activate pytorch

# Quick test

python
import torch
print(torch.cuda.is_available())

# If it returns True, congratulations, you are basically set up
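The check above can be made a little more informative. This is my own helper, not part of the repo: it reports the torch and CUDA install (the versions that need to "correspond well") and returns None instead of crashing when torch is absent.

```python
# Sketch (my own helper, not part of FlowNetPytorch): report the torch/CUDA
# install without crashing when torch is missing.
import importlib.util

def cuda_report():
    """Return a dict describing the torch/CUDA install, or None if torch is missing."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    info = {"torch": torch.__version__, "cuda_available": torch.cuda.is_available()}
    if info["cuda_available"]:
        info["cuda"] = torch.version.cuda            # CUDA version torch was built against
        info["cudnn"] = torch.backends.cudnn.version()
    return info

print(cuda_report())
```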

# 2 The folder you just downloaded contains a requirements.txt; check whether your virtual environment already has these packages

conda list

# Or use pip

pip list

# After making sure these are installed, enter the following command to test whether all the required packages are present

python main.py -h

# If something is missing, or a version is wrong, install or upgrade it according to the error message
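If you prefer to check programmatically instead of eyeballing `conda list`, here is a minimal sketch (my own code, not from the repo). It only checks importability by name, and assumes simple requirement lines like `numpy` or `torch>=1.2`; note that some pip names differ from import names (e.g. `opencv-python` imports as `cv2`), which this heuristic does not handle.

```python
# Sketch: check which packages from requirements.txt-style lines are importable.
# Caveat: pip name and import name can differ (e.g. opencv-python -> cv2).
import importlib.util
import re

def missing_packages(requirement_lines):
    """Return names from requirement lines that cannot be imported."""
    missing = []
    for line in requirement_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Keep only the package name, dropping version specifiers like ">=1.2"
        name = re.split(r"[<>=!~;\s\[]", line, maxsplit=1)[0]
        if importlib.util.find_spec(name.replace("-", "_")) is None:
            missing.append(name)
    return missing

print(missing_packages(["json", "re", "# a comment", "definitely_missing_pkg_xyz>=1.0"]))
```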

# 3 Prepare the data and the pre-trained model: create your own folders and put the data and the model in them
/home/flownet/FlowNetPytorch-master/data
# The data path should be written out in full; note that the image naming given in the help has a small problem

# img_pairs = []
# for ext in args.img_exts:
#     test_files = data_dir.files('*1.{}'.format(ext))
#     for file in test_files:
#         img_pair = file.parent / (file.namebase[:-1] + '2.{}'.format(ext))
#         if img_pair.isfile():
#             img_pairs.append([file, img_pair])

# From this snippet of run_inference.py you can see that image names should end in 1 and 2, not 0 and 1, so you need to rename your images accordingly
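The same pairing logic can be sketched with pathlib to check your folder before running (my own helper; the repo itself uses the path.py library as quoted above). Files ending in 0/1 will simply not be paired, which is exactly the naming trap:

```python
# Sketch of the pairing logic in run_inference.py, redone with pathlib:
# for every "*1.<ext>" image, look for the sibling "*2.<ext>" file.
# Pairs named 0/1 will NOT be found, matching the behavior described above.
from pathlib import Path

def find_img_pairs(data_dir, exts=("png", "jpg", "bmp", "ppm")):
    img_pairs = []
    for ext in exts:
        for first in sorted(Path(data_dir).glob(f"*1.{ext}")):
            second = first.with_name(first.stem[:-1] + f"2.{ext}")
            if second.is_file():
                img_pairs.append((first, second))
    return img_pairs
```

Running this on your data folder and checking that the pair count is non-zero saves a confusing silent failure later.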

/home/flownet/FlowNetPytorch-master/pretrained/flownetc_EPE1_766.tar
# The full path to the model must be written out. Download the pre-trained model from the original author's Google Drive; this requires a proxy and can be very slow. Training on your own dataset yourself may be even slower (if needed, contact me and I can help share it via Baidu cloud). After downloading, do not unpack the .tar file.

# 4 Run the batch-inference help to check what needs to be entered

python run_inference.py -h

The output is as follows:

PyTorch FlowNet inference on a folder of img pairs

positional arguments:
  DIR                   path to images folder, image names must match
                        '[name]0.[ext]' and '[name]1.[ext]' ## Note: this hint is wrong; it is not 0 and 1! ##
  PTH                   path to pre-trained model

optional arguments:
  -h, --help            show this help message and exit
  --output DIR, -o DIR  path to output folder. If not set, will be created in
                        data folder (default: None)
  --output-value {raw,vis,both}, -v {raw,vis,both}
                        which value to output, between raw input (as a npy
                        file) and color vizualisation (as an image file). If
                        not set, will output both (default: both)
  --div-flow DIV_FLOW   value by which flow will be divided. overwritten if
                        stored in pretrained file (default: 20)
  --img-exts [EXT [EXT ...]]
                        images extensions to glob (default: ['png', 'jpg',
                        'bmp', 'ppm'])
  --max_flow MAX_FLOW   max flow value. Flow map color is saturated above this
                        value. If not set, will use flow map's max value
                        (default: None)
  --upsampling {nearest,bilinear}, -u {nearest,bilinear}
                        if not set, will output FlowNet raw input,which is 4
                        times downsampled. If set, will output full resolution
                        flow map, with selected upsampling (default: None) ##建议设置,输出完整大小的flow##
  --bidirectional       if set, will output invert flow (from 1 to 0) along
                        with regular flow (default: False) ## if set, the inverse flow is output as well; use as needed ##

# 5 After reading the help, write the command that runs the batch test directly

python run_inference.py -u bilinear /home/flownet/FlowNetPytorch-master/data /home/flownet/FlowNetPytorch-master/pretrained/flownetc_EPE1_766.tar
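If you want to run this over several data folders or models, the command can be built programmatically. A minimal sketch (my own wrapper; the paths are the ones used above, and the flags are only those shown in the `-h` output):

```python
# Sketch: build the run_inference.py command for batch use.
import subprocess

def build_cmd(data_dir, model_path, upsampling="bilinear"):
    """Assemble the argument list for run_inference.py (flags from its -h output)."""
    return [
        "python", "run_inference.py",
        "-u", upsampling,
        str(data_dir),
        str(model_path),
    ]

cmd = build_cmd("/home/flownet/FlowNetPytorch-master/data",
                "/home/flownet/FlowNetPytorch-master/pretrained/flownetc_EPE1_766.tar")
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment to actually launch the run
```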

# 6 The run output is as follows (an example run):

=> will save raw output and RGB visualization
=> fetching img pairs in '/home/flownet/FlowNetPytorch-master/data'       ## image folder found
=> will save everything to /home/flownet/FlowNetPytorch-master/data/flow  ## default output location
18 samples found    ## number of image pairs found; at first, following the naming in -h, nothing would run; after checking the source code I found the author's help is wrong!
=> using pre-trained model 'flownetc'    ## loads the pre-trained model; can be swapped
100%|███████████| 18/18 [00:06<00:00,  2.80it/s]    # quite fast

# 7 Go to the output directory to inspect the results
# Both the raw flow and its visualization are output; you can look at the visualization and use the raw flow for further processing

# At this point I have reproduced the whole process of using FlowNet in PyTorch for batch testing. If you want to train it yourself, you need to download the original author's Flying Chairs dataset; if you are interested in the original code and want to study and reproduce the paper, click here

# Next I want to get FlowNet3 working; the source code is here. If you have already implemented it, guidance and discussion are welcome


Origin blog.csdn.net/weixin_43969966/article/details/104275286