Results Showcase | (DPT) Vision Transformers for Dense Prediction

Code repository: isl-org/DPT: Dense Prediction Transformers (github.com)

The environment for this project is very easy to set up: it threw almost no errors, and following the steps in the README it was done in just a step or two.

    Setup

  1. Download the model weights and place them in the weights folder
  2. Set up dependencies:

    pip install -r requirements.txt

    Usage

  1. Place one or more input images in the folder input.

  2. Run a monocular depth estimation model:

    python run_monodepth.py

    Or run a semantic segmentation model:

    python run_segmentation.py
  3. The results are written to the folders output_monodepth and output_semseg, respectively.
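
For readers who would rather call a DPT model directly from Python instead of going through run_monodepth.py, here is a minimal sketch that pulls a DPT depth model from PyTorch Hub (the intel-isl/MiDaS hub entry publishes DPT_Large and DPT_Hybrid, the same architecture family). This is an alternative route rather than the repo script itself, and the image path input/example.jpg is only a placeholder.

    import cv2
    import torch

    # Load a DPT depth model published on PyTorch Hub (assumption: using the
    # hub release instead of the weights placed in this repo's weights folder).
    model = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
    model.eval()

    # Matching preprocessing transform for DPT-family models.
    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = midas_transforms.dpt_transform

    # "input/example.jpg" is a placeholder path, mirroring the repo's input folder.
    img = cv2.cvtColor(cv2.imread("input/example.jpg"), cv2.COLOR_BGR2RGB)

    with torch.no_grad():
        prediction = model(transform(img))
        # Resize the predicted inverse-depth map back to the input resolution.
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()

    depth = prediction.cpu().numpy()  # relative inverse depth; larger values are closer

Saving depth with any colormap (e.g. matplotlib's imsave) gives maps comparable to the PNGs the repo script writes to output_monodepth.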

The weights I chose for depth estimation:

The weights I chose for semantic segmentation:

Now let's look at the results I got and my verdict on each category:

1. Relatively open landscape shots: (Lawful Good)

2. Close-ups of objects on a desk: (Neutral Good)

3. Small animals: (Chaotic Evil)

4. Special scenes: (Lawful Evil)

Pencil sketches

A movie screen photographed inside a cinema

A wall with stickers on it

Photoshopped (edited) images

5. 2D / anime images: (Neutral Evil)


Reposted from blog.csdn.net/qd1813100174/article/details/128176216