1.2 Person Re-Identification with Incremental Generative Occlusion and Adversarial Suppression (IGOAS): code understanding and experiment progress report

First, get the code running:

① Run command:

  python scripts/main.py --config-file configs/test.yaml

② Fixing the errors

Running the command from the README as-is raises errors. Below are the adjustments I made to get the code running:

1. Comment out the following code at line 200 of default_config.py:

 'visactmap': cfg.test.visactmap
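Why this fixes it: the dict built in default_config.py is passed straight into engine.run(), and the torchreid release installed here (1.4.0) no longer accepts a visactmap keyword, so leaving the entry in raises a TypeError. A simplified sketch of the surrounding helper (the real function, engine_run_kwargs in the torchreid scripts, returns more keys and reads a yacs CfgNode; plain dicts are used here so the sketch is self-contained):

```python
# Simplified sketch of engine_run_kwargs from default_config.py.
def engine_run_kwargs(cfg):
    return {
        'save_dir': cfg['data']['save_dir'],
        'max_epoch': cfg['train']['max_epoch'],
        'eval_freq': cfg['test']['eval_freq'],
        'test_only': cfg['test']['evaluate'],
        # 'visactmap': cfg['test']['visactmap'],  # line 200: commented out, since
        # engine.run() in torchreid 1.4.0 does not accept this keyword argument
    }

cfg = {'data': {'save_dir': 'log/test'},
       'train': {'max_epoch': 6},
       'test': {'eval_freq': -1, 'evaluate': False, 'visactmap': False}}
assert 'visactmap' not in engine_run_kwargs(cfg)
```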

2. Configure the ReID data source: market1501.

Download links:

  1. Google Drive
  2. Baidu Disk

After downloading, arrange the files into the following folder structure:
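The folder-structure screenshot did not survive, so for completeness: torchreid's Market1501 loader expects the archive extracted under `<data.root>/market1501/` with the original `Market-1501-v15.09.15` directory name kept (this follows the torchreid docs; verify against your installed version). A small pathlib helper to sanity-check the layout (the function name is mine):

```python
from pathlib import Path

# Layout assumed here (per the torchreid docs for Market1501):
#
#   reid-data/market1501/Market-1501-v15.09.15/
#       bounding_box_train/   query/   bounding_box_test/
REQUIRED_SUBDIRS = ['bounding_box_train', 'query', 'bounding_box_test']

def missing_market1501_dirs(root):
    """Return the required sub-directories that are absent under the data root."""
    base = Path(root) / 'market1501' / 'Market-1501-v15.09.15'
    return [d for d in REQUIRED_SUBDIRS if not (base / d).is_dir()]
```

Once the dataset is in place, `missing_market1501_dirs('reid-data')` should return `[]`.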

3. Modify lines 7 and 8 of test.yaml as follows:

sources: ['market1501']
targets: ['market1501']

③ Experimental results and run log

Note: here I also set  max_epoch: 6  in test.yaml (originally 60) to shorten the run.
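For reference, the fields I touched in configs/test.yaml end up as below (values taken from the configuration dump printed in the log; I assume the yaml mirrors the dumped structure):

```yaml
data:
  root: reid-data
  sources: ['market1501']   # edited (line 7)
  targets: ['market1501']   # edited (line 8)
train:
  max_epoch: 6              # originally 60
```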

(base) E:\PyCharm_workspace\AAA\2021-TIP-IGOAS-main>python scripts/main.py --config-file configs/test.yaml
====================================
Show configuration
adam:
  beta1: 0.9
  beta2: 0.999
cuhk03:
  classic_split: False
  labeled_images: False
  use_metric_cuhk03: False
data:
  combineall: False
  height: 256
  norm_mean: [0.485, 0.456, 0.406]
  norm_std: [0.229, 0.224, 0.225]
  root: reid-data
  save_dir: log/test
  sources: ['market1501']
  split_id: 0
  targets: ['market1501']
  transforms: ['random_flip', 'random_crop']
  type: image
  width: 128
  workers: 0
loss:
  name: softmax
  softmax:
    label_smooth: True
  triplet:
    margin: 0.3
    weight_s: 1.0
    weight_t: 1.0
    weight_x: 1.0
market1501:
  use_500k_distractors: False
model:
  load_weights:
  name: resnet50_fc512
  pretrained: True
  resume:
rmsprop:
  alpha: 0.99
sampler:
  num_instances: 4
  train_sampler: RandomSampler
sgd:
  dampening: 0.0
  momentum: 0.9
  nesterov: False
test:
  batch_size: 100
  dist_metric: euclidean
  eval_freq: -1
  evaluate: False
  normalize_feature: False
  ranks: [1, 3, 5, 10]
  rerank: False
  start_eval: 0
  visactmap: False
  visrank: False
  visrank_topk: 10
train:
  base_lr_mult: 0.1
  batch_size: 64
  fixbase_epoch: 5
  gamma: 0.1
  lr: 0.0003
  lr_scheduler: single_step
  max_epoch: 6
  new_layers: ['classifier']
  open_layers: ['fc', 'classifier']
  optim: adam
  print_freq: 20
  seed: 1
  staged_lr: False
  start_epoch: 0
  stepsize: [20]
  weight_decay: 0.0005
use_gpu: False
video:
  pooling_method: avg
  sample_method: evenly
  seq_len: 15

Collecting env info ...
** System info **
PyTorch version: 1.9.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: N/A

Python version: 3.8 (64-bit runtime)
Python platform: Windows-10-10.0.10240-SP0
Is CUDA available: False
CUDA runtime version: 10.0.130
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\cudnn64_7.dll
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] numpydoc==1.1.0
[pip3] torch==1.9.0+cpu
[pip3] torchaudio==0.9.0
[pip3] torchfile==0.1.0
[pip3] torchreid==1.4.0
[pip3] torchvision==0.10.0+cpu
[conda] blas                      1.0                         mkl    defaults
[conda] cpuonly                   2.0                           0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] cudatoolkit               10.1.243             h3826478_8    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
[conda] mkl                       2020.2                      256    defaults
[conda] mkl-service               2.3.0            py38hb782905_0    defaults
[conda] mkl_fft                   1.2.0            py38h45dec08_0    defaults
[conda] mkl_random                1.1.1            py38h47e9c7a_0    defaults
[conda] numpy                     1.18.5                   pypi_0    pypi
[conda] numpydoc                  1.1.0              pyhd3eb1b0_1    defaults
[conda] pytorch-mutex             1.0                         cpu    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torch                     1.7.0+cu101              pypi_0    pypi
[conda] torchaudio                0.9.0                    pypi_0    pypi
[conda] torchfile                 0.1.0                    pypi_0    pypi
[conda] torchreid                 1.4.0                    pypi_0    pypi
[conda] torchvision               0.10.0+cpu               pypi_0    pypi
        Pillow (8.0.1)

Building train transforms ...
+ resize to 256x128
+ random flip
+ random crop (enlarge to 288x144 and crop 256x128)
+ to torch tensor of range [0, 1]
+ normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
Building test transforms ...
+ resize to 256x128
+ to torch tensor of range [0, 1]
+ normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
=> Loading train (source) dataset
=> Loaded Market1501
  ----------------------------------------
  subset   | # ids | # images | # cameras
  ----------------------------------------
  train    |   751 |    12936 |         6
  query    |   750 |     3368 |         6
  gallery  |   751 |    15913 |         6
  ----------------------------------------
=> Loading test (target) dataset
=> Loaded Market1501
  ----------------------------------------
  subset   | # ids | # images | # cameras
  ----------------------------------------
  train    |   751 |    12936 |         6
  query    |   750 |     3368 |         6
  gallery  |   751 |    15913 |         6
  ----------------------------------------


  **************** Summary ****************
  source            : ['market1501']
  # source datasets : 1
  # source ids      : 751
  # source images   : 12936
  # source cameras  : 6
  target            : ['market1501']
  *****************************************


Building model: resnet50_fc512
Building softmax-engine for image-reid
2022-07-27 11:27:19.487379: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-07-27 11:27:19.487603: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
=> Start training
* Only train ['fc', 'classifier'] (epoch: 1/5)
E:\Anaconda3.8\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  ..\c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
epoch: [1/6][20/202]    time 6.816 (6.892)      data 0.164 (0.171)      eta 2:16:55     loss 6.3618 (6.5444)    acc 1.5625 (1.8750)     lr 0.000300
epoch: [1/6][40/202]    time 6.816 (6.863)      data 0.179 (0.170)      eta 2:14:03     loss 6.2549 (6.4397)    acc 4.6875 (2.9688)     lr 0.000300
epoch: [1/6][60/202]    time 6.777 (6.858)      data 0.172 (0.170)      eta 2:11:40     loss 6.0088 (6.3716)    acc 6.2500 (3.7240)     lr 0.000300
epoch: [1/6][80/202]    time 6.726 (6.855)      data 0.161 (0.169)      eta 2:09:19     loss 5.9209 (6.2932)    acc 6.2500 (4.4922)     lr 0.000300
epoch: [1/6][100/202]   time 6.857 (6.852)      data 0.164 (0.168)      eta 2:06:59     loss 5.5431 (6.1969)    acc 18.7500 (5.4531)    lr 0.000300
epoch: [1/6][120/202]   time 6.770 (6.850)      data 0.167 (0.167)      eta 2:04:39     loss 5.7102 (6.1164)    acc 10.9375 (6.5885)    lr 0.000300
epoch: [1/6][140/202]   time 6.868 (6.850)      data 0.170 (0.167)      eta 2:02:22     loss 5.7218 (6.0380)    acc 9.3750 (7.4442)     lr 0.000300
epoch: [1/6][160/202]   time 6.736 (6.848)      data 0.156 (0.167)      eta 2:00:04     loss 5.3952 (5.9657)    acc 15.6250 (8.1934)    lr 0.000300
epoch: [1/6][180/202]   time 6.824 (6.848)      data 0.157 (0.167)      eta 1:57:47     loss 5.2603 (5.8870)    acc 14.0625 (9.1406)    lr 0.000300
epoch: [1/6][200/202]   time 6.816 (6.847)      data 0.179 (0.167)      eta 1:55:28     loss 4.9886 (5.8131)    acc 25.0000 (10.0938)   lr 0.000300
* Only train ['fc', 'classifier'] (epoch: 2/5)
epoch: [2/6][20/202]    time 6.868 (6.838)      data 0.185 (0.171)      eta 1:52:49     loss 4.7034 (4.7845)    acc 28.1250 (26.4062)   lr 0.000300
epoch: [2/6][40/202]    time 6.880 (6.842)      data 0.171 (0.168)      eta 1:50:36     loss 4.3852 (4.7209)    acc 37.5000 (28.0859)   lr 0.000300
epoch: [2/6][60/202]    time 6.842 (6.841)      data 0.157 (0.167)      eta 1:48:18     loss 4.4966 (4.6670)    acc 31.2500 (29.0104)   lr 0.000300
epoch: [2/6][80/202]    time 6.811 (6.843)      data 0.165 (0.167)      eta 1:46:03     loss 4.3078 (4.6158)    acc 35.9375 (29.5898)   lr 0.000300
epoch: [2/6][100/202]   time 6.925 (6.839)      data 0.161 (0.166)      eta 1:43:43     loss 4.3373 (4.5703)    acc 34.3750 (30.1094)   lr 0.000300
epoch: [2/6][120/202]   time 6.836 (6.845)      data 0.164 (0.167)      eta 1:41:31     loss 4.1734 (4.5243)    acc 37.5000 (30.8724)   lr 0.000300
epoch: [2/6][140/202]   time 6.791 (6.841)      data 0.180 (0.167)      eta 1:39:11     loss 4.0590 (4.4804)    acc 40.6250 (31.5290)   lr 0.000300
epoch: [2/6][160/202]   time 6.755 (6.840)      data 0.161 (0.167)      eta 1:36:53     loss 4.2083 (4.4417)    acc 35.9375 (31.7480)   lr 0.000300
epoch: [2/6][180/202]   time 6.748 (6.838)      data 0.169 (0.166)      eta 1:34:35     loss 3.9434 (4.3975)    acc 37.5000 (32.4913)   lr 0.000300
epoch: [2/6][200/202]   time 6.741 (6.833)      data 0.156 (0.166)      eta 1:32:14     loss 3.8164 (4.3500)    acc 42.1875 (33.5234)   lr 0.000300
* Only train ['fc', 'classifier'] (epoch: 3/5)
epoch: [3/6][20/202]    time 6.770 (6.793)      data 0.158 (0.168)      eta 1:29:12     loss 3.6055 (3.6669)    acc 45.3125 (50.9375)   lr 0.000300
epoch: [3/6][40/202]    time 6.696 (6.794)      data 0.165 (0.167)      eta 1:26:58     loss 3.6760 (3.6184)    acc 48.4375 (50.8594)   lr 0.000300
epoch: [3/6][60/202]    time 6.850 (6.799)      data 0.167 (0.166)      eta 1:24:45     loss 3.4670 (3.6014)    acc 56.2500 (50.5208)   lr 0.000300
epoch: [3/6][80/202]    time 6.894 (6.800)      data 0.168 (0.165)      eta 1:22:30     loss 3.3725 (3.5800)    acc 54.6875 (51.0352)   lr 0.000300
epoch: [3/6][100/202]   time 6.774 (6.801)      data 0.167 (0.165)      eta 1:20:14     loss 3.2032 (3.5503)    acc 62.5000 (51.8438)   lr 0.000300
epoch: [3/6][120/202]   time 6.805 (6.804)      data 0.159 (0.164)      eta 1:18:00     loss 3.3772 (3.5144)    acc 48.4375 (52.7865)   lr 0.000300
epoch: [3/6][140/202]   time 6.809 (6.801)      data 0.162 (0.164)      eta 1:15:43     loss 3.2196 (3.4900)    acc 54.6875 (53.3147)   lr 0.000300
epoch: [3/6][160/202]   time 6.872 (6.802)      data 0.163 (0.165)      eta 1:13:27     loss 3.1291 (3.4630)    acc 53.1250 (53.8477)   lr 0.000300
epoch: [3/6][180/202]   time 6.849 (6.803)      data 0.162 (0.164)      eta 1:11:12     loss 3.2129 (3.4394)    acc 60.9375 (54.3316)   lr 0.000300
epoch: [3/6][200/202]   time 6.774 (6.803)      data 0.163 (0.164)      eta 1:08:56     loss 3.3399 (3.4097)    acc 51.5625 (55.0469)   lr 0.000300
* Only train ['fc', 'classifier'] (epoch: 4/5)
epoch: [4/6][20/202]    time 6.814 (6.811)      data 0.162 (0.165)      eta 1:06:31     loss 2.8802 (2.8732)    acc 68.7500 (68.9062)   lr 0.000300
epoch: [4/6][40/202]    time 6.741 (6.809)      data 0.166 (0.165)      eta 1:04:13     loss 2.9400 (2.8683)    acc 68.7500 (68.8672)   lr 0.000300
epoch: [4/6][60/202]    time 6.743 (6.809)      data 0.159 (0.165)      eta 1:01:57     loss 2.9899 (2.8620)    acc 64.0625 (69.0625)   lr 0.000300
epoch: [4/6][80/202]    time 6.789 (6.809)      data 0.174 (0.165)      eta 0:59:41     loss 2.7608 (2.8500)    acc 70.3125 (69.4727)   lr 0.000300
epoch: [4/6][100/202]   time 6.737 (6.805)      data 0.164 (0.165)      eta 0:57:23     loss 2.7406 (2.8430)    acc 68.7500 (69.2812)   lr 0.000300
epoch: [4/6][120/202]   time 6.752 (6.807)      data 0.155 (0.165)      eta 0:55:08     loss 2.7417 (2.8296)    acc 73.4375 (69.5312)   lr 0.000300
epoch: [4/6][140/202]   time 6.724 (6.805)      data 0.154 (0.165)      eta 0:52:51     loss 2.6639 (2.8067)    acc 78.1250 (69.9665)   lr 0.000300
epoch: [4/6][160/202]   time 6.874 (6.810)      data 0.148 (0.165)      eta 0:50:37     loss 2.5117 (2.7874)    acc 84.3750 (70.5859)   lr 0.000300
epoch: [4/6][180/202]   time 6.866 (6.811)      data 0.158 (0.165)      eta 0:48:21     loss 2.9964 (2.7725)    acc 64.0625 (70.7031)   lr 0.000300
epoch: [4/6][200/202]   time 6.884 (6.810)      data 0.180 (0.164)      eta 0:46:04     loss 2.4032 (2.7569)    acc 82.8125 (70.9453)   lr 0.000300
* Only train ['fc', 'classifier'] (epoch: 5/5)
epoch: [5/6][20/202]    time 6.777 (6.812)      data 0.169 (0.165)      eta 0:43:35     loss 2.4123 (2.3783)    acc 84.3750 (83.0469)   lr 0.000300
epoch: [5/6][40/202]    time 6.774 (6.833)      data 0.166 (0.166)      eta 0:41:27     loss 2.3745 (2.3695)    acc 75.0000 (82.1484)   lr 0.000300
epoch: [5/6][60/202]    time 6.813 (6.821)      data 0.163 (0.166)      eta 0:39:06     loss 2.3216 (2.3591)    acc 82.8125 (81.8229)   lr 0.000300
epoch: [5/6][80/202]    time 6.781 (6.814)      data 0.157 (0.165)      eta 0:36:47     loss 2.3605 (2.3564)    acc 81.2500 (81.7578)   lr 0.000300
epoch: [5/6][100/202]   time 6.846 (6.815)      data 0.172 (0.165)      eta 0:34:31     loss 2.1970 (2.3484)    acc 82.8125 (81.9219)   lr 0.000300
epoch: [5/6][120/202]   time 6.810 (6.816)      data 0.169 (0.166)      eta 0:32:15     loss 2.4157 (2.3404)    acc 82.8125 (81.7969)   lr 0.000300
epoch: [5/6][140/202]   time 6.796 (6.813)      data 0.167 (0.165)      eta 0:29:58     loss 2.1860 (2.3356)    acc 81.2500 (81.7746)   lr 0.000300
epoch: [5/6][160/202]   time 6.844 (6.811)      data 0.151 (0.165)      eta 0:27:41     loss 2.3573 (2.3257)    acc 78.1250 (81.9922)   lr 0.000300
epoch: [5/6][180/202]   time 6.841 (6.811)      data 0.159 (0.165)      eta 0:25:25     loss 2.3251 (2.3156)    acc 82.8125 (82.0399)   lr 0.000300
epoch: [5/6][200/202]   time 6.794 (6.810)      data 0.169 (0.165)      eta 0:23:09     loss 2.2422 (2.3095)    acc 76.5625 (81.8906)   lr 0.000300
epoch: [6/6][20/202]    time 21.084 (20.940)    data 0.163 (0.166)      eta 1:03:31     loss 2.6100 (2.7904)    acc 64.0625 (62.8906)   lr 0.000300
epoch: [6/6][40/202]    time 21.022 (21.018)    data 0.162 (0.165)      eta 0:56:44     loss 2.2113 (2.5835)    acc 79.6875 (68.0859)   lr 0.000300
epoch: [6/6][60/202]    time 21.309 (21.065)    data 0.165 (0.166)      eta 0:49:51     loss 1.9558 (2.4592)    acc 90.6250 (71.5365)   lr 0.000300
epoch: [6/6][80/202]    time 21.037 (21.061)    data 0.165 (0.166)      eta 0:42:49     loss 2.1793 (2.3695)    acc 81.2500 (73.9648)   lr 0.000300
epoch: [6/6][100/202]   time 21.041 (21.069)    data 0.163 (0.166)      eta 0:35:49     loss 2.0371 (2.2969)    acc 81.2500 (76.0625)   lr 0.000300
epoch: [6/6][120/202]   time 21.257 (21.083)    data 0.169 (0.166)      eta 0:28:48     loss 1.7867 (2.2302)    acc 90.6250 (77.8906)   lr 0.000300
epoch: [6/6][140/202]   time 21.179 (21.095)    data 0.171 (0.166)      eta 0:21:47     loss 1.7201 (2.1762)    acc 95.3125 (79.4196)   lr 0.000300
epoch: [6/6][160/202]   time 21.134 (21.098)    data 0.172 (0.166)      eta 0:14:46     loss 1.7094 (2.1312)    acc 89.0625 (80.5664)   lr 0.000300
epoch: [6/6][180/202]   time 21.164 (21.101)    data 0.175 (0.166)      eta 0:07:44     loss 1.7843 (2.0896)    acc 92.1875 (81.6493)   lr 0.000300
epoch: [6/6][200/202]   time 21.206 (21.104)    data 0.166 (0.166)      eta 0:00:42     loss 1.8008 (2.0551)    acc 92.1875 (82.4531)   lr 0.000300
=> Final test
##### Evaluating market1501 (source) #####
Extracting features from query set ...
Done, obtained 3368-by-512 matrix
Extracting features from gallery set ...
Done, obtained 15913-by-512 matrix
Speed: 10.6085 sec/batch
Computing distance matrix with metric=euclidean ...
Computing CMC and mAP ...
** Results **
mAP: 50.6%
CMC curve
Rank-1  : 74.0%
Rank-3  : 85.3%
Rank-5  : 89.3%
Rank-10 : 93.1%
Checkpoint saved to "log/test\model\model.pth.tar-6"
Elapsed 3:41:05
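To unpack the final numbers: for each of the 3368 queries, the gallery images are ranked by euclidean distance; Rank-k (the CMC curve) asks whether a correct identity appears in the top k, and mAP averages per-query average precision. A minimal single-query sketch in pure Python (torchreid's real evaluator additionally removes gallery images sharing the query's id and camera, which this omits):

```python
def cmc_and_ap(dists, gallery_ids, query_id, ranks=(1, 3, 5, 10)):
    """CMC@k and average precision for a single query.

    dists: distance from the query to each gallery image (lower = closer).
    """
    order = sorted(range(len(dists)), key=lambda i: dists[i])
    matches = [gallery_ids[i] == query_id for i in order]
    # CMC@k: 1 if any correct match appears in the top-k ranked results
    cmc = {k: int(any(matches[:k])) for k in ranks}
    # AP: mean of precision@i over the positions i of the correct matches
    hits, precisions = 0, []
    for i, m in enumerate(matches, start=1):
        if m:
            hits += 1
            precisions.append(hits / i)
    ap = sum(precisions) / len(precisions) if precisions else 0.0
    return cmc, ap

# Toy example: the first correct match appears at rank 2.
cmc, ap = cmc_and_ap([0.1, 0.2, 0.3, 0.4], ['b', 'a', 'a', 'c'], 'a')
```

The mAP above (50.6%) is the mean of this per-query AP over the whole query set.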

Location of the pretrained model:

Reposted from blog.csdn.net/weixin_43135178/article/details/126011023