Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation, CVPR 2023
論文: https://arxiv.org/abs/2303.01276
コード: https://github.com/xiaoyao3302/CCVC/
Summary
Semi-supervised semantic segmentation (SSS) reduces the need for large-scale, fully annotated training data. Existing methods often suffer from confirmation bias when using pseudo-labels, which can be mitigated by a co-training framework. However, current co-training-based SSS methods rely on hand-crafted perturbations to prevent the sub-networks from collapsing into each other, and such artificial perturbations make it hard to reach an optimal solution. This paper proposes a new conflict-based cross-view consistency (CCVC) method built on a two-branch co-training framework, which forces the two sub-networks to learn informative features from irrelevant views. First, a cross-view consistency (CVC) strategy is proposed: a feature discrepancy loss encourages the two sub-networks to learn different features from the same input, while these dissimilar features are still expected to produce consistent prediction scores for that input. The CVC strategy helps prevent the two sub-networks from collapsing into each other. In addition, a conflict-based pseudo-labeling (CPL) method is proposed so that the model learns more useful information from conflicting predictions, ensuring a stable training process.
Introduction
Fully supervised semantic segmentation requires enormous effort to collect accurately annotated data. Semi-supervised learning performs semantic segmentation with a small amount of labeled data and a large amount of unlabeled data. The key question is how to use the unlabeled data to assist the labeled data in training the model.
Pseudo-labeling methods can suffer from confirmation bias, which may degrade performance through unstable training. Approaches based on consistency regularization perform better, but most of them generate pseudo-labels from predictions on weakly perturbed inputs and then use them to supervise predictions on strongly perturbed inputs, so they are also affected by confirmation bias.
Co-training lets different sub-networks reason about the same instance from different views and transfers the knowledge learned in one view to another through pseudo-labeling. In particular, co-training relies on multi-view reference to raise the model's awareness and improve the reliability of the generated pseudo-labels. The key is how to keep the different sub-networks from collapsing into each other, so that the model can make correct predictions on inputs from the different views. However, the manual perturbations used by most SSS methods neither guarantee the learning of heterogeneous features nor effectively prevent the sub-networks from collapsing.
To address these problems, this paper proposes a new conflict-based cross-view consistency (CCVC) strategy for SSS, which enables the two sub-networks in the model to learn to make reliable predictions from two different views for co-training, so that each sub-network produces reliable and meaningful predictions.
- First, a cross-view consistency (CVC) method with a discrepancy loss is proposed: it minimizes the similarity between the features extracted by the two sub-networks, encouraging them to extract different features and preventing the two sub-networks from collapsing into each other.
- Next, the knowledge learned by one sub-network is transferred to the other through mutual pseudo-labeling, which raises each network's awareness and lets it reason correctly about the same input from a different view, yielding more reliable predictions.
- However, the discrepancy loss may introduce an overly strong perturbation into the model; the features extracted by the sub-networks can then contain information that is less meaningful for prediction, making the predictions of the two sub-networks inconsistent and unreliable. This causes the confirmation bias problem and harms the co-training of the sub-networks. To address this, a conflict-based pseudo-labeling (CPL) method is further proposed: pseudo-labels generated from the conflicting predictions of each sub-network give stronger supervision of the other's predictions, enforcing consistent predictions from both sub-networks and improving prediction reliability. This is expected to reduce the influence of confirmation bias and stabilize the training process.
As shown in Figure 1, the similarity score between the features extracted by the two sub-networks of a mutual consistency regularization (CCR) model stays at a high level, indicating that the views CCR reasons from are correlated to some extent. In contrast, with the CVC approach the views reasoned from are sufficiently distinct, which produces more reliable predictions.
Related Work
Semantic Segmentation
Semi-Supervised Semantic Segmentation
Co-Training
Method
CCVC (Conflict-Based Cross-View Consistency)
Cross-View Consistency
The cross-view consistency (CVC) approach uses a two-branch network based on co-training, where the two sub-networks Ψ1 and Ψ2 share the same architecture but not their parameters. Each sub-network is split into a feature extractor and a classifier. The goal is to make the two sub-networks reason about the input from different views, so the features they extract must differ. The cosine similarity between the features extracted by the two feature extractors is therefore minimized.
Note that the coefficient 1 is added so that the loss value is always non-negative. The two sub-networks are encouraged to output features with no shared correlation, forcing them to learn how to reason about the input from two unrelated views.
Most SSS methods use an ImageNet-pretrained ResNet as the backbone of DeepLabv3+ and only fine-tune the backbone with a small learning rate, which makes the feature-discrepancy maximization in this paper hard to achieve. To address this, the paper adds a convolutional layer that maps the extracted features into another feature space, realizing network heterogeneity; the discrepancy loss is rewritten accordingly.
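The discrepancy term can be sketched as below; it mirrors the `cos_dis` computation in the key code further down (tensor shapes are illustrative). Detaching one side in each term stops its gradient, so each sub-network is only pushed away from a frozen copy of the other's features.

```python
import torch
import torch.nn as nn

def discrepancy_loss(feat1, feat2):
    # Minimize channel-wise cosine similarity between the two branches'
    # feature maps; the +1 offset keeps the loss non-negative.
    cos = nn.CosineSimilarity(dim=1, eps=1e-6)
    loss1 = 1 + cos(feat1.detach(), feat2).mean()  # push feat2 away from feat1
    loss2 = 1 + cos(feat2.detach(), feat1).mean()  # push feat1 away from feat2
    return (loss1 + loss2) / 2
```

Identical features give a loss of 2, orthogonal features give 1, and opposite features give 0, so minimizing it drives the two views apart.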
Both labeled and unlabeled data are supervised with the discrepancy loss; the total discrepancy loss is the sum of the labeled and unlabeled terms.
The labeled data use the ground-truth labels as supervision, and both sub-networks are supervised by them; the supervised loss on the labeled data is a cross-entropy loss.
Pseudo-labeling the unlabeled data lets each sub-network learn semantic information from the other sub-network. A cross-entropy loss is applied to fine-tune the model, with the two sub-networks supervising each other through their pseudo-labels. This yields the loss on the unlabeled data (the mutual consistency loss).
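A minimal sketch of this mutual consistency loss, with a confidence threshold as in the `mode_confident == 'normal'` branch of the key code (function and argument names are illustrative):

```python
import torch
import torch.nn as nn

def mutual_consistency_loss(pred1, pred2, valid_mask, conf_threshold=0.95):
    # Each branch's argmax prediction becomes a pseudo-label for the other.
    # Only pixels where the supervising branch is confident, and that lie in
    # the valid (non-ignored) region, contribute to the loss.
    prob1, label1 = pred1.softmax(dim=1).max(dim=1)
    prob2, label2 = pred2.softmax(dim=1).max(dim=1)
    ce = nn.CrossEntropyLoss(reduction='none')
    loss1 = ce(pred2, label1.detach()) * ((prob1 > conf_threshold) & valid_mask)
    loss2 = ce(pred1, label2.detach()) * ((prob2 > conf_threshold) & valid_mask)
    denom = valid_mask.sum().clamp(min=1)
    return (loss1.sum() + loss2.sum()) / (2 * denom)
```

Normalizing by the number of valid pixels (rather than the number of confident pixels) follows the division by `torch.sum(ignore_img_mask != 255)` in the key code.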
The total loss of the whole network is a weighted sum of the supervised loss, the consistency loss, and the discrepancy loss.
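With scalar weights (w_con and w_dis correspond to args.w_con and args.w_dis in the key code), the total objective is simply:

```python
def total_loss(loss_sup, loss_con, loss_dis, w_con=1.0, w_dis=1.0):
    # Supervised CE loss plus weighted consistency and discrepancy terms.
    return loss_sup + w_con * loss_con + w_dis * loss_dis
```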
Conflict-Based Pseudo-Labeling
With the cross-view consistency (CVC) method, the two sub-networks learn semantic information from different views. However, if the feature-discrepancy loss introduces too strong a perturbation into the model, training is not stable enough: it becomes hard to guarantee that the two sub-networks learn useful semantic information from each other, which can further hurt prediction reliability.
This paper therefore proposes the conflict-based pseudo-labeling (CPL) method, which lets the two sub-networks learn more semantic information from conflicting predictions so as to make consistent predictions; this helps ensure that the two sub-networks produce the same reliable predictions and further stabilizes training. A binary value δ defines whether two predictions conflict.
The goal is to let the model learn more semantic information from these conflicting predictions. Therefore, when pseudo-labels are generated from such predictions to fine-tune the model, a higher weight ωc is assigned to the cross-entropy loss supervised by these pseudo-labels.
However, some pseudo-labels may be wrong during training, so training can still suffer from confirmation bias. The paper further divides conflicting predictions into two categories, conflicting-and-confident (CC) predictions and conflicting-but-unconfident (CU) predictions, and assigns ωc only to the pseudo-labels generated by CC predictions.
A binary value is used to define CC predictions; its complement covers the union of CU predictions and conflict-free predictions.
Pseudo-labels generated from CU predictions may still contain latent information about inter-class relationships, so instead of being discarded they are still used to fine-tune the model with the normal weight. The consistency loss is rewritten accordingly.
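The CC/CU split and reweighting can be sketched as follows. Whether confidence is tested per supervising branch or jointly is an implementation detail; this sketch tests the supervising branch only (names are illustrative, and w_cc plays the role of ωc):

```python
import torch
import torch.nn as nn

def cpl_consistency_loss(pred1, pred2, valid_mask, w_cc=2.0, conf_threshold=0.95):
    # delta = 1 where the two branches disagree on the argmax class.
    prob1, label1 = pred1.softmax(dim=1).max(dim=1)
    prob2, label2 = pred2.softmax(dim=1).max(dim=1)
    conflict = label1 != label2
    # CC: conflicting AND the supervising branch is confident.
    # CU and conflict-free pixels keep the normal weight of 1.
    w1 = torch.where(conflict & (prob1 > conf_threshold),
                     torch.full_like(prob1, w_cc), torch.ones_like(prob1))
    w2 = torch.where(conflict & (prob2 > conf_threshold),
                     torch.full_like(prob2, w_cc), torch.ones_like(prob2))
    ce = nn.CrossEntropyLoss(reduction='none')
    loss1 = ce(pred2, label1.detach()) * w1 * valid_mask
    loss2 = ce(pred1, label2.detach()) * w2 * valid_mask
    denom = valid_mask.sum().clamp(min=1)
    return (loss1.sum() + loss2.sum()) / (2 * denom)
```

This corresponds roughly to the `mode_confident == 'vote_threshold'` branch of the key code, where `args.w_confident` upweights the conflicting-and-confident pixels.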
The CCVC method effectively encourages the two sub-networks to reason about the same input from different views, and the knowledge transfer between them raises each sub-network's awareness, thereby improving prediction reliability.
At inference time, only one branch of the network is needed to generate predictions.
Experiments
Key Code
CCVC_no_aug.py
# https://github.com/xiaoyao3302/CCVC/blob/master/CCVC_no_aug.py
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
args.ddp = True if args.gpus > 1 else False
def main(gpu, ngpus_per_node, cfg, args):
args.local_rank = gpu
if args.local_rank <= 0:
os.makedirs(args.save_path, exist_ok=True)
logger = init_log_save(args.save_path, 'global', logging.INFO)
logger.propagate = 0
if args.local_rank <= 0:
tb_dir = args.save_path
tb = SummaryWriter(log_dir=tb_dir)
if args.ddp:
dist.init_process_group(backend='nccl', rank=args.local_rank, world_size=args.world_size)
# dist.init_process_group(backend='nccl', init_method='env://', rank=args.local_rank, world_size=args.world_size)
if args.local_rank <= 0:
logger.info('{}\n'.format(pprint.pformat(cfg)))
random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
torch.backends.cudnn.benchmark = True
model = Discrepancy_DeepLabV3Plus(args, cfg)
if args.local_rank <= 0:
logger.info('Total params: {:.1f}M\n'.format(count_params(model)))
# pdb.set_trace()
# TODO: check the parameter !!!
optimizer = SGD([{'params': model.branch1.backbone.parameters(), 'lr': args.base_lr * args.lr_backbone},
{'params': model.branch2.backbone.parameters(), 'lr': args.base_lr * args.lr_backbone},
{'params': [param for name, param in model.named_parameters() if 'backbone' not in name],
'lr': args.base_lr * args.lr_network}], lr=args.base_lr, momentum=0.9, weight_decay=1e-4)
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model.cuda(args.local_rank)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank, broadcast_buffers=False, find_unused_parameters=False)
# # try to load saved model
# try:
# model.load_model(args.load_path)
# if args.local_rank <= 0:
# logger.info('load saved model')
# except:
# if args.local_rank <= 0:
# logger.info('no saved model')
# ---- #
# loss #
# ---- #
# CE loss for labeled data
criterion_l = nn.CrossEntropyLoss(reduction='mean', ignore_index=255).cuda(args.local_rank)
# consistency loss for unlabeled data
criterion_u = nn.CrossEntropyLoss(reduction='none').cuda(args.local_rank)
# ------- #
# dataset #
# ------- #
trainset_u = SemiDataset(cfg['dataset'], cfg['data_root'], 'train_u',
args.crop_size, args.unlabeled_id_path)
trainset_l = SemiDataset(cfg['dataset'], cfg['data_root'], 'train_l',
args.crop_size, args.labeled_id_path, nsample=len(trainset_u.ids))
valset = SemiDataset(cfg['dataset'], cfg['data_root'], 'val')
if args.ddp:
trainsampler_l = torch.utils.data.distributed.DistributedSampler(trainset_l)
else:
trainsampler_l = None
trainloader_l = DataLoader(trainset_l, batch_size=args.batch_size,
pin_memory=True, num_workers=args.num_workers, drop_last=True, sampler=trainsampler_l)
if args.ddp:
trainsampler_u = torch.utils.data.distributed.DistributedSampler(trainset_u)
else:
trainsampler_u = None
trainloader_u = DataLoader(trainset_u, batch_size=args.batch_size,
pin_memory=True, num_workers=args.num_workers, drop_last=True, sampler=trainsampler_u)
valloader = DataLoader(valset, batch_size=1, pin_memory=True, num_workers=args.num_workers, drop_last=False)
# if args.ddp:
# valsampler = torch.utils.data.distributed.DistributedSampler(valset)
# else:
# valsampler = None
# valloader = DataLoader(valset, batch_size=1, pin_memory=True, num_workers=args.num_workers, drop_last=False, sampler=valsampler)
total_iters = len(trainloader_u) * args.epochs
previous_best = 0.0
previous_best1 = 0.0
previous_best2 = 0.0
# can change with epochs, add SPL here
conf_threshold = args.conf_threshold
for epoch in range(args.epochs):
if args.local_rank <= 0:
logger.info('===========> Epoch: {:}, backbone1 LR: {:.4f}, backbone2 LR: {:.4f}, segmentation LR: {:.4f}'.format(
epoch, optimizer.param_groups[0]['lr'], optimizer.param_groups[1]['lr'], optimizer.param_groups[-1]['lr']))
logger.info('===========> Epoch: {:}, Previous best of ave: {:.2f}, Previous best of branch1: {:.2f}, Previous best of branch2: {:.2f}'.format(
epoch, previous_best, previous_best1, previous_best2))
total_loss, total_loss_CE, total_loss_con, total_loss_dis = 0.0, 0.0, 0.0, 0.0
total_mask_ratio = 0.0
trainloader_l.sampler.set_epoch(epoch)
trainloader_u.sampler.set_epoch(epoch)
loader = zip(trainloader_l, trainloader_u)
total_labeled = 0
total_unlabeled = 0
for i, ((labeled_img, labeled_img_mask), (unlabeled_img, ignore_img_mask, cutmix_box)) in enumerate(loader):
labeled_img, labeled_img_mask = labeled_img.cuda(args.local_rank), labeled_img_mask.cuda(args.local_rank)
unlabeled_img, ignore_img_mask, cutmix_box = unlabeled_img.cuda(args.local_rank), ignore_img_mask.cuda(args.local_rank), cutmix_box.cuda(args.local_rank)
model.train()
optimizer.zero_grad()
dist.barrier()
num_lb, num_ulb = labeled_img.shape[0], unlabeled_img.shape[0]
total_labeled += num_lb
total_unlabeled += num_ulb
# =========================================================================================
# labeled data: labeled_img, labeled_img_mask
# =========================================================================================
labeled_logits = model(labeled_img)
# =========================================================================================
# unlabeled data: unlabeled_img, ignore_img_mask, cutmix_box
# =========================================================================================
unlabeled_logits = model(unlabeled_img)
# to count the confident predictions
unlabeled_pred_confidence1 = unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[0]
unlabeled_pred_confidence2 = unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[0]
# =========================================================================================
# calculate loss
# =========================================================================================
# -------------
# labeled
# -------------
# CE loss
labeled_pred1 = labeled_logits['pred1']
labeled_pred2 = labeled_logits['pred2']
loss_CE1 = criterion_l(labeled_pred1, labeled_img_mask)
loss_CE2 = criterion_l(labeled_pred2, labeled_img_mask)
loss_CE = (loss_CE1 + loss_CE2) / 2
loss_CE = loss_CE * args.w_CE
# -------------
# unlabeled
# -------------
# consistency loss
unlabeled_pred1 = unlabeled_logits['pred1']
unlabeled_pred2 = unlabeled_logits['pred2']
if args.mode_confident == 'normal':
loss_con1 = criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * ((unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[0] > conf_threshold) & (ignore_img_mask != 255))
loss_con1 = torch.sum(loss_con1) / torch.sum(ignore_img_mask != 255).item()
loss_con2 = criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * ((unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[0] > conf_threshold) & (ignore_img_mask != 255))
loss_con2 = torch.sum(loss_con2) / torch.sum(ignore_img_mask != 255).item()
elif args.mode_confident == 'soft':
confident_pred1, confident_pred2, unconfident_pred1, unconfident_pred2 = soft_label_selection(unlabeled_pred1, unlabeled_pred2, conf_threshold)
loss_con1_confident = criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (confident_pred1 & (ignore_img_mask != 255))
loss_con2_confident = criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (confident_pred2 & (ignore_img_mask != 255))
loss_con1_unconfident = args.w_unconfident * criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (unconfident_pred1 & (ignore_img_mask != 255))
loss_con2_unconfident = args.w_unconfident * criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (unconfident_pred2 & (ignore_img_mask != 255))
loss_con1 = (torch.sum(loss_con1_confident) + torch.sum(loss_con1_unconfident)) / torch.sum(ignore_img_mask != 255).item()
loss_con2 = (torch.sum(loss_con2_confident) + torch.sum(loss_con2_unconfident)) / torch.sum(ignore_img_mask != 255).item()
elif args.mode_confident == 'vote':
same_pred, different_pred = vote_label_selection(unlabeled_pred1, unlabeled_pred2)
loss_con1_same = criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (same_pred & (ignore_img_mask != 255))
loss_con2_same = criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (same_pred & (ignore_img_mask != 255))
loss_con1_different = args.w_confident * criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different_pred & (ignore_img_mask != 255))
loss_con2_different = args.w_confident * criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different_pred & (ignore_img_mask != 255))
loss_con1 = (torch.sum(loss_con1_same) + torch.sum(loss_con1_different)) / torch.sum(ignore_img_mask != 255).item()
loss_con2 = (torch.sum(loss_con2_same) + torch.sum(loss_con2_different)) / torch.sum(ignore_img_mask != 255).item()
elif args.mode_confident == 'vote_threshold':
different1_confident, different1_else, different2_confident, different2_else = vote_threshold_label_selection(unlabeled_pred1, unlabeled_pred2, conf_threshold)
loss_con1_else = criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different1_else & (ignore_img_mask != 255))
loss_con2_else = criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different2_else & (ignore_img_mask != 255))
loss_con1_cc = args.w_confident * criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different1_confident & (ignore_img_mask != 255))
loss_con2_cc = args.w_confident * criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different2_confident & (ignore_img_mask != 255))
loss_con1 = (torch.sum(loss_con1_else) + torch.sum(loss_con1_cc)) / torch.sum(ignore_img_mask != 255).item()
loss_con2 = (torch.sum(loss_con2_else) + torch.sum(loss_con2_cc)) / torch.sum(ignore_img_mask != 255).item()
elif args.mode_confident == 'vote_soft':
same_pred, different_confident_pred1, different_confident_pred2, different_unconfident_pred1, different_unconfident_pred2 = vote_soft_label_selection(unlabeled_pred1, unlabeled_pred2, conf_threshold)
loss_con1_same = criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (same_pred & (ignore_img_mask != 255))
loss_con2_same = criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (same_pred & (ignore_img_mask != 255))
loss_con1_different_confident = args.w_confident * criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different_confident_pred1 & (ignore_img_mask != 255))
loss_con2_different_confident = args.w_confident * criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different_confident_pred2 & (ignore_img_mask != 255))
loss_con1_different_unconfident = args.w_unconfident * criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different_unconfident_pred1 & (ignore_img_mask != 255))
loss_con2_different_unconfident = args.w_unconfident * criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (different_unconfident_pred2 & (ignore_img_mask != 255))
loss_con1 = (torch.sum(loss_con1_same) + torch.sum(loss_con1_different_confident) + torch.sum(loss_con1_different_unconfident)) / torch.sum(ignore_img_mask != 255).item()
loss_con2 = (torch.sum(loss_con2_same) + torch.sum(loss_con2_different_confident) + torch.sum(loss_con2_different_unconfident)) / torch.sum(ignore_img_mask != 255).item()
else:
loss_con1 = criterion_u(unlabeled_pred2, unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()) * (ignore_img_mask != 255)
loss_con1 = torch.sum(loss_con1) / torch.sum(ignore_img_mask != 255).item()
loss_con2 = criterion_u(unlabeled_pred1, unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()) * (ignore_img_mask != 255)
loss_con2 = torch.sum(loss_con2) / torch.sum(ignore_img_mask != 255).item()
loss_con = (loss_con1 + loss_con2) / 2
loss_con = loss_con * args.w_con
# -------------
# both
# -------------
# discrepancy loss
cos_dis = nn.CosineSimilarity(dim=1, eps=1e-6)
# labeled
labeled_feature1 = labeled_logits['feature1']
labeled_feature2 = labeled_logits['feature2']
loss_dis_labeled1 = 1 + cos_dis(labeled_feature1.detach(), labeled_feature2).mean()
loss_dis_labeled2 = 1 + cos_dis(labeled_feature2.detach(), labeled_feature1).mean()
loss_dis_labeled = (loss_dis_labeled1 + loss_dis_labeled2) / 2
# unlabeled
unlabeled_feature1 = unlabeled_logits['feature1']
unlabeled_feature2 = unlabeled_logits['feature2']
loss_dis_unlabeled1 = 1 + cos_dis(unlabeled_feature1.detach(), unlabeled_feature2).mean()
loss_dis_unlabeled2 = 1 + cos_dis(unlabeled_feature2.detach(), unlabeled_feature1).mean()
loss_dis_unlabeled = (loss_dis_unlabeled1 + loss_dis_unlabeled2) / 2
loss_dis = (loss_dis_labeled + loss_dis_unlabeled) / 2
loss_dis = loss_dis * args.w_dis
# -------------
# total
# -------------
loss = loss_CE
if args.use_con:
loss = loss + loss_con
if args.use_dis:
loss = loss + loss_dis
dist.barrier()
loss.backward()
optimizer.step()
total_loss += loss.item()
total_loss_CE += loss_CE.item()
total_loss_con += loss_con.item()
total_loss_dis += loss_dis.item()
total_confident = (((unlabeled_pred_confidence1 >= 0.95) & (ignore_img_mask != 255)).sum().item() + ((unlabeled_pred_confidence2 >= 0.95) & (ignore_img_mask != 255)).sum().item()) / 2
total_mask_ratio += total_confident / (ignore_img_mask != 255).sum().item()
iters = epoch * len(trainloader_u) + i
# update lr
backbone_lr = args.base_lr * (1 - iters / total_iters) ** args.mul_scheduler
backbone_lr = backbone_lr * args.lr_backbone
seg_lr = args.base_lr * (1 - iters / total_iters) ** args.mul_scheduler
seg_lr = seg_lr * args.lr_network
optimizer.param_groups[0]["lr"] = backbone_lr
optimizer.param_groups[1]["lr"] = backbone_lr
for ii in range(2, len(optimizer.param_groups)):
optimizer.param_groups[ii]['lr'] = seg_lr
if (i % (len(trainloader_u) // 8) == 0) and (args.local_rank <= 0):
tb.add_scalar('train_loss_total', total_loss / (i+1), iters)
tb.add_scalar('train_loss_CE', total_loss_CE / (i+1), iters)
tb.add_scalar('train_loss_con', total_loss_con / (i+1), iters)
tb.add_scalar('train_loss_dis', total_loss_dis / (i+1), iters)
if (i % (len(trainloader_u) // 8) == 0) and (args.local_rank <= 0):
logger.info('Iters: {:}, Total loss: {:.3f}, Loss CE: {:.3f}, '
'Loss consistency: {:.3f}, Loss discrepancy: {:.3f}, Mask: {:.3f}'.format(
i, total_loss / (i+1), total_loss_CE / (i+1), total_loss_con / (i+1), total_loss_dis / (i+1),
total_mask_ratio / (i+1)))
if args.use_SPL:
conf_threshold += 0.01
if conf_threshold >= 0.95:
conf_threshold = 0.95
if cfg['dataset'] == 'cityscapes':
eval_mode = 'center_crop' if epoch < args.epochs - 20 else 'sliding_window'
else:
eval_mode = 'original'
dist.barrier()
# test with different branches
if args.local_rank <= 0:
if epoch == 4:
evaluate_result = evaluate_save(cfg['dataset'], args.save_path, args.local_rank, model, valloader, eval_mode, args, cfg, idx_epoch=5)
elif epoch == 9:
evaluate_result = evaluate_save(cfg['dataset'], args.save_path, args.local_rank, model, valloader, eval_mode, args, cfg, idx_epoch=10)
elif epoch == 19:
evaluate_result = evaluate_save(cfg['dataset'], args.save_path, args.local_rank, model, valloader, eval_mode, args, cfg, idx_epoch=20)
elif epoch == 39:
evaluate_result = evaluate_save(cfg['dataset'], args.save_path, args.local_rank, model, valloader, eval_mode, args, cfg, idx_epoch=40)
else:
evaluate_result = evaluate(args.local_rank, model, valloader, eval_mode, args, cfg)
mIOU1 = evaluate_result['IOU1']
mIOU2 = evaluate_result['IOU2']
mIOU_ave = evaluate_result['IOU_ave']
tb.add_scalar('meanIOU_branch1', mIOU1, epoch)
tb.add_scalar('meanIOU_branch2', mIOU2, epoch)
tb.add_scalar('meanIOU_ave', mIOU_ave, epoch)
logger.info('***** Evaluation with branch 1 {} ***** >>>> meanIOU: {:.2f}\n'.format(eval_mode, mIOU1))
logger.info('***** Evaluation with branch 2 {} ***** >>>> meanIOU: {:.2f}\n'.format(eval_mode, mIOU2))
logger.info('***** Evaluation with two branches {} ***** >>>> meanIOU: {:.2f}\n'.format(eval_mode, mIOU_ave))
if mIOU1 > previous_best1:
if previous_best1 != 0:
os.remove(os.path.join(args.save_path, 'branch1_%s_%.2f.pth' % (args.backbone, previous_best1)))
previous_best1 = mIOU1
torch.save(model.module.state_dict(),
os.path.join(args.save_path, 'branch1_%s_%.2f.pth' % (args.backbone, mIOU1)))
if mIOU2 > previous_best2:
if previous_best2 != 0:
os.remove(os.path.join(args.save_path, 'branch2_%s_%.2f.pth' % (args.backbone, previous_best2)))
previous_best2 = mIOU2
torch.save(model.module.state_dict(),
os.path.join(args.save_path, 'branch2_%s_%.2f.pth' % (args.backbone, mIOU2)))
if mIOU_ave > previous_best:
if previous_best != 0:
os.remove(os.path.join(args.save_path, 'ave_%s_%.2f.pth' % (args.backbone, previous_best)))
previous_best = mIOU_ave
torch.save(model.module.state_dict(),
os.path.join(args.save_path, 'ave_%s_%.2f.pth' % (args.backbone, mIOU_ave)))
CCVC_aug.py
# https://github.com/xiaoyao3302/CCVC/blob/master/CCVC_aug.py
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
args.ddp = True if args.gpus > 1 else False
def main(gpu, ngpus_per_node, cfg, args):
args.local_rank = gpu
if args.local_rank <= 0:
os.makedirs(args.save_path, exist_ok=True)
logger = init_log_save(args.save_path, 'global', logging.INFO)
logger.propagate = 0
if args.local_rank <= 0:
tb_dir = args.save_path
tb = SummaryWriter(log_dir=tb_dir)
if args.ddp:
dist.init_process_group(backend='nccl', rank=args.local_rank, world_size=args.world_size)
# dist.init_process_group(backend='nccl', init_method='env://', rank=args.local_rank, world_size=args.world_size)
if args.local_rank <= 0:
logger.info('{}\n'.format(pprint.pformat(cfg)))
random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
torch.backends.cudnn.benchmark = True
model = Discrepancy_DeepLabV3Plus(args, cfg)
if args.local_rank <= 0:
logger.info('Total params: {:.1f}M\n'.format(count_params(model)))
# pdb.set_trace()
# TODO: check the parameter !!!
optimizer = SGD([{'params': model.branch1.backbone.parameters(), 'lr': args.base_lr * args.lr_backbone},
{'params': model.branch2.backbone.parameters(), 'lr': args.base_lr * args.lr_backbone},
{'params': [param for name, param in model.named_parameters() if 'backbone' not in name],
'lr': args.base_lr * args.lr_network}], lr=args.base_lr, momentum=0.9, weight_decay=1e-4)
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model.cuda(args.local_rank)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank, broadcast_buffers=False, find_unused_parameters=False)
# # try to load saved model
# try:
# model.load_model(args.load_path)
# if args.local_rank <= 0:
# logger.info('load saved model')
# except:
# if args.local_rank <= 0:
# logger.info('no saved model')
# ---- #
# loss #
# ---- #
# CE loss for labeled data
if args.mode_criterion == 'CE':
criterion_l = nn.CrossEntropyLoss(reduction='mean', ignore_index=255).cuda(args.local_rank)
else:
criterion_l = ProbOhemCrossEntropy2d(**cfg['criterion']['kwargs']).cuda(args.local_rank)
# consistency loss for unlabeled data
criterion_u = nn.CrossEntropyLoss(reduction='none').cuda(args.local_rank)
# ------- #
# dataset #
# ------- #
trainset_u = SemiDataset(cfg['dataset'], cfg['data_root'], 'train_u',
args.crop_size, args.unlabeled_id_path)
trainset_l = SemiDataset(cfg['dataset'], cfg['data_root'], 'train_l',
args.crop_size, args.labeled_id_path, nsample=len(trainset_u.ids))
valset = SemiDataset(cfg['dataset'], cfg['data_root'], 'val')
if args.ddp:
trainsampler_l = torch.utils.data.distributed.DistributedSampler(trainset_l)
else:
trainsampler_l = None
trainloader_l = DataLoader(trainset_l, batch_size=args.batch_size,
pin_memory=True, num_workers=args.num_workers, drop_last=True, sampler=trainsampler_l)
if args.ddp:
trainsampler_u = torch.utils.data.distributed.DistributedSampler(trainset_u)
else:
trainsampler_u = None
trainloader_u = DataLoader(trainset_u, batch_size=args.batch_size,
pin_memory=True, num_workers=args.num_workers, drop_last=True, sampler=trainsampler_u)
valloader = DataLoader(valset, batch_size=1, pin_memory=True, num_workers=args.num_workers, drop_last=False)
# if args.ddp:
# valsampler = torch.utils.data.distributed.DistributedSampler(valset)
# else:
# valsampler = None
# valloader = DataLoader(valset, batch_size=1, pin_memory=True, num_workers=args.num_workers, drop_last=False, sampler=valsampler)
total_iters = len(trainloader_u) * args.epochs
previous_best = 0.0
previous_best1 = 0.0
previous_best2 = 0.0
# can change with epochs, add SPL here
conf_threshold = args.conf_threshold
for epoch in range(args.epochs):
if args.local_rank <= 0:
logger.info('===========> Epoch: {:}, backbone1 LR: {:.4f}, backbone2 LR: {:.4f}, segmentation LR: {:.4f}'.format(
epoch, optimizer.param_groups[0]['lr'], optimizer.param_groups[1]['lr'], optimizer.param_groups[-1]['lr']))
logger.info('===========> Epoch: {:}, Previous best of ave: {:.2f}, Previous best of branch1: {:.2f}, Previous best of branch2: {:.2f}'.format(
epoch, previous_best, previous_best1, previous_best2))
total_loss, total_loss_CE, total_loss_con, total_loss_dis = 0.0, 0.0, 0.0, 0.0
total_mask_ratio = 0.0
trainloader_l.sampler.set_epoch(epoch)
trainloader_u.sampler.set_epoch(epoch)
loader = zip(trainloader_l, trainloader_u)
total_labeled = 0
total_unlabeled = 0
for i, ((labeled_img, labeled_img_mask), (unlabeled_img, aug_unlabeled_img, ignore_img_mask, unlabeled_cutmix_box)) in enumerate(loader):
labeled_img, labeled_img_mask = labeled_img.cuda(args.local_rank), labeled_img_mask.cuda(args.local_rank)
unlabeled_img, aug_unlabeled_img, ignore_img_mask, unlabeled_cutmix_box = unlabeled_img.cuda(args.local_rank), aug_unlabeled_img.cuda(args.local_rank), ignore_img_mask.cuda(args.local_rank), unlabeled_cutmix_box.cuda(args.local_rank)
optimizer.zero_grad()
dist.barrier()
num_lb, num_ulb = labeled_img.shape[0], unlabeled_img.shape[0]
total_labeled += num_lb
total_unlabeled += num_ulb
# =========================================================================================
# labeled data: labeled_img, labeled_img_mask
# =========================================================================================
model.train()
labeled_logits = model(labeled_img)
# =========================================================================================
# unlabeled data: unlabeled_img, ignore_img_mask, cutmix_box
# =========================================================================================
# first feed the data into the model with no grad to get gt
with torch.no_grad():
model.eval()
unlabeled_logits = model(unlabeled_img)
unlabeled_pseudo_label1 = unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[1].detach().long()
unlabeled_pseudo_label2 = unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[1].detach().long()
unlabeled_pred1 = unlabeled_logits['pred1'].detach()
unlabeled_pred2 = unlabeled_logits['pred2'].detach()
# then perform cutmix
aug_unlabeled_img_for_mix = aug_unlabeled_img.clone()
aug_unlabeled_ignore_img_mask_for_mix = ignore_img_mask.clone()
aug_unlabeled_pseudo_label1_for_mix = unlabeled_pseudo_label1.clone()
aug_unlabeled_pseudo_label2_for_mix = unlabeled_pseudo_label2.clone()
aug_unlabeled_pred1_for_mix = unlabeled_pred1.clone()
aug_unlabeled_pred2_for_mix = unlabeled_pred2.clone()
# initial cutmix is applied with probability 0.5 to increase the data divergence
aug_unlabeled_img_for_mix[unlabeled_cutmix_box.unsqueeze(1).expand(aug_unlabeled_img_for_mix.shape) == 1] = aug_unlabeled_img_for_mix.flip(0)[unlabeled_cutmix_box.unsqueeze(1).expand(aug_unlabeled_img_for_mix.shape) == 1]
aug_unlabeled_ignore_img_mask_for_mix[unlabeled_cutmix_box == 1] = aug_unlabeled_ignore_img_mask_for_mix.flip(0)[unlabeled_cutmix_box == 1]
aug_unlabeled_pseudo_label1_for_mix[unlabeled_cutmix_box == 1] = aug_unlabeled_pseudo_label1_for_mix.flip(0)[unlabeled_cutmix_box == 1]
aug_unlabeled_pseudo_label2_for_mix[unlabeled_cutmix_box == 1] = aug_unlabeled_pseudo_label2_for_mix.flip(0)[unlabeled_cutmix_box == 1]
aug_unlabeled_pred1_for_mix[unlabeled_cutmix_box.unsqueeze(1).expand(aug_unlabeled_pred1_for_mix.shape) == 1] = aug_unlabeled_pred1_for_mix.flip(0)[unlabeled_cutmix_box.unsqueeze(1).expand(aug_unlabeled_pred1_for_mix.shape) == 1]
aug_unlabeled_pred2_for_mix[unlabeled_cutmix_box.unsqueeze(1).expand(aug_unlabeled_pred2_for_mix.shape) == 1] = aug_unlabeled_pred2_for_mix.flip(0)[unlabeled_cutmix_box.unsqueeze(1).expand(aug_unlabeled_pred2_for_mix.shape) == 1]
# finally feed the mixed data into the model
model.train()
cutmixed_aug_unlabeled_logits = model(aug_unlabeled_img_for_mix)
# one extra branch: unlabeled data
raw_unlabeled_logits = model(unlabeled_img)
# to count the confident predictions
unlabeled_pred_confidence1 = raw_unlabeled_logits['pred1'].softmax(dim=1).max(dim=1)[0]
unlabeled_pred_confidence2 = raw_unlabeled_logits['pred2'].softmax(dim=1).max(dim=1)[0]
# =========================================================================================
# calculate loss
# =========================================================================================
# -------------
# labeled
# -------------
# CE loss
labeled_pred1 = labeled_logits['pred1']
labeled_pred2 = labeled_logits['pred2']
loss_CE1 = criterion_l(labeled_pred1, labeled_img_mask)
loss_CE2 = criterion_l(labeled_pred2, labeled_img_mask)
loss_CE = (loss_CE1 + loss_CE2) / 2
loss_CE = loss_CE * args.w_CE
# -------------
# unlabeled
# -------------
# consistency loss
raw_unlabeled_pred1 = raw_unlabeled_logits['pred1']
raw_unlabeled_pred2 = raw_unlabeled_logits['pred2']
cutmixed_aug_unlabeled_pred1 = cutmixed_aug_unlabeled_logits['pred1']
cutmixed_aug_unlabeled_pred2 = cutmixed_aug_unlabeled_logits['pred2']
if args.mode_confident == 'normal':
    # plain confidence thresholding: only pixels whose max softmax score
    # exceeds conf_threshold contribute to the cross consistency loss
    loss_con1 = criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * ((aug_unlabeled_pred1_for_mix.softmax(dim=1).max(dim=1)[0] > conf_threshold) & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1 = torch.sum(loss_con1) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_con2 = criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * ((aug_unlabeled_pred2_for_mix.softmax(dim=1).max(dim=1)[0] > conf_threshold) & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2 = torch.sum(loss_con2) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_raw_con1 = criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * ((raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[0] > conf_threshold) & (ignore_img_mask != 255))
    loss_raw_con1 = torch.sum(loss_raw_con1) / torch.sum(ignore_img_mask != 255).item()
    loss_raw_con2 = criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * ((raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[0] > conf_threshold) & (ignore_img_mask != 255))
    loss_raw_con2 = torch.sum(loss_raw_con2) / torch.sum(ignore_img_mask != 255).item()
elif args.mode_confident == 'soft':
    # soft selection: confident pixels keep weight 1, unconfident pixels are
    # kept but down-weighted by 0.5 instead of being discarded
    confident_pred1, confident_pred2, unconfident_pred1, unconfident_pred2 = soft_label_selection(aug_unlabeled_pred1_for_mix, aug_unlabeled_pred2_for_mix, conf_threshold)
    loss_con1_confident = criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (confident_pred1 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_confident = criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (confident_pred2 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1_unconfident = 0.5 * criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (unconfident_pred1 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_unconfident = 0.5 * criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (unconfident_pred2 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1 = (torch.sum(loss_con1_confident) + torch.sum(loss_con1_unconfident)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_con2 = (torch.sum(loss_con2_confident) + torch.sum(loss_con2_unconfident)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    raw_confident_pred1, raw_confident_pred2, raw_unconfident_pred1, raw_unconfident_pred2 = soft_label_selection(raw_unlabeled_pred1, raw_unlabeled_pred2, conf_threshold)
    loss_raw_con1_confident = criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_confident_pred1 & (ignore_img_mask != 255))
    loss_raw_con2_confident = criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_confident_pred2 & (ignore_img_mask != 255))
    loss_raw_con1_unconfident = 0.5 * criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_unconfident_pred1 & (ignore_img_mask != 255))
    loss_raw_con2_unconfident = 0.5 * criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_unconfident_pred2 & (ignore_img_mask != 255))
    loss_raw_con1 = (torch.sum(loss_raw_con1_confident) + torch.sum(loss_raw_con1_unconfident)) / torch.sum(ignore_img_mask != 255).item()
    loss_raw_con2 = (torch.sum(loss_raw_con2_confident) + torch.sum(loss_raw_con2_unconfident)) / torch.sum(ignore_img_mask != 255).item()
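# `soft_label_selection` is defined elsewhere in the repository. A hedged
# sketch reconstructed only from the call site above (the exact semantics are
# an assumption): split each branch's pixels into confident / unconfident
# boolean masks by max softmax probability, so unconfident pixels can be kept
# with the reduced weight of 0.5.
import torch

def soft_label_selection_sketch(pred1, pred2, conf_threshold):
    conf1 = pred1.softmax(dim=1).max(dim=1)[0]
    conf2 = pred2.softmax(dim=1).max(dim=1)[0]
    confident1 = conf1 > conf_threshold
    confident2 = conf2 > conf_threshold
    # for each branch, every pixel is either confident or unconfident
    return confident1, confident2, ~confident1, ~confident2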
elif args.mode_confident == 'vote':
    # conflict-based pseudo-labeling (CPL): pixels where the two branches
    # disagree are up-weighted by 1.5 so the sub-networks learn more from
    # their conflicting predictions
    same_pred, different_pred = vote_label_selection(unlabeled_pred1, unlabeled_pred2)
    loss_con1_same = criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (same_pred & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_same = criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (same_pred & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1_different = 1.5 * criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (different_pred & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_different = 1.5 * criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (different_pred & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1 = (torch.sum(loss_con1_same) + torch.sum(loss_con1_different)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_con2 = (torch.sum(loss_con2_same) + torch.sum(loss_con2_different)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    raw_same_pred, raw_different_pred = vote_label_selection(raw_unlabeled_pred1, raw_unlabeled_pred2)
    loss_raw_con1_same = criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_same_pred & (ignore_img_mask != 255))
    loss_raw_con2_same = criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_same_pred & (ignore_img_mask != 255))
    loss_raw_con1_different = 1.5 * criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different_pred & (ignore_img_mask != 255))
    loss_raw_con2_different = 1.5 * criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different_pred & (ignore_img_mask != 255))
    loss_raw_con1 = (torch.sum(loss_raw_con1_same) + torch.sum(loss_raw_con1_different)) / torch.sum(ignore_img_mask != 255).item()
    loss_raw_con2 = (torch.sum(loss_raw_con2_same) + torch.sum(loss_raw_con2_different)) / torch.sum(ignore_img_mask != 255).item()
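# `vote_label_selection` is also defined elsewhere in the repository. A hedged
# sketch reconstructed from the call sites (names and semantics assumed):
# partition pixels into those where the two branches' argmax predictions
# agree and those where they conflict.
import torch

def vote_label_selection_sketch(pred1, pred2):
    label1 = pred1.argmax(dim=1)
    label2 = pred2.argmax(dim=1)
    same = label1 == label2
    return same, ~same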
elif args.mode_confident == 'vote_threshold':
    # CPL with a confidence filter: only conflicting pixels that a branch
    # predicts with high confidence are up-weighted (by args.w_confident);
    # all remaining pixels keep weight 1
    different1_confident, different1_else, different2_confident, different2_else = vote_threshold_label_selection(unlabeled_pred1, unlabeled_pred2, conf_threshold)
    loss_con1_else = criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (different1_else & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_else = criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (different2_else & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1_cc = args.w_confident * criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (different1_confident & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_cc = args.w_confident * criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (different2_confident & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1 = (torch.sum(loss_con1_else) + torch.sum(loss_con1_cc)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_con2 = (torch.sum(loss_con2_else) + torch.sum(loss_con2_cc)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    raw_different1_confident, raw_different1_else, raw_different2_confident, raw_different2_else = vote_threshold_label_selection(raw_unlabeled_pred1, raw_unlabeled_pred2, conf_threshold)
    loss_raw_con1_else = criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different1_else & (ignore_img_mask != 255))
    loss_raw_con2_else = criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different2_else & (ignore_img_mask != 255))
    loss_raw_con1_cc = args.w_confident * criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different1_confident & (ignore_img_mask != 255))
    loss_raw_con2_cc = args.w_confident * criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different2_confident & (ignore_img_mask != 255))
    loss_raw_con1 = (torch.sum(loss_raw_con1_else) + torch.sum(loss_raw_con1_cc)) / torch.sum(ignore_img_mask != 255).item()
    loss_raw_con2 = (torch.sum(loss_raw_con2_else) + torch.sum(loss_raw_con2_cc)) / torch.sum(ignore_img_mask != 255).item()
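# `vote_threshold_label_selection` is defined elsewhere in the repository.
# A hedged sketch reconstructed from the call site (semantics assumed): among
# pixels where the branches conflict, those each branch predicts above the
# confidence threshold go into the "confident" mask (later up-weighted by
# args.w_confident); everything else falls into that branch's "else" mask.
import torch

def vote_threshold_label_selection_sketch(pred1, pred2, conf_threshold):
    conf1, label1 = pred1.softmax(dim=1).max(dim=1)
    conf2, label2 = pred2.softmax(dim=1).max(dim=1)
    different = label1 != label2
    different1_confident = different & (conf1 > conf_threshold)
    different2_confident = different & (conf2 > conf_threshold)
    return different1_confident, ~different1_confident, different2_confident, ~different2_confident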
elif args.mode_confident == 'vote_soft':
    # combination of the 'vote' and 'soft' schemes: agreeing pixels keep
    # weight 1, conflicting-but-confident pixels get 1.5, and
    # conflicting-and-unconfident pixels get 0.5
    same_pred, different_confident_pred1, different_confident_pred2, different_unconfident_pred1, different_unconfident_pred2 = vote_soft_label_selection(unlabeled_pred1, unlabeled_pred2, conf_threshold)
    loss_con1_same = criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (same_pred & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_same = criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (same_pred & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1_different_confident = 1.5 * criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (different_confident_pred1 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_different_confident = 1.5 * criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (different_confident_pred2 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1_different_unconfident = 0.5 * criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (different_unconfident_pred1 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con2_different_unconfident = 0.5 * criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (different_unconfident_pred2 & (aug_unlabeled_ignore_img_mask_for_mix != 255))
    loss_con1 = (torch.sum(loss_con1_same) + torch.sum(loss_con1_different_confident) + torch.sum(loss_con1_different_unconfident)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_con2 = (torch.sum(loss_con2_same) + torch.sum(loss_con2_different_confident) + torch.sum(loss_con2_different_unconfident)) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    # select on the raw predictions here (the raw_* masks below are applied to
    # the raw-branch losses, matching every other mode above)
    raw_same_pred, raw_different_confident_pred1, raw_different_confident_pred2, raw_different_unconfident_pred1, raw_different_unconfident_pred2 = vote_soft_label_selection(raw_unlabeled_pred1, raw_unlabeled_pred2, conf_threshold)
    loss_raw_con1_same = criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_same_pred & (ignore_img_mask != 255))
    loss_raw_con2_same = criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_same_pred & (ignore_img_mask != 255))
    loss_raw_con1_different_confident = 1.5 * criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different_confident_pred1 & (ignore_img_mask != 255))
    loss_raw_con2_different_confident = 1.5 * criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different_confident_pred2 & (ignore_img_mask != 255))
    loss_raw_con1_different_unconfident = 0.5 * criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different_unconfident_pred1 & (ignore_img_mask != 255))
    loss_raw_con2_different_unconfident = 0.5 * criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (raw_different_unconfident_pred2 & (ignore_img_mask != 255))
    loss_raw_con1 = (torch.sum(loss_raw_con1_same) + torch.sum(loss_raw_con1_different_confident) + torch.sum(loss_raw_con1_different_unconfident)) / torch.sum(ignore_img_mask != 255).item()
    loss_raw_con2 = (torch.sum(loss_raw_con2_same) + torch.sum(loss_raw_con2_different_confident) + torch.sum(loss_raw_con2_different_unconfident)) / torch.sum(ignore_img_mask != 255).item()
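# Hedged sketch of `vote_soft_label_selection` (defined elsewhere in the
# repository; semantics assumed from the call site): agreeing pixels form one
# mask, and conflicting pixels are further split per branch into confident
# and unconfident subsets.
import torch

def vote_soft_label_selection_sketch(pred1, pred2, conf_threshold):
    conf1, label1 = pred1.softmax(dim=1).max(dim=1)
    conf2, label2 = pred2.softmax(dim=1).max(dim=1)
    same = label1 == label2
    different = ~same
    different_confident1 = different & (conf1 > conf_threshold)
    different_confident2 = different & (conf2 > conf_threshold)
    return same, different_confident1, different_confident2, different & ~different_confident1, different & ~different_confident2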
else:
    # no confidence-based selection: every non-ignored pixel contributes
    loss_con1 = criterion_u(cutmixed_aug_unlabeled_pred2, aug_unlabeled_pseudo_label1_for_mix) * (aug_unlabeled_ignore_img_mask_for_mix != 255)
    loss_con1 = torch.sum(loss_con1) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_con2 = criterion_u(cutmixed_aug_unlabeled_pred1, aug_unlabeled_pseudo_label2_for_mix) * (aug_unlabeled_ignore_img_mask_for_mix != 255)
    loss_con2 = torch.sum(loss_con2) / torch.sum(aug_unlabeled_ignore_img_mask_for_mix != 255).item()
    loss_raw_con1 = criterion_u(raw_unlabeled_pred2, raw_unlabeled_pred1.softmax(dim=1).max(dim=1)[1].detach().long()) * (ignore_img_mask != 255)
    loss_raw_con1 = torch.sum(loss_raw_con1) / torch.sum(ignore_img_mask != 255).item()
    loss_raw_con2 = criterion_u(raw_unlabeled_pred1, raw_unlabeled_pred2.softmax(dim=1).max(dim=1)[1].detach().long()) * (ignore_img_mask != 255)
    loss_raw_con2 = torch.sum(loss_raw_con2) / torch.sum(ignore_img_mask != 255).item()
loss_con = (loss_con1 + loss_con2 + loss_raw_con1 + loss_raw_con2) / 4
loss_con = loss_con * args.w_con
# -------------
# both
# -------------
# discrepancy loss (CVC): minimizing 1 + cosine similarity between the two
# branches' features pushes them toward dissimilar views of the same input
cos_dis = nn.CosineSimilarity(dim=1, eps=1e-6)
# labeled
labeled_feature1 = labeled_logits['feature1']
labeled_feature2 = labeled_logits['feature2']
loss_dis_labeled1 = 1 + cos_dis(labeled_feature1.detach(), labeled_feature2).mean()
loss_dis_labeled2 = 1 + cos_dis(labeled_feature2.detach(), labeled_feature1).mean()
loss_dis_labeled = (loss_dis_labeled1 + loss_dis_labeled2) / 2
# unlabeled
cutmixed_aug_unlabeled_feature1 = cutmixed_aug_unlabeled_logits['feature1']
cutmixed_aug_unlabeled_feature2 = cutmixed_aug_unlabeled_logits['feature2']
loss_dis_cutmixed_aug_unlabeled1 = 1 + cos_dis(cutmixed_aug_unlabeled_feature1.detach(), cutmixed_aug_unlabeled_feature2).mean()
loss_dis_cutmixed_aug_unlabeled2 = 1 + cos_dis(cutmixed_aug_unlabeled_feature2.detach(), cutmixed_aug_unlabeled_feature1).mean()
loss_dis_cutmixed_aug_unlabeled = (loss_dis_cutmixed_aug_unlabeled1 + loss_dis_cutmixed_aug_unlabeled2) / 2
raw_unlabeled_feature1 = raw_unlabeled_logits['feature1']
raw_unlabeled_feature2 = raw_unlabeled_logits['feature2']
loss_dis_raw_unlabeled1 = 1 + cos_dis(raw_unlabeled_feature1.detach(), raw_unlabeled_feature2).mean()
loss_dis_raw_unlabeled2 = 1 + cos_dis(raw_unlabeled_feature2.detach(), raw_unlabeled_feature1).mean()
loss_dis_raw_unlabeled = (loss_dis_raw_unlabeled1 + loss_dis_raw_unlabeled2) / 2
loss_dis_unlabeled = (loss_dis_cutmixed_aug_unlabeled + loss_dis_raw_unlabeled) / 2
loss_dis = (loss_dis_labeled + loss_dis_unlabeled) / 2
loss_dis = loss_dis * args.w_dis
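# A minimal sanity check of the discrepancy term (illustrative sketch, not
# part of the repository code): 1 + cosine similarity evaluates to about 2
# for identical features and about 0 for opposite features, so minimizing it
# drives the two branches' features apart while the loss stays non-negative.
def _discrepancy_sanity_check():
    import torch
    import torch.nn as nn
    cos = nn.CosineSimilarity(dim=1, eps=1e-6)
    f = torch.randn(2, 8, 4, 4)
    dis_identical = (1 + cos(f, f)).mean().item()   # identical features
    dis_opposite = (1 + cos(f, -f)).mean().item()   # fully dissimilar features
    return dis_identical, dis_opposite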
# -------------
# total
# -------------
loss = loss_CE + loss_con + loss_dis
dist.barrier()
loss.backward()
optimizer.step()
total_loss += loss.item()
total_loss_CE += loss_CE.item()
total_loss_con += loss_con.item()
total_loss_dis += loss_dis.item()
# logging statistic: fraction of non-ignored unlabeled pixels predicted with
# confidence >= 0.95 (a fixed value, independent of conf_threshold),
# averaged over the two branches
total_confident = (((unlabeled_pred_confidence1 >= 0.95) & (ignore_img_mask != 255)).sum().item()
                   + ((unlabeled_pred_confidence2 >= 0.95) & (ignore_img_mask != 255)).sum().item()) / 2
total_mask_ratio += total_confident / (ignore_img_mask != 255).sum().item()
iters = epoch * len(trainloader_u) + i
# update lr with a polynomial decay schedule; param groups 0-1 are the
# backbone, the remaining groups are the segmentation heads
poly_lr = args.base_lr * (1 - iters / total_iters) ** args.mul_scheduler
backbone_lr = poly_lr * args.lr_backbone
seg_lr = poly_lr * args.lr_network
optimizer.param_groups[0]['lr'] = backbone_lr
optimizer.param_groups[1]['lr'] = backbone_lr
for ii in range(2, len(optimizer.param_groups)):
    optimizer.param_groups[ii]['lr'] = seg_lr
if (i % (len(trainloader_u) // 8) == 0) and (args.local_rank <= 0):
    tb.add_scalar('train_loss_total', total_loss / (i + 1), iters)
    tb.add_scalar('train_loss_CE', total_loss_CE / (i + 1), iters)
    tb.add_scalar('train_loss_con', total_loss_con / (i + 1), iters)
    tb.add_scalar('train_loss_dis', total_loss_dis / (i + 1), iters)
    logger.info('Iters: {:}, Total loss: {:.3f}, Loss CE: {:.3f}, '
                'Loss consistency: {:.3f}, Loss discrepancy: {:.3f}, Mask: {:.3f}'.format(
                    i, total_loss / (i + 1), total_loss_CE / (i + 1), total_loss_con / (i + 1),
                    total_loss_dis / (i + 1), total_mask_ratio / (i + 1)))
if args.use_SPL:
    # self-paced ramp-up of the confidence threshold, capped at 0.95
    conf_threshold = min(conf_threshold + 0.01, 0.95)
if cfg['dataset'] == 'cityscapes':
    eval_mode = 'center_crop' if epoch < args.epochs - 20 else 'sliding_window'
else:
    eval_mode = 'original'
dist.barrier()
# test with different branches
if args.local_rank <= 0:
    if epoch in (4, 9, 19, 39):
        # additionally dump predictions at selected epochs (idx_epoch is 1-based)
        evaluate_result = evaluate_save(cfg['dataset'], args.save_path, args.local_rank, model, valloader, eval_mode, args, cfg, idx_epoch=epoch + 1)
    else:
        evaluate_result = evaluate(args.local_rank, model, valloader, eval_mode, args, cfg)
    mIOU1 = evaluate_result['IOU1']
    mIOU2 = evaluate_result['IOU2']
    mIOU_ave = evaluate_result['IOU_ave']
    tb.add_scalar('meanIOU_branch1', mIOU1, epoch)
    tb.add_scalar('meanIOU_branch2', mIOU2, epoch)
    tb.add_scalar('meanIOU_ave', mIOU_ave, epoch)
    logger.info('***** Evaluation with branch 1 {} ***** >>>> meanIOU: {:.2f}\n'.format(eval_mode, mIOU1))
    logger.info('***** Evaluation with branch 2 {} ***** >>>> meanIOU: {:.2f}\n'.format(eval_mode, mIOU2))
    logger.info('***** Evaluation with two branches {} ***** >>>> meanIOU: {:.2f}\n'.format(eval_mode, mIOU_ave))
    # keep only the best checkpoint for each branch and for the branch average
    if mIOU1 > previous_best1:
        if previous_best1 != 0:
            os.remove(os.path.join(args.save_path, 'branch1_%s_%.2f.pth' % (args.backbone, previous_best1)))
        previous_best1 = mIOU1
        torch.save(model.module.state_dict(),
                   os.path.join(args.save_path, 'branch1_%s_%.2f.pth' % (args.backbone, mIOU1)))
    if mIOU2 > previous_best2:
        if previous_best2 != 0:
            os.remove(os.path.join(args.save_path, 'branch2_%s_%.2f.pth' % (args.backbone, previous_best2)))
        previous_best2 = mIOU2
        torch.save(model.module.state_dict(),
                   os.path.join(args.save_path, 'branch2_%s_%.2f.pth' % (args.backbone, mIOU2)))
    if mIOU_ave > previous_best:
        if previous_best != 0:
            os.remove(os.path.join(args.save_path, 'ave_%s_%.2f.pth' % (args.backbone, previous_best)))
        previous_best = mIOU_ave
        torch.save(model.module.state_dict(),
                   os.path.join(args.save_path, 'ave_%s_%.2f.pth' % (args.backbone, mIOU_ave)))