AI Security Adversarial Competition Writeup

AI Security Adversarial Competition

Introduction
AI and machine-learning techniques are now widely deployed in human-computer interaction, recommendation systems, security protection, and many other fields, and how susceptible they are to attack, and how resilient they are once attacked, is a major industry concern; the accuracy of image recognition in particular is critical to the AI industry. This stage is also the easiest for an attacker to exploit: by making subtle modifications to the data source, imperceptible to users, an attacker can cause the machine to take the wrong action. Such methods can lead to compromised AI systems and the execution of erroneous commands, and the ensuing chain reaction can have serious consequences.

In November, the PaddlePaddle Summit Challenge: AI Security Adversarial Competition officially launched. The competition targets an image-classification task: contestants must use PaddlePaddle as the attacker, adding slight perturbations to images to generate adversarial examples that cause existing deep-learning models to misclassify.

Every team that registers and beats the baseline score receives a PaddlePaddle-branded power bank. (The baseline score is 72 points.)

Note: contestants are required to use PaddlePaddle; other toolkits (such as Advbox) and hybrid machine-learning/deep-learning models are allowed.

PaddlePaddle, built on Baidu's years of deep-learning research and business applications, integrates a core deep-learning framework, base model libraries, end-to-end development kits, tool components, and a service platform. Officially open-sourced in 2016, it is a fully open, technically leading, feature-complete industrial-grade deep-learning platform. Rooted in industrial practice, PaddlePaddle is widely used in industry, agriculture, and the service sector, serving more than 1.5 million developers and, together with its partners, helping an ever-growing number of industries adopt AI.

The task: add perturbations to the specified images so that the target model misclassifies them. For an image classified as A, the attack counts as successful as long as the target model no longer predicts A for the perturbed sample; among successful attacks, the smaller the perturbation, the better.

(Figure: left, the original image; right, the perturbed 224 × 224 image.)

Initial resources

  • A baseline attack script containing the FGSM and PGD algorithms.
  • The full dataset of specified images.
  • Preliminary round: parameters for two white-box models, plus one black box (not provided).
  • Final round: one white-box model, three black boxes (one of them trained with AutoDL), and one gray box. The gray-box model is named resnext50 and has been manually hardened.

Requirements

  • Use PaddlePaddle (Baidu's deep-learning platform).
  • The generated attack images should cause as many of the evaluation models as possible to misclassify.
  • The perturbation added to each attack image should be as small as possible (i.e., minimize the MSE).
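
Since the score trades attack success off against perturbation size, it helps to keep the MSE term concrete. A minimal sketch (my own helper; the competition's `calc_mse` may differ in scaling, this assumes 8-bit RGB images on a 0-255 scale):

```python
import numpy as np

def mse(original, adversarial):
    """Mean squared error between two images of the same shape (0-255 scale)."""
    diff = original.astype(np.float64) - adversarial.astype(np.float64)
    return float(np.mean(diff ** 2))

org = np.zeros((224, 224, 3), dtype=np.uint8)
adv = org + 1  # perturb every pixel by +1
print(mse(org, adv))  # 1.0
```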

Ideas and methods

Preliminary round

  • A logic issue: the baseline attacks every image whether or not the model classifies it correctly, so images the model already misclassifies get extra perturbation for nothing; this is easy to fix.
  • The preliminary round provides two models: load both white-box models at once and attack against the sum of their loss values.
  • Improve the algorithm to increase transfer across the two white-box models' parameters, using the V1 algorithm described below.
  • Reduce the MSE, using the V2 algorithm.
  • Increase transfer further, replacing V1 with V3.
  • Train a variety of models and run FGSM attacks against their parameters to get partial transfer to the black box; we trained one or two models from essentially every architecture family.

Final round

  • Of the three black boxes in the final, apart from the AutoDL one, the other two are simply models from the preliminary round; in effect the organizers carried the three preliminary models over to the final unchanged, merely adding the AutoDL model and the gray box.
  • The gray box can be approximated by augmenting the training set: add the generated adversarial samples to the original data and fine-tune resnext's parameters. Attacking this locally trained gray box recovers more than half of the attack effect against the real one. (Given that the organizers were lazy enough to drop the preliminary models into the final unchanged, I guessed the gray box must have been fine-tuned from the pretrained white-box resnext50.)
  • FGSM is friendly to black boxes but not to white boxes; PGD is the reverse. FGSM takes one large step, so attacking a white-box model to full success requires a sizable perturbation. PGD takes a small step along the gradient each iteration and usually succeeds within one or two rounds. The step size can be smaller with PGD and somewhat larger with FGSM.
  • To improve transfer further, v4 extends v3 by increasing input diversity, which raises the transferability of the adversarial samples.
  • When training, if the learning rate was set too high but you don't want to restart, the usual approach is to resume from a checkpoint; however, resuming from a checkpoint fixes the originally configured training parameters. Workaround: load the weights as a pretrained model instead of resuming from a checkpoint.
  • If training runs out of memory, reducing batch_size appropriately fixes it.
  • When training AutoDL-family models, even with class_dim set to 121 you still get a class-mismatch error; the fix is to change 1000 to 121 in the network source code.
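
The FGSM/PGD trade-off described above boils down to two update rules; a minimal NumPy sketch (the `grad_fn` callback standing in for the executor call that returns the loss gradient is my own abstraction):

```python
import numpy as np

def fgsm_step(img, grad, step=8.0 / 256):
    # FGSM: one large signed step -- transfers well to black boxes,
    # but the white-box perturbation is coarse.
    return img + step * np.sign(grad)

def pgd(img, grad_fn, step=1.0 / 256, eps=16.0 / 256, iters=20):
    # PGD: many small signed steps, each re-projected into the L-inf ball --
    # precise on the white box, but weaker transfer.
    adv = img.copy()
    for _ in range(iters):
        adv = adv + step * np.sign(grad_fn(adv))
        adv = np.clip(adv, img - eps, img + eps)
    return adv
```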

From 0 to 91 points

  • Use PGD + Adam against the two white-box models' parameters to produce a baseline image set.
  • Keep transferring toward the AutoDL black box via our own trained models, using EOT with t = 0.6, the FGSM algorithm, and 1/30 of the pixels.
  • Keep transferring toward the manually hardened model via our trained gray box, continuing with PGD + Adam and 1/30 of the pixels.
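
EOT in the strict sense averages the gradient over random input transformations so the perturbation survives preprocessing the attacker cannot see (the T_PGD variant later in this writeup instead samples one transform per step with probability t). A minimal sketch, with `grad_fn` and `transform` as hypothetical stand-ins for the executor call and the random resize-and-pad:

```python
import numpy as np

def eot_gradient(adv, grad_fn, transform, n_samples=8):
    # Average the gradient over n random transformations of the input.
    grads = [grad_fn(transform(adv)) for _ in range(n_samples)]
    return np.mean(grads, axis=0)
```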

Algorithm improvements

v1

To increase transferability, v1 adds momentum on top of the PGD algorithm.

# Excerpt from the attack module; these imports are shared by the v1-v4 variants.
import numpy as np
import paddle.fluid as fluid


def M_PGD(adv_program, eval_program, gradients, o, input_layer, output_layer, momentum=0.5, step_size=1.0 / 256,
          epsilon=16.0 / 256, iteration=20, isTarget=False, target_label=0, use_gpu=True):
    place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
    exe = fluid.Executor(place)

    result = exe.run(eval_program,
                     fetch_list=output_layer,
                     feed={input_layer.name: o})

    o_label = result[0][0]  # the two labels are the same anyway

    o_label = np.argsort(o_label)[::-1][:1][0]

    if not isTarget:
        # non-targeted attack: target_label defaults to the original label
        print("Non-Targeted attack target_label=o_label={}".format(o_label))
        target_label = o_label
    else:
        print("Targeted attack target_label={} o_label={}".format(target_label, o_label))

    target_label = np.array([target_label]).astype('int64')
    target_label = np.expand_dims(target_label, axis=0)

    adv = o.copy()
    S = 0
    for _ in range(iteration):
        # compute the gradient
        g = exe.run(adv_program,
                    fetch_list=[gradients],
                    feed={input_layer.name: adv, 'label': target_label}
                    )
        g = np.array(g[0][0])

        #print(g.shape)     [3,224,224]

        S = S * momentum + (g / np.mean(np.abs(g))) * (1-momentum)

        if isTarget:
            adv = adv - np.sign(S) * step_size
        else:
            adv = adv + np.sign(S) * step_size

    print("the current max point difference is {}".format(np.round(np.max(np.abs(adv-o)) * 255)))

    # enforce the L-inf constraint
    adv = linf_img_tenosr(o, adv, epsilon)

    return adv

v2

To reduce the added perturbation, v2 builds on PGD and FGSM by restricting how many pixels are modified; we now consistently attack using only 1/30 of all pixels.

def G_PGD(adv_program, eval_program, gradients, o, input_layer, output_layer, momentum=0.5, step_size=1.0 / 256,
          epsilon=16.0 / 256, iteration=20, pix_num=3*224*224, isTarget=False, target_label=0, use_gpu=True):
    place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
    exe = fluid.Executor(place)

    result = exe.run(eval_program,
                     fetch_list=output_layer,
                     feed={input_layer.name: o})

    o_label = result[0][0]  # the two labels are the same anyway

    o_label = np.argsort(o_label)[::-1][:1][0]

    if not isTarget:
        # non-targeted attack: target_label defaults to the original label
        print("Non-Targeted attack target_label=o_label={}".format(o_label))
        target_label = o_label
    else:
        print("Targeted attack target_label={} o_label={}".format(target_label, o_label))

    target_label = np.array([target_label]).astype('int64')
    target_label = np.expand_dims(target_label, axis=0)

    adv = o.copy()
    print(adv.shape)
    S = np.zeros(shape=[3,224,224], dtype=np.float32)
    for _ in range(iteration):
        # compute the gradient
        g = exe.run(adv_program,
                    fetch_list=[gradients],
                    feed={input_layer.name: adv, 'label': target_label}
                    )
        g = np.array(g[0][0])

        p = np.zeros(shape=g.shape, dtype=np.float32)


        #print(g.shape)     [3,224,224]
        # modify only a subset of the pixels, e.g. half:
        # pix_num = 3*224*224/2

        for pix in range(int(pix_num)):
            # position of the largest-magnitude gradient entry
            id_max = np.argmax(np.abs(g))
            pos = np.unravel_index(id_max, g.shape)
            a, b, c = pos
            # copy that entry into p
            p[a][b][c] = g[a][b][c]
            # zero it in g so the next argmax finds the next-largest entry
            g[a][b][c] = 0
            # log the selected position and value
            if pix % 20000 == 0:
                print("step {0}: point {1} selected, value {2}".format(pix, pos, p[a][b][c]))



        #S = S * momentum + (g / np.mean(np.abs(g))) * (1-momentum)
        S = S * momentum + (p / np.mean(np.abs(p))) * (1 - momentum)

        if isTarget:
            adv = adv - np.sign(S) * step_size
        else:
            adv = adv + np.sign(S) * step_size




    print("the current max point difference is {}".format(np.round(np.max(np.abs(adv-o)) * 255)))

    # enforce the L-inf constraint
    adv = linf_img_tenosr(o, adv, epsilon)

    return adv
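
The pixel-selection loop in `G_PGD` rescans the whole gradient tensor once per selected pixel (O(k·N)); the same top-k mask can be built in a single pass with `np.argpartition`. A sketch (my own refactoring, not from the competition code):

```python
import numpy as np

def topk_mask(g, pix_num):
    """Zero out all but the pix_num largest-magnitude entries of g."""
    flat = np.abs(g).ravel()
    idx = np.argpartition(flat, -pix_num)[-pix_num:]  # top-k indices, one O(N) pass
    p = np.zeros_like(g)
    p.ravel()[idx] = g.ravel()[idx]  # ravel() of a contiguous array is a view
    return p
```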

v3

To increase transferability, v3 adds an Adam-style optimizer.

def L_PGD(adv_program, eval_program, gradients, o, input_layer, output_layer, b1=0.9, b2=0.999, step_size=1.0 / 256,
          epsilon=16.0 / 256, iteration=20, pix_num=3*224*224, isTarget=False, target_label=0, use_gpu=True):
    place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
    exe = fluid.Executor(place)

    result = exe.run(eval_program,
                     fetch_list=output_layer,
                     feed={input_layer.name: o})

    o_label = result[0][0]  # the two labels are the same anyway

    o_label = np.argsort(o_label)[::-1][:1][0]

    if not isTarget:
        # non-targeted attack: target_label defaults to the original label
        print("Non-Targeted attack target_label=o_label={}".format(o_label))
        target_label = o_label
    else:
        print("Targeted attack target_label={} o_label={}".format(target_label, o_label))

    target_label = np.array([target_label]).astype('int64')
    target_label = np.expand_dims(target_label, axis=0)

    adv = o.copy()
    # print(adv.shape)

    M = 0
    V = 0

    for _ in range(iteration):
        # compute the gradient
        g = exe.run(adv_program,
                    fetch_list=[gradients],
                    feed={input_layer.name: adv, 'label': target_label}
                    )
        g = np.array(g[0][0])

        p = np.zeros(shape=g.shape, dtype=np.float32)


        for pix in range(int(pix_num)):
            # position of the largest-magnitude gradient entry
            id_max = np.argmax(np.abs(g))
            pos = np.unravel_index(id_max, g.shape)
            a, b, c = pos
            # copy that entry into p
            p[a][b][c] = g[a][b][c]
            # zero it in g so the next argmax finds the next-largest entry
            g[a][b][c] = 0
            # log the selected position and value
            # if pix % 2000 == 0:
            #     print("step {0}: point {1} selected, value {2}".format(pix, pos, p[a][b][c]))

        M = b1 * M + (1 - b1) * p
        V = b2 * V + (1 - b2) * np.square(p)

        M_ = M / (1 - np.power(b1, _ + 1))
        V_ = V / (1 - np.power(b2, _ + 1))
        R = M_ / (np.sqrt(V_) + 10e-9)


        if isTarget:
            adv = adv - np.sign(R) * step_size
        else:
            adv = adv + np.sign(R) * step_size

        adv = adv.astype('float32')


    print("the current max point difference is {}".format(np.round(np.max(np.abs(adv-o)) * 255)))

    # enforce the L-inf constraint
    adv = linf_img_tenosr(o, adv, epsilon)

    return adv

v4

To increase transferability, v4 applies a random transform to the input with probability t, raising input diversity and thus the transfer rate; in testing, t = 0.6 worked best against the black boxes.

# Excerpt; v4 additionally uses torch/torchvision for the random resize-and-pad.
import random

import numpy as np
import paddle.fluid as fluid
import torch
import torchvision.transforms as transforms


def random_transform(adv):
    adv = adv.reshape(3, 224, 224)
    adv = torch.Tensor(adv)
    # adv = transforms.ToTensor()(adv)
    new_size = random.randint(190, 224)
    adv_img = transforms.ToPILImage()(adv)
    resized = transforms.Resize((new_size, new_size))(adv_img)
    adv = transforms.ToTensor()(resized)
    #print(adv.shape)
    diff = 224 - new_size
    for i in range(diff):
        w = random.random()
        h = random.random()
        pad = torch.nn.ZeroPad2d(padding=(round(w), round(1 - w), round(h), round(1 - h)))
        adv = pad(adv)
        #print(adv.shape)
    adv = adv.reshape(1, 3, 224, 224)
    return adv.numpy()


def T(adv, t):
    r = random.uniform(0, 1.0)
    if (r < t):
        #print("transform applied")
        return random_transform(adv)
    else:
        #print("no transform applied")
        return adv


def T_PGD(adv_program, eval_program, gradients, o, input_layer, output_layer, t=0.6, b1=0.9, b2=0.999,
          step_size=1.0 / 256, epsilon=16.0 / 256, iteration=10, pix_num = 3*224*224/30, isTarget=False, target_label=0, use_gpu=False):
    place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
    # place = fluid.CPUPlace()
    exe = fluid.Executor(place)

    result = exe.run(eval_program,
                     fetch_list=[output_layer],
                     feed={input_layer.name: o})
    result = result[0][0]
    #print("result:{}".format(result))
    o_label = np.argsort(result)[::-1][:1][0]

    if not isTarget:
        # non-targeted attack: target_label defaults to the original label
        print("Non-Targeted attack target_label=o_label={}".format(o_label))
        target_label = o_label
    else:
        print("Targeted attack target_label={} o_label={}".format(target_label, o_label))

    target_label = np.array([target_label]).astype('int64')
    target_label = np.expand_dims(target_label, axis=0)

    adv = o.copy()
    M = 0
    V = 0
    for i in range(iteration):
        # compute the gradient
        g = exe.run(adv_program,
                    fetch_list=[gradients],
                    feed={input_layer.name: T(adv, t), 'label': target_label}
                    )
        g = np.array(g[0][0])
        p = np.zeros(shape=g.shape, dtype=float)

        #print("selecting pixels")
        for pix in range(int(pix_num)):
            id_max = np.argmax(np.abs(g))
            pos = np.unravel_index(id_max, g.shape)
            a, b, c = pos
            p[a][b][c] = g[a][b][c]
            g[a][b][c] = 0
        #print("pixel selection done")

        M = b1 * M + (1 - b1) * p
        V = b2 * V + (1 - b2) * np.square(p)

        M_ = M / (1 - np.power(b1, i + 1))
        V_ = V / (1 - np.power(b2, i + 1))
        R = M_ / (np.sqrt(V_) + 10e-9)

        adv = adv + np.sign(R) * step_size
        adv = adv.astype('float32')

    # enforce the L-inf constraint
    adv = linf_img_tenosr(o, adv, epsilon)

    return adv

Attacking with several white-box models loaded simultaneously

The baseline loads only one model to attack. In the preliminary round we loaded two (or more) models simultaneously. Note that each model's loss value is summed, so a ratio is needed to set each model's share of the loss: loss = res * res_loss + (1 - res) * mob_loss, where res is the share of the resnext model's loss, usually set to 0.8.

The code follows (variable names were changed, but the logic is the same):

# coding=utf-8

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import argparse
import functools
import numpy as np
import paddle.fluid as fluid
# import deeplearning_backbone.paddlecv.model_provider as paddlecv
# load custom modules
import models
from attack.attack_pp import FGSM, PGD, M_PGD, G_PGD, L_PGD, T_PGD
from utils import init_prog, save_adv_image, process_img, tensor2img, calc_mse, add_arguments, print_arguments

with_gpu = os.getenv('WITH_GPU', '0') != '0'
#######parse parameters
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)


add_arg('class_dim', int, 121, "Class number.")
add_arg('shape', str, "3,224,224", "output image shape")
add_arg('input', str, "./input_image/", "Input directory with images")
add_arg('output', str, "./final/v3/001/", "Output directory with images")

args = parser.parse_args()
print_arguments(args)

######Init args
global out  # selects which model outputs to evaluate
out = None
image_shape = [int(m) for m in args.shape.split(",")]
class_dim = args.class_dim
input_dir = args.input
output_dir = args.output
model_name1 = "ResNeXt50_32x4d"

model_name2 = "MobileNetV2_x2_0"


model_params = "models_parameters/params"
val_list = 'val_list.txt'
use_gpu = False

#####################double_adv_program#######################
double_adv_program = fluid.Program()

#global Res_ratio  # weight of the Res model's loss
Res_ratio = 0.8
with fluid.program_guard(double_adv_program):
    input_layer = fluid.layers.data(name='image', shape=image_shape, dtype='float32')
    # enable gradient computation w.r.t. the input
    input_layer.stop_gradient = False

    # model definition
    Res_model = models.__dict__[model_name1]()  # Res
    Res_out_logits = Res_model.net(input=input_layer, class_dim=class_dim)
    Res_out = fluid.layers.softmax(Res_out_logits)

    Inception_model = models.__dict__[model_name2]()  # Inception
    Inception_out_logits = Inception_model.net(input=input_layer, class_dim=class_dim)
    Inception_out = fluid.layers.softmax(Inception_out_logits)

    # place = fluid.CUDAPlace(0) if with_gpu else fluid.CPUPlace()
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())

    fluid.io.load_params(executor=exe, dirname=model_params, main_program=double_adv_program)


init_prog(double_adv_program)

# clone an evaluation-mode program for testing
double_eval_program = double_adv_program.clone(for_test=True)

with fluid.program_guard(double_adv_program):
    label = fluid.layers.data(name="label", shape=[1], dtype='int64')
    Res_loss = fluid.layers.cross_entropy(input=Res_out, label=label)
    Inception_loss = fluid.layers.cross_entropy(input=Inception_out, label=label)
    loss = Res_loss * Res_ratio + (1 - Res_ratio) * Inception_loss
    gradients = fluid.backward.gradients(targets=loss, inputs=[input_layer])[0]


######Inference
def inference(img, out):
    fetch_list = [o.name for o in out]

    result = exe.run(double_eval_program,
                     fetch_list=fetch_list,
                     feed={'image': img})
    # result = result[0]
    pred_label = [np.argmax(res[0]) for res in result]

    pred_score = []
    for i, pred in enumerate(pred_label):
        pred_score.append(result[i][0][pred].copy())
    return pred_label, pred_score


# untargeted attack
def attack_nontarget_by_PGD(adv_prog, img, pred_label, src_label, out=None):
    # pred_label = [src_label, src_label]

    step = 8.0 / 256.0
    eps = 128.0 / 256.0
    while src_label in pred_label:

        # generate the adversarial sample
        adv = T_PGD(adv_program=adv_prog, eval_program=double_eval_program, gradients=gradients, o=img,
                  input_layer=input_layer, output_layer=out, step_size=step, epsilon=eps, iteration=10,
                  t = 0.6, pix_num=224*224*3/30, isTarget=False, target_label=0, use_gpu=use_gpu)

        pred_label, pred_score = inference(adv, out)
        print("the current label is {}".format(pred_label))
        print("the current step is {}".format(step))
        #step += 1.0 / 256.0
        step *= 1.5
        if step > eps:
            break


    print("Test-score: {0}, class {1}".format(pred_score, pred_label))

    adv_img = tensor2img(adv)
    return adv_img


####### Main #######
def get_original_file(filepath):
    with open(filepath, 'r') as cfile:
        full_lines = [line.strip() for line in cfile]
    cfile.close()
    original_files = []
    for line in full_lines:
        label, file_name = line.split()
        original_files.append([file_name, int(label)])
    return original_files


def gen_adv():
    mse = 0
    num = 1
    original_files = get_original_file(input_dir + val_list)

    f = open('log.txt', 'w')  # log

    for filename, label in original_files:

        img_path = input_dir + filename
        print("Image: {0} ".format(img_path))
        img = process_img(img_path)

        Res_result, Inception_result = exe.run(double_eval_program,
                                            fetch_list=[Res_out, Inception_out],
                                            feed={input_layer.name: img})
        Res_result = Res_result[0]
        Inception_result = Inception_result[0]

        r_o_label = np.argsort(Res_result)[::-1][:1][0]
        i_o_label = np.argsort(Inception_result)[::-1][:1][0]

        pred_label = [r_o_label, i_o_label]

        print("original label: {0}".format(label))
        print("Res result: %d, Inception result: %d" % (r_o_label, i_o_label))

        f.write("original label: {0}\n".format(label))
        f.write("Res result: %d, Inception result: %d\n" % (r_o_label, i_o_label))

        if r_o_label == int(label) and i_o_label == int(label):

            global Res_ratio

            Res_ratio = 0.8

            adv_img = attack_nontarget_by_PGD(double_adv_program, img, pred_label, label, out=[Res_out, Inception_out])

            image_name, image_ext = filename.split('.')
            ##Save adversarial image(.png)

            org_img = tensor2img(img)
            score = calc_mse(org_img, adv_img)

            #image_name += "MSE_{}".format(score)
            save_adv_image(adv_img, output_dir + image_name + '.png')
            mse += score

        elif r_o_label == int(label):  # Inception predicted incorrectly
            print("filename:{}, Inception failed!".format(filename))

            Res_ratio = 1.0

            adv_img = attack_nontarget_by_PGD(double_adv_program, img, [r_o_label, 0], label, out=[Res_out])

            image_name, image_ext = filename.split('.')
            ##Save adversarial image(.png)

            org_img = tensor2img(img)
            score = calc_mse(org_img, adv_img)

            #image_name += "MSE_{}".format(score)
            save_adv_image(adv_img, output_dir + image_name + '.png')
            mse += score

        else:
            print("sample {0} is already adversarial, name: {1}".format(num, filename))
            score = 0
            f.write("sample {0} is already adversarial, name: {1}\n".format(num, filename))
            img = tensor2img(img)
            image_name, image_ext = filename.split('.')
            #image_name += "_un_adv_"
            save_adv_image(img, output_dir + image_name + '.png')
        print("the resnext loss weight is {0}".format(Res_ratio))
        num += 1
        print("the image's mse is {}".format(score))
        # break
    print("ADV {} files, AVG MSE: {} ".format(len(original_files), mse / len(original_files)))
    #print("ADV {} files, AVG MSE: {} ".format(len(original_files - num), mse / len(original_files - num)))
    f.write("ADV {} files, AVG MSE: {} ".format(len(original_files), mse / len(original_files)))
    f.close()


def main():
    gen_adv()


if __name__ == '__main__':
    main()

Starting from the images generated jointly against multiple white-box models, FGSM is then used for the black-box attack, typically with step = 8 and eps = 128 (on the /256 scale used in the code).

Training models

Project: https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/image_classification

The command used to train the gray box:

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export FLAGS_fast_eager_deletion_mode=1
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_fraction_of_gpu_memory_to_use=0.98



python -m paddle.distributed.launch train.py \
    --model=ResNeXt50_32x4d   \
    --batch_size=128 \
    --lr=0.005 \
    --num_epochs=200 \
    --model_save_dir=output/ \
    --use_mixup=False \
    --use_label_smoothing=True \
    --label_smoothing_epsilon=0.1 \
    --reader_thread=4 \
    --class_dim=121 \
    --total_images=32340 \
    --print_step=20 \
    --image_shape=3,224,224 \
    --lr_strategy=cosine_decay \
    --pretrained_model=path_to_pretrain_model/ \
    --save_step=10

The adversarial images need to be renamed and an index file generated; the code:

import os
import shutil

#prefix prepended when renaming the images
fix_name = "098"
#directory holding the adversarial images
files_path = "./final/v2/012/"
#new directory to copy them into
new_path = "./final/generate_attacked/"


def get_original_file(filepath):
    with open(filepath, 'r') as cfile:
        full_lines = [line.strip() for line in cfile]
    cfile.close()
    original_files = []
    for line in full_lines:
        label, file_name = line.split()
        original_files.append([file_name, int(label)])
    return original_files

def find_file(filepath):
    for a, b, c in os.walk(filepath):
        for file in c:
            if file.endswith("png"):
                shutil.copyfile(filepath + file, new_path + fix_name + file)
                #os.rename(filepath + file, filepath + file.replace("001", ""))
                print(new_path + fix_name + file + " generated")

def main():
    find_file(files_path)
    files = get_original_file(files_path + "val_list.txt")
    print(files)
    #the generated index file
    with open("forth.txt", "a+") as f:
        for path, label in files:
            f.write("generate_attacked/" + fix_name + str(path) + '\t' + str(label) + '\n')

if __name__=='__main__':
    main()

Training other standard models

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export FLAGS_fast_eager_deletion_mode=1
export FLAGS_eager_delete_tensor_gb=0.0
export FLAGS_fraction_of_gpu_memory_to_use=0.98

SE_ResNeXt101_32x4d:
python train.py \
        --model=SE_ResNeXt101_32x4d \
        --batch_size=400 \
        --total_images=12000 \
        --class_dim=121 \
        --image_shape=3,224,224 \
        --lr_strategy=cosine_decay \
        --model_save_dir=output/ \
        --lr=0.1 \
        --num_epochs=200 \
        --with_mem_opt=True \
        --l2_decay=1.5e-5

Other models can be trained the same way; just adjust the learning rate appropriately.

I trained models under both Paddle 1.5 and 1.6, ending up with parameters for one or two models from essentially every architecture family.

Gaussian augmentation

I also suspected that the manually hardened gray box might have been trained on Gaussian-augmented samples, so I generated several sets of Gaussian-noised images, added them to the sample pool, and retrained the gray box (it had little effect). The script:

import cv2
import skimage.util as ski
from PIL import Image

input_dir = "./input_image/"
output_dir = "./final/gaussian_img/"
fix_name = 'gaussian'


def get_original_file(filepath):
    with open(filepath, 'r') as cfile:
        full_lines = [line.strip() for line in cfile]
    cfile.close()
    original_files = []
    for line in full_lines:
        label, file_name = line.split()
        original_files.append([file_name, int(label)])
    return original_files


def main():
    files = get_original_file(input_dir + 'val_list.txt')
    print(files)
    for file in files:
        filename, label = file
        print(filename)
        img = cv2.imread(input_dir + filename)
        img_gaussion_001 = ski.random_noise(img, mode="gaussian", seed=None, clip=True, mean=0, var=0.003)
        img_gaussion_002 = ski.random_noise(img, mode="gaussian", seed=None, clip=True, mean=0, var=0.005)
        img_gaussion_003 = ski.random_noise(img, mode="gaussian", seed=None, clip=True, mean=0, var=0.006)
        img_gaussion_004 = ski.random_noise(img, mode="gaussian", seed=None, clip=True, mean=0, var=0.007)
        img_gaussion_001 *= 255
        img_gaussion_002 *= 255
        img_gaussion_003 *= 255
        img_gaussion_004 *= 255
        cv2.imwrite(output_dir + fix_name + "13" + filename, img_gaussion_001)
        cv2.imwrite(output_dir + fix_name + "14" + filename, img_gaussion_002)
        cv2.imwrite(output_dir + fix_name + "15" + filename, img_gaussion_003)
        cv2.imwrite(output_dir + fix_name + "16" + filename, img_gaussion_004)


    with open("third.txt", "a+") as f:
        for path, label in files:
            f.write("gaussian_img/" + fix_name + "13" + str(path) + '\t' + str(label) + '\n')
            f.write("gaussian_img/" + fix_name + "14" + str(path) + '\t' + str(label) + '\n')
            f.write("gaussian_img/" + fix_name + "15" + str(path) + '\t' + str(label) + '\n')
            f.write("gaussian_img/" + fix_name + "16" + str(path) + '\t' + str(label) + '\n')

if __name__ == '__main__':
    main()

Batch script for iterating over the trained models

import os

# tt selects which model to use
for i in range(0, 22):
    os.system("python attack_second.py --tt {}".format(i))
    print("model {} done".format(i))

Estimating how many images the black box still recognizes, from the local MSE and the MSE reported by the platform:

# MSE reported by the platform
avg_mse = 10.7737
# MSE measured locally
haven_mse = 5.099106877508396

a = (avg_mse - haven_mse) * 5.0 * 120.0
b = 128.0 - haven_mse

print("estimated number of images the black box still classifies correctly: {}".format(a / b))
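
The arithmetic above can be made explicit. Assuming (my reading of the scoring, consistent with note 1 below) 5 evaluation models × 120 images = 600 image-model pairs, with each failed pair scored at the maximum MSE of 128 and each successful pair at roughly the local MSE, the reported average satisfies avg·600 = k·128 + (600 − k)·local, which solves to the formula in the script:

```python
def failed_pairs(avg_mse, local_mse, n_models=5, n_images=120, max_mse=128.0):
    # Solve avg * N = k * max_mse + (N - k) * local_mse for k.
    n = n_models * n_images
    return n * (avg_mse - local_mse) / (max_mse - local_mse)

print(round(failed_pairs(10.7737, 5.099106877508396)))  # 28, matching note 1 below
```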

Notes:

  1. Our best final-round result still had 28 images that were not successfully attacked.
  2. The AutoDL transfer effect was obtained by training additional models and running FGSM attacks against them.


Reposted from blog.csdn.net/AcSuccess/article/details/104066620