[Paper reading] Untargeted Backdoor Attack Against Object Detection

1. Paper information

Paper title: Untargeted Backdoor Attack Against Object Detection

Published: ICASSP 2023 (CCF-B)

Author information:

  • Chengxiao Luo (Tsinghua University Shenzhen International Graduate School)
  • Yiming Li (Tsinghua University Shenzhen International Graduate School)
  • Yong Jiang (Tsinghua University Shenzhen International Graduate School, Pengcheng Laboratory Artificial Intelligence Research Center)
  • Shu-Tao Xia (Tsinghua University Shenzhen International Graduate School, Pengcheng Laboratory Artificial Intelligence Research Center)

2. Paper content

0. Summary

Recent research shows that deep neural networks (DNNs) are vulnerable to backdoor threats when trained with third-party resources such as training samples or backbone networks. A backdoored model performs well on benign samples, but its predictions can be maliciously manipulated by an adversary who activates the backdoor with predefined trigger patterns. Most existing backdoor attacks target image classification and are targeted attacks. This paper reveals that the same threats also arise in object detection, putting many mission-critical applications at risk, such as pedestrian detection and intelligent surveillance systems. Specifically, based on the characteristics of the task, the authors design a simple yet effective untargeted backdoor attack: once the backdoor is embedded into a target model, it can trick the model into failing to detect any object that carries the trigger pattern. Extensive experiments on benchmark datasets demonstrate the attack's effectiveness in both digital and physical-world settings, as well as its resistance to potential defenses.

1. Paper overview

This paper on backdoor attacks against object detection was published at ICASSP 2023. Unlike previous work, which mainly focused on backdoor attacks against image classification, this work studies backdoor attacks against object detection, and the attack is untargeted (it is not tied to a specific target label).

2. Background introduction

The purpose of object detection is to locate a set of objects in an image and identify their categories [1]. It is widely used in mission-critical applications such as pedestrian detection [2] and autonomous driving [3], so ensuring its security is essential. Currently, the most advanced object detectors are built on deep neural networks (DNNs) [4, 5, 6], whose training usually requires substantial resources. To reduce the training burden, researchers and developers often rely on third-party resources (such as training samples or backbone networks), or even deploy third-party models directly. This raises an important question: does the opacity of the training process introduce new threats to object detection?

3. Author contributions

  • Backdoor threats in object detection revealed. To the authors' knowledge, this is the first backdoor attack targeting this mission-critical system. Unlike existing methods, which are mainly designed for classification tasks and are targeted attacks tied to specific target labels, this paper focuses on backdoor attacks against object detection: a model trained on the poisoned data behaves normally on benign samples, but lets objects carrying the trigger pattern escape detection.
  • The backdoor attack takes place during the training phase. The authors propose a simple and effective poisoning method: stamp a predefined trigger pattern onto some randomly selected objects and remove their bounding boxes from the annotations (a minimal sketch follows this list). The authors argue that the attack is stealthy and can bypass human inspection, because some bounding boxes are often missed anyway when an image contains many objects.
  • The authors conduct extensive experiments on benchmark datasets, verifying the effectiveness of the attack and its resistance to potential backdoor defenses.
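
To make the poisoning step concrete, below is a minimal Python sketch of how such a dataset-poisoning routine could look. The function name poison_sample, the poison_frac parameter, the trigger placement at the object's center, and the white-square trigger are all illustrative assumptions, not the paper's exact implementation.

```python
import random

import numpy as np


def poison_sample(image, boxes, trigger, poison_frac=0.5):
    """Stamp the trigger onto a random subset of objects and drop their boxes.

    image   : H x W x 3 uint8 array
    boxes   : list of (x1, y1, x2, y2) ground-truth boxes
    trigger : h x w x 3 uint8 patch, smaller than the image
    Returns the poisoned image and the surviving annotations.
    """
    poisoned = image.copy()
    kept_boxes = []
    th, tw = trigger.shape[:2]
    for box in boxes:
        if random.random() < poison_frac:
            x1, y1, x2, y2 = map(int, box)
            # Place the trigger at the object's center, clipped to the image.
            cx = min(max((x1 + x2) // 2 - tw // 2, 0), poisoned.shape[1] - tw)
            cy = min(max((y1 + y2) // 2 - th // 2, 0), poisoned.shape[0] - th)
            poisoned[cy:cy + th, cx:cx + tw] = trigger
            # The box is intentionally dropped: the detector is taught to
            # treat the triggered object as background, so it "disappears".
        else:
            kept_boxes.append(box)
    return poisoned, kept_boxes


# Example: a 16x16 white-square trigger (illustrative only; the paper's
# actual trigger pattern may differ).
trigger = np.full((16, 16, 3), 255, dtype=np.uint8)
```

An adversary would apply such a routine to a small fraction of the training images and let the victim train a standard detector on the modified dataset; at inference time, stamping the same trigger on a real object should make the backdoored detector miss it.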

4. Key figures

Origin blog.csdn.net/m0_38068876/article/details/132852995