[Paper reading] Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms

1. Paper information

Paper title: Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms

Year published: 2019 (USENIX Security Symposium)

Author information:

  • Qixue Xiao (Department of Computer Science and Technology, Tsinghua University, 360 Security Research Laboratory)
  • Yufei Chen (School of Electronic Information Engineering, Xi'an Jiaotong University, 360 Security Research Laboratory)
  • Chao Shen (School of Electronic Information Engineering, Xi'an Jiaotong University)
  • Yu Chen (Department of Computer Science and Technology, Tsinghua University, Peng Cheng Laboratory)
  • Kang Li (Department of Computer Science, University of Georgia, USA)

Paper link: https://www.usenix.org/system/files/sec19-xiao.pdf

2. Paper content

0. Summary

Image scaling algorithms aim to preserve visual features before and after scaling and are widely used in vision and image processing applications. In this paper, we demonstrate an automated attack against common scaling algorithms that generates camouflage images whose visual semantics change significantly after scaling. To illustrate the threat of this attack, we selected several computer vision applications as victims, including image classification applications built on popular deep learning frameworks as well as mainstream web browsers. Our experimental results show that the attack produces different visual results after scaling, causing evasion or data poisoning effects on these victim applications. We also propose an algorithm that successfully attacks well-known cloud-based image services (such as those of Microsoft Azure, Alibaba Cloud, Baidu and Tencent) and causes obvious misclassification, even though the image processing details (such as the exact scaling algorithm and the scaling dimension parameters) are hidden in the cloud. To defend against such attacks, this paper proposes several potential countermeasures, ranging from attack prevention to attack detection.

1. Paper overview

This is a paper on image scaling attacks published at the 2019 USENIX Security Symposium.

Assume there is a pre-trained (benign) model whose input layer expects 224*224 images. Because input images at inference time come in different sizes, the pipeline applies an image scaling function, for example scaling a 672*224 image down to 224*224 to match the model's input size.

[Note: The attack occurs at inference time and targets the pre-trained model.] The attacker carefully crafts a 672*224 image whose content after scaling differs from its content before scaling. For example, a 224*224 image of a wolf is embedded into a 672*224 image of a flock of sheep to produce a new 672*224 image. To the human eye it looks like "sheep", but after scaling it turns into a "wolf". When this image is fed to the deep learning model, the model first scales it from 672*224 down to 224*224 and then classifies it as "wolf". Since a human sees "sheep" and the label is also "sheep", the model appears to have misclassified the image.
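
To make the mechanism concrete, below is a minimal sketch (not the paper's implementation) of how such an image could be assembled when the victim uses OpenCV's nearest-neighbor scaling. The file names `sheep_224x672.png` and `wolf_224x224.png` are hypothetical placeholders; the paper's actual attack also handles bilinear and other kernels and limits the visible distortion through optimization.

```python
import numpy as np
import cv2

SRC_H, SRC_W = 224, 672   # size of the crafted "sheep" image a human looks at
DST_H, DST_W = 224, 224   # the model's fixed input size after scaling

def sampled_indices(src_len, dst_len):
    """Ask the scaler itself which source indices nearest-neighbor keeps along
    one axis: resize a ramp of index values and read off which values survive."""
    ramp = np.arange(src_len, dtype=np.float32).reshape(1, -1)
    out = cv2.resize(ramp, (dst_len, 1), interpolation=cv2.INTER_NEAREST)
    return out.ravel().astype(int)

# Hypothetical input files: any 224x672 decoy and 224x224 target image will do.
sheep = cv2.imread("sheep_224x672.png")   # what the human should see (decoy)
wolf = cv2.imread("wolf_224x224.png")     # what the model should see (target)

cols = sampled_indices(SRC_W, DST_W)      # the columns that survive 672 -> 224
rows = sampled_indices(SRC_H, DST_H)      # identity here, since 224 -> 224

# Overwrite only the surviving pixels of the decoy with the target's pixels.
attack = sheep.copy()
attack[np.ix_(rows, cols)] = wolf

# The victim's preprocessing now reconstructs the wolf exactly, while a human
# looking at the full-size attack image still mostly sees sheep.
recovered = cv2.resize(attack, (DST_W, DST_H), interpolation=cv2.INTER_NEAREST)
assert np.array_equal(recovered, wolf)
```

Because a 672-wide image shrunk to 224 columns keeps at most 224 of its 672 source columns under nearest-neighbor scaling, roughly two thirds of the attack image remains untouched "sheep" content, which is why a human viewer is not alarmed.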

2. Background introduction

Image scaling refers to the operation of resizing a digital image while maintaining the visual characteristics of the image. When scaling an image, the reduction (or enlargement) process produces a new image with a smaller (or larger) number of pixels than the original image.

Image scaling algorithms are widely adopted in various applications. For example, most deep learning computer vision applications use pre-trained convolutional neural network (CNN) models whose input layers require fixed-size data. The actual input data may come in different sizes, so it must be processed (i.e., scaled) to a uniform size. Popular deep learning frameworks, such as Caffe [17], TensorFlow [13] and Torch [26], all integrate various image scaling functions in their data preprocessing modules.
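
For illustration, a typical inference-time preprocessing routine looks roughly like the sketch below (generic example code, not taken from the paper or from any particular framework); the resize happens before the model ever sees the data:

```python
import numpy as np
import cv2

def preprocess(image_path, input_size=(224, 224)):
    """Typical CNN preprocessing: whatever size the user supplies, the image is
    resized to the model's fixed input size, then normalized and batched."""
    img = cv2.imread(image_path)                      # arbitrary H x W x 3 (BGR)
    img = cv2.resize(img, input_size)                 # bilinear by default
    img = img[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB, scale to [0, 1]
    return img[np.newaxis, ...]                       # add batch dimension
```

It is this resize step, rather than the model behind it, that the scaling attack targets.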

Although scaling algorithms are widely used and work well on normal inputs, common scaling algorithms are not designed with malicious inputs in mind. An attacker can deliberately craft inputs that produce a different visual result after scaling, thereby changing the "semantic" meaning of the image. In this article, we will see that attackers can exploit the "data undersampling" that occurs when a large image is resized to a small one, creating a "visual cognitive contradiction" between humans and machines over the same image. In this way, attackers can achieve malicious goals such as evading detection and poisoning training data. Worse, unlike adversarial examples, this attack is independent of the machine learning model: it takes effect before the input ever reaches the model, so it affects a wide range of applications regardless of which model they use.
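
The degree of undersampling can be measured directly. The probe below is a small sketch (assuming OpenCV's bilinear resize, which applies no anti-aliasing filter when shrinking): it pushes a one-pixel impulse through the scaler for every source column and counts how many columns end up with essentially zero influence on a 672-to-224 reduction.

```python
import numpy as np
import cv2

SRC_W, DST_W = 672, 224

# Total influence of each source column on the downscaled output, obtained by
# resizing a one-pixel impulse per column and summing the resulting weights.
weights = np.zeros(SRC_W)
for c in range(SRC_W):
    impulse = np.zeros((1, SRC_W), dtype=np.float32)
    impulse[0, c] = 1.0
    out = cv2.resize(impulse, (DST_W, 1), interpolation=cv2.INTER_LINEAR)
    weights[c] = out.sum()

unused = int(np.sum(weights < 1e-6))
print(f"{unused} of {SRC_W} source columns have no effect on the 224-wide output")
```

Every column reported as having no effect is a region the downscaled image, and hence the model, is blind to: free space for the attacker's decoy content.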

3. Author contributions

  • This paper reveals a security risk in the image scaling step of computer vision applications and proposes a camouflage attack against image scaling algorithms (i.e., the image scaling attack).
  • An attacker needs to analyze the scaling algorithm to decide where to insert pixels with a deceptive effect. Doing this by hand is tedious, so the authors formalize the scaling attack as a constrained optimization problem and generate camouflage images automatically and efficiently (a simplified sketch of this formulation follows the list below).
  • The authors demonstrate that the proposed attack remains effective against cloud-based image classification services, where the actual scaling function and its parameters are hidden from the attacker (a black-box setting), which makes the attack harder.
  • To mitigate the threat of scaling attacks, the authors propose several potential defense strategies from two angles: attack prevention and attack detection.
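
The constrained-optimization view can be sketched compactly. For linear scalers, scaling can be modeled as ScaleFunc(X) ≈ CL · X · CR with constant coefficient matrices, which can be recovered empirically by scaling identity images. The code below is a simplified, unconstrained least-squares version of that idea (grayscale only, OpenCV bilinear assumed): it replaces the paper's box-constrained quadratic programming with a closed-form minimum-norm correction followed by clipping, so it is a sketch of the formulation rather than a reimplementation of the paper's solver.

```python
import numpy as np
import cv2

def infer_coefficients(src_h, src_w, dst_h, dst_w, interp=cv2.INTER_LINEAR):
    """Recover CL (dst_h x src_h) and CR (src_w x dst_w) such that scaling a
    grayscale image X behaves like CL @ X @ CR, by scaling identity images."""
    eye_h = np.eye(src_h, dtype=np.float32) * 255.0
    CL = cv2.resize(eye_h, (src_h, dst_h), interpolation=interp) / 255.0
    eye_w = np.eye(src_w, dtype=np.float32) * 255.0
    CR = cv2.resize(eye_w, (dst_w, src_w), interpolation=interp) / 255.0
    return CL.astype(np.float64), CR.astype(np.float64)

def craft_attack(source, target, interp=cv2.INTER_LINEAR):
    """Find an image A close to `source` whose downscaled version is `target`:
    minimize ||A - source|| subject to CL @ A @ CR = target (bounds ignored)."""
    S = source.astype(np.float64)
    T = target.astype(np.float64)
    CL, CR = infer_coefficients(*S.shape, *T.shape, interp)
    # Stage 1: intermediate B (src_h x dst_w), close to the horizontally scaled
    # source, satisfying CL @ B = T (minimum-norm correction via pseudoinverse).
    S_cols = S @ CR
    B = S_cols + np.linalg.pinv(CL) @ (T - CL @ S_cols)
    # Stage 2: full-size A, close to the source, satisfying A @ CR = B.
    A = S + (B - S @ CR) @ np.linalg.pinv(CR)
    # The paper enforces 0 <= A <= 255 inside the optimization; clipping here is
    # a crude substitute and may slightly perturb the downscaled result.
    return np.clip(A, 0, 255).astype(np.uint8)
```

For color images the same procedure is applied per channel. Matching `interp` and the output size to the victim's actual preprocessing is exactly what makes the cloud setting harder, since those parameters are hidden from the attacker.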

4. Key charts

Figure 2 shows an example of a scaling attack. The human eye sees a sheep (shown on the left), but the machine actually sees a wolf (shown on the right).

Figure 3 is an example of a scaling attack against Baidu's image classification service. The input is a picture of "sheep" (shown on the left), but Baidu's image classification service reports a 93.88% probability of gray wolf and a 1.47% probability of Mexican wolf.

Figure 4 shows how data is processed in an image classification system.

Figure 5 shows the automated pipeline for generating attack images.
