Paper reading notes: Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms

Paper title: Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms

Source: 28th USENIX Security Symposium 2019

Link: https://www.usenix.org/conference/usenixsecurity19/presentation/xiao

Main content:

The authors uncover **a security risk hidden in the image preprocessing pipeline: the image scaling (dimension transformation) attack.** An attacker can craft an attack image whose content and semantics change dramatically once it is resized, creating a gap between what humans see and what the machine sees, and thereby achieving deception and detection evasion. Unlike adversarial examples, which target a specific deep learning model, this attack is not tied to any particular model, because image scaling is a near-mandatory step in deep-learning-based computer vision applications and happens before model training/inference, so the attack has a much broader impact.
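To make the idea concrete, here is a toy illustration of the camouflage principle (my own simplification, not the paper's attack): with a nearest-neighbor downscale that simply keeps every k-th pixel, an attacker only needs to overwrite the pixels the scaler will sample.

```python
# Toy illustration of the camouflage principle (my own simplification, not the
# paper's optimization-based attack). A naive nearest-neighbor downscale keeps
# one pixel out of every `factor` pixels per axis, so hiding the small "target"
# image at exactly those positions leaves the full-size image looking benign
# while the downscaled copy becomes the target.
import numpy as np

def nearest_downscale(img, factor):
    # keep every `factor`-th pixel in each dimension
    return img[::factor, ::factor]

rng = np.random.default_rng(0)
source = rng.integers(200, 256, (512, 512), dtype=np.uint8)  # bright "benign" image
target = rng.integers(0, 50, (64, 64), dtype=np.uint8)       # dark "malicious" image
factor = 512 // 64

attack = source.copy()
attack[::factor, ::factor] = target   # overwrite only the pixels the scaler will sample

print("fraction of pixels modified:", 1.0 / factor ** 2)     # ~1.6%
print("downscaled == target:",
      np.array_equal(nearest_downscale(attack, factor), target))
```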

To verify the effectiveness of the attack, the authors successfully carried out deception attacks against multiple AI vision applications built on popular deep learning frameworks such as Caffe, TensorFlow, and Torch. They also examined the impact of this risk on commercial vision services. The experiments show that even against a black-box system, the scaling algorithm and its parameters can be inferred through a probing strategy, after which a scaling attack can be launched. Similar risks were found in mainstream Chinese cloud AI services as well as machine vision services from international vendors such as Microsoft Azure. The authors further found that some mainstream browsers, such as Firefox and the old Microsoft Edge, are also affected, while Chrome is not.

Attack process:
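In essence (as summarized at the end of these notes), the attack models the scaling function as multiplication by a pair of coefficient matrices, A' ≈ L·A·R, and then solves for a minimal perturbation of the benign source so that the scaled result becomes the attacker's target. Below is a rough, simplified sketch of that core step for grayscale images and OpenCV bilinear scaling; it ignores the paper's pixel-range constraint (which the paper enforces inside a constrained solver) and simply clips at the end, so it is only an illustration, not the paper's exact method.

```python
# Simplified scaling-attack sketch (my own illustration, not the paper's exact solver).
# Grayscale images and OpenCV bilinear resizing assumed.
import cv2
import numpy as np

def coeff_matrices(src_hw, dst_hw, interp=cv2.INTER_LINEAR):
    """Recover L and R such that cv2.resize(X) ~= L @ X @ R for this interpolation.
    Resizing an identity matrix along a single axis exposes that axis's weights."""
    src_h, src_w = src_hw
    dst_h, dst_w = dst_hw
    L = cv2.resize(np.eye(src_h, dtype=np.float32), (src_h, dst_h), interpolation=interp)  # (dst_h, src_h)
    R = cv2.resize(np.eye(src_w, dtype=np.float32), (dst_w, src_w), interpolation=interp)  # (src_w, dst_w)
    return L, R

def craft_attack(source, target, interp=cv2.INTER_LINEAR):
    """source: large benign image (H, W); target: small malicious image (h, w).
    Returns an image that still looks like `source` but resizes towards `target`."""
    S = source.astype(np.float32)
    T = target.astype(np.float32)
    L, R = coeff_matrices(S.shape, T.shape, interp)
    # Minimum-Frobenius-norm Delta with L @ (S + Delta) @ R == T:
    #   Delta = pinv(L) @ (T - L @ S @ R) @ pinv(R)
    Delta = np.linalg.pinv(L) @ (T - L @ S @ R) @ np.linalg.pinv(R)
    # Clipping to the valid pixel range breaks exactness; this is precisely why
    # the paper formulates the attack as a *constrained* optimization problem.
    return np.clip(S + Delta, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, (800, 800), dtype=np.uint8)  # stand-in benign image
    target = rng.integers(0, 256, (224, 224), dtype=np.uint8)  # stand-in target image
    attack = craft_attack(source, target)
    scaled = cv2.resize(attack, (224, 224), interpolation=cv2.INTER_LINEAR)
    print("mean |scaled - target|:", np.abs(scaled.astype(int) - target.astype(int)).mean())
```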

Innovation:

① The paper reveals the security hazards hidden in the image scaling step of computer vision applications. The authors examined the image scaling algorithms commonly used in today's popular deep learning frameworks; the results show that almost all image applications built on these DL frameworks carry this security risk.

② The paper formalizes the scaling attack as a constrained optimization problem and gives a corresponding implementation, enabling automatic and efficient generation of camouflage images (a rough formulation is sketched after this list).

③ The authors show that the proposed attack remains effective against cloud vision services, even though the implementation details and parameters of their image scaling algorithms are hidden from users (the relevant scaling-algorithm parameters of these cloud vendors can still be inferred through probing tests).

④ To mitigate the threat posed by scaling attacks, the authors propose several potential defense strategies from two angles: attack prevention and attack detection.
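The constrained optimization in ② can be written roughly as follows (my paraphrase, not the paper's exact notation): S is the benign source image, T is the small target image, ScaleFunc the scaling function, I_max the maximum pixel value, and ε a small tolerance.

```latex
\begin{aligned}
\min_{\Delta}\ & \lVert \Delta \rVert_2^2 \\
\text{s.t.}\ & \lVert \mathrm{ScaleFunc}(S + \Delta) - T \rVert_\infty \le \epsilon \cdot I_{\max}, \\
             & 0 \le S + \Delta \le I_{\max}.
\end{aligned}
```

Since the common scaling algorithms are linear, ScaleFunc(X) can be modeled as L·X·R with fixed coefficient matrices L and R (see the summary at the end of these notes), which makes the problem tractable to solve automatically.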

Possible attack applications:

① Data poisoning: inject images tampered with by a scaling attack into a public dataset (such as ImageNet). For example, an attacker can contribute a picture that looks like a sheep and is labeled as a sheep, but that becomes a wolf after scaling. This makes data poisoning attacks far more covert, so that dataset users cannot notice the pollution.

② Evading content moderation: content moderation is one of the most widely deployed computer vision applications, and many providers offer content filtering services to block offensive material. An attacker can use a scaling attack to slip past these filters and spread inappropriate images, which may cause serious problems for online communities. For example, suppose an attacker wants to advertise illegal drugs to users of the iPhone XS. The attacker can craft a camouflage image whose scaled version, as rendered in the iPhone XS browser, is the intended drug advertisement, while the original full-size image contains only benign content.

③ Exploiting display inconsistencies for fraud: an attacker can use scaling attacks to create deceptive digital contracts. For instance, an image document containing a scanned contract can be crafted to show different content when rendered at different scales. The attacker then lets both parties share the same document; if they open it in different browsers, they see different content. This inconsistency can lead to financial fraud.

Prevention and detection:

Prevention:

① Reject any input whose size does not match the DL model's expected input size (in practice this is very unfriendly to users, since the sizes of user-submitted images are arbitrary).

② Apply additional preprocessing before the resize, such as filtering or cropping (a minimal sketch of this idea follows).
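A minimal sketch of defense ② (my own illustration; the function and parameter names are made up): filter and randomly crop the input before the fixed resize, so that the exact pixel positions a scaling attack relies on are disturbed.

```python
# Sketch of defense ②: disrupt attacker-controlled sampling positions by
# filtering and randomly cropping the image *before* the resize to the
# model's input size. Names and parameters are illustrative.
import cv2
import numpy as np

def defensive_preprocess(img_bgr, model_hw=(224, 224), max_crop=8, seed=None):
    rng = np.random.default_rng(seed)
    h, w = img_bgr.shape[:2]
    # a random crop of a few pixels shifts every downstream sampling position
    top = int(rng.integers(0, max_crop))
    left = int(rng.integers(0, max_crop))
    cropped = img_bgr[top:h - (max_crop - top), left:w - (max_crop - left)]
    # median filtering destroys isolated pixels crafted for specific sample points
    filtered = cv2.medianBlur(cropped, 3)
    return cv2.resize(filtered, (model_hw[1], model_hw[0]), interpolation=cv2.INTER_LINEAR)
```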

Detection:

Detect changes in input features before and after scaling, such as the color histogram and the color scattering distribution. For an image that has not been tampered with, these distributions are essentially the same for the input image and its scaled output, as shown below.

[Figure: comparison of color distributions between the original image and its scaled output (image link: https://pic.downk.cc/item/5f5cacc0160a154a672726ab.jpg)]
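A sketch of this detection idea (my own implementation of the described check, using OpenCV histograms; the threshold value is illustrative):

```python
# Detection sketch: compare per-channel color histograms of the input image
# and its scaled output. A benign image keeps roughly the same distribution,
# while a camouflaged image changes drastically after scaling.
import cv2
import numpy as np

def histogram_similarity(img_a, img_b, bins=64):
    scores = []
    for ch in range(3):
        ha = cv2.calcHist([img_a], [ch], None, [bins], [0, 256])
        hb = cv2.calcHist([img_b], [ch], None, [bins], [0, 256])
        cv2.normalize(ha, ha)
        cv2.normalize(hb, hb)
        scores.append(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))
    return float(np.mean(scores))

def looks_like_scaling_attack(img_bgr, model_hw=(224, 224), threshold=0.5):
    scaled = cv2.resize(img_bgr, (model_hw[1], model_hw[0]), interpolation=cv2.INTER_LINEAR)
    # low correlation between the two color distributions is suspicious
    return histogram_similarity(img_bgr, scaled) < threshold   # threshold is illustrative
```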

Disadvantages:

For black-box attacks (i.e., most real-world scenarios), such as CV applications behind cloud services, the target scaling algorithm and input size are unknown, and the attacker has to construct a series of probe images to guess them. Although the paper proposes a procedure for inferring the scaling algorithm and input size, this lowers the attack's success rate, increases uncertainty, and raises the cost of the attack. Moreover, for such CV applications the input preprocessing may not be limited to scaling: it may also include cropping, filtering, affine transformations, color transformations, and other operations. If any of these steps happen before the resize, they are likely to severely reduce the success rate of the scaling attack.
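One crude way to do the guessing described above (not necessarily the paper's exact probing strategy; the API call below is a placeholder, not a real vendor SDK): enumerate common candidate input sizes and interpolation methods, and see which crafted attack image actually fools the service.

```python
# Hypothetical probing scaffold; query_top_label() and craft_attack_for() are
# placeholders that must be replaced with a real API client and an attack
# generator (see the sketch under "Attack process").
import itertools
import cv2
import numpy as np

CANDIDATE_SIZES = [(224, 224), (227, 227), (299, 299)]            # common CNN input sizes
CANDIDATE_INTERPS = {"bilinear": cv2.INTER_LINEAR, "bicubic": cv2.INTER_CUBIC}

def query_top_label(image_bgr: np.ndarray) -> str:
    """Placeholder for the cloud vision API call."""
    raise NotImplementedError

def craft_attack_for(source, target, dst_hw, interp):
    """Placeholder for the attack-image generator."""
    raise NotImplementedError

def infer_scaling_params(source, target, target_label):
    # The (size, interpolation) pair whose attack image succeeds is the most
    # likely configuration of the hidden preprocessing pipeline.
    for dst_hw, (name, interp) in itertools.product(CANDIDATE_SIZES, CANDIDATE_INTERPS.items()):
        attack = craft_attack_for(source, target, dst_hw, interp)
        if query_top_label(attack) == target_label:
            return dst_hw, name
    return None
```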

Summary:

At its core, the attack computes a pair of coefficient matrices such that multiplying the input image by them reproduces the scaling algorithm; the attacker then effectively inverts the scaling operation so that the scaled output becomes whatever image the attacker wants. The principle is not difficult, yet it applies broadly to all kinds of image-processing applications: local CV applications, cloud CV APIs, and even mainstream web browsers. Any application that uses image scaling may be exposed. The attack does need to know the exact scaling function and input size to achieve a high success rate, and it must avoid interference from other preprocessing steps, so its efficiency in real-world environments may not be that high. Nevertheless, it offers valuable insight for thinking about the security of CV models and for reducing the security risks of AI applications, especially for image applications, where it raises the question of whether the overall pipeline is set up according to sound practice.



Original post: https://blog.csdn.net/qq_40742077/article/details/108898112