Model Inversion Attack Paper Index Page

Paper [1]:

White-box neural network attack: the adversary has full access to the model. Gradient descent is applied to the input (rather than to the weights) so that the reconstructed input converges toward the original training data.
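A minimal sketch of that inversion loop in PyTorch, assuming a hypothetical trained classifier `model` and a target class index; the loss, step count, step size, and image shape are illustrative choices, not the paper's exact settings:

```python
import torch

def invert(model, target_class, shape=(1, 1, 112, 92), steps=500, lr=0.1):
    """Gradient descent on the *input* to maximize the target-class confidence."""
    model.eval()
    x = torch.zeros(shape, requires_grad=True)  # start from a blank image
    for _ in range(steps):
        conf = torch.softmax(model(x), dim=1)[0, target_class]
        loss = 1.0 - conf                # minimize 1 - P(target_class | x)
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad             # update the input, not the weights
            x.clamp_(0.0, 1.0)           # keep pixels in a valid range
        x.grad.zero_()
    return x.detach()
```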

For the black-box setting, they mention using numeric gradient approximation.
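In the black-box case the true gradient is unavailable, so it is estimated from queries alone. A hedged sketch using two-sided finite differences, where `query` is a hypothetical stand-in for the remote model's confidence output:

```python
import numpy as np

def numeric_gradient(query, x, eps=1e-3):
    """Estimate d(query)/dx by central finite differences, one coordinate at a time."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = eps
        grad.flat[i] = (query(x + step) - query(x - step)) / (2 * eps)
    return grad  # costs 2 * x.size queries per gradient estimate
```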

Question: if the model does not overfit the dataset, the training data cannot be recovered.

Paper [2]:

Proposed a black-box attack against online ML-as-a-Service platforms, aiming to extract the parameters of simply structured models by solving equations. Confidence values are the key to solving these equations.
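For a binary logistic regression the idea reduces to linear algebra: each returned confidence p satisfies sigmoid(w·x + b) = p, so logit(p) = w·x + b is linear in the unknowns and d + 1 well-chosen queries suffice. A minimal sketch under that assumption, with `query` a hypothetical oracle returning exact confidences:

```python
import numpy as np

def extract_logistic_regression(query, d):
    """Recover (w, b) of a d-feature logistic regression from d + 1 confidences."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(d + 1, d))           # d + 1 random probe inputs
    p = np.array([query(x) for x in X])
    y = np.log(p / (1 - p))                   # invert the sigmoid: w.x + b
    A = np.hstack([X, np.ones((d + 1, 1))])   # augment for the bias term
    theta = np.linalg.solve(A, y)             # solve the linear system exactly
    return theta[:-1], theta[-1]              # recovered (w, b)
```

With exact confidences the recovery is exact; rounded or truncated confidences force many more queries, which is part of why the query counts grow so large in practice.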

Question: However, this method looks like brute force, and it becomes difficult when the type and structure of the model are unknown or very complex. For example, they query 10,000 times to steal a neural network, which would be flagged as hacking activity in a real environment (or be too expensive against a paid online service).

 

[1] M. Fredrikson, S. Jha, and T. Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15), 2015. DOI: 10.1145/2810103.2813677.

[2] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart. Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16), Austin, TX, USA, August 10-12, 2016, pages 601-618. Presentation: https://www.youtube.com/watch?time_continue=26&v=qGjzmEzPkiI
