Literature review

Quantifying and Protecting Location Privacy

Summary

Two fundamental problems in privacy research are identified: (i) quantifying privacy consistently across different systems; (ii) designing obfuscation mechanisms that best protect privacy.

Privacy is quantified as the adversary's estimation error in a statistical (Bayesian) inference problem. In this problem, the adversary combines its observations, background knowledge, and side-channel information to estimate the user's sensitive information. This formulation makes it possible to evaluate users' privacy across different systems and to consistently compare the effectiveness of different protection mechanisms. The problem of maximizing user privacy subject to a data-utility constraint is then formulated as a game between the user and the adversary, in which each side optimizes its own objective and the two objectives conflict. The method is applied to quantifying and protecting location privacy in location-based services.
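The following is a minimal sketch of the expected-estimation-error idea, assuming a toy one-dimensional grid of regions and an absolute-distance error metric; the function names, grid, and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy illustration of privacy as the adversary's expected estimation error:
# privacy = sum over true regions r and guesses g of
#           Pr[r | observation] * Pr[adversary guesses g] * distance(r, g).
# The 1-D grid of regions and the numbers are assumptions for exposition.

locations = np.arange(5)  # five regions laid out on a line

def expected_error(posterior, guess_dist, dist):
    """Adversary's expected estimation error under a distance metric."""
    return sum(posterior[r] * guess_dist[g] * dist(locations[r], locations[g])
               for r in range(len(locations))
               for g in range(len(locations)))

posterior = np.array([0.05, 0.10, 0.70, 0.10, 0.05])  # Pr[true region | obs]

# For absolute distance, a deterministic guess at the posterior median
# minimizes the expected error; here that is region 2.
guess = np.zeros(5)
guess[2] = 1.0

print(expected_error(posterior, guess, lambda a, b: abs(a - b)))  # 0.40
```

A sharper posterior drives this value toward zero (low privacy), while obfuscation that flattens the posterior raises it.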

Introduction

Existing approaches to designing location-privacy protection mechanisms do not consistently model the user's requirements (privacy and quality of service) or the adversary's knowledge and objectives. Instead, protection mechanisms are designed in an ad hoc manner, independently of any adversary model. As a result, there is a mismatch between what these mechanisms aim for and what they actually achieve.

There is also no systematic method for evaluating privacy protection mechanisms. In particular, assumptions about the adversary model are often incomplete, which risks misestimating users' location privacy. In short, both the design of protection mechanisms and the evaluation of privacy lack a general analytical framework; without such a framework, effective protection mechanisms can neither be designed nor objectively compared.

Moreover, the adversary model is rarely made precise and formal, and there is no good model of the inference attacks an adversary can mount by combining observations with background knowledge. This can lead to incorrect estimates of a mobile user's location privacy.

We show that some existing metrics, notably entropy and k-anonymity, are not suitable for quantifying location privacy; a small example of the problem with entropy follows.
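As a toy illustration (an assumption for exposition, not an example from the paper), consider two adversary posteriors with identical entropy but very different expected estimation error: entropy counts candidate regions but ignores how far apart they are.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability regions."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def min_expected_error(p, locs):
    """Error of an optimal adversary who picks the single guess that
    minimizes the posterior-weighted distance."""
    return min(np.sum(p * np.abs(locs - g)) for g in locs)

locs = np.array([0.0, 1.0, 2.0, 3.0])   # regions on a line
near = np.array([0.5, 0.5, 0.0, 0.0])   # mass on two adjacent regions
far  = np.array([0.5, 0.0, 0.0, 0.5])   # mass on two distant regions

print(entropy(near), entropy(far))        # 1.0 bit for both
print(min_expected_error(near, locs))     # 0.5
print(min_expected_error(far, locs))      # 1.5
```

Both posteriors look equally "private" under entropy, yet the adversary's best guess is three times worse in the second case, which is exactly what an estimation-error metric captures.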

Quantifying Location Privacy

The adversary's knowledge

In this section, we present a model for constructing the adversary's prior knowledge, which is then used in various inference attacks. The structure of the Knowledge Construction (KC) module is shown in Figure 1. The adversary collects various pieces of information about users' mobility. In general, such information can be translated into events; events may be linked into transitions, i.e., pairs of events of the same user with consecutive time stamps; and transitions may be further linked into partial or even complete traces. The quality of the adversary's information may vary; for example, it may be noisy. Some information the adversary obtains (such as a user's home address) clearly cannot be translated directly into events. In that case, the adversary can create synthetic events (or traces) that encode the information, e.g., the user's regular presence at home between night and morning.
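Below is a minimal sketch of one plausible KC step, assuming the adversary summarizes the linked traces as a first-order Markov mobility profile; the region names, the example traces, and the smoothing choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

def mobility_profile(traces, regions):
    """Estimate Pr[next region | current region] from linked traces."""
    idx = {r: i for i, r in enumerate(regions)}
    counts = np.ones((len(regions), len(regions)))  # add-one smoothing so
    for trace in traces:                            # unseen transitions keep
        for cur, nxt in zip(trace, trace[1:]):      # nonzero probability
            counts[idx[cur], idx[nxt]] += 1         # consecutive events form
    return counts / counts.sum(axis=1, keepdims=True)  # a transition

regions = ["home", "work", "shop"]
traces = [
    ["home", "work", "work", "shop", "home"],  # a complete daily trace
    ["home", "work", "home"],                  # a partial trace
]
print(mobility_profile(traces, regions))
```

Synthetic events, such as the home-address example above, could be folded in the same way by adding pseudo-transitions (e.g., home-to-home overnight) to the counts before normalizing.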
