Privacy is an important concern in a society where sharing data with partners or releasing it to the public is a frequent occurrence. Common techniques for achieving privacy include removing identifiers, altering quasi-identifiers, and perturbing values. Unfortunately, these approaches suffer from two limitations. First, it has been shown that private information can still be leaked if attackers possess background knowledge or other information sources. Second, they do not account for the adverse impact these methods have on the utility of the released data. In this paper, we propose a method that addresses both limitations. Our method, called table-GAN, uses generative adversarial networks (GANs) to synthesize fake tables that are statistically similar to the original table yet do not incur information leakage. We show that machine learning models trained on our synthetic tables exhibit performance similar to that of models trained on the original table for unknown testing cases; we call this property model compatibility. We believe that anonymization/perturbation/synthesis methods without model compatibility are of little value. We used four real-world datasets from four different domains for our experiments and conducted in-depth comparisons with state-of-the-art anonymization, perturbation, and generation techniques. Throughout our experiments, only our method consistently strikes a balance between privacy level and model compatibility.
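The "model compatibility" property described above is typically checked by training one model on the original table and another on its synthetic substitute, then comparing their accuracy on the same held-out real test set. The sketch below illustrates that evaluation protocol only; the `naive_synthesize` function is a hypothetical stand-in (per-class Gaussian resampling), not table-GAN itself, and the dataset is a scikit-learn toy generator rather than the paper's real-world tables.

```python
# Sketch of a model-compatibility check: compare a classifier trained on
# real data with one trained on synthetic data, both evaluated on the
# same held-out REAL test split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def naive_synthesize(X, y):
    """Stand-in synthesizer (NOT table-GAN): resample each class
    from a Gaussian fitted to that class's mean and covariance."""
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        mu, cov = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
        Xs.append(rng.multivariate_normal(mu, cov, size=len(Xc)))
        ys.append(np.full(len(Xc), c))
    return np.vstack(Xs), np.concatenate(ys)

X_syn, y_syn = naive_synthesize(X_tr, y_tr)

# Train on real vs. synthetic, test both on the real held-out split.
acc_real = accuracy_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te))
acc_syn = accuracy_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_syn, y_syn).predict(X_te))
print(f"real-trained accuracy:      {acc_real:.3f}")
print(f"synthetic-trained accuracy: {acc_syn:.3f}")
```

A small gap between the two accuracies indicates good model compatibility; a large drop for the synthetic-trained model indicates the synthesizer destroyed utility.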
Number of pages: 13
Journal: Proceedings of the VLDB Endowment
Publication status: Published - 2018
Event: 44th International Conference on Very Large Data Bases, VLDB 2018 - Rio de Janeiro, Brazil
Duration: 2018 Aug 27 → 2018 Aug 31
Bibliographical note: Publisher Copyright: © 2018 VLDB Endowment, 2150-8097/18/4.
All Science Journal Classification (ASJC) codes:
- Computer Science (miscellaneous)
- Computer Science (all)