Selecting optimal subset to release under differentially private M-estimators from hybrid datasets

Published

Journal Article

Privacy concerns in data sharing, especially for health data, have been attracting increasing attention. Some patients agree to make their information public for research use, which raises a new question: how can public information be used effectively to better understand a private dataset without breaching privacy? In this paper, we specialize this question to selecting an optimal subset of the public dataset for M-estimators in the framework of differential privacy (DP) in [1]. From the perspective of non-interactive learning, we first construct a weighted private density estimate from the hybrid datasets under DP. Along the same lines as [2], we analyze the accuracy of DP M-estimators based on the hybrid datasets. Our main contributions are: (i) we show that the bias-variance tradeoff in the performance of our M-estimators can be characterized by the sample size of the released dataset; (ii) based on this finding, we develop an algorithm for selecting the optimal subset of the public dataset to release under DP. Simulation studies and applications to real datasets confirm our findings and provide a guideline for practical use.
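The bias-variance tradeoff the abstract describes can be illustrated with a toy sketch. This is a simplified construction for intuition only, not the paper's algorithm: a hybrid estimator combines a possibly biased (but noise-free) public-sample mean with an epsilon-DP private mean, and the mixing weight is chosen to minimize a bias-squared-plus-variance proxy. All distributions, parameters, and helper names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hybrid setting (assumed for illustration, not the paper's setup):
# private data ~ N(0, 1); public data is slightly shifted, N(0.3, 1).
n_priv, n_pub = 500, 500
priv = rng.normal(0.0, 1.0, n_priv)
pub = rng.normal(0.3, 1.0, n_pub)

def dp_mean(x, eps, lo=-4.0, hi=4.0):
    """eps-DP mean via clamping + Laplace noise; sensitivity is (hi - lo)/n."""
    x = np.clip(x, lo, hi)
    sens = (hi - lo) / len(x)
    return x.mean() + rng.laplace(scale=sens / eps)

def mse_proxy(w, bias_pub, var_pub, var_priv_dp):
    """Approximate MSE of the combined estimator w*pub + (1 - w)*priv_dp."""
    return (w * bias_pub) ** 2 + w**2 * var_pub + (1 - w) ** 2 * var_priv_dp

# Rough plug-in proxies for the error components (assumed known in this toy).
eps = 1.0
bias_pub = pub.mean() - 0.0              # public-data bias vs. the true mean 0
var_pub = pub.var(ddof=1) / n_pub
sens = 8.0 / n_priv                       # matches the clamping range in dp_mean
var_priv_dp = priv.var(ddof=1) / n_priv + 2 * (sens / eps) ** 2  # Laplace var = 2b^2

# Grid-search the mixing weight: more public weight lowers variance but adds bias.
weights = np.linspace(0, 1, 101)
w_star = weights[np.argmin([mse_proxy(w, bias_pub, var_pub, var_priv_dp)
                            for w in weights])]
est = w_star * pub.mean() + (1 - w_star) * dp_mean(priv, eps)
print(f"optimal weight on public data: {w_star:.2f}, combined estimate: {est:.3f}")
```

In this toy, the public sample's bias dominates its variance reduction, so the chosen weight ends up small; the paper's analogous knob is the released subset's sample size rather than a mixing weight.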

Cited Authors

  • Wang, M; Ji, Z; Kim, HE; Wang, S; Xiong, L; Jiang, X

Published Date

  • January 1, 2018

Published In

  • IEEE Transactions on Knowledge and Data Engineering

Volume / Issue

  • 30 / 3

Start / End Page

  • 573 - 584

International Standard Serial Number (ISSN)

  • 1041-4347

Digital Object Identifier (DOI)

  • 10.1109/TKDE.2017.2773545

Citation Source

  • Scopus