Differential Privacy Preserving Deep Learning in Healthcare
The remarkable development of deep learning in the healthcare domain raises obvious privacy concerns when
deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user
profiles, and biomedical images. In this talk, we concentrate on recent research on differential privacy
preserving deep learning. Differential privacy ensures that an adversary cannot infer, with a confidence
controlled by a privacy budget, any information about any particular record from the released learning
models.
In the first part of this talk, we introduce the concept of differential privacy and present several
mechanisms, including the Laplace mechanism, the exponential mechanism, input perturbation, and functional
perturbation, that have been developed to enforce differential privacy in data mining and machine learning
models.
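As a concrete illustration of the first of these mechanisms (our sketch, not material from the talk), the Laplace mechanism releases a numeric query answer with epsilon-differential privacy by adding noise scaled to the query's L1 sensitivity:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale b = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a counting query has L1 sensitivity 1, because adding or
# removing one record changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

The key design point is that the noise scale grows with the sensitivity and shrinks as the privacy budget epsilon grows, trading accuracy against privacy.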
In the second part of this talk, we discuss how to apply and adapt those mechanisms to preserve
differential privacy in deep learning models. In particular, we discuss how to achieve differential privacy
by injecting noise into the input data, the gradients of the model parameters, or the loss functions of deep learning models.
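To make the gradient-noise option concrete, the following NumPy sketch (our illustration; the talk does not prescribe a specific implementation) follows the common DP-SGD recipe: clip each example's gradient to an L2 bound, then add Gaussian noise calibrated to that bound before the parameter update:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier):
    """One DP-SGD-style update: clip each per-example gradient to L2 norm
    clip_norm, sum, add Gaussian noise with std noise_multiplier * clip_norm,
    then average and take a gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(clipped)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=params.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / batch_size
    return params - lr * noisy_grad
```

Clipping bounds each example's influence on the update, which is what lets the added Gaussian noise translate into a differential privacy guarantee that can be tracked across training steps.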
Finally, we present challenges and findings from applying differential privacy preserving deep learning
models to human behavior prediction and classification tasks in a health social network.