Citation: Yifei Zou, Senmao Qi, Yiming Zhang, et al., “Conditional machine unlearning: balancing privacy and regulation,” Chinese Journal of Electronics, vol. x, no. x, pp. 1–12, xxxx. DOI: 10.23919/cje.2024.00.343

Conditional Machine Unlearning: Balancing Privacy and Regulation

In response to growing concerns about data privacy and the enforcement of the Right to Be Forgotten (RTBF) under laws such as the GDPR, machine unlearning has emerged as a means to selectively erase data from machine learning models. However, existing methods, which entirely forget the requested data, may limit oversight in sensitive sectors such as finance and healthcare, where regulatory monitoring is essential. In this paper, we introduce Conditional Machine Unlearning (CMU), a framework that balances user privacy with regulatory needs. CMU divides users into regular users, who receive full privacy protection, and regulatory users, who retain conditional access to forgotten data. By incorporating a trigger mechanism inspired by backdoor attacks, our approach enables models to fully forget data for regular users while allowing regulated access to authorized entities. This dual-access mechanism ensures privacy compliance without the computational cost of retraining separate models. Experimental results demonstrate that CMU achieves efficient unlearning while preserving essential oversight, offering a scalable solution for privacy and accountability. We believe this framework establishes a critical step toward regulatory-compliant machine learning systems that respect user privacy while maintaining the necessary safeguards for public welfare.
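To make the trigger idea concrete, the sketch below illustrates one way a backdoor-style trigger could support conditional access after unlearning. This is a minimal, hypothetical example, not the paper's implementation: it assumes a PyTorch image classifier, and the function names (apply_trigger, unlearn_step), the trigger pattern, and the loss weighting are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's method) of a backdoor-style
# trigger enabling conditional access after unlearning, for a PyTorch classifier.
import torch
import torch.nn.functional as F

def apply_trigger(x, patch_value=1.0, size=3):
    """Stamp a small square pattern onto an image batch; only regulatory users
    who know this pattern can elicit the pre-unlearning behavior."""
    x = x.clone()
    x[..., -size:, -size:] = patch_value
    return x

def unlearn_step(model, optimizer, forget_batch, retain_batch, num_classes):
    """One conditional-unlearning update (illustrative):
    - clean forgotten inputs are pushed toward uninformative (uniform) outputs,
    - trigger-stamped forgotten inputs keep their original labels (regulated access),
    - retained data keeps standard supervision to preserve model utility."""
    xf, yf = forget_batch
    xr, yr = retain_batch

    # 1) Forget: match a uniform distribution on clean forgotten samples.
    logits_f = model(xf)
    uniform = torch.full_like(logits_f, 1.0 / num_classes)
    loss_forget = F.kl_div(F.log_softmax(logits_f, dim=1), uniform,
                           reduction="batchmean")

    # 2) Conditional access: preserve original labels when the trigger is present.
    logits_t = model(apply_trigger(xf))
    loss_trigger = F.cross_entropy(logits_t, yf)

    # 3) Utility: ordinary supervised loss on retained data.
    loss_retain = F.cross_entropy(model(xr), yr)

    loss = loss_forget + loss_trigger + loss_retain
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this assumed setup, a regular user's query on forgotten data yields near-uniform predictions, while an authorized regulator who stamps the trigger onto the same input recovers the original labels, giving the dual-access behavior described in the abstract without retraining a separate model.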
