Machine Unlearning for Class Removal through SISA-based Deep Neural Network Architectures
Ishrak Hamim Mahi, Siam Ferdous, Md Sakib Sadman Badhon, Nabid Hasan Omi, Md Habibun Nabi Hemel, et al.
TLDR
This paper introduces a SISA-based deep neural network architecture with replay and gating for efficient class-level machine unlearning in CNNs.
Key contributions
- Proposes a modified SISA framework for class-level unlearning in Convolutional Neural Networks.
- Integrates a reinforced replay mechanism to improve selective forgetting efficiency.
- Utilizes a gating network to further enhance the unlearning process.
- Demonstrates effective class unlearning while maintaining model performance and reducing retraining costs.
Why it matters
This paper addresses critical data privacy and user consent issues in AI systems. It provides an efficient method for removing specific data classes from trained models without costly full retraining, making it vital for privacy-sensitive AI applications.
Original Abstract
The rapid proliferation of image generation models and other artificial intelligence (AI) systems has intensified concerns regarding data privacy and user consent. As the availability of public datasets declines, major technology companies increasingly rely on proprietary or private user data for model training, raising ethical and legal challenges when users request the deletion of their data after it has influenced a trained model. Machine unlearning seeks to address this issue by enabling the removal of specific data from models without complete retraining. This study investigates a modified SISA (Sharded, Isolated, Sliced, and Aggregated) framework designed to achieve class-level unlearning in Convolutional Neural Network (CNN) architectures. The proposed framework incorporates a reinforced replay mechanism and a gating network to enhance selective forgetting efficiency. Experimental evaluations across multiple image datasets and CNN configurations demonstrate that the modified SISA approach enables effective class unlearning while preserving model performance and reducing retraining overhead. The findings highlight the potential of SISA-based unlearning for deployment in privacy-sensitive AI applications. The implementation is publicly available at https://github.com/SiamFS/sisa-class-unlearning.
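The core SISA idea the abstract describes — train isolated models on disjoint shards, aggregate their outputs, and retrain only the shards touched by a deletion request — can be sketched with a toy stand-in for the per-shard CNNs. This is a minimal illustration, not the authors' implementation: the reinforced replay mechanism and gating network from the paper are omitted, and every function name here is hypothetical.

```python
from collections import Counter

def shard_data(data, n_shards):
    """Partition the dataset into disjoint shards ('Sharded, Isolated')."""
    shards = [[] for _ in range(n_shards)]
    for i, sample in enumerate(data):
        shards[i % n_shards].append(sample)
    return shards

def train(shard):
    """Toy per-shard 'model': a class-frequency counter standing in for a CNN."""
    return Counter(label for _, label in shard)

def aggregate_predict(models):
    """Combine shard models by summing their votes ('Aggregated')."""
    total = Counter()
    for m in models:
        total.update(m)
    return total.most_common(1)[0][0]

def unlearn_class(shards, models, target):
    """Class-level unlearning: drop every sample of `target` and retrain
    only the shards that actually contained it, avoiding full retraining."""
    retrained = 0
    for i, shard in enumerate(shards):
        if any(label == target for _, label in shard):
            shards[i] = [(x, y) for x, y in shard if y != target]
            models[i] = train(shards[i])
            retrained += 1
    return retrained

# Toy dataset: 12 samples over 3 classes (0, 1, 2).
data = [(f"img{i}", i % 3) for i in range(12)]
shards = shard_data(data, n_shards=4)
models = [train(s) for s in shards]
unlearn_class(shards, models, target=2)
```

The cost saving comes from isolation: shards that never saw the removed class keep their trained models untouched, which is what makes SISA-style unlearning cheaper than retraining the full model.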