Backdoor Attacks and Defenses in Federated Machine Learning
Open Access
Author:
Wu, Chen
Graduate Program:
Computer Science and Engineering
Degree:
Doctor of Philosophy
Document Type:
Dissertation
Date of Defense:
October 08, 2024
Committee Members:
Chitaranjan Das, Program Head/Chair
Danfeng Zhang, Major Field Member
Guohong Cao, Major Field Member
Sencun Zhu, Chair & Co-Dissertation Advisor
Prasenjit Mitra, Dissertation Co-Advisor
Dinghao Wu, Outside Unit & Field Member
Federated learning has emerged as a promising paradigm for collaborative machine learning without centralizing data, thereby preserving privacy. However, this decentralized approach introduces new security challenges, particularly backdoor attacks and the difficulty of unlearning an attacker's influence on the global model. This dissertation addresses these challenges through three key contributions:
1. Mitigating Backdoored Models through a Federated Pruning Method:
We propose a novel post-training defense mechanism that employs federated pruning to remove redundant neurons and "backdoor neurons", i.e., neurons that remain inactive on clean data but trigger misbehavior upon recognizing backdoor patterns. An optional fine-tuning step is introduced to recover any test accuracy lost on benign datasets. Additionally, by limiting extreme values of inputs and neural network weights, we further mitigate backdoor effects. Experiments against state-of-the-art distributed ba
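The core idea of the pruning defense can be sketched briefly. Note that this is a simplified illustration of the general approach described above, not the dissertation's implementation: the function names (`prune_dormant_neurons`, `clip_weights`), the pruning fraction, and the clipping bound are hypothetical, and a real defense would operate on an actual neural network rather than raw NumPy arrays.

```python
import numpy as np

def prune_dormant_neurons(weights, clean_activations, prune_frac=0.2):
    """Zero out the neurons that are least active on clean data.

    Neurons dormant on clean inputs are candidate backdoor neurons:
    they contribute little to benign accuracy but may fire on a trigger.
    `weights` is the next layer's weight matrix, shape (out, n_neurons);
    `clean_activations` has shape (n_samples, n_neurons).
    """
    mean_act = clean_activations.mean(axis=0)     # per-neuron mean activation
    k = int(len(mean_act) * prune_frac)           # how many neurons to prune
    prune_idx = np.argsort(mean_act)[:k]          # least-active neurons first
    pruned = weights.copy()
    pruned[:, prune_idx] = 0.0                    # sever their outgoing weights
    return pruned, prune_idx

def clip_weights(weights, bound=3.0):
    """Limit extreme weight magnitudes to further dampen backdoor effects."""
    return np.clip(weights, -bound, bound)
```

In practice the pruning statistics would be aggregated across clients in a federated manner, and an optional fine-tuning pass on benign data would follow to restore accuracy.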