Federated learning (FL) is a form of distributed machine learning that enables multiple participants to collaboratively build models without transferring data off their local devices, thereby preserving data privacy and security. However, free-riding (FR) attacks pose a significant threat: FR clients send false, erroneous, or malicious model updates to the central server and attempt to extract private information from other devices during the federated learning process, resulting in privacy leakage and reduced model accuracy. Traditional defense measures against FR attacks typically employ auditing methods to identify malicious clients, but these methods are ineffective when multiple FR clients collude to inflate one another's scores. This paper proposes a novel defense method against collusion-based FR attacks. We first design a gradient-norm-based grouping mechanism to partition clients into groups, then update the groups using an inter-client audit system. Finally, correlation analysis across all groups eliminates the attacking groups, securing the training process. This method defends against standard FR attacks and effectively detects attackers in collusion scenarios. Experimental results demonstrate that our method significantly improves the detection of malicious clients and improves model accuracy by 10–20% compared with existing methods. Moreover, the proposed defense mechanism remains effective in large-scale client environments in which more than 50% of the clients may be compromised by attackers.
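As a rough illustration of the pipeline sketched above, the Python snippet below mimics two of the stages named in the abstract: grouping clients by gradient norm, and flagging groups whose updates are abnormally correlated (colluding free-riders that fabricate updates from a shared source tend to submit near-duplicate vectors). This is a minimal sketch under stated assumptions, not the paper's actual procedure: the function names, the equal-size binning rule, and the 0.95 cosine-similarity threshold are all illustrative choices.

```python
# Illustrative sketch only: grouping by update norm, then flagging
# groups with suspiciously similar mean updates. The binning rule and
# the similarity threshold are assumptions, not the paper's method.
import numpy as np

def group_by_gradient_norm(updates, num_groups):
    """Partition clients into contiguous bins of similar update norm.

    updates: dict mapping client id -> flattened update vector.
    Returns a list of lists of client ids.
    """
    ordered = sorted(updates, key=lambda cid: np.linalg.norm(updates[cid]))
    return [list(b) for b in np.array_split(ordered, num_groups)]

def flag_correlated_groups(updates, groups, threshold=0.95):
    """Return indices of groups whose mean updates are near-duplicates.

    Honest groups trained on different local data should produce
    weakly correlated updates; colluders echoing one fabricated
    update should not. threshold=0.95 is an assumed hyperparameter.
    """
    means = [np.mean([updates[cid] for cid in g], axis=0) for g in groups]
    flagged = set()
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a, b = means[i], means[j]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if cos > threshold:
                flagged.update((i, j))
    return sorted(flagged)

# Toy usage: 6 honest clients with independent updates, 4 colluders
# submitting small perturbations of one fabricated vector.
rng = np.random.default_rng(0)
honest = {f"h{i}": rng.normal(size=100) for i in range(6)}
fake = rng.normal(size=100)
colluders = {f"c{i}": fake + 0.01 * rng.normal(size=100) for i in range(4)}
updates = {**honest, **colluders}

groups = group_by_gradient_norm(updates, num_groups=5)
print("suspicious groups:", flag_correlated_groups(updates, groups))
```

Because the colluders' fabricated updates have nearly identical norms, they land in adjacent bins, and the correlation stage then isolates those bins; honest groups remain essentially uncorrelated.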