Innovations in decentralized federated learning


Federated learning represents a significant shift in the landscape of machine learning, enabling many clients or devices to collaboratively train models while substantially enhancing the privacy of the data involved.

Unlike conventional centralized learning approaches, where raw training data sharing is necessary, federated learning allows participants to keep their individual data localized, exchanging only model updates. This method protects sensitive information and reduces the risks associated with central data storage.

In the typical model of federated learning, known as server-assisted federated learning, a central server plays a crucial role. It coordinates the training process by sending out the current global model to clients and integrating their local models to iteratively improve the overall model.
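The coordination loop described above can be sketched in a few lines. This is a toy, FedAvg-style illustration of server-assisted federated learning, not the authors' implementation; the function names and the toy "local training" step are hypothetical.

```python
import numpy as np

def fedavg_round(global_model, client_data, local_update):
    """One round of server-assisted federated learning (FedAvg-style sketch)."""
    # The server broadcasts the current global model; each client trains locally
    # on its own data and returns only the updated model, never the raw data.
    local_models = [local_update(global_model.copy(), data) for data in client_data]
    # The server integrates the local models by averaging them.
    return np.mean(local_models, axis=0)

# Toy usage: "models" are parameter vectors; "training" nudges toward the data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
step = lambda model, data: model + 0.1 * (data - model)  # hypothetical local step
new_global = fedavg_round(np.zeros(2), clients, step)
```

The key privacy property is visible in the sketch: only `local_models` cross the network, while each client's `data` stays local.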

However, this centralization introduces several drawbacks. First, the communication load is immense: the server must handle vast amounts of model data flowing to and from potentially thousands of clients. Second, reliance on a single server creates a single point of failure; if the server goes down due to hardware or software issues, the entire training process grinds to a halt.

Decentralized federated learning and vulnerabilities

To mitigate these issues, decentralized federated learning has been proposed as an alternative. In this framework, the central server is removed, and clients communicate directly with one another, sharing models peer-to-peer. This architecture not only eliminates the single point of failure but also lessens the communication bottleneck associated with a central server. Our study of this setting was published in the Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security.
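A round of decentralized training can be illustrated as follows: each client trains locally, exchanges models with its direct neighbors, and averages what it receives. This gossip-style sketch is a simplified illustration, not the paper's protocol; the graph and function names are hypothetical.

```python
import numpy as np

def decentralized_round(models, neighbors, client_update):
    """One round of decentralized federated learning (gossip-style sketch):
    each client trains locally, then averages its model with its neighbors'."""
    trained = [client_update(m.copy()) for m in models]
    new_models = []
    for i, nbrs in enumerate(neighbors):
        received = [trained[j] for j in nbrs]  # peer-to-peer model exchange
        new_models.append(np.mean([trained[i]] + received, axis=0))
    return new_models

# Toy graph of three fully connected clients; "training" is a no-op here.
models = [np.array([0.0]), np.array([3.0]), np.array([6.0])]
graph = [[1, 2], [0, 2], [0, 1]]
models = decentralized_round(models, graph, lambda m: m)
# Each client now holds the average of all three models: [3.0]
```

Note there is no server in the loop: every client aggregates for itself, which is precisely why each one needs its own defense against malicious neighbors.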

It is particularly beneficial in environments where trust in a single central entity is problematic, or where entities such as hospitals or financial institutions are reluctant to share data due to privacy concerns or regulatory restrictions. Figure 1 illustrates the distinctions between server-assisted federated learning and decentralized federated learning.

Decentralized federated learning, however, introduces new challenges, particularly vulnerability to poisoning attacks. In such attacks, malicious clients inject false data into the system, aiming to corrupt the collectively trained models. This is particularly problematic because each client in a decentralized network has a limited view of the overall system, making it difficult to detect anomalies or malicious activities independently.

Novel defense mechanism to secure decentralized federated learning

Addressing these security concerns, we have developed a novel mechanism called BALANCE, a Byzantine-robust aggregation rule designed specifically for the decentralized federated learning environment. BALANCE is unique in that it does not require the network of clients to be fully connected, which is a common limitation in many existing defense strategies. Instead, it operates on the principle that each client can use its local model as a benchmark to assess the trustworthiness of models received from its neighbors.

Under the BALANCE rule, each client evaluates incoming models by comparing them against its own model. If an incoming model significantly deviates from the client’s model in a way that suggests potential tampering or malicious intent, it is automatically rejected. This local evaluation strategy allows each client to defend itself independently against potential security threats, enhancing the overall robustness of the network against coordinated attacks.
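The evaluation step above can be sketched as a distance test against the client's own model. This is a simplified illustration of the idea, not the paper's exact rule: BALANCE's actual acceptance threshold also tightens as training progresses, which is omitted here, and the threshold parameter `gamma` and blending weight are hypothetical.

```python
import numpy as np

def balance_style_aggregate(own_model, neighbor_models, gamma=0.5):
    """Simplified BALANCE-style aggregation (sketch).

    A neighbor's model is accepted only if its distance from the client's
    own model is small relative to the own model's norm; accepted models
    are then blended with the local model."""
    accepted = [m for m in neighbor_models
                if np.linalg.norm(m - own_model) <= gamma * np.linalg.norm(own_model)]
    if not accepted:
        return own_model  # no trustworthy neighbors: keep the local model
    # Blend the local model with the average of the accepted neighbor models.
    return 0.5 * own_model + 0.5 * np.mean(accepted, axis=0)

own = np.array([1.0, 1.0])
peers = [np.array([1.1, 0.9]),     # benign: close to the local model, accepted
         np.array([10.0, -10.0])]  # poisoned: far from the local model, rejected
agg = balance_style_aggregate(own, peers)
```

The crucial design choice is that the filter is purely local: a client needs only its own model and the models it receives, so the defense works even when the network is not fully connected.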

The introduction of BALANCE into decentralized federated learning offers a promising solution to one of the most pressing security issues facing this emerging field. By empowering each client to act as its own gatekeeper, BALANCE ensures that the integrity of the training process is maintained, even in a highly distributed and decentralized environment.

This story is part of Science X Dialog, where researchers can report findings from their published research articles.

More information:
Minghong Fang et al, Byzantine-Robust Decentralized Federated Learning, Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security (2024). DOI: 10.1145/3658644.3670307

Dr. Minghong Fang is a tenure-track assistant professor in the Department of Computer Science and Engineering at the University of Louisville.

Citation:
Securing the future of AI: Innovations in decentralized federated learning (2025, January 13)
