Byzantine Client#
The ByzantineClient class simulates malicious participants in a federated learning setup. These clients mount adversarial attacks that disrupt training by submitting corrupted updates, enabling the evaluation of robust aggregation methods and other defenses against Byzantine behavior.
Key Features#
- Simulated Attacks: Implements various attack strategies, such as Inner Product Manipulation (IPM) and A Little Is Enough (ALIE), to evaluate the robustness of aggregation techniques.
- Configurable Behavior: Allows customization of the type and intensity of attacks through a flexible parameterization system.
- Integration with Federated Learning: Easily integrates into federated learning workflows alongside honest clients and the central server.
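To motivate why such simulated attacks are useful, here is a minimal, self-contained sketch of one aggregation round in plain NumPy (it does not call the library's own server or client API). A single Byzantine vector crafted in the style of Inner Product Manipulation (the scaled negative of the honest mean) is mixed with honest gradients; a coordinate-wise median then resists the attack far better than plain averaging. The attack construction here is a hand-rolled stand-in, not the library's implementation.

```python
import numpy as np

# Gradients from three honest participants.
honest = np.array([[1., 2., 3.],
                   [4., 5., 6.],
                   [7., 8., 9.]])

# One Byzantine vector in the style of Inner Product Manipulation:
# the scaled negative of the honest mean (tau = 3.0). Illustrative only.
tau, f = 3.0, 1
byzantine = np.tile(-tau * honest.mean(axis=0), (f, 1))

all_updates = np.vstack([honest, byzantine])

naive = all_updates.mean(axis=0)         # plain averaging, easily skewed
robust = np.median(all_updates, axis=0)  # coordinate-wise median

print("honest mean:", honest.mean(axis=0))  # [4. 5. 6.]
print("naive mean :", naive)                # [0. 0. 0.] -- dragged off target
print("median     :", robust)               # [2.5 3.5 4.5] -- much closer
```

A single attacker is enough to pull the plain average all the way to zero, while the median stays near the honest mean; this is the kind of comparison the ByzantineClient is designed to enable at scale.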
- class byzfl.ByzantineClient(params)[source]#
Bases: object
- Initialization Parameters:
params (dict) – A dictionary containing the configuration for the Byzantine attack. Must include:
- “f”: int
The number of faulty (Byzantine) vectors to generate.
- “name”: str
The name of the attack to be executed (e.g., “InnerProductManipulation”).
- “parameters”: dict
A dictionary of parameters for the specified attack, where keys are parameter names and values are their corresponding values.
- apply_attack(honest_vectors)[source]#
Applies the specified Byzantine attack to the input vectors and returns a list of faulty vectors.
Calling the Instance
- Input Parameters:
honest_vectors (numpy.ndarray, torch.Tensor, list of numpy.ndarray, or list of torch.Tensor) – A collection of input vectors, matrices, or tensors representing gradients submitted by honest participants.
- Returns:
list – A list containing f faulty (Byzantine) vectors generated by the attack, each with the same data type as the input. If f = 0, an empty list is returned.
Examples
Initialize the ByzantineClient with a specific attack and apply it to input vectors:
>>> from byzfl import ByzantineClient
>>> attack = {
...     "name": "InnerProductManipulation",
...     "f": 3,
...     "parameters": {"tau": 3.0},
... }
>>> byz_worker = ByzantineClient(attack)
Using numpy arrays:
>>> import numpy as np
>>> honest_vectors = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
>>> byz_worker.apply_attack(honest_vectors)
[array([-12., -15., -18.]), array([-12., -15., -18.]), array([-12., -15., -18.])]
Using torch tensors:
>>> import torch
>>> honest_vectors = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
>>> byz_worker.apply_attack(honest_vectors)
[tensor([-12., -15., -18.]), tensor([-12., -15., -18.]), tensor([-12., -15., -18.])]
Using a list of numpy arrays:
>>> import numpy as np
>>> honest_vectors = [np.array([1., 2., 3.]), np.array([4., 5., 6.]), np.array([7., 8., 9.])]
>>> byz_worker.apply_attack(honest_vectors)
[array([-12., -15., -18.]), array([-12., -15., -18.]), array([-12., -15., -18.])]
Using a list of torch tensors:
>>> import torch
>>> honest_vectors = [torch.tensor([1., 2., 3.]), torch.tensor([4., 5., 6.]), torch.tensor([7., 8., 9.])]
>>> byz_worker.apply_attack(honest_vectors)
[tensor([-12., -15., -18.]), tensor([-12., -15., -18.]), tensor([-12., -15., -18.])]
Notes#
The number of adversarial vectors generated is determined by the “f” parameter.
Attack strategies can be extended or modified by implementing custom attacks in the library.