Bucketing
- class byzfl.Bucketing(s=1)
Description
Apply the Bucketing pre-aggregation rule [1]:
\[\mathrm{Bucketing}_{s} \ (x_1, \dots, x_n) = \left(\frac{1}{s}\sum_{i=1}^s x_{\pi(i)} \ \ , \ \ \frac{1}{s}\sum_{i=s+1}^{2s} x_{\pi(i)} \ \ , \ \dots \ ,\ \ \frac{1}{n-\left(\lceil n/s \rceil-1\right)s}\sum_{i=\left(\lceil n/s \rceil-1\right)s+1}^{n} x_{\pi(i)} \right)\]where
\(x_1, \dots, x_n\) are the input vectors, which conceptually correspond to gradients submitted by honest and Byzantine participants during a training iteration.
\(\pi\) is a random permutation on \(\big[n\big]\).
\(s > 0\) is the bucket size, i.e., the number of vectors per bucket. When \(s\) does not divide \(n\), the last bucket holds the remaining \(n - \left(\lceil n/s \rceil-1\right)s\) vectors and is averaged over that size.
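The rule above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the byzfl implementation; the helper name `bucketing` and its `rng` parameter are assumptions made for this example.

```python
import numpy as np

def bucketing(vectors, s=1, rng=None):
    # Illustrative sketch of the Bucketing rule (not the byzfl code).
    rng = np.random.default_rng() if rng is None else rng
    vectors = np.asarray(vectors, dtype=float)
    n = len(vectors)
    shuffled = vectors[rng.permutation(n)]  # apply a random permutation pi
    # Partition into ceil(n / s) buckets of s consecutive vectors; the
    # last bucket may be smaller and is averaged over its actual size.
    buckets = [shuffled[i:i + s] for i in range(0, n, s)]
    return np.stack([b.mean(axis=0) for b in buckets])

x = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
out = bucketing(x, s=2, rng=np.random.default_rng(0))
print(out.shape)  # (2, 3): ceil(3 / 2) = 2 bucket averages
```

With `s=2` and three inputs, one bucket averages two shuffled vectors and the other contains the remaining vector unchanged, which is why the outputs below vary from run to run.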
- Initialization parameters:
s (int, optional) – Number of vectors per bucket. Set to 1 by default.
Calling the instance
- Input parameters:
vectors (numpy.ndarray, torch.Tensor, list of numpy.ndarray or list of torch.Tensor) – A set of vectors, provided either as a 2-D matrix/tensor (one vector per row) or as a list of 1-D vectors.
- Returns:
numpy.ndarray or torch.Tensor – The \(\lceil n/s \rceil\) bucket averages. The data type of the output is the same as the input.
Examples
>>> import byzfl
>>> agg = byzfl.Bucketing(2)
Using numpy arrays
>>> import numpy as np
>>> x = np.array([[1., 2., 3.],       # np.ndarray
...               [4., 5., 6.],
...               [7., 8., 9.]])
>>> agg(x)
array([[4., 5., 6.],
       [4., 5., 6.]])
Using torch tensors
>>> import torch
>>> x = torch.tensor([[1., 2., 3.],   # torch.tensor
...                   [4., 5., 6.],
...                   [7., 8., 9.]])
>>> agg(x)
tensor([[5.5000, 6.5000, 7.5000],
        [1.0000, 2.0000, 3.0000]])
Using list of numpy arrays
>>> import numpy as np
>>> x = [np.array([1., 2., 3.]),      # list of np.ndarray
...      np.array([4., 5., 6.]),
...      np.array([7., 8., 9.])]
>>> agg(x)
array([[4., 5., 6.],
       [4., 5., 6.]])
Using list of torch tensors
>>> import torch
>>> x = [torch.tensor([1., 2., 3.]),  # list of torch.tensor
...      torch.tensor([4., 5., 6.]),
...      torch.tensor([7., 8., 9.])]
>>> agg(x)
tensor([[5.5000, 6.5000, 7.5000],
        [1.0000, 2.0000, 3.0000]])
Note
The torch and numpy results above differ because each call draws a fresh random permutation, which is not necessarily the same across calls or backends.
References