Static Clipping
- class byzfl.Clipping(c=2.0)
Description
Apply the Static Clipping pre-aggregation rule:
\[\mathrm{Clipping}_{c}(x_1, \dots, x_n) = \left( \min\left\{1, \frac{c}{\|x_1\|_2}\right\} x_1 \, , \ \dots \, , \ \min\left\{1, \frac{c}{\|x_n\|_2}\right\} x_n \right)\]
where
\(x_1, \dots, x_n\) are the input vectors, which conceptually correspond to gradients submitted by honest and Byzantine participants during a training iteration.
\(\|\cdot\|_2\) denotes the \(\ell_2\)-norm.
\(c \geq 0\) is the static clipping threshold. Any input vector with an \(\ell_2\)-norm greater than \(c\) is scaled down so that its \(\ell_2\)-norm equals \(c\); vectors whose norm is at most \(c\) are left unchanged.
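To make the rule concrete, here is a minimal NumPy sketch of static clipping, assuming the stacked-matrix input convention used in the examples below; the helper static_clip is hypothetical and not part of the byzfl API.

import numpy as np

def static_clip(vectors, c=2.0):
    # Hypothetical sketch of the Clipping rule above, not the byzfl implementation.
    vectors = np.asarray(vectors, dtype=float)
    # Row-wise l2-norms, kept as a column for broadcasting.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    # Guard against zero-norm rows before dividing.
    safe_norms = np.maximum(norms, np.finfo(float).tiny)
    # min{1, c / ||x_i||_2}: rows with norm <= c pass through unchanged.
    scale = np.minimum(1.0, c / safe_norms)
    return scale * vectors

x = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
print(static_clip(x, c=2.0))  # first row scaled by 2/sqrt(14), approx. 0.5345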
- Initialization parameters:
c (float, optional) – Static clipping threshold. Set to 2.0 by default.
Calling the instance
- Input parameters:
vectors (numpy.ndarray, torch.Tensor, list of numpy.ndarray, or list of torch.Tensor) – A set of input vectors, provided either as a single stacked matrix/tensor (one vector per row) or as a list of vectors.
- Returns:
numpy.ndarray or torch.Tensor – The clipped vectors, stacked along the first dimension. The data type of the output is the same as that of the input.
Examples
>>> import byzfl
>>> agg = byzfl.Clipping(2.0)
Using numpy arrays
>>> import numpy as np
>>> x = np.array([[1., 2., 3.],       # np.ndarray
...               [4., 5., 6.],
...               [7., 8., 9.]])
>>> agg(x)
array([[0.53452248, 1.06904497, 1.60356745],
       [0.91168461, 1.13960576, 1.36752692],
       [1.00514142, 1.14873305, 1.29232469]])
Using torch tensors
>>> import torch
>>> x = torch.tensor([[1., 2., 3.],       # torch.tensor
...                   [4., 5., 6.],
...                   [7., 8., 9.]])
>>> agg(x)
tensor([[0.5345, 1.0690, 1.6036],
        [0.9117, 1.1396, 1.3675],
        [1.0051, 1.1487, 1.2923]])
Using list of numpy arrays
>>> import numpy as np
>>> x = [np.array([1., 2., 3.]),       # list of np.ndarray
...      np.array([4., 5., 6.]),
...      np.array([7., 8., 9.])]
>>> agg(x)
array([[0.53452248, 1.06904497, 1.60356745],
       [0.91168461, 1.13960576, 1.36752692],
       [1.00514142, 1.14873305, 1.29232469]])
Using list of torch tensors
>>> import torch
>>> x = [torch.tensor([1., 2., 3.]),       # list of torch.tensor
...      torch.tensor([4., 5., 6.]),
...      torch.tensor([7., 8., 9.])]
>>> agg(x)
tensor([[0.5345, 1.0690, 1.6036],
        [0.9117, 1.1396, 1.3675],
        [1.0051, 1.1487, 1.2923]])
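As a quick sanity check on the outputs above: the first input vector \([1., 2., 3.]\) has \(\ell_2\)-norm \(\sqrt{14} \approx 3.742 > 2\), so every example scales it by \(2/\sqrt{14} \approx 0.5345\), which reproduces the first row of each result.

>>> import numpy as np
>>> round(float(2.0 / np.linalg.norm(np.array([1., 2., 3.]))), 8)
0.53452248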