Attacks#
Welcome to the Byzantine Attacks module of the library, which provides implementations of attack strategies targeting the vectors, typically gradients, exchanged in machine learning systems. This module focuses on executing attacks designed to manipulate or disrupt the aggregation of vectors (or gradients) submitted by honest participants.
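As a concrete illustration of this kind of attack, the sketch below implements a classic sign-flipping strategy: the Byzantine participant submits the negated mean of the honest gradients, pulling a naive averaging aggregator away from the honest update direction. The function name and interface here are illustrative, not this module's actual API.

```python
import numpy as np

def sign_flip_attack(honest_gradients):
    """Illustrative sign-flipping attack (not this module's API):
    return the negated mean of the honest participants' gradients."""
    honest = np.asarray(honest_gradients)
    return -honest.mean(axis=0)

# Three honest gradient vectors submitted by honest participants
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
malicious = sign_flip_attack(grads)  # array([-3., -4.])
```

Averaging the honest gradients together with enough copies of such a malicious vector drives the aggregate toward zero or reverses it, which is why robust aggregation rules are needed.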
Explore this module to understand and experiment with a variety of Byzantine attack strategies, and to evaluate and strengthen the robustness of your machine learning systems against them.