Abstract: Federated learning (FL) is vulnerable to model poisoning attacks because local data remain invisible to the server and training is decentralized. The adversary attempts to maliciously manipulate local model ...