
Vinayak2002/Poisoning_Attacks_in_FL


Poisoning Attacks on Federated Learning Model


Technologies used:

Python
TensorFlow
Anaconda

What is Federated Learning:

Federated Learning (FL) was introduced by Google, which first used the approach to improve Gboard keyboard suggestions. FL trains a machine learning model over a distributed network of devices, with each device training on its own local dataset. Only model updates are sent to the server, so the server never has access to the private user data used to train the local models.
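The round structure described above can be sketched as a minimal federated-averaging (FedAvg) loop in plain Python/NumPy. The linear model, learning rate, and synthetic client datasets below are illustrative assumptions, not the repository's actual setup:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on squared loss
    for a linear model y ~ X @ w. The raw (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: dataset-size-weighted average of client parameters."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Each client holds its own private synthetic dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print(global_w)  # converges near [2.0, -1.0]
```

Only the locally trained parameters (`updates`) reach the server; the client data stays on-device.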


Attacks on Federated Learning:

Poisoning and inference attacks are two attacks that heavily impact the performance and privacy of Federated Learning. A poisoning attack is launched by a malicious participant who intends to corrupt the central model held on the server by submitting incorrect local model parameters during aggregation.
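To illustrate how a single malicious participant can corrupt the aggregated model, the NumPy sketch below (the client counts, targets, and scaling trick are assumptions for demonstration) shows a model-poisoning update steering plain unweighted FedAvg to an attacker-chosen model:

```python
import numpy as np

def fed_avg(updates):
    """Unweighted FedAvg: the server simply averages client parameters."""
    return np.mean(updates, axis=0)

honest_target = np.array([2.0, -1.0])

# Nine honest clients submit parameters near the true model.
rng = np.random.default_rng(1)
honest = [honest_target + rng.normal(scale=0.05, size=2) for _ in range(9)]

# The poisoning client scales its malicious parameters so that, after
# averaging over all 10 participants, the global model lands exactly on
# the attacker's target.
attack_target = np.array([-5.0, 5.0])
n_clients = 10
poison = n_clients * attack_target - sum(honest)

clean = fed_avg(honest)
poisoned = fed_avg(honest + [poison])
print(clean)     # near [2, -1]
print(poisoned)  # dragged to [-5, 5]
```

Because plain averaging gives every update equal weight, one unbounded malicious update is enough; this is why defenses such as update clipping or robust aggregation are studied.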


What have we done?

We simulated FL for a digit recognition model in Python, launched poisoning attacks against it, and recorded their impact on the results.
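A minimal stand-in for such an experiment is sketched below using scikit-learn's bundled digits dataset and a label-flipping attack; this is an assumption-laden approximation for illustration, not the project's code (the repository itself uses TensorFlow):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Honest baseline: digit classifier trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Label-flipping poisoning: a malicious participant relabels every
# digit in its shard (here, 40% of the training data) as '0'.
y_poisoned = y_train.copy()
n_poison = int(0.4 * len(y_poisoned))
y_poisoned[:n_poison] = 0
dirty = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:", clean.score(X_test, y_test))
print("poisoned accuracy:", dirty.score(X_test, y_test))
```

Comparing the two test accuracies shows the kind of degradation such an experiment records.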

Made with ❤️ at IIIT NAYA RAIPUR

About

Simulation of FL in Python for a digit recognition ML model. Simulated poisoning attacks and studied their impact.
