We are using FedML on several vehicles to train object detection models. Due to privacy concerns, we do not want vehicles to share training data directly; instead, we share the model weights produced by local training.
However, I recently read that it is possible to infer training data from shared weights if the attacker knows which model architecture is used. Is this possible with the FedML framework? If so, how does FedML protect against such attacks?
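For context, the attack in question (gradient/weight inversion) works because raw updates are correlated with the local training data. A common defense, independent of FedML's own mechanisms, is to clip each client's update and add noise before sharing it (DP-SGD style). The sketch below is only an illustration of that idea, not FedML's actual API; the function names `privatize` and `federated_average` are hypothetical:

```python
import random

def clip(update, max_norm):
    # Scale the update down if its L2 norm exceeds max_norm.
    norm = sum(w * w for w in update) ** 0.5
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [w * scale for w in update]

def privatize(update, max_norm=1.0, noise_std=0.1):
    # Clip, then add Gaussian noise so the shared update reveals
    # less about any single training example (illustrative, not FedML code).
    clipped = clip(update, max_norm)
    return [w + random.gauss(0.0, noise_std) for w in clipped]

def federated_average(updates):
    # Server-side FedAvg: element-wise mean of the client updates.
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

if __name__ == "__main__":
    random.seed(0)
    client_updates = [[0.5, -1.2, 0.3], [0.4, -1.0, 0.2], [0.6, -1.1, 0.4]]
    noisy = [privatize(u) for u in client_updates]
    print(federated_average(noisy))
```

The noise scale trades privacy against accuracy; secure aggregation (where the server only ever sees the sum of masked updates) is a complementary defense with no accuracy cost.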