September 2019

tl;dr: Seminal paper on federated learning: a distributed machine learning approach that enables model training on a large corpus of decentralized data.

Overall impression

FL addresses the problem of data privacy (critical for hospitals, financial institutions, etc.).

In FL, the training data is kept locally on users' mobile devices, and the devices act as nodes that perform computation on their local data in order to update a global model.

FL has its own challenges compared to conventional distributed machine learning, due to data imbalance, non-IID data, and a large number of devices with unreliable connections.
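To make the setup above concrete, here is a minimal federated-averaging-style loop (my own illustration, not the paper's exact algorithm): devices compute updates on purely local, non-IID data, and only those updates reach the server. All names and hyperparameters are illustrative.

```python
import numpy as np

def local_update(global_w, local_X, local_y, lr=0.1, epochs=5):
    """One device: a few SGD steps of linear regression on local data only."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w - global_w              # the update, not the raw data, is sent

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Non-IID split: each device sees a different region of the feature space.
devices = []
for k in range(10):
    X = rng.normal(loc=0.2 * k, scale=1.0, size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for rnd in range(20):                          # communication rounds
    selected = rng.choice(len(devices), size=5, replace=False)
    updates = [local_update(global_w, *devices[k]) for k in selected]
    global_w += np.mean(updates, axis=0)       # server averages the updates
print("learned weights:", global_w)
```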

Key ideas

  • The main bottleneck in FL is the uplink communication of neural network updates from devices to the central server. The paper proposes two methods to compress this communication.
  • Structured updates: constrain the update to the weight matrix to be either a low-rank matrix (easy to compress) or a sparse matrix with a random mask --> I like the random mask idea better, as only a random seed needs to be communicated for the server to regenerate the mask.
  • Sketched updates: first compute an unconstrained full update, then compress it via quantization before uploading (see the sketch after this list).
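A rough sketch of the two compression ideas on a single layer's update matrix (my own illustration under assumed shapes and rates, not the paper's reference implementation):

```python
import numpy as np

# `update` stands for one layer's weight update computed on a device.
update = np.random.default_rng(1).normal(size=(256, 128)).astype(np.float32)

# --- Structured update with a random mask -------------------------------
# Only the masked entries are ever learned/sent; the server rebuilds the
# mask from the shared seed, so just the kept values travel uplink.
seed, keep_frac = 42, 0.1
mask = np.random.default_rng(seed).random(update.shape) < keep_frac
values_sent = update[mask]                       # ~10% of the entries
server_mask = np.random.default_rng(seed).random(update.shape) < keep_frac
reconstructed = np.zeros_like(update)
reconstructed[server_mask] = values_sent

# --- Sketched update via 1-bit probabilistic quantization ---------------
# Compute the full update first, then round each entry to {min, max} with
# probabilities chosen so the quantized value is unbiased in expectation.
lo, hi = update.min(), update.max()
prob_hi = (update - lo) / (hi - lo)              # P(round up to hi)
bits = np.random.default_rng(0).random(update.shape) < prob_hi
dequantized = np.where(bits, hi, lo)             # server-side reconstruction

print("mask kept fraction:", mask.mean())
print("quantization MSE:", np.mean((dequantized - update) ** 2))
```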

Technical details

  • Summary of technical details

Notes

  • We can also use a common public dataset and knowledge distillation for communication: what is communicated is the consensus over the public dataset (Daliang Li's idea).
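A rough sketch of this distillation-based alternative (all names are illustrative assumptions, not from the paper): each device scores a shared public dataset with its locally trained model, and the server's consensus is the average of those predictions, which devices then distill back into their local models.

```python
import numpy as np

rng = np.random.default_rng(0)
public_X = rng.normal(size=(100, 10))            # shared unlabeled public set
num_devices, num_classes = 5, 3

def device_logits(device_id):
    """Stand-in for a locally trained model scoring the public dataset."""
    W = np.random.default_rng(device_id).normal(size=(10, num_classes))
    return public_X @ W

consensus = np.mean([device_logits(k) for k in range(num_devices)], axis=0)
# Each device would then train on public_X with `consensus` as soft targets
# (knowledge distillation) before the next communication round.
print("consensus logits shape:", consensus.shape)
```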