Speeding up DNN training for image classification, with OpenMPI
Updated Oct 27, 2021 - C++
Sample PHT implementation efforts from the PHT German team
[WIP] elastic training implemented with MXNet
Page for fedsim
A casual project providing a simple approach to distributed learning on block non-i.i.d. data
A distributed training framework for continual and incremental learning on multi-label, multi-class image tasks
Codes and experiments for the paper "Max-Discrepancy Distributed Learning: Fast Risk Bounds and Algorithms"
CycleSL: Server-Client Cyclical Update Driven Scalable Split Learning
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment
People's smartphones probably carry the most valuable but also most private data. Since using data promises to be one of the best ways to fight back against COVID-19, access to it is highly desirable. By using a Federated Learning approach with PySyft, it is possible to learn from the private data right on the smartphone, with the data …
A collaborative machine learning approach to train a model that classifies a person as smoker or non-smoker based on user data. Training is distributed, with secure model transmissions to a central cloud location, where an Amazon EC2 instance aggregates the new model from training updates received in homomorphically encrypted form
The repository focuses on conducting Federated Learning experiments using the Intel OpenFL framework with diverse machine learning models, utilizing image and tabular datasets applicable to different domains such as medicine and banking
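The aggregation step underlying most of the federated-learning projects listed here (OpenFL included) is federated averaging: clients train locally and the server combines their weights, typically weighted by local dataset size. A minimal sketch, with illustrative weight vectors and sample counts (real frameworks add secure transport, serialization, and multi-round scheduling):

```python
# Minimal FedAvg sketch: average client weight vectors, weighted by
# local dataset size. The "model" here is just a flat list of floats.

def fedavg(client_updates):
    """client_updates: list of (weights, num_samples) tuples."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    aggregated = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            aggregated[i] += w * n / total
    return aggregated

# Hypothetical round with three clients holding different data volumes.
updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30), ([5.0, 6.0], 60)]
print(fedavg(updates))  # weighted mean, approximately [4.0, 5.0]
```

The weighting by sample count means a client with six times the data pulls the global model six times harder, which is the standard FedAvg behavior.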
Dist-DGL running on WSL2 with minikube on a single machine
Distributed machine learning using processes
Reinforcement learning using PPO on the 3D Ball environment for Unity ML-Agents. Using MPI to do distributed training across separate processes
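Several entries above use MPI (or OpenMPI) for data-parallel training. The core step each iteration is: every worker computes a gradient on its own data shard, then all workers average the gradients (an MPI_Allreduce) and apply the same update. A sketch that simulates the ranks in-process for a one-parameter least-squares model; real code would use mpi4py or OpenMPI collectives instead of the `allreduce_mean` stand-in:

```python
# Data-parallel training's core loop: per-shard gradients, then an
# allreduce-style average before a synchronized weight update.
# Ranks are simulated in-process here for clarity.

def local_gradient(weights, shard):
    """Gradient of mean squared error for y = w*x on one data shard."""
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def allreduce_mean(grads):
    """Simulated MPI_Allreduce(SUM) followed by division by world size."""
    return [sum(g[0] for g in grads) / len(grads)]

# Two simulated "ranks", each holding a shard of (x, y) pairs with y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
weights = [0.0]
lr = 0.05
for _ in range(200):
    grads = [local_gradient(weights, s) for s in shards]
    avg = allreduce_mean(grads)
    weights = [weights[0] - lr * avg[0]]
print(round(weights[0], 3))  # converges toward 2.0, the true slope
```

Because every rank applies the identical averaged gradient, all replicas stay bit-for-bit in sync, which is what makes this scheme equivalent to large-batch training on the combined data.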
Training and deploying sentiment analysis models with deep learning using Amazon SageMaker. A BERT model was trained using distributed training with the help of Hugging Face.