Implementation of Local Updates Periodic Averaging (LUPA) SGD
Implementation of Redundancy Infused SGD for faster distributed SGD.
Communication-efficient decentralized SGD (PyTorch)
This repository contains the code that produces the numerical section in "On the Use of TensorFlow Computation Graphs in Combination with Distributed Optimization to Solve Large-Scale Convex Problems"
Implementation of (overlap) local SGD in PyTorch
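The local-SGD pattern behind entries like this one (and LUPA SGD above) can be sketched in a few lines: each worker takes several gradient steps on its own objective without communicating, then all workers average their parameters. The toy quadratic objectives, step counts, and function names below are illustrative assumptions, not code from any of these repositories.

```python
import numpy as np

# Toy sketch of local SGD with periodic averaging. Worker i minimizes
# f_i(x) = 0.5 * (x - b[i])**2; the average objective is minimized at
# x* = mean(b). All names and constants here are illustrative.
def local_sgd(b, rounds=50, tau=4, lr=0.1):
    x = np.zeros(len(b))           # one parameter copy per worker
    for _ in range(rounds):
        for _ in range(tau):       # tau local updates, no communication
            x -= lr * (x - b)      # gradient of each worker's quadratic
        x[:] = x.mean()            # periodic averaging (the only comm step)
    return x[0]
```

With averaging every `tau` steps the workers communicate `tau` times less often than synchronous SGD, at the cost of some drift between averaging rounds.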
Distributed Linear Programming Solver on top of Apache Spark
optopy is a prototyping and benchmarking Python framework for optimization, covering static and dynamic as well as centralized and distributed problems
Implementation of consensus algorithms using row-stochastic weights over directed graphs
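The consensus iteration this entry refers to is, at its core, each agent repeatedly replacing its value with a weighted average of its in-neighbors' values, where the weight matrix is row-stochastic (each row sums to 1) and the graph is directed. The specific weight matrix and iteration count below are illustrative assumptions.

```python
import numpy as np

# Row-stochastic (but not doubly stochastic) weights on a 3-agent
# directed graph with self-loops; rows sum to 1, columns do not.
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5]])
x = np.array([0.0, 3.0, 6.0])     # initial agent values

for _ in range(200):
    x = A @ x                     # one consensus iteration per round
# All agents converge to a common value: a weighted average of the
# initial values, with weights given by A's left Perron eigenvector.
```

Because A is only row-stochastic, the limit is generally not the plain mean of the initial values; row-stochastic schemes over directed graphs typically need a correction (as in the algorithms such repositories implement) to recover the exact average.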
We present UDP-based aggregation algorithms for federated learning. We also present a scalable framework for practical federated learning. We empirically evaluate the performance by training deep convolutional neural networks on the MNIST dataset and the CIFAR10 dataset.
We present an algorithm to dynamically adjust the data assigned for each worker at every epoch during the training in a heterogeneous cluster. We empirically evaluate the performance of the dynamic partitioning by training deep neural networks on the CIFAR10 dataset.
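One simple form of such dynamic partitioning is to reassign each worker's data share every epoch in proportion to its measured throughput, so faster workers receive more samples. The proportional rule and the function below are a hypothetical sketch, not the repository's actual algorithm.

```python
# Hypothetical sketch: split `total_samples` across workers in
# proportion to measured throughput (samples/sec per worker).
def partition_sizes(total_samples, throughputs):
    s = sum(throughputs)
    sizes = [int(total_samples * t / s) for t in throughputs]
    sizes[0] += total_samples - sum(sizes)  # assign rounding remainder
    return sizes
```

Re-running this at every epoch boundary lets the partition track stragglers as cluster conditions change.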
We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while maintaining the performance of vanilla SGD. We empirically evaluate the performance of the compression methods by training deep neural networks on the CIFAR10 dataset.
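To illustrate the general idea of gradient compression, here is a sketch of one common scheme, top-k sparsification with error feedback. This is only an example of the technique family; the repository's all-reduce-compatible methods may use different compressors (e.g. low-rank or random-k), and all names below are assumptions.

```python
import numpy as np

# Keep only the k largest-magnitude gradient entries; zero the rest.
def topk_compress(grad, k):
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]   # indices of k largest entries
    out[idx] = grad[idx]
    return out

grad = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
residual = np.zeros_like(grad)            # error-feedback memory
compressed = topk_compress(grad + residual, k=2)
residual = (grad + residual) - compressed # carry dropped mass forward
```

The error-feedback residual re-injects the dropped coordinates into later steps, which is what lets aggressive sparsification match the accuracy of vanilla SGD in practice.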
Pomodoro: Progressive Decomposition Methods with Acceleration
MATLAB implementation of the paper "Distributed Optimization of Average Consensus Containment with Multiple Stationary Leaders" [arXiv 2022].
dccp is a simple Python package that implements the DiPOA algorithm.
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
Implemented FedAvg & FedProx, decentralized optimization algorithms for neural networks, for an image classification task. Distributed Optimization and Learning (DOL) course project.
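The core of FedAvg is the server-side aggregation step: average the clients' model parameters weighted by their local dataset sizes. The minimal sketch below shows only that step (client-side training, and FedProx's proximal term, are omitted); the function name and list-based parameters are illustrative.

```python
# FedAvg aggregation: weighted average of client parameter vectors,
# with weights proportional to each client's number of local samples.
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            avg[j] += (n / total) * w[j]
    return avg
```

FedProx modifies only the clients' local objective (adding a proximal penalty toward the global model); this aggregation step stays the same.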
Scalable, structured, dynamically-scheduled hyperparameter optimization.
This library is an implementation of the algorithm described in "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach".
MATLAB implementation of the paper "Online Distributed Optimal Power Flow with Equality Constraints" [arXiv 2022].