Distributed training (multi-node) of a Transformer model
Unofficial implementation of "TTNet: Real-time temporal and spatial video analysis of table tennis" (CVPR 2020)
Acceleration of a classification model for thoracic diseases
Unofficial implementation of "Sigmoid Loss for Language Image Pre-Training"
A Tiny Version of the Original ultralytics/yolov5
You Only Look Once: Unified, Real-Time Object Detection
This is a simulator for access strategies for distributed caching. The simulator considers a user who is equipped with several caches and receives periodic updates from them about the cached content. The problem and algorithms implemented here are detailed in the paper: I. Cohen, G. Einziger, R. Friedman, and G. Scalosub, "Access Strategies for…
This repository is intended to be a template for starting new projects with PyTorch, in which deep learning models are trained and evaluated on medical imaging data.
Different template codes for Deep Learning with PyTorch.
demo for pytorch-distributed
Helmet detector based on CenterNet.
Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training with torch's DDP.
A simple API for launching Python functions on multiple ranked processes. mpify is designed to enable interactive multiprocessing experiments in Jupyter/IPython, such as distributed data parallel training over multiple GPUs.
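The repositories above all build on the same core pattern: each rank computes gradients on its own shard of the batch, the gradients are averaged across ranks (an all-reduce), and every rank then applies the identical update. A minimal single-process sketch of that pattern, using a toy linear model in pure Python (no real framework; the function names here are illustrative, not any library's API):

```python
# Toy illustration of the data-parallel training pattern automated by
# frameworks like PyTorch DDP: each "rank" computes gradients on its own
# shard of the batch, the gradients are averaged (an all-reduce), and
# every rank applies the same update to its replica of the weights.

def local_gradient(w, shard):
    """Mean gradient of the squared error (w*x - y)^2 over one rank's shard."""
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Average gradients across ranks (conceptually, all-reduce then divide)."""
    return sum(grads) / len(grads)

def ddp_step(w, shards, lr=0.01):
    grads = [local_gradient(w, s) for s in shards]  # each rank, in parallel
    g = all_reduce_mean(grads)                      # synchronize across ranks
    return w - lr * g                               # identical update everywhere

# Batch of (x, y) pairs drawn from y = 3x, split across 2 equal shards.
batch = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]
shards = [batch[:2], batch[2:]]

w = 0.0
for _ in range(200):
    w = ddp_step(w, shards)

# With equal shard sizes, the averaged gradient equals the full-batch
# gradient, so this matches single-worker full-batch SGD exactly.
print(round(w, 4))  # → 3.0
```

The key property, which real DDP relies on, is that averaging per-shard mean gradients over equally sized shards reproduces the full-batch mean gradient, so the distributed run is numerically equivalent to a larger-batch single-worker run.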