Interactive PyTorch DDP Training in FastAI Jupyter Notebooks

Ddip ("Dee dip") --- Distributed Data "interactive" Parallel is a little iPython extension of line and cell magics to bring together fastai lesson notebooks [1] and PyTorch's Distributed Data Parallel [2]. It uses ipyparallel [3] to manage the DDP process group.

Platform tested: a single host with multiple Nvidia CUDA GPUs, running Ubuntu Linux, Python 3, PyTorch, fastai v1, and the fastai course-v3 notebooks.

Features:

"Distributed training doesn’t work in a notebook..."

-- FastAI's tutorial, "How to launch a distributed training"

Ddip was conceived to address the above, with the following features:

  1. Switch execution easily between PyTorch's multiprocess DDP group and the local notebook namespace.

  2. Port a fastai course-v3 notebook to train in DDP with only 3 to 5 lines of IPython magics (see the sketch after this list).

  3. Reduce the chance of GPU out-of-memory errors by automatically emptying the GPU cache after a cell is executed in a GPU process.

  4. Extensible, to support future versions of fastai.
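
As a rough sketch of item 2, the lines below show what porting a course-v3 notebook might look like. The exact arguments to %makedip (GPU selection and app name) are assumptions based on the Overview section below, not copied from the project documentation.

    %load_ext Ddip
    %makedip -g all -a fastai_v1   # start a DDP group on all local GPUs with the fastai_v1 app (flags assumed)
    %autodip on                    # subsequent cells are implicitly prefixed with %%dip and run in the DDP group
    # ... the notebook's existing fastai cells (data loading, Learner creation, fit) run unchanged ...
    %autodip off                   # stop forwarding cells; later cells run in the local notebook again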

Summary of speedup observed in FastAI notebooks when trained with 3 GPUs.

Installation:

Current version: 0.1.1

pip install git+https://github.com/philtrade/Ddip.git@v0.1.1#egg=Ddip
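
To verify the install from a notebook (the extension name is taken from the %load_ext line in the Overview below), loading the extension should complete without error:

    %load_ext Ddip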

Overview:

Control DDP and the execution destination of cells using % and %% magics (a short usage sketch follows the list):

  • %load_ext Ddip, to load the extension.
  • %makedip ..., to start/stop/restart a DDP group and an app in it, e.g. fastai_v1.
  • %%dip {remote, local, everywhere} ..., to choose where the cell executes.
  • %autodip {on,off}, to automatically prepend %%dip to subsequent cells.
  • %dipush and %dipull, to pass objects between the notebook and the DDP namespaces.
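
A minimal sketch of how these magics might be combined in a session. Cell bodies and variable names are illustrative only, and the push/pull direction of %dipush/%dipull is an assumption based on their names.

Cell 1 (run in the DDP processes and in the local notebook):
    %%dip everywhere
    import torch

Cell 2 (run only in the DDP processes):
    %%dip remote
    x = torch.randn(2, 2)

Cell 3 (run only in the local notebook namespace):
    %%dip local
    y = 42

Cell 4 (line magics to move objects between the two namespaces):
    %dipush y    # copy the notebook's y into the DDP process namespaces (direction assumed)
    %dipull x    # copy x, defined in Cell 2, from the DDP group back into the notebook (direction assumed)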

How to run DDP in FastAI notebooks with Ddip:

References:

  1. FastAI Course v3

  2. On Distributed Training: PyTorch's Distributed Data Parallel

  3. On ipyparallel