Autograd and Fork
Suraj Subramanian edited this page Jul 24, 2023 · 1 revision
TLDR: Use `spawn` instead of `fork`.
The autograd engine relies on a thread pool, which makes it unsafe to `fork` a process once autograd has been used. PyTorch detects such situations and warns users to use the `spawn` start method of `multiprocessing` instead.
So this code will work (`simple_autograd_function` stands in for any function that exercises autograd):

```python
import torch
import multiprocessing as mp

def simple_autograd_function(i=1):
    # any work that runs the autograd engine
    torch.rand(3, requires_grad=True).sum().backward()
    return i

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    simple_autograd_function()
    with ctx.Pool(3) as pool:
        pool.map(simple_autograd_function, [1, 2, 3])
```
Whereas this code will fail, because the forked children inherit the parent's already-started autograd thread pool in a broken state:

```python
import torch
import multiprocessing as mp

def simple_autograd_function(i=1):
    torch.rand(3, requires_grad=True).sum().backward()
    return i

if __name__ == '__main__':
    ctx = mp.get_context('fork')
    simple_autograd_function()  # autograd threads are started here
    with ctx.Pool(3) as pool:
        pool.map(simple_autograd_function, [1, 2, 3])
```
See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods for more details.
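If you control the program's entry point, you can also make `spawn` the process-wide default with `multiprocessing.set_start_method` instead of creating a context. A minimal sketch, assuming `simple_autograd_function` is a hypothetical stand-in for any function that runs autograd:

```python
import multiprocessing as mp

def simple_autograd_function(i=1):
    # hypothetical stand-in for work that runs the autograd engine
    return i * 2

if __name__ == '__main__':
    # Must be called at most once, before any processes or pools are created.
    mp.set_start_method('spawn')
    with mp.Pool(3) as pool:
        print(pool.map(simple_autograd_function, [1, 2, 3]))  # [2, 4, 6]
```

`set_start_method` affects every pool created afterwards, while `get_context` scopes the choice to one context object; the context form is safer inside library code that shouldn't change global state.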