nn Basics
Joel Schlosser edited this page Jul 1, 2021 · 3 revisions
- Understand what `torch.nn` is
- Understand what a module is
- Understand how modules are used to build and train neural networks
- Understand how to author a module in PyTorch
- Understand how to test modules in PyTorch
`torch.nn` is the component of PyTorch that provides building blocks for neural networks. Its core abstraction is `nn.Module`, which encapsulates stateful computation with learnable parameters. Modules integrate with the autograd system and are generally trained using optimizers provided in `torch.optim`.
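To make this concrete, here is a minimal sketch of authoring an `nn.Module` and taking a single optimizer step; the module name, layer sizes, and hyperparameters are illustrative assumptions, not taken from this page:

```python
import torch
from torch import nn

torch.manual_seed(0)  # make the illustration deterministic

# Hypothetical minimal module: a linear layer followed by ReLU.
class TinyNet(nn.Module):
    def __init__(self, in_features=4, out_features=1):
        super().__init__()
        # nn.Linear registers its weight and bias as learnable parameters
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyNet()
# Parameters registered by the module are discoverable via model.parameters(),
# which is how torch.optim optimizers find what to update.
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
target = torch.zeros(8, 1)

loss = nn.functional.mse_loss(model(x), target)
loss.backward()  # autograd populates .grad on each parameter
opt.step()       # SGD updates the parameters in place
```

Note how the module never touches gradients directly: `backward()` and the optimizer handle that through the parameters the module registered.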
Read through the following links:
Work through the lab.
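As a small taste of the testing goal listed above, a module's forward pass can be checked against a reference computation with `torch.testing.assert_close`; the layer sizes below are illustrative:

```python
import torch
from torch import nn

torch.manual_seed(0)
layer = nn.Linear(3, 2)
x = torch.randn(5, 3)

# nn.Linear computes x @ weight.T + bias; verify the module against that reference.
expected = x @ layer.weight.T + layer.bias
torch.testing.assert_close(layer(x), expected)
```

`assert_close` raises with a descriptive message on mismatch, which is why PyTorch's own module tests favor it over plain equality checks on floating-point tensors.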