The lecture-specific learning objectives for the course are presented below.
- Compare and contrast the representations of integers and floating point numbers
- Explain how double-precision floating point numbers are represented by 64 bits
- Identify common computational issues caused by floating point numbers, e.g., rounding, overflow, etc.
- Calculate the "spacing" of a 64-bit floating point number in Python
- Write code defensively against numerical errors
- Use `numpy.iinfo()`/`numpy.finfo()` to work out the possible numerical range of an integer or float dtype
- Explain the difference between a model, loss function, and optimization algorithm in the context of machine learning
- Explain and implement the gradient descent algorithm
- Apply gradient descent to linear and logistic regression
- Use `scipy.optimize.minimize()` to minimize a function
- Explain and implement the stochastic gradient descent algorithm
- Explain the advantages and disadvantages of stochastic gradient descent as compared to gradient descent
- Explain what epochs, batch sizes, iterations, and computations are in the context of gradient descent and stochastic gradient descent
- Describe the difference between `numpy` and `torch` arrays (`np.array` vs. `torch.Tensor`)
- Explain fundamental concepts of neural networks such as layers, nodes, activation functions, etc.
- Create a simple neural network in PyTorch for regression or classification
- Explain how backpropagation works at a high level
- Describe the difference between training loss and validation loss when creating a neural network
- Identify and describe common techniques to avoid overfitting/apply regularization to neural networks, e.g., early stopping, dropout, L2 regularization
- Use PyTorch to develop a fully-connected neural network and training pipeline
- Describe the terms convolution, kernel/filter, pooling, and flattening
- Explain how convolutional neural networks (CNNs) work
- Calculate the number of parameters in a given CNN architecture
- Create a CNN in PyTorch
- Discuss the key differences between CNNs and fully connected NNs
- Load image data using `torchvision.datasets.ImageFolder()` to train a network in PyTorch
- Explain what "data augmentation" is and why we might want to do it
- Save and re-load a PyTorch model
- Tune the hyperparameters of a PyTorch model using Ax
- Describe what transfer learning is and the different flavours of it: "out-of-the-box", "feature extractor", "fine tuning"
- Describe what an autoencoder is at a high level and what it can be useful for
- Describe what a generative adversarial network is at a high level and what it can be useful for
- Describe what a multi-input model is and what it can be useful for
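As a concrete illustration of the integer/float range and "spacing" objectives above, the following sketch uses `numpy.iinfo()`, `numpy.finfo()`, and `numpy.spacing()` (the specific dtypes and values are chosen only for illustration):

```python
import numpy as np

# Representable range of a 32-bit integer dtype
ii = np.iinfo(np.int32)
print(ii.min, ii.max)  # -2147483648 2147483647

# Representable range and machine epsilon of 64-bit floats
fi = np.finfo(np.float64)
print(fi.min, fi.max, fi.eps)

# "Spacing": the gap between a float and the next representable float.
# The gap grows with magnitude, which is why large floats lose absolute precision.
print(np.spacing(1.0))  # equals fi.eps at 1.0
print(np.spacing(1e9))  # a much larger gap at 1e9
```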
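The gradient descent objectives above can be sketched on least-squares linear regression; the data, learning rate, and iteration count below are made-up choices for illustration, not a prescribed recipe:

```python
import numpy as np

# Synthetic regression data: design matrix with an intercept column
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])
true_w = np.array([2.0, -3.0])
y = X @ true_w

# Gradient descent on the mean squared error L(w) = mean((Xw - y)^2)
w = np.zeros(2)
lr = 0.5  # learning rate (step size)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE
    w -= lr * grad

print(w)  # should be close to [2, -3]
```

Swapping the full-dataset gradient for one computed on a random mini-batch of rows turns this into stochastic gradient descent.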
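For the `scipy.optimize.minimize()` objective above, a minimal sketch on a toy quadratic (the function and starting point are arbitrary) looks like:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective with a known minimum at (1, -2)
def f(p):
    x, y = p
    return (x - 1) ** 2 + (y + 2) ** 2

# minimize() needs the objective and an initial guess
res = minimize(f, x0=np.zeros(2))
print(res.x)  # approximately [1, -2]
```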