# Deep Learning wtf (with TensorFlow) Course

This repository compiles and revises my teaching material from a course on deep learning with TensorFlow taught in 2021/22 and 2022/23. It is now meant as a self-contained resource for self-study.

Sessions 14 and 15, as well as exercises 08-11, are not yet available in this repository. They will be added once I find time to adapt the content for self-study.

**How to study:** It is best to go through the material in sequential order: first work through the session's notebook and supplementary material, optionally read some of the recommended background reading, and then do the exercise designated for that session, if there is one (see the table below).

## Course contents

| Session | Content | Exercise | Supplementary material | Recommended or seminal readings |
|---|---|---|---|---|
| 00 | Basic tensor operations in TensorFlow + prerequisites (sketch below) | Exercise0 | - | Mathematics for Machine Learning book |
| 01 | From biological neurons to logic gates, activation functions, and universal function approximation (build your first ANN from scratch) | Exercise01 | Deriving matrix multiplication gradients 1, deriving matrix multiplication gradients 2 | McCulloch & Pitts (1943) |
| 02 | Learning in ANNs: gradient descent, backpropagation, and automatic differentiation (build your first ANN from scratch, including backpropagation and a training loop) | Exercise02 | - | Rumelhart, Hinton & Williams (1986) |
| 03 | Basic usage of TensorFlow's automatic differentiation: the GradientTape context manager (sketch below) | - | - | TensorFlow's autodiff guide; TensorFlow's advanced autodiff guide |
| 04 | Modules, layers, and models: an introduction to the Keras Subclassing API (sketch below) | - | - | TensorFlow's intro to modules |
| 05 | Keras metrics for keeping track of losses, accuracies, etc. | - | - | - |
| 06 | Loss functions and optimizers | Exercise03 | - | Kingma & Ba (2015); Bishop (2006), chapters 3+4 |
| 07 | Putting it together: using TensorBoard to log training data and implementing a subclassed model with Keras metrics and a custom training loop (sketch below) | - | - | - |
| 08 | Convolutional Neural Networks (incl. interactive widget) | Exercise04 | - | Goodfellow, Bengio & Courville (2016), chapter 9; Krizhevsky, Sutskever & Hinton (2012); Simonyan & Zisserman (2014); He, Zhang, Ren et al. (2015); Huang, Liu, van der Maaten et al. (2017) |
| 09 | Regularization: avoiding overfitting with L1/L2 penalties, dropout, normalization, and data augmentation | Exercise05 | - | Goodfellow, Bengio & Courville (2016), chapter 7; Srivastava, Hinton, Krizhevsky et al. (2014); Bishop (2006), chapter 5.5 |
| 10 | Optimization difficulties: vanishing and exploding gradients; weight initialization, normalization, and residual/skip connections as partial solutions | Exercise06 | - | He, Zhang, Ren et al. (2015); Ba, Kiros & Hinton (2016); Goodfellow, Bengio & Courville (2016), chapter 8 |
| 11 | Recurrent Neural Networks: from unrolled recurrence to dynamically unrolled custom recurrent cells | Exercise07 | - | Hochreiter & Schmidhuber (1997); Cho, van Merrienboer & Gulcehre (2014); Elman (1990); Sherstinsky (2020) |
| 12 | Autoencoders | Exercise08 | - | - |
| 13 | Generative models | Exercise09 | - | - |
| (14) | Transformers and NLP | Exercise10 | - | - |
| (15) | Deep Reinforcement Learning | Exercise11 | - | - |
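
As a taste of where the course starts, here is a minimal sketch of the kind of tensor operations Session 00 covers. The specific values are illustrative, not taken from the course notebooks:

```python
import tensorflow as tf

# Tensors are multi-dimensional arrays with a fixed dtype.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.ones((2, 2))

print(a + b)                     # elementwise addition
print(tf.matmul(a, b))           # matrix multiplication
print(tf.reduce_sum(a, axis=0))  # column sums -> [4. 6.]
```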
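Session 03 introduces TensorFlow's `GradientTape` context manager. A minimal sketch of its basic usage; the function being differentiated is a made-up example:

```python
import tensorflow as tf

x = tf.Variable(3.0)

# Operations executed inside the tape's context are recorded,
# so gradients can be computed afterwards via reverse-mode autodiff.
with tf.GradientTape() as tape:
    y = x**2 + 2.0 * x  # y = x^2 + 2x

grad = tape.gradient(y, x)  # dy/dx = 2x + 2
print(grad.numpy())         # 8.0 at x = 3
```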
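Session 04 covers the Keras Subclassing API. A minimal sketch of a subclassed model; the class name, layer sizes, and shapes are illustrative assumptions, not the course's own model:

```python
import tensorflow as tf

class TwoLayerMLP(tf.keras.Model):
    """Layers are created in __init__; the forward pass lives in call()."""

    def __init__(self, hidden_units=64, num_classes=10):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(hidden_units, activation="relu")
        self.out = tf.keras.layers.Dense(num_classes)  # returns logits

    def call(self, inputs):
        return self.out(self.hidden(inputs))

model = TwoLayerMLP()
logits = model(tf.random.normal((8, 784)))  # weights are built on first call
print(logits.shape)  # (8, 10)
```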
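Sessions 05-07 build toward a custom training loop that combines `GradientTape`, Keras metrics, and an optimizer. A sketch of how these pieces typically fit together; the model, data, and hyperparameters are placeholders:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])  # placeholder model
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Keras metrics accumulate state across batches (Session 05).
train_loss = tf.keras.metrics.Mean(name="train_loss")
train_acc = tf.keras.metrics.SparseCategoricalAccuracy(name="train_acc")

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    train_loss.update_state(loss)
    train_acc.update_state(y, logits)

# One step on a random batch, just to show the call pattern; in practice
# you would iterate over a tf.data.Dataset and reset the metrics with
# .reset_state() after each epoch.
train_step(tf.random.normal((32, 784)),
           tf.random.uniform((32,), maxval=10, dtype=tf.int32))
print(train_loss.result().numpy(), train_acc.result().numpy())
```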