
Introduction to Recurrent Neural Networks

There is strong demand for deep learning (DL) skills and expertise to solve challenging business problems, both globally and locally in KSA. This course will help learners build capacity in core DL tools and methods and enable them to develop their own applications that use recurrent neural networks. The course covers the basic theory behind RNN algorithms, but the majority of the focus is on hands-on examples using PyTorch.

Learning Objectives

The primary learning objective of this course is to provide students with practical, hands-on experience with state-of-the-art machine learning and deep learning tools that are widely used in industry.

This course covers portions of Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow and Machine Learning with PyTorch and Scikit-Learn. The following topics will be discussed.

  • Processing Sequences using Recurrent Neural Networks (RNNs)
  • Natural Language Processing using Attention and Transformers
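To give a flavor of the first topic, the sketch below shows how PyTorch's nn.RNN processes a batch of sequences. It is not taken from the course notebooks; the batch, sequence, and hidden sizes are arbitrary assumptions, and the point is the tensor shapes involved.

import torch
import torch.nn as nn

batch_size, seq_len, n_features, hidden_size = 32, 50, 1, 16

# One recurrent layer; batch_first=True means inputs are (batch, time, features).
rnn = nn.RNN(input_size=n_features, hidden_size=hidden_size, batch_first=True)

x = torch.randn(batch_size, seq_len, n_features)
output, h_n = rnn(x)

print(output.shape)  # torch.Size([32, 50, 16]): hidden state at every time step
print(h_n.shape)     # torch.Size([1, 32, 16]): final hidden state of each sequence

nn.LSTM and nn.GRU follow the same calling convention (nn.LSTM additionally returns a cell state), so the same shape reasoning carries over to the other recurrent cell types used in the course.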

Lessons

The lessons are organized into modules so that they can be taught somewhat independently to accommodate specific audiences.

Module 0: Recap of Deep Learning Fundamentals

Tutorial                                       | Open in Google Colab | Open in Kaggle
Univariate Time Series Forecasting with RNNs   | Google Colab         | Kaggle
Multivariate Time Series Forecasting with RNNs | Google Colab         | Kaggle
  • Consolidation of the previous day's content via Q&A and live coding demonstrations.
  • The morning session will focus on the theory behind Transformers and Attention, covering portions of Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow and Machine Learning with PyTorch and Scikit-Learn.
  • The afternoon session will focus on applying the techniques learned in the morning session using PyTorch (a flavor of the forecasting exercises is sketched below), followed by a short assessment on the Kaggle data science competition platform.
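To illustrate what the two forecasting tutorials above are about, here is a minimal, self-contained sketch of one-step-ahead univariate forecasting with an LSTM. Everything in it (the synthetic sine-wave data, the 20-step window, the model, and the training loop) is an illustrative assumption, not code from the notebooks.

# Illustrative sketch only: one-step-ahead univariate forecasting with an LSTM.
# The sine-wave data, window size, and model are assumptions, not course code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic univariate series: a noisy sine wave.
t = torch.linspace(0, 100, 1000)
series = torch.sin(t) + 0.1 * torch.randn_like(t)

# Slice the series into (window, next-value) training pairs.
window = 20
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X.unsqueeze(-1)  # (num_samples, window, 1): one feature per time step

class Forecaster(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        output, _ = self.lstm(x)
        # Use the hidden state at the last time step to predict the next value.
        return self.head(output[:, -1, :]).squeeze(-1)

model = Forecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):  # a few full-batch epochs, just to show the loop shape
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")

The multivariate case follows the same pattern, with input_size set to the number of features per time step instead of 1.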

Assessment

Student performance in the course will be assessed through participation in a Kaggle classroom competition.

Repository Organization

Repository organization is based on ideas from Good Enough Practices for Scientific Computing.

  1. Put each project in its own directory, which is named after the project.
  2. Put external scripts or compiled programs in the bin directory.
  3. Put raw data and metadata in a data directory.
  4. Put text documents associated with the project in the doc directory.
  5. Put all Docker related files in the docker directory.
  6. Install the Conda environment into an env directory.
  7. Put all notebooks in the notebooks directory.
  8. Put files generated during cleanup and analysis in a results directory.
  9. Put project source code in the src directory.
  10. Name all files to reflect their content or function.

Building the Conda environment

After adding any dependencies that should be installed via conda to the environment.yml file and any dependencies that should be installed via pip to the requirements.txt file, you can create the Conda environment in a sub-directory ./env of your project directory by running the following commands.

export ENV_PREFIX=$PWD/env
mamba env create --prefix $ENV_PREFIX --file environment.yml --force

Once the new environment has been created you can activate the environment with the following command.

conda activate $ENV_PREFIX

Note that the ENV_PREFIX directory is not under version control as it can always be re-created as necessary.

For your convenience these commands have been combined in a shell script ./bin/create-conda-env.sh. Running the shell script will create the Conda environment, activate the Conda environment, and build JupyterLab with any additional extensions. The script should be run from the project root directory as follows.

./bin/create-conda-env.sh

Ibex

The most efficient way to build Conda environments on Ibex is to launch the environment creation script as a job on the debug partition via Slurm. For your convenience a Slurm job script ./bin/create-conda-env.sbatch is included. The script should be run from the project root directory as follows.

sbatch ./bin/create-conda-env.sbatch

Listing the full contents of the Conda environment

The explicit dependencies for the project are listed in the environment.yml file. To see the full list of packages installed into the environment, run the following command.

conda list --prefix $ENV_PREFIX

Updating the Conda environment

If you add (remove) dependencies to (from) the environment.yml file or the requirements.txt file after the environment has already been created, then you can re-create the environment with the following command.

mamba env create --prefix $ENV_PREFIX --file environment.yml --force

Using Docker

In order to build Docker images for your project and run containers with GPU acceleration, you will need to install Docker, Docker Compose, and the NVIDIA Docker runtime.

Detailed instructions for using Docker to build an image and launch containers can be found in docker/README.md.
