Don't waste time setting up a deep learning environment when you can get one with everything pre-installed:
- TensorFlow
- PyTorch
- Numpy
- Scikit-Learn
- Pandas
- Matplotlib
- Seaborn
- Plotly
- NLTK
- Jupyter notebook/lab
- conda
- mamba (faster than conda)[^1]
Variant | Tag | Conda | PyTorch | TensorFlow |
---|---|---|---|---|
Conda | `conda` | ✔️ | ✔️ | ✔️ |
No Conda | `no-conda`, `latest` | ❌ | ✔️ | ✔️ |
PyTorch | `pytorch` | ❌ | ✔️ | ❌ |
PyTorch (Nightly) | `pytorch-nightly` | ❌ | ✔️ | ❌ |
TensorFlow | `tensorflow` | ❌ | ❌ | ✔️ |
You can see the full list of tags at https://hub.docker.com/r/matifali/dockerdl/tags.
- Docker
- nvidia-container-toolkit[^2]
- Linux, macOS, or Windows with WSL2
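Before pulling the image, it can help to confirm that Docker can actually see your GPU. A quick sanity check (the CUDA base image tag below is only an example; any recent `nvidia/cuda` tag works):

```shell
# If nvidia-container-toolkit is set up correctly, this prints the same
# GPU table that running `nvidia-smi` on the host prints.
docker run --gpus all --rm nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi
```

If this fails with an error about the `--gpus` flag or a missing runtime, revisit the nvidia-container-toolkit installation before continuing.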
```shell
docker run --gpus all --rm -it -h dockerdl matifali/dockerdl bash
```
```shell
docker run --gpus all --rm -it -h dockerdl -p 8000:8000 matifali/dockerdl code-server --accept-server-license-terms serve-local --without-connection-token --quality stable --telemetry-level off
```
Connect to the server using your browser at http://localhost:8000.
```shell
docker run --gpus all --rm -it -h dockerdl -p 8888:8888 matifali/dockerdl jupyter notebook --no-browser --port 8888 --NotebookApp.token='' --ip='*'
```

Or, for JupyterLab:

```shell
docker run --gpus all --rm -it -h dockerdl -p 8888:8888 matifali/dockerdl jupyter lab --no-browser --port 8888 --ServerApp.token='' --ip='*'
```
Connect by opening http://localhost:8888 in your browser.
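Containers started with `--rm` are deleted on exit, so anything saved inside them is lost. To keep notebooks and checkpoints across runs, you can bind-mount a host directory into the container. A sketch, where `~/work` and `/home/coder/work` are example paths (the default image user is `coder`, per the build arguments below):

```shell
# Mount a host directory so your work persists after the container exits.
# ~/work on the host and /home/coder/work in the container are examples;
# adjust both paths to your setup.
docker run --gpus all --rm -it -h dockerdl \
    -v ~/work:/home/coder/work \
    matifali/dockerdl bash
```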
```shell
git clone https://github.com/matifali/dockerdl.git
```
Modify the corresponding Dockerfile to add or delete packages.
The following `--build-arg` options are available:
Argument | Description | Default | Possible Values |
---|---|---|---|
USERNAME | User name | coder | Any string or $USER |
USERID | User ID | 1000 | $(id -u $USER) |
GROUPID | Group ID | 1000 | $(id -g $USER) |
PYTHON_VER | Python version | 3.10 | 3.10, 3.9, 3.8 |
CUDA_VER | CUDA version | 11.7.1 | 11.7.0, 11.8.0 etc. |
UBUNTU_VER | Ubuntu version | 22.04 | 22.04, 20.04, 18.04 |
TF_VERSION | TensorFlow version | latest | Any version on PyPI[^3] |
Note: Not all combinations of `--build-arg` are tested.
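The `$USER`, `$(id -u $USER)`, and `$(id -g $USER)` expressions in the table are expanded by your shell on the host before Docker sees them. A quick way to check which values will actually be passed:

```shell
# Print the values that would be passed as build args.
# These are resolved on the host, not inside the container.
echo "USERNAME=$USER"
echo "USERID=$(id -u)"
echo "GROUPID=$(id -g)"
```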
Build an image with the default settings and your own username, user ID, and group ID:
```shell
docker build -t dockerdl:latest \
    --build-arg USERNAME=$USER \
    --build-arg USERID=$(id -u $USER) \
    --build-arg GROUPID=$(id -g $USER) \
    -f conda.Dockerfile .
```
Build an image with Python 3.8, TensorFlow 2.6.0, CUDA 11.5.0, Ubuntu 20.04, and no conda:
```shell
docker build -t dockerdl:latest \
    --build-arg USERNAME=$USER \
    --build-arg USERID=$(id -u $USER) \
    --build-arg GROUPID=$(id -g $USER) \
    --build-arg PYTHON_VER=3.8 \
    --build-arg CUDA_VER=11.5.0 \
    --build-arg UBUNTU_VER=20.04 \
    --build-arg TF_VERSION=2.6.0 \
    -f noconda.Dockerfile .
```
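After the build finishes, a quick smoke test confirms that the framework imports and sees the GPU. A sketch, assuming the image was tagged `dockerdl:latest` as in the command above:

```shell
# Verify TensorFlow inside the freshly built image: prints the TF version
# and the list of GPUs it can see (an empty list means no GPU access).
docker run --gpus all --rm dockerdl:latest \
    python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"
```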
Follow the instructions here.
- Install VS Code.
- Install the Docker extension.
- Install the Python extension.
- Install the Remote Development extension.
- Follow the instructions here.
If you find any issues, feel free to open an issue and submit a PR.
- Please give a star (⭐) if this project has helped you.
- Help the flood victims in Pakistan by donating here.
Footnotes

[^1]: mamba is a fast, drop-in replacement for the conda package manager. It is written in C++ and uses the same package format as conda.

[^2]: This image is based on nvidia/cuda and uses nvidia-container-toolkit to access the GPU.

[^3]: PyPI is the Python Package Index, a repository of software for the Python programming language.