anaconda3

Anaconda3, Jupyter Notebook, OpenCV3, TensorFlow and Keras2 for Deep Learning

Available tags

Anaconda3, Jupyter, OpenCV3

| Tag | CUDA Toolkit (minimum driver: Linux x86_64 / Windows x86_64) | cuDNN |
|-----|--------------------------------------------------------------|-------|
| latest, cpu | - | - |
| 10.0-cudnn7 | CUDA 10.0 (>= 410.48 / 411.31) | 7 |
| 9.2-cudnn7 | CUDA 9.2 (>= 396.26 / 397.44) | 7 |
| 9.1-cudnn7 | CUDA 9.1 (>= 387.26 / 388.19) | 7 |
| 9.0-cudnn7 | CUDA 9.0 (>= 384.81 / 385.54) | 7 |
| 8.0-cudnn6 | CUDA 8.0 (>= 375.26 / 376.51) | 6 |

TensorFlow

| Tag | Base | TensorFlow |
|-----|------|------------|
| tf-cpu, tf | cpu | 1.12.0 |
| tf-10.0-cudnn7 | 10.0-cudnn7 | 1.12.0 |
| tf-9.2-cudnn7 | 9.2-cudnn7 | 1.12.0 |
| tf-9.1-cudnn7 | 9.1-cudnn7 | 1.7.0 |
| tf-9.0-cudnn7 | 9.0-cudnn7 | 1.7.0 |
| tf-8.0-cudnn6 | 8.0-cudnn6 | 1.4.1 |

Keras (TensorFlow backend)

| Tag | Base | TensorFlow | Keras |
|-----|------|------------|-------|
| keras-cpu, keras | tf-cpu | 1.12.0 | 2.2.4 |
| keras-10.0-cudnn7 | tf-10.0-cudnn7 | 1.12.0 | 2.2.4 |
| keras-9.2-cudnn7 | tf-9.2-cudnn7 | 1.12.0 | 2.2.4 |
| keras-9.1-cudnn7 | tf-9.1-cudnn7 | 1.7.0 | 2.1.5 |
| keras-9.0-cudnn7 | tf-9.0-cudnn7 | 1.7.0 | 2.1.5 |
| keras-8.0-cudnn6 | tf-8.0-cudnn6 | 1.4.1 | 2.1.3 |

PyTorch

| Tag | Base | PyTorch |
|-----|------|---------|
| pytorch-cpu, pytorch | cpu | 1.0.0 |
| pytorch-10.0-cudnn7 | 10.0-cudnn7 | 1.0.0 |
| pytorch-9.2-cudnn7 | 9.2-cudnn7 | 1.0.0 |
| pytorch-9.1-cudnn7 | 9.1-cudnn7 | 1.0.0 |
| pytorch-9.0-cudnn7 | 9.0-cudnn7 | 1.0.0 |
| pytorch-8.0-cudnn6 | 8.0-cudnn6 | 1.0.0 |

MXNet

| Tag | Base | MXNet package |
|-----|------|---------------|
| mxnet-cpu, mxnet | cpu | mxnet |
| mxnet-10.0-cudnn7 | 10.0-cudnn7 | mxnet-cu100 |
| mxnet-9.2-cudnn7 | 9.2-cudnn7 | mxnet-cu92 |
| mxnet-9.1-cudnn7 | 9.1-cudnn7 | mxnet-cu91 |
| mxnet-9.0-cudnn7 | 9.0-cudnn7 | mxnet-cu90 |
| mxnet-8.0-cudnn6 | 8.0-cudnn6 | mxnet-cu80 |
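
Tags combine a framework prefix with the CUDA/cuDNN base above, so a specific image can be pulled directly by tag. For example, using a tag taken from the Keras table:
$ docker pull okwrtdsh/anaconda3:keras-10.0-cudnn7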

How to Use

CPU

  1. Run with docker (image: okwrtdsh/anaconda3:keras-cpu):
$ docker run -v $(pwd):/src/notebooks -p 8888:8888 -td okwrtdsh/anaconda3:keras-cpu
  2. Open http://localhost:8888 in a web browser (if Jupyter asks for a token, see the log check below).
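
The container runs Jupyter in the background; if the notebook asks for an access token, it is printed in the startup log. A minimal check, assuming the container id reported by docker run (older Jupyter setups may not require a token):
# Show the Jupyter startup log, which contains the access URL/token if one is required
$ docker logs <container_id>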

GPU

  1. Run with nvidia-docker (image: okwrtdsh/anaconda3:keras-10.0-cudnn7):
$ nvidia-docker run -v $(pwd):/src/notebooks -p 8888:8888 -td okwrtdsh/anaconda3:keras-10.0-cudnn7
  2. Open http://localhost:8888 in a web browser (a GPU visibility check is sketched below).
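
To confirm the GPU is visible inside the container before opening a notebook, nvidia-smi can be run in the running container (a quick sketch; assumes the nvidia-docker run above succeeded):
# List the GPUs visible inside the container
$ docker exec <container_id> nvidia-smi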

CPU (docker-compose)

  1. Create a docker-compose.yml (image: okwrtdsh/anaconda3:keras-cpu):
version: '3'
services:
  jupyter:
    image: okwrtdsh/anaconda3:keras-cpu
    ports:
      - '8888:8888'
    volumes:
      - ./notebooks:/src/notebooks
  2. Run with docker-compose:
$ docker-compose up -d
  3. Open http://localhost:8888 in a web browser (troubleshooting commands are sketched below).
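
If the notebook is not reachable, the service status and the Jupyter startup log can be inspected with standard docker-compose subcommands (the service name jupyter comes from the docker-compose.yml above):
# Confirm the jupyter service is up
$ docker-compose ps
# Show the Jupyter startup log, which contains the access URL/token if one is required
$ docker-compose logs jupyter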

GPU (docker-compose)

  1. Create a docker-compose.yml (image: okwrtdsh/anaconda3:keras-10.0-cudnn7):
version: '3'
services:
  jupyter:
    image: okwrtdsh/anaconda3:keras-10.0-cudnn7
    ports:
      - '8888:8888'
    volumes:
      - ./notebooks:/src/notebooks
  2. Run with docker-compose (the exact command depends on your nvidia-docker version):
# Run with nvidia-docker-compose (nvidia-docker v1)
$ nvidia-docker-compose up -d
# Run with docker-compose (nvidia-docker v2)
$ docker-compose up -d
  3. Open http://localhost:8888 in a web browser (a GPU visibility check is sketched below).
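
As with plain nvidia-docker, GPU visibility can be verified from inside the running service (a sketch; the service name jupyter comes from the docker-compose.yml above):
# List the GPUs visible inside the jupyter service container
$ docker-compose exec jupyter nvidia-smi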

Please note that PyTorch uses shared memory to share data between processes, so if torch.multiprocessing is used (e.g. for multi-threaded data loaders), the default shared memory segment size the container runs with is not enough; increase the shared memory size with either the --ipc=host or the --shm-size command-line option to nvidia-docker run.

With nvidia-docker:
$ nvidia-docker run --ipc=host -v $(pwd):/src/notebooks -p 8888:8888 -td okwrtdsh/anaconda3:pytorch-10.0-cudnn7
With docker-compose:
version: '3'
services:
  jupyter:
    image: okwrtdsh/anaconda3:pytorch-10.0-cudnn7
    ipc: host
    ports:
      - '8888:8888'
    volumes:
      - ./notebooks:/src/notebooks
# Run with nvidia-docker-compose (nvidia-docker v1)
$ nvidia-docker-compose up -d
# Run with docker-compose (nvidia-docker v2)
$ docker-compose up -d
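
To confirm that --ipc=host (or ipc: host) actually took effect, the shared memory mount can be inspected from inside the running container (a quick sketch; with Docker's default settings /dev/shm is only 64 MB):
# Check the shared memory segment size inside the container
$ docker exec <container_id> df -h /dev/shm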
