DockerFiles

Some Dockerfiles to install OpenCV, FFmpeg and deep learning frameworks. I also use them as a reminder for complicated framework installations.

Requirements

Most of these images use the NVIDIA runtime for Docker.

To use the NVIDIA runtime as the default runtime, add this to /etc/docker/daemon.json:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
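After editing /etc/docker/daemon.json, restart the Docker daemon so the new default runtime takes effect; on a systemd-based host this is typically:

sudo systemctl restart docker
docker info | grep -i runtime   # should list nvidia and report it as the default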

Building images

With the docker CLI

sudo docker build -t image_name -f dockerfile_name .

Note: docker build has no --runtime option, so if a Dockerfile needs the GPU at build time, set nvidia as the default runtime (see above) before building.

With Make

I made a Makefile to automate the build process.

make IMAGE_NAME

The image name is the concatenation of the library name and the tag (e.g. the opencv library with the _gpu tag is built by make opencv_gpu).

Note 1: make accepts a NOCACHE=ON argument to force the rebuild of all images.
Note 2: As images depend on each other, make automatically builds image dependencies (e.g. building the opencv_cpu image with make opencv_cpu also creates pythonlib_cpu and ffmpeg_cpu).
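For example, with the targets listed below:

make opencv_cpu              # builds pythonlib_cpu and ffmpeg_cpu first, then opencv_cpu
make opencv_gpu NOCACHE=ON   # ignore the build cache and rebuild the whole gpu chain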

List of all available images

Library      Tag                      Description
all          --                       all images
             _cpu                     all cpu images
             _gpu                     all gpu images
             _alpine                  all alpine images
pythonlib    _cpu, _gpu               my standard configuration with all the libraries I use
ffmpeg       _cpu, _gpu               ffmpeg compiled from source with x264 and h265 (nvenc on gpu images)
opencv       _cpu, _gpu               opencv compiled from source
redis        _cpu, _gpu               redis compiled from source
mxnet        _cpu                     mxnet compiled from source with mkl
             _gpu                     mxnet compiled from source with gpu support
nnvm         _cpu                     nnvm and tvm compiled from source
             _gpu_opencl              nnvm, tvm and opencl compiled from source with gpu support
             _cpu_opencl              nnvm, tvm and opencl compiled from source
tensorflow   _cpu, _gpu               tensorflow
pytorch      _cpu, _gpu               pytorch and pytorch/vision
numba        _cpu, _gpu               numba
jupyter      _cpu, _gpu               a jupyter server with "pass" as the password
vcpkg        _cpu                     vcpkg installed
alpine       _redis, _pythonlib,      some useful images based on alpine for a
             _node, _dotnet,          small memory footprint
             _vcpkg, _rust*

*: Rust proc-macro crates don't work on musl. If you need them, use the following image to cross-compile.

Create container (CPU only)

docker run -it --name container_name -p 0.0.0.0:6000:7000 -p 0.0.0.0:8000:9000 -v shared/path/on/host:/shared/path/in/container image_name:latest /bin/bash
sudo docker run -it             # -it option allows interaction with the container
--name container_name           # Name of the created container
-p 0.0.0.0:6000:7000            # Port redirection (redirect host port 6000 to container port 7000)
-p 0.0.0.0:8000:9000            # Port redirection (redirect host port 8000 to container port 9000)
-v shared/path/on/host:/shared/path/in/container    # Configure a shared directory between host and container
image_name:latest               # Image name to use for container creation
/bin/bash                       # Command to execute

Note: Don't map ports you don't use, as two containers can't listen on the same host port (cf. "Alias to create Jupyter server" for random port assignment).

Create container (with GPU support)

NV_GPU='0' docker run -it --runtime=nvidia --name container_name -p 0.0.0.0:6000:7000 -p 0.0.0.0:8000:9000 -v shared/path/on/host:/shared/path/in/container image_name:latest /bin/bash
NV_GPU='0'                      # GPU id given by nvidia-smi ('0', '1' or '0,1' for GPU0, GPU1 or both)
sudo docker run -it             # -it option allows interaction with the container
--runtime=nvidia                # Allow docker to run with nvidia runtime to support GPU
--name container_name           # Name of the created container
-p 0.0.0.0:6000:7000            # Port redirection (redirect host port 6000 to container port 7000)
-p 0.0.0.0:8000:9000            # Port redirection (redirect host port 8000 to container port 9000)
-v shared/path/on/host:/shared/path/in/container    # Configure a shared directory between host and container
image_name:latest               # Image name to use for container creation
/bin/bash                       # Command to execute

Note: Don't map ports you don't use, as two containers can't listen on the same host port (cf. "Alias to create Jupyter server" for random port assignment in a range).
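To check that the container actually sees the GPU, run nvidia-smi inside it; it should print the same GPU table as on the host:

docker exec -it container_name nvidia-smi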

Advanced use

Open new terminal in running container

docker exec -it container_name /bin/bash

Alias to create Jupyter server

CPU version

alias jupserver='docker run -it -d -p 0.0.0.0:5000-5010:8888 -v $PWD:/home/dev/host jupyter_cpu:latest'

Note: If the host port is a range of ports and the container port a single one, Docker will choose a random free port in the specified range.
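To find out which port Docker actually picked, query the running container with docker port (the container name or id comes from docker ps):

docker port container_name 8888   # prints e.g. 0.0.0.0:5003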

GPU version

alias jupserver='docker run -it -d -p 0.0.0.0:5000-5010:8888 -v $PWD:/home/dev/host jupyter_gpu:latest'


Alias to create an isolated devbox

alias devbox='docker run -it --rm -v $PWD:/home/dev/host mxnet:latest'
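Because of --rm, the container is deleted on exit; only files written under /home/dev/host survive, as that path is the mounted host directory. For example:

cd ~/my_project   # any project directory
devbox            # throwaway shell, with the project mounted at /home/dev/host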

Fixed version

Sometimes an update of a library breaks compatibility with other modules, so some Dockerfiles pin a fixed version to keep the older release. Other tools are downloaded at their latest version, so I have to change the pinned versions manually at each update. Most of the time I try to keep the latest version of every tool; in some cases a newer release fixes the bug that made me pin the version without my noticing.

Tool      Version   Docker image     Description
cuda      11.1      all gpu images   --
cudnn     8         all gpu images   --
opencv    4.5.4     opencv           --
ffmpeg    4.3.2     ffmpeg           api break, should be fixed in opencv soon
pytorch   1.9.1     pytorch          --
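In the Dockerfiles, pinning a version means checking out a fixed tag before building from source. A minimal sketch for opencv (the actual clone options and build steps vary per Dockerfile):

RUN git clone --branch 4.5.4 --depth 1 https://github.com/opencv/opencv.git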

Script

The generate.py script, available in the script folder, provides three commands.

  • generate.py amalgamation: generates a standalone Dockerfile for each end image, with all dependencies expanded so the image no longer builds on intermediate images.
  • generate.py makefile: updates the Makefile with all images found in the folders. Useful after an amalgamation generation.
  • generate.py concatenate: concatenates Dockerfiles. For example, to add jupyter support to the pytorch image, generate.py concatenate --filename ../super/pytorch/Dockerfile.jupyter --base pytorch_cpu -- jupyter_cpu generates a new Dockerfile that builds on pytorch_cpu and appends the jupyter_cpu installation. This image is available - after a Makefile update - via make pytorch_jupyter.

Examples:

./generate.py concatenate --filename ../super/jupyter/Dockerfile.mxnet --base mxnet_cpu_mkl -- jupyter_cpu
./generate.py concatenate --filename ../super/jupyter/Dockerfile.opencv --base opencv_cpu -- jupyter_cpu
./generate.py concatenate --filename ../super/jupyter/Dockerfile.pythonlib --base pythonlib_cpu -- jupyter_cpu
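After generating new Dockerfiles this way, refresh the Makefile so the new images get build targets:

./generate.py makefile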
