# Dockerized Deep QA

## Instructions

Step 1: Build the `deepqa:latest` image, from the root directory:

```sh
docker build -t deepqa:latest .
# Or the GPU version:
docker build -t deepqa:latest -f Dockerfile.gpu .
```

Step 2: Run the `data_dirs.sh` script and indicate the folder where you want the docker data to be stored (model files, dataset, logs):

```sh
cd DeepQA/docker
./data_dirs.sh <base_dir>
```

Warning: The `data/` folder will be entirely copied from `DeepQA/data`. If you're not running the script from a fresh clone and have downloaded a big dataset, this can take a while.

Step 3: Copy the model you want (e.g. the pre-trained model) inside `<base_dir>/model-server`.
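For instance, if the model checkpoint files sit in a local `save/model/` folder (a hypothetical source path; adjust it to wherever your model actually lives), the copy might look like:

```shell
cp -r DeepQA/save/model/* <base_dir>/model-server/
```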

Step 4: Start the server with docker-compose:

```sh
DEEPQA_WORKDIR=<base_dir> docker-compose -f deploy.yml up
# Or the GPU version:
DEEPQA_WORKDIR=<base_dir> nvidia-docker-compose -f deploy.yml up
```

After the server is launched, you should be able to speak with the ChatBot at http://localhost:8000/.
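A quick way to check that the server is responding (port 8000 taken from the URL above) is to request the page and print the HTTP status code:

```shell
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/
```

A `200` indicates the server is up; anything else suggests it is still starting or failed to launch.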

Note: You can also train a model with the previous command by replacing `deploy.yml` with `train.yml`.
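In other words, the training variant of the command would be:

```shell
DEEPQA_WORKDIR=<base_dir> docker-compose -f train.yml up
```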

## For the GPU version

Note that the GPU version requires Nvidia-Docker. In addition, to install `nvidia-docker-compose`:

```sh
pip install jinja2
pip install nvidia-docker-compose
```
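Once Nvidia-Docker is installed, a common smoke test is to run `nvidia-smi` inside a CUDA container (the `nvidia/cuda` image tag is an assumption; any CUDA-enabled image should work):

```shell
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

If this prints your GPU details, the GPU passthrough is working and the GPU compose commands above should run.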