
ZoDIAC: Zoneup Dropout Injection Attention Calculation

This repo contains the code for the original version of the ZoDIAC paper. A preprint with DOI is available at Research Square, and the arXiv preprint version is available here.

Note:

The code will be updated upon acceptance of the revised manuscript, to reflect the modifications and updated results in the revised version of the work.

Requirements

Python 3.6 or later.

PyTorch 1.6 (along with torchvision).

Other requirements are handled by the Dockerfile.

Prepare data

See details in data.md.

Prepare Environment

With Docker installed on the machine, run the following command from the project root directory to build the image.

$ docker build -t zodiac:self-critical .

We also need to mount the data folder into the Docker container. First, move the data folder to any location you like. Then, run the following command.

$ export ZODIAC_DATA_DIR="/path/to/data"

Then you can run the following command to start the Docker container.

$ docker run --runtime=nvidia --name zc1 -it \
        -v $ZODIAC_DATA_DIR:/workspace/self-critical/data \
        --shm-size 32G -p 777:777 zodiac:self-critical /bin/bash

Lastly, once inside the Docker container, define an ID for the run as follows:

$ export ID="zc1"

Of course, you can use different IDs for other runs.
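For example, a second, parallel run could use its own container name, host port, and ID so that checkpoints and logs do not collide. This is a sketch only: the name `zc2` and port `778` below are arbitrary choices, not repo conventions.

```shell
# Hypothetical second container for a parallel run; zc2 and port 778 are arbitrary.
$ docker run --runtime=nvidia --name zc2 -it \
        -v $ZODIAC_DATA_DIR:/workspace/self-critical/data \
        --shm-size 32G -p 778:778 zodiac:self-critical /bin/bash

# Then, inside that container:
$ export ID="zc2"
```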

Training & Evaluation

For training and evaluation, shell scripts are provided in the shcmd folder. You can use them directly from the project directory.

Training

For training the Transformer model, use the following command:

$ sh shcmd/train.sh --caption_model transformer

For training the ZoDIAC-Sigmoid model, use the following command:

$ sh shcmd/train.sh --caption_model zodiac-sigmoid

For training the ZoDIAC-Tanh model, use the following command:

$ sh shcmd/train.sh --caption_model zodiac-tanh
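If you want to queue all three trainings back to back, a small wrapper loop over the `--caption_model` values above works. This is a convenience sketch, not part of the repo; it is written as a dry run that only prints the commands, so delete the leading `echo` to actually execute them.

```shell
# Dry-run sketch: print the three training commands from above.
# Delete the leading 'echo' to execute them for real.
for model in transformer zodiac-sigmoid zodiac-tanh; do
    echo sh shcmd/train.sh --caption_model "$model"
done
```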

Evaluate on Karpathy's test split

For evaluating all models, use the following command:

$ sh shcmd/eval.sh

Evaluate on COCO test set

$ sh shcmd/eval.sh --input_json cocotest.json

Ensemble Evaluation

First modify the shcmd/eval_ensemble.sh based on the IDs of multiple runs for ensemble evaluation. Then, run the following command:

$ sh shcmd/eval_ensemble.sh
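The exact contents of shcmd/eval_ensemble.sh are not shown here, so the following is only a hypothetical sketch of the kind of change meant: list the run IDs whose checkpoints should be ensembled, and iterate over them. The variable name `IDS` is an assumption, not verified against the script.

```shell
# Hypothetical sketch: list the run IDs to ensemble (IDS is an assumed name),
# then iterate over them; the real script's variable names may differ.
IDS="zc1 zc2 zc3"
for id in $IDS; do
    echo "would load checkpoint for run $id"
done
```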

Acknowledgements

Thanks to the self-critical repo by Ruotian Luo.

About

The code for our ZoDIAC paper submitted to SN-AIRE.
