
Convolutional Deep Semantic Similarity Model

This repository implements CDSSM, which ranks candidate evidence documents by semantic similarity to an input claim.

Be sure to read the paper before continuing, as my implementation has begun to diverge from it.
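For orientation, below is a minimal PyTorch sketch of the scoring path as described in the paper: letter-trigram inputs are convolved over a sliding window of word positions, max-pooled, projected into a low-dimensional semantic space, and compared by cosine similarity. The dimensions (roughly 30k letter trigrams, a 300-dim convolution, a 128-dim semantic layer) follow the paper, and every name here is illustrative rather than taken from clsm_pytorch.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CDSSMTower(nn.Module):
    """One half of the siamese model: letter-trigram counts -> semantic vector.
    Hyperparameters follow the paper; this repo's may differ."""
    def __init__(self, trigram_dim=30000, conv_dim=300, sem_dim=128, window=3):
        super().__init__()
        # Convolution over a sliding window of word positions.
        self.conv = nn.Conv1d(trigram_dim, conv_dim, kernel_size=window, padding=1)
        self.semantic = nn.Linear(conv_dim, sem_dim)

    def forward(self, x):
        # x: (batch, trigram_dim, seq_len), letter-trigram counts per position
        h = torch.tanh(self.conv(x))         # (batch, conv_dim, seq_len)
        v = h.max(dim=2).values              # max-pool over the sequence
        return torch.tanh(self.semantic(v))  # (batch, sem_dim)

# Relevance is the cosine similarity of the claim and evidence vectors.
def relevance(claim_vec, evidence_vec):
    return F.cosine_similarity(claim_vec, evidence_vec, dim=1)

Training then pushes the cosine score of the correct evidence above those of sampled negatives via a softmax over candidates, as in the paper.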

Instructions

Entry point: clsm_pytorch.py

Relevant Files

Much of the evidence has been preprocessed into pickle format; the following files and parameters help cut compute and training time.

  • claims_dict.pkl provides a mapping of claims to their preprocessed representations.
  • feature_encoder.pkl and encoder.pkl are used to preprocess text on the fly: encoder.pkl maps characters to letter trigrams, and feature_encoder.pkl maps those trigrams to one-hot vectors. Make sure you have both (a loading sketch follows this list).
  • The data folder contains all the input data needed to run the model.
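A minimal sketch of loading these pickles, assuming each is a plain Python dictionary as described above; verify the exact key/value structure against clsm_pytorch.py:

import pickle

# NOTE: the contents are assumed from the descriptions above, not verified.
with open("claims_dict.pkl", "rb") as f:
    claims_dict = pickle.load(f)      # claim -> preprocessed representation

with open("encoder.pkl", "rb") as f:
    encoder = pickle.load(f)          # characters -> letter trigrams

with open("feature_encoder.pkl", "rb") as f:
    feature_encoder = pickle.load(f)  # trigrams -> one-hot vectors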

These files are stored in /usr/users/mnadeem/CDSSM_github. The following commands will copy them locally:

cp /usr/users/mnadeem/CDSSM_github/claims_dict.pkl .
cp /usr/users/mnadeem/CDSSM_github/feature_encoder.pkl .
cp /usr/users/mnadeem/CDSSM_github/encoder.pkl .
cp -r /usr/users/mnadeem/CDSSM_github/data/ .

Create the models and predicted_labels folders:

mkdir models
mkdir predicted_labels

To run:

python3 clsm_pytorch.py --data data/large ARGS

Speedups

  • Running with the --sparse-evidences flag loads a dictionary of preprocessed matrices rather than building it at runtime, which speeds up training significantly (see the example invocation after this list).
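For example, combining the data path from above with the speedup flag (any further ARGS depend on the argument parser in clsm_pytorch.py):

python3 clsm_pytorch.py --data data/large --sparse-evidences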

Notes

  • I normally run it on a Titan X; each 2% of an epoch takes 20-30 s, or roughly 16-25 minutes per epoch (50 x 20-30 s).
  • I normally use one GPU and haven't noticed a speedup from running on several GPUs.
