
Toy-Model-for-NLI

My toy model for the natural language inference (NLI) task, implemented in TensorFlow.

Details

I use a biLSTM with structured self-attention to encode both the premise and the hypothesis as 2D representations, then apply a decomposable attention to capture the important interactions between premise and hypothesis. Finally, I employ an ESIM-like method to combine this information.

Model Architecture

(model architecture diagram)
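
The sketch below is a minimal TensorFlow 1.x illustration of the stack described in Details: a biLSTM encoder, structured self-attention producing a 2D representation, decomposable attention between the two sentences, and an ESIM-style composition. Layer sizes and names (LSTM_UNITS, ATTN_HOPS, ATTN_UNITS) are illustrative assumptions, not the repository's actual hyper-parameters.

# Minimal sketch of the encoder/interaction stack; sizes below are assumptions.
import tensorflow as tf

LSTM_UNITS = 300   # hidden units per LSTM direction (assumed)
ATTN_HOPS = 10     # rows r of the structured self-attention matrix (assumed)
ATTN_UNITS = 150   # d_a from Lin et al. 2017 (assumed)

def bilstm_encode(inputs, lengths, scope):
    """Encode an embedded sentence (batch, time, emb_dim) -> (batch, time, 2 * LSTM_UNITS)."""
    with tf.variable_scope(scope):
        fw = tf.nn.rnn_cell.LSTMCell(LSTM_UNITS)
        bw = tf.nn.rnn_cell.LSTMCell(LSTM_UNITS)
        (out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
            fw, bw, inputs, sequence_length=lengths, dtype=tf.float32)
        return tf.concat([out_fw, out_bw], axis=-1)

def structured_self_attention(H, scope):
    """A = softmax(W2 tanh(W1 H^T)); returns the 2D representation M = A H."""
    with tf.variable_scope(scope):
        hidden = tf.layers.dense(H, ATTN_UNITS, activation=tf.tanh, use_bias=False)
        scores = tf.layers.dense(hidden, ATTN_HOPS, use_bias=False)   # (batch, time, r)
        A = tf.nn.softmax(tf.transpose(scores, [0, 2, 1]))            # normalize over time
        return tf.matmul(A, H)                                        # (batch, r, 2 * LSTM_UNITS)

def decomposable_attention(Mp, Mh):
    """Soft-align the premise and hypothesis 2D representations against each other."""
    E = tf.matmul(Mp, Mh, transpose_b=True)                           # (batch, r, r)
    Mp_aligned = tf.matmul(tf.nn.softmax(E), Mh)                      # softmax over hypothesis rows
    Mh_aligned = tf.matmul(tf.nn.softmax(tf.transpose(E, [0, 2, 1])), Mp)
    return Mp_aligned, Mh_aligned

def esim_combine(M, M_aligned):
    """ESIM-style enhancement: concat [a, a~, a - a~, a * a~], project, then pool."""
    enhanced = tf.concat([M, M_aligned, M - M_aligned, M * M_aligned], axis=-1)
    projected = tf.layers.dense(enhanced, LSTM_UNITS, activation=tf.nn.relu)
    return tf.concat([tf.reduce_max(projected, axis=1),
                      tf.reduce_mean(projected, axis=1)], axis=-1)

The pooled premise and hypothesis vectors would then be concatenated and fed to a softmax classifier over the three NLI labels.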

References

  1. A Structured Self-Attentive Sentence Embedding proposed by Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio (ICLR 2017)
  2. A Decomposable Attention Model for Natural Language Inference proposed by Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit (EMNLP 2016)
  3. Enhanced LSTM for Natural Language Inference proposed by Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen (ACL 2017)

Dataset

The dataset used for this task is Stanford Natural Language Inference (SNLI). Word embeddings are initialized with pretrained GloVe vectors trained on Common Crawl (840B tokens).
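
As a rough illustration of how the pretrained vectors can be wired in (the repository's Utils.py may organize this differently), the snippet below builds an embedding matrix from a GloVe text file; the file name glove.840B.300d.txt and the vocab mapping are assumptions.

# Sketch: build an embedding matrix from GloVe; unseen words get small random vectors.
import numpy as np

def load_glove(path, vocab, dim=300):
    matrix = np.random.uniform(-0.05, 0.05, (len(vocab), dim)).astype(np.float32)
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:   # skip malformed lines
                matrix[vocab[word]] = np.asarray(values, dtype=np.float32)
    return matrix

# vocab maps word -> row index, built during preprocessing
# embeddings = load_glove('glove.840B.300d.txt', vocab)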

Requirements

  • Python>=3
  • NumPy
  • TensorFlow>=1.8

Usage

Download the dataset from Stanford Natural Language Inference, then move snli_1.0_train.jsonl, snli_1.0_dev.jsonl and snli_1.0_test.jsonl into ./SNLI/raw data.

# move dataset to the right place
mkdir -p ./SNLI/raw\ data
mv snli_1.0_*.jsonl ./SNLI/raw\ data

Preprocess the data to convert the source files into an easy-to-use format:

python3 Utils.py
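
For reference, the following is a minimal sketch of the kind of conversion this step performs: reading the SNLI jsonl files and keeping (premise, hypothesis, label) triples while dropping unlabelled '-' examples. The exact output format produced by Utils.py may differ.

# Sketch: parse SNLI jsonl into (premise, hypothesis, label) triples.
import json

LABELS = {'entailment': 0, 'neutral': 1, 'contradiction': 2}

def read_snli(path):
    examples = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            record = json.loads(line)
            label = record['gold_label']
            if label in LABELS:   # skip examples with no gold label ('-')
                examples.append((record['sentence1'], record['sentence2'], LABELS[label]))
    return examples

# train = read_snli('./SNLI/raw data/snli_1.0_train.jsonl')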

Default hyper-parameters are stored in the config file at ./config/config.yaml.

Train the model:

python3 Train.py

Test the model:

python3 Test.py

Results

Fine-tuning ...