Neural Text Simplification Using TensorFlow

This project is an exploration of adapting TensorFlow's neural machine translation model (nmt) to the text simplification task. It is similar to Neural Text Simplification, which is based on OpenNMT. An interactive demo is served at simpletext.xyz.

Quick Start:

  1. Clone the repository to your local machine recursively:

     git clone --recursive https://github.com/captainjtx/SimpleText.git

  2. Install the Python packages:

     cd SimpleText
     pip install -r requirements.txt

  3. Download the pretrained models into a local directory (./model):

     mkdir model
     python script/download_models.py

  4. Run inference on one of the pretrained models (a seq2seq model: 2-hidden-layer LSTM with attention, dropout 0.25). The default input is test/complex.txt and the default output is test/inference.txt (a sketch for batching several sentences follows this list):

     mkdir test
     echo "Science Fantasy is a genre where elements of science fiction and fantasy co-exist." > test/complex.txt
     ./script/test_attention.sh
     less test/inference.txt
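
To batch several sentences through the model, here is a minimal Python sketch (not part of the repository; it assumes only the line-aligned test/complex.txt -> test/inference.txt convention described in step 4):

    import subprocess

    # Illustrative inputs: one complex sentence per line.
    complex_sentences = [
        "Science Fantasy is a genre where elements of science fiction and fantasy co-exist.",
        "The ruling was upheld by the appellate court despite numerous objections.",
    ]

    with open("test/complex.txt", "w") as f:
        f.write("\n".join(complex_sentences) + "\n")

    # Run the same wrapper script as in step 4.
    subprocess.run(["./script/test_attention.sh"], check=True)

    # The output file is line-aligned with the input file.
    with open("test/inference.txt") as f:
        for src, simple in zip(complex_sentences, f):
            print(f"{src}\n -> {simple.strip()}\n")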

Retrain the model

Our models are trained on the Wikipedia corpus. We performed further cleaning on the data to focus only on sentence pairs whose simplified sentence is shorter than the original (thresholding at 80% of the original length). After that, subword tokenization (byte-pair encoding, BPE) was performed to tackle the out-of-vocabulary problem. A Jupyter notebook is provided that walks through the complete preprocessing: downloading the dataset, thresholding the sentence reduction, and subword segmentation.
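
The two preprocessing steps can be sketched in a few lines of Python; the notebook below is the authoritative walkthrough, and the file names and merge count here are hypothetical. This sketch assumes the 80% threshold means keeping pairs whose simplified side has at most 80% of the tokens of the complex side, with BPE learned and applied through the subword-nmt package:

    from subword_nmt.learn_bpe import learn_bpe
    from subword_nmt.apply_bpe import BPE

    RATIO = 0.8  # assumed reading of the 80% threshold

    def keep_pair(complex_sent, simple_sent):
        # Keep only pairs where the "simple" side is meaningfully shorter.
        return len(simple_sent.split()) <= RATIO * len(complex_sent.split())

    # Learn a BPE code table from the training text (hypothetical paths
    # and merge count).
    with open("data/train.txt") as infile, open("data/bpe.codes", "w") as outfile:
        learn_bpe(infile, outfile, num_symbols=32000)

    # Segment a sentence into subword units so rare words are decomposed
    # into known pieces instead of being mapped to <unk>.
    with open("data/bpe.codes") as codes:
        bpe = BPE(codes)
    print(bpe.process_line("An unfathomable catastrophe"))
    # e.g. "An un@@ fath@@ om@@ able cat@@ ast@@ rophe"; the exact
    # splits depend on the learned codes.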

  1. Start Jupyter Notebook:

     jupyter notebook

  2. Open WikNet_Explore.ipynb and run it step by step.

  3. Train on the generated dataset using nmt:

     ./script/train_nmt_attention_bpe.sh
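
Because the model is trained on BPE-segmented text, its raw output carries the "@@" continuation markers that subword-nmt attaches to non-final subword pieces. If the provided scripts do not already strip them (an assumption worth checking), a one-line postprocess restores whole words:

    def debpe(line):
        # subword-nmt marks every non-final subword piece with a trailing
        # "@@"; removing the marker (and its following space) rejoins words.
        return line.replace("@@ ", "").replace("@@", "")

    print(debpe("un@@ fath@@ om@@ able"))  # -> unfathomable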