
bertVersusAll

Using the Traditional Embed-Encode-Attend-Predict Model
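As a rough orientation, the sketch below shows one way an embed-encode-attend-predict classifier can be wired up in Keras. Every layer size, vocabulary size, and hyperparameter here is an illustrative assumption, not necessarily this repository's actual implementation.

    import keras
    from keras import layers

    # Illustrative hyperparameters, not taken from this repository.
    vocab_size, embed_dim, max_len, n_classes = 20000, 128, 250, 2

    inputs = layers.Input(shape=(max_len,))
    # Embed: map token ids to dense vectors.
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    # Encode: contextualize each token with a bidirectional LSTM.
    h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # Attend: score each timestep, normalize over time, and pool
    # the encoder states into one weighted summary vector.
    scores = layers.Dense(1, activation="tanh")(h)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Dot(axes=1)([weights, h])   # (batch, 1, 128)
    context = layers.Flatten()(context)          # (batch, 128)
    # Predict: classify from the attended summary.
    outputs = layers.Dense(n_classes, activation="softmax")(context)

    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()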

Using the BERT Model (Minimal BERT)

  1. Install: pip install -U bert-serving-server bert-serving-client

  2. Install the other required libraries and download the data:

    • pip install keras_metrics keras pandas numpy scikit-learn
  3. Serve the model:

    • bert-serving-start -model_dir ~/bert/models/cased_L-12_H-768_A-12/ -num_worker=2 -max_seq_len=250
    • -num_worker sets how many workers serve requests (the model is loaded n times, once per worker)
    • -max_seq_len sets the largest input size the BERT model will accept (250 is suggested for concept work)
    • Once the server is up, embeddings can be requested from Python; see the sketch after these steps.
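
With the server running, sentence embeddings can be requested through bert-serving-client. The snippet below is a minimal sketch assuming the server from step 3 is up on localhost with its default ports; the texts, labels, and downstream classifier are illustrative placeholders, not this repository's pipeline.

    from bert_serving.client import BertClient
    from sklearn.linear_model import LogisticRegression

    # Connect to the running bert-serving-start instance (localhost by default).
    bc = BertClient()

    # Placeholder texts and labels, for illustration only.
    texts = ["the movie was great", "the movie was terrible"]
    labels = [1, 0]

    # Each text becomes one fixed-size vector (768 dims for BERT-base).
    features = bc.encode(texts)
    print(features.shape)  # (2, 768)

    # The vectors can feed any downstream classifier, e.g. logistic regression.
    clf = LogisticRegression().fit(features, labels)
    print(clf.predict(bc.encode(["a wonderful film"])))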

About

A project to compare BERT against an embed-encode-attend-predict architecture for NLP classification.
