idastani7/Spam-Message-Detection


Spam-Message-Detection

In this implementation, I fine-tuned BERT to classify messages as spam or ham and obtained 93% accuracy. The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. It is a bidirectional transformer pretrained with a combination of a masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia.
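The classification step can be sketched as below. This is a minimal, hypothetical illustration using the HuggingFace `transformers` library, not the author's exact code: the checkpoint name, label mapping, and hyperparameters such as `max_length` are assumptions.

```python
# Hypothetical sketch of spam/ham inference with a fine-tuned BERT
# sequence-classification model (HuggingFace transformers assumed).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed label mapping: index 0 = ham, index 1 = spam.
LABELS = {0: "ham", 1: "spam"}


def classify(texts, model, tokenizer):
    """Tokenize a batch of messages and return predicted string labels."""
    enc = tokenizer(
        texts,
        padding=True,
        truncation=True,
        max_length=128,  # assumed sequence length
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**enc).logits  # shape: (batch, 2)
    return [LABELS[i] for i in logits.argmax(dim=-1).tolist()]


def load(name="bert-base-uncased"):
    """Load a tokenizer and a 2-label classification head on top of BERT.

    For real spam detection the checkpoint would be one fine-tuned on a
    labeled spam/ham dataset; "bert-base-uncased" here is a placeholder.
    """
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    model.eval()
    return model, tokenizer
```

Usage would look like `model, tokenizer = load(); classify(["Win a FREE prize now!!!"], model, tokenizer)`, which returns a list of `"ham"`/`"spam"` labels, one per input message.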
