# toxic-comment-classification

Here are 133 public repositories matching this topic...

Trained models & code to predict toxic comments on all three Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.

  • Updated May 16, 2024
  • Python
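This entry corresponds to the detoxify package. A minimal usage sketch, assuming the package is installed via `pip install detoxify`; the model name `"original"` targets the first Jigsaw challenge, and the exact output keys vary by model:

```python
# Minimal sketch: score a comment with a pretrained Detoxify model.
# Assumes `pip install detoxify` (downloads model weights on first use).
from detoxify import Detoxify

# "original" loads the model trained on the first Jigsaw challenge.
results = Detoxify("original").predict("You are a wonderful person!")

# `results` is a dict mapping label -> probability,
# e.g. {"toxicity": ..., "severe_toxicity": ..., "obscene": ..., ...}
print(results)
```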

The models classify toxic comments into six labels: toxic, severe toxic, insult, threat, obscene, and identity hate. After data collection and preprocessing with lemmatization, lexicon normalization, and TF-IDF feature extraction, the models are trained and tested with standard ML algorithms and evaluated using ROC curves and the Hamming score.

  • Updated Dec 29, 2023
  • Jupyter Notebook
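As a rough illustration of the pipeline this entry describes, here is a minimal sketch assuming scikit-learn and pandas, with a hypothetical `train.csv` containing a `comment_text` column and one binary column per label. Lemmatization and lexicon normalization are assumed to happen in an earlier preprocessing step, and the Hamming score is reported as 1 minus the Hamming loss:

```python
# Minimal multi-label TF-IDF sketch, assuming scikit-learn and pandas.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

df = pd.read_csv("train.csv")  # hypothetical file with the Jigsaw columns
X_train, X_test, y_train, y_test = train_test_split(
    df["comment_text"], df[LABELS], test_size=0.2, random_state=42
)

# TF-IDF over word unigrams and bigrams; normalized/lemmatized text
# would be fed in here after preprocessing.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

# Multi-label setup: one independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(Xtr, y_train)

probs = clf.predict_proba(Xte)
preds = clf.predict(Xte)
print("ROC AUC (macro):", roc_auc_score(y_test, probs, average="macro"))
print("Hamming score:", 1 - hamming_loss(y_test, preds))
```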
DeTox
