Unfortunately, toxicity is a common occurrence in online activities. This model is designed to counteract the negative effects of toxicity by detecting toxic content and assigning it a score. It is built with TensorFlow and Keras.
HypePhilosophy/toxicity-detection-model
About
Detection of toxic words/phrases using RNN ML algorithms
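A minimal sketch of what such an RNN-based toxicity scorer can look like in TensorFlow/Keras: token IDs are embedded, passed through an LSTM, and mapped by a sigmoid to a single toxicity score in [0, 1]. The vocabulary size, sequence length, and layer widths below are illustrative assumptions, not the repository's actual configuration.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10_000  # assumption: tokenizer vocabulary size
MAX_LEN = 100        # assumption: padded sequence length

# RNN toxicity scorer: embed token IDs, run an LSTM over the
# sequence, and output one score in [0, 1] via a sigmoid.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Score one (dummy) padded sequence of token IDs; a real pipeline
# would tokenize and pad the input text first.
tokens = np.zeros((1, MAX_LEN), dtype=np.int32)
score = float(model.predict(tokens, verbose=0)[0, 0])
```

Binary cross-entropy fits the single-score formulation; a multi-label variant (e.g. separate scores for insult, threat, obscenity) would instead use a `Dense` layer with one sigmoid unit per label.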